segmentation fault on long running script (linux)

Discussion in 'Ruby' started by Vladimir Konrad, Feb 26, 2007.

  1. I have a long running script which takes a lot of NetCDF files and
    generates SQL which gets piped to another process for database import.
    The script failed about half way through (after about 5 days) with a
    segmentation fault.

    Is there a way to debug a segfaulting script (i.e. is there a way to
    generate a core file) so I can find out more?

    My understanding is that a segmentation fault is usually caused by
    addressing memory which does not belong to the process (a bad pointer
    and such).

    --- using
    ruby 1.8.2 (2005-04-11) [i386-linux]
    ---
    vlad

    --
    Posted via http://www.ruby-forum.com/.
    Vladimir Konrad, Feb 26, 2007
    #1

  2. Vladimir Konrad

    Jan Svitok Guest

    On 2/26/07, Vladimir Konrad <> wrote:
    >
    > I have a long running script which takes a lot of NetCDF files and
    > generates SQL which gets piped to another process for database import.
    > The script failed about half way through (after about 5 days) with a
    > segmentation fault.
    >
    > Is there a way to debug a segfaulting script (i.e. is there a way to
    > generate a core file) so I can find out more?
    >
    > My understanding is that a segmentation fault is usually caused by
    > addressing memory which does not belong to the process (a bad pointer
    > and such).
    >
    > --- using
    > ruby 1.8.2 (2005-04-11) [i386-linux]
    > ---
    > vlad


    I suppose that Linux makes core dumps unless it is told not to (using
    ulimit). You can inspect the core file with gdb. It may be helpful to
    have the symbol table available (i.e. a ruby binary that is not stripped).

    Now, don't take this as too accurate... Last time I debugged a core
    file was in 1999...
    Jan Svitok, Feb 26, 2007
    #2
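Jan's suggestion boils down to a couple of shell commands. A minimal sketch; `long_import.rb` is a hypothetical name for the NetCDF-to-SQL script:

```shell
# Allow core files of unlimited size in this shell and its children
# (the core size limit is per-process and inherited from the shell).
ulimit -c unlimited

# Confirm the new soft limit; this should now report "unlimited".
ulimit -c

# Rerun the script; if it segfaults again, the kernel writes a core file.
# Commented out here -- "long_import.rb" is an illustrative filename.
# ruby long_import.rb
```

The limit only applies to processes started from this shell, so the script has to be relaunched after raising it.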

  3. On Tue, Feb 27, 2007 at 12:21:08AM +0900, Jan Svitok wrote:
    > I suppose that Linux makes core dumps unless it is told not to
    > (using ulimit).


    Or the process is setuid, unless you enable dumping of setuid processes with
    sysctl. By default:

    kernel.suid_dumpable = 0

    Note that many Linux systems have the core size ulimit set to 0 by default.
    I get

    $ ulimit -a | grep core
    core file size (blocks, -c) 0

    on both Ubuntu 6.06 and CentOS 4.4.

    Regards,

    Brian.
    Brian Candler, Feb 26, 2007
    #3
  4. > $ ulimit -a | grep core
    > core file size (blocks, -c) 0


    This setting is the "culprit", I think (this is on Debian sarge).

    thank you all very much for the pointers.

    vlad

    Vladimir Konrad, Feb 26, 2007
    #4
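Once a core file does get written, gdb can print the crash backtrace without an interactive session. A sketch; the filename `core` and the ruby path are illustrative, since the actual name is governed by kernel.core_pattern:

```shell
# Where and under what name the kernel writes core files
# (on older Debian the default pattern is simply "core", placed in the
# crashed process's working directory).
cat /proc/sys/kernel/core_pattern

# Batch-mode gdb: load the (unstripped) ruby binary together with the
# core file, print the C-level backtrace at the crash point, and exit.
# gdb -batch -ex bt "$(command -v ruby)" core
```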
