It is a Perl script that calls C code to tune bounds for the nth Ramanujan
prime.  All the code is in my directory.  No threads, forks, execs, signal
handling code, shell outs, file I/O other than what running any Perl script
with XS code does, etc.  The process had run successfully many times, and it
writes progress to stdout as it runs.  In this case the bounds were quite high,
resulting in a *lot* of memory use (the point of the run being to find tight
bounds so that users can get by with less memory).  The memory use stayed
constant after the initial allocations.  I had top running in another window.
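
Roughly the shape of the call involved -- a minimal sketch, not the actual
tuning script; it assumes Math::Prime::Util's nth_ramanujan_prime as the
XS/C entry point, and the n value is only illustrative:

  #!/usr/bin/perl
  # Minimal sketch of the kind of call in question (not the real script).
  use strict;
  use warnings;
  use Math::Prime::Util qw(nth_ramanujan_prime);

  my $n  = 100_000;                    # illustrative size only
  my $rp = nth_ramanujan_prime($n);    # one long-running XS/C call
  print "R_$n = $rp\n";                # progress goes to stdout as it runs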

At some point (a few hours in) I decided it had run too long without
intermediate output, so I did a Control-C.  With that much memory, I thought
it might take a while, but nothing.  Control-Z, Control-backslash, no
response.  Kill -1 in another window did nothing, nor did kill -15 later,
nor kill -9 after a while.  I thought that if I quit that ssh session, did a
ps to find my processes, and killed them, that might move things along.  No
effect -- everything died except that PID, which stayed in run state,
reporting 100% CPU on a single thread and the same memory use.  I
tried a number of times to kill it, including a few times each day after
sending the report.
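
For what it's worth, even when a process ignores kill -9 its state can still
be read out of /proc.  A hedged diagnostic sketch (Linux-only; the PID is
whatever ps reports, and /proc/PID/stack usually needs root):

  #!/usr/bin/perl
  # Inspect a stuck PID via Linux /proc: a process that survives kill -9 is
  # typically spinning or blocked inside the kernel, and its state and wait
  # channel remain visible even though the process itself no longer responds.
  use strict;
  use warnings;

  my $pid = shift // die "usage: $0 PID\n";

  for my $file ("status", "wchan", "stack") {
      my $path = "/proc/$pid/$file";
      next unless -r $path;            # stack is often root-only
      open my $fh, '<', $path or next;
      local $/;                        # slurp the whole pseudo-file
      print "== $path ==\n", scalar <$fh>, "\n";
      close $fh;
  }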

Dana

On Sat, Jan 2, 2016 at 1:23 PM, Jeffrey Walton <noloa...@gmail.com> wrote:

> On Sat, Jan 2, 2016 at 1:08 PM, Dana Jacobsen <dana.jacob...@gmail.com>
> wrote:
> > I tried to kill it with various signals and it ignores -1, -15, -9, etc.
> > Even if it ignored signals (which nothing in the process should do), it
> > should have finished within ~12 hours, so it's borked.
> >
> > I sent a message to OSUOSL support on Dec 30 about it but got no reply
> other
> > than a ticket number (28099).
>
> Out of curiosity, do you recall what you did?
>
> Maybe a fork/exec got out of control?
>
> Jeff
>
