On Mon, Sep 25, 2000 at 10:33:06AM +0200, Ingo Molnar wrote:
> On Mon, 25 Sep 2000, Ted Deppner wrote:
> 
> > I ask because on my perl-threads test case, I can't create more than 1023
> > threads, but I get a kernel crash when I've _attempted_ to create more
> > than 1023 and hit ctrl-c.
> 
> could you test this with the kernel/signal.c:max_queued_signals
> initialization change i suggested? Does it still crash?

With max_queued_signals=4096, I can still only create 1022 threads under
perl-5.005-threads.  

With more than 1023 threads attempted, the process no longer responds to
ctrl-c or to a kill -INT. A kill -9 does kill it, however, with no kernel
lockup.

Under 1023 threads, the process still responds to ctrl-c.

It seems the bug definitely involves signal handling, and that
max_queued_signals affects it in some way...
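
For what it's worth, here's roughly the shape of the test in plain C (a
sketch only; my real test case is threadcrash.pl, and the thread count
here is arbitrary):

/* Spawn threads past the ~1023 boundary, then check whether SIGINT
 * still gets delivered to the process.
 */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 1100	/* just past the observed ~1023 boundary */

static volatile sig_atomic_t got_int;

static void on_int(int sig)
{
	(void)sig;
	got_int = 1;
}

static void *idle(void *arg)
{
	(void)arg;
	for (;;)
		sleep(60);	/* keep the thread alive without burning CPU */
	return NULL;
}

int main(void)
{
	pthread_t tid;
	int i, created = 0;

	signal(SIGINT, on_int);
	for (i = 0; i < NTHREADS; i++) {
		if (pthread_create(&tid, NULL, idle, NULL) != 0)
			break;	/* report how far we got */
		created++;
	}
	printf("created %d threads; hit ctrl-c now\n", created);
	while (!got_int)
		pause();	/* returns once a handled signal arrives */
	printf("SIGINT delivered\n");
	return 0;
}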


Here's my ulimit -a from bash. You can see open files at 1024, but my test
program (threadcrash.pl) doesn't do anything with open files.

core file size (blocks)     0
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max locked memory (kbytes)  unlimited
max memory size (kbytes)    unlimited
open files                  1024
pipe size (512 bytes)       8
stack size (kbytes)         8192
cpu time (seconds)          unlimited
max user processes          4093
virtual memory (kbytes)     unlimited
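
(Aside: the same limit can be read and raised from inside a program with
getrlimit/setrlimit. A quick sketch of the equivalent of "ulimit -n 2048":)

/* Query and raise the open-files limit, roughly what
 * "ulimit -n 2048" does in the shell.
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	printf("open files: soft=%ld hard=%ld\n",
	       (long)rl.rlim_cur, (long)rl.rlim_max);

	rl.rlim_cur = 2048;
	if (rl.rlim_max != RLIM_INFINITY && rl.rlim_max < rl.rlim_cur)
		rl.rlim_max = rl.rlim_cur;	/* raising the hard limit needs root */
	if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
		perror("setrlimit");
	return 0;
}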

I upped my open-files limit to 2048 and was still unable to get more than
1022 threads running. I wonder if perl-5.005-threads has a static limit
set somewhere inside it; maybe I'll recompile it tonight and see what
happens.
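
One way to tell a perl-internal ceiling from a system one would be a bare
pthread_create loop, independent of perl (a sketch; the count will depend
on stack size and available VM):

/* Count how many concurrent threads pthread_create will hand out. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *idle(void *arg)
{
	(void)arg;
	for (;;)
		sleep(60);	/* stay alive so the slot isn't reused */
	return NULL;
}

int main(void)
{
	pthread_t tid;
	int n = 0;

	while (pthread_create(&tid, NULL, idle, NULL) == 0)
		n++;
	printf("pthread_create failed after %d threads\n", n);
	return 0;
}

If this also tops out around 1022, the limit isn't perl's. (If I remember
right, LinuxThreads compiles in a PTHREAD_THREADS_MAX of 1024, which would
fit the ~1022 I'm seeing once the initial and manager threads are counted.)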

-- 
Ted Deppner
http://www.psyber.com/~ted/

