I discovered another interesting feature in my tests:
If 'fork' would exceed the nproc value, the kernel panics.
If 'fork' reaches the available memory limit, it blocks until some
memory is released (e.g. when some process finishes).
Pavel
2014/1/10 erik quanstrom
> > launching 32000 processes was
> launching 32000 processes was not possible. the kernel got stuck.
sloppy statement. it's not clear if the kernel was really stuck or just
hit something exponential.
> here's one thing that's not immediately obvious, even when running the
> kernel. conf.nmach must be less than 0x7fff/(100
> 20,000 did not work because it ran out of kernel physical memory. That
> preallocation could be adjusted, but at some point the available kernel
> virtual address space will limit what it can allocate.
at the cost of moving KZERO down 256MB on the pae kernel,
ivey# ps|wc
15961
Good work. As my good friend Boyd once said "Don't give me bullshit
speculation. Measure something!".
brucee
On 10 January 2014 20:15, Charles Forsyth wrote:
>
> On 10 January 2014 09:11, Charles Forsyth wrote:
>
>> At that point I decided to quit while I was still ahead.
>
>
> 20,000 did not
On 10 January 2014 09:11, Charles Forsyth wrote:
> At that point I decided to quit while I was still ahead.
20,000 did not work because it ran out of kernel physical memory. That
preallocation could be adjusted, but at some point the available kernel
virtual address space will limit what it can allocate.
On 9 January 2014 08:08, Pavel Klinkovský wrote:
> By a hard limit I mean something like "maximal capacity of GDT, LDT"
> or something similar, if such a limit exists.
GDT and even Tss are per-processor; ldt isn't used.
As to soft limits, apart from the few linear searches mentioned, which
could be el
suck it and see; the answerers didn't understand the question. add
nproc=XXX to plan9.ini and use the environment, or hard-code it. i'd
like to see your results for nproc=50 and nproc=5000.
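In plan9.ini that suggestion would look like the line below (nproc= is the name given in brucee's message; whether a stock kernel's confinit actually reads it is not stated in the thread, so treat this as the shape of the experiment rather than a documented option):

```
nproc=5000
```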
brucee
On 9 January 2014 19:08, Pavel Klinkovský wrote:
> Hi Steven,
>
>
> conf.nproc = 100 + ((conf.npage*BY2PG)/MB)*5;
Hi Steven,
> conf.nproc = 100 + ((conf.npage*BY2PG)/MB)*5;
> if(cpuserver)
>         conf.nproc *= 3;
> if(conf.nproc > 2000)
>         conf.nproc = 2000;
>
> In general, you will find that 2000 is the highest allowable due to
> limits imposed by proc.c.
but if I understand it
> In general, you will find that 2000 is the highest allowable due to
> limits imposed by proc.c. Other architectures can (and will) place
> additional restrictions. A non-FCSE ARM implementation could elect to
> only support 256 processes to avoid additional switching overhead for
> example.
i ha
On Wed, Jan 8, 2014 at 2:28 AM, Pavel Klinkovský
wrote:
> Hi all,
>
> I would like to know whether there is any hard (based on CPU architecture)
> limit of maximal number of processes in Plan9 on Intel or ARM.
>
> I do not mean a soft limit like the lack of memory... ;)
>
> Thanks in advance for any hint.
> I would like to know whether there is any hard (based on CPU architecture)
> limit of maximal number of processes in Plan9 on Intel or ARM.
>
> I do not mean a soft limit like the lack of memory... ;)
there is not.
- erik
Hi all,
I would like to know whether there is any hard (based on CPU architecture)
limit of maximal number of processes in Plan9 on Intel or ARM.
I do not mean a soft limit like the lack of memory... ;)
Thanks in advance for any hint.
Pavel