On Feb 20, 9:58 pm, John Nagle <na...@animats.com> wrote:
> sjdevn...@yahoo.com wrote:
> > On Feb 18, 2:58 pm, John Nagle <na...@animats.com> wrote:
> >>     Multiple processes are not the answer.  That means loading multiple
> >> copies of the same code into different areas of memory.  The cache
> >> miss rate goes up accordingly.
>
> > A decent OS will use copy-on-write with forked processes, which should
> > carry through to the cache for the code.
>
>     That doesn't help much if you're using the subprocess module.  The
> C code of the interpreter is shared, but all the code generated from
> Python is not.

Of course.  Multithreading fails just as badly if the threads all turn
around and call exec() or the equivalent.

Copy-on-write sharing works fine if you use os.fork(): the child keeps
the parent's pages, including the Python code already loaded, until one
side writes to them.
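
Roughly like this (the data and worker names below are just made up for
illustration, and fork() is Unix-only):

import os

# Build something big once in the parent.  After os.fork() the child
# sees the same pages via copy-on-write; nothing gets re-imported or
# rebuilt, unlike launching a fresh interpreter with subprocess.
big_table = dict((i, str(i) * 10) for i in range(100000))

def worker():
    # Reads the parent's data without copying it; pages are only
    # duplicated if one side writes to them.
    print("child sees %d entries" % len(big_table))

pid = os.fork()
if pid == 0:            # child
    worker()
    os._exit(0)         # exit without running the parent's cleanup
else:                   # parent
    os.waitpid(pid, 0)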
