Dennis Lee Bieber wrote:
On Sun, 23 Aug 2009 22:14:17 -0700, John Nagle <na...@animats.com>
declaimed the following in gmane.comp.python.general:

     Multiple Python processes can run concurrently, but each process
has a copy of the entire Python system, so the memory and cache footprints are
far larger than for multiple threads.

        One would think a smart enough OS would be able to share the
executable (interpreter) code, and only create a new stack/heap
allocation for data.
That's what fork is all about. (See os.fork(), available on most Unix/Linux systems.) The two processes start out sharing their state via copy-on-write, and only the pages that get written afterwards need separate swap space.
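
Roughly, a minimal fork sketch looks like this (Unix/Linux only; the child's "work" here is just a placeholder print):

    import os

    # os.fork() returns 0 in the child and the child's pid in the parent.
    # The child starts as a copy-on-write image of the parent, so read-only
    # pages (interpreter code, imported modules) stay shared until written.
    pid = os.fork()
    if pid == 0:
        # Child: do some placeholder work, then exit without running
        # the parent's cleanup handlers.
        print("child %d, forked from %d" % (os.getpid(), os.getppid()))
        os._exit(0)
    else:
        # Parent: wait for the child so it doesn't become a zombie.
        os.waitpid(pid, 0)
        print("parent %d reaped child %d" % (os.getpid(), pid))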

In Windows (and probably Unix/Linux), the swap space taken by the executable and DLLs (shared libraries) is minimal. Each DLL may have a "preferred location", and if that part of the address space is available, the DLL takes no swap space at all, except for its static variables, which are usually allocated together. I don't know whether the standard build of CPython (python.exe and the .pyd/DLL libraries) uses such a linker option, but I'd bet it does. It also speeds up startup time.
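
One small way to peek at this from Python itself: on Windows, sys.dllhandle holds the address the Python DLL was loaded at, so you can at least see whether it lands in the same place from run to run (not conclusive, just a hint):

    import sys

    if sys.platform == "win32":
        # sys.dllhandle is the module handle of the Python DLL, which on
        # Windows is the address it was loaded at.  If it is the same on
        # every run, the DLL is getting its preferred base address.
        print("Python DLL loaded at %s" % hex(sys.dllhandle))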

On my system, a minimal Python program uses about 50k of swap space. But I'm sure that goes way up with lots of imports.
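
If you want to watch that happen, the third-party psutil package can report the process's memory before and after a batch of imports (the particular imports below are just arbitrary examples):

    import psutil

    proc = psutil.Process()      # the current Python process

    before = proc.memory_info()  # resident/virtual sizes before extra imports

    import xml.dom.minidom, decimal, sqlite3   # arbitrary stand-ins for "lots of imports"

    after = proc.memory_info()
    print("resident size grew by %d KB" % ((after.rss - before.rss) // 1024))
    print("virtual size grew by %d KB" % ((after.vms - before.vms) // 1024))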


DaveA
--
http://mail.python.org/mailman/listinfo/python-list
