Loren Wilton wrote:
> I've read that Python supports 'threads', and I'd assumed (maybe
> incorrectly) that these were somewhat separate environments that
> could be operating concurrently (modulo the GC lock).
Not really. Python threads are just a thin wrapper around OS threads,
and don't provide any more isolation than the OS's native threads do.

All the threads in a Python process share one set of imported modules,
and therefore one set of global data. For example, there is only one
sys module, so things like sys.stdin and sys.stdout are shared between
threads. If one user pointed their sys.stdin somewhere else, it would
affect all the others.

There is *some* support in CPython for multiple interpreters within a
process, but I don't know how much isolation that provides, or whether
it has kept up with all the changes over the years well enough to
still be usable. I do know that subinterpreters are not fully
isolated; built-in constants and type objects, for example, are
shared.

There is also the GIL to consider, which prevents more than one thread
from running Python code at the same time.

Does Python really need access to data in the VM's memory, or just to
data in its disk files, etc.? If the latter, you might not need to run
Python in the same process as the VM at all, as long as you can
arrange some kind of I/O bridge. Does the OS running on the VM have
any notion of a remote file access protocol a la NFS that an external
Python process could use to access the data?

A few sketches follow to make these points concrete.
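First, the module sharing. Here's a minimal sketch (the function name
is mine) in which a worker thread rebinds sys.stdout and the main
thread's print() lands in the worker's buffer, not on the terminal:

    import io
    import sys
    import threading

    def hijack_stdout():
        # Rebinding sys.stdout rebinds it for the whole process:
        # every thread sees the same, single sys module.
        sys.stdout = io.StringIO()

    t = threading.Thread(target=hijack_stdout)
    t.start()
    t.join()

    print("where did this go?")  # swallowed by the StringIO
    sys.__stdout__.write("captured: %r\n" % sys.stdout.getvalue())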
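Subinterpreters *can* be poked at from pure Python, but only through a
private module (_xxsubinterpreters, present in some CPython versions,
roughly 3.8-3.12; it is unsupported and its API has changed since), so
treat this as a sketch of the idea rather than something to rely on:

    import sys
    import _xxsubinterpreters as interpreters

    interp = interpreters.create()
    try:
        # The subinterpreter imports json into *its own* sys.modules...
        interpreters.run_string(interp, "import json")
    finally:
        interpreters.destroy(interp)

    # ...but this interpreter's module table is untouched (assuming
    # nothing here had already imported json for its own purposes).
    print('json' in sys.modules)  # False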
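The GIL's effect is easy to see with a CPU-bound function: running it
twice sequentially takes about as long as running two copies in
parallel threads. A quick timing sketch (figures will vary by
machine):

    import threading
    import time

    def spin(n):
        # Pure-Python busy loop; it holds the GIL while it runs.
        while n:
            n -= 1

    N = 10_000_000

    start = time.perf_counter()
    spin(N)
    spin(N)
    print("sequential: %.2fs" % (time.perf_counter() - start))

    start = time.perf_counter()
    threads = [threading.Thread(target=spin, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads: %.2fs" % (time.perf_counter() - start))
    # On CPython the two figures come out roughly equal, because only
    # one thread can execute Python bytecode at a time.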
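And as for the I/O bridge: if an NFS-style mount isn't available, even
something as crude as the following would do for read-only access. The
host, port and one-line protocol here are all invented for the sketch;
the point is only that the Python side can live outside the VM
process:

    import socket

    HOST, PORT = "localhost", 9099   # hypothetical rendezvous point

    def serve_one_file():
        # Runs on the side that owns the files (e.g. inside the guest).
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                # Protocol: client sends one line naming the file;
                # server replies with the raw bytes and closes.
                name = conn.makefile("r").readline().strip()
                with open(name, "rb") as f:
                    conn.sendall(f.read())

    def fetch(name):
        # Runs in the external Python process.
        with socket.create_connection((HOST, PORT)) as conn:
            conn.sendall((name + "\n").encode())
            conn.shutdown(socket.SHUT_WR)
            chunks = []
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                chunks.append(data)
            return b"".join(chunks)

--
Greg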