Tom Plunket wrote:
> Carl J. Van Arsdall wrote:
>
>> Because of the GIL only one thread can actually run at a time.
>
> I've recently been wondering about this, since in the work I do, a lot
> of time is spent doing disk I/O.  So if I want the UI to remain
> responsive, I could spawn an IO thread to handle requests, and do a
> pretty simple "just whack new requests onto the queue" without locks,
> since I'm guaranteed to not have the IO thread read at the same time
> as the requestor thread?
>
> ...what exactly constitutes an atomic operation in Python, anyway?
Well, although only one thread can run at a time due to the GIL, you can't
accurately predict when the GIL is going to be released, and therefore you
don't know when another thread is going to pick up and start going.  (The
GIL is released every so many bytecode instructions, correct me if I'm
wrong, as well as around certain operations that do I/O, and in extension
modules you wrote yourself where you manually release the GIL using the
macros provided in the C API.)

If you have your own data structure that is shared among threads, you can
use the threading module's synchronization constructs to get the job done:
locks, conditions, and events.  Queue.Queue is also a good way to
communicate between threads.

> e.g.
>
> class IoThread:
>     # ...
>
>     # called from the other thread...
>     def RequestFile(self, name):
>         self.fileQueue.append(name)
>
>     # called during the IO thread
>     def GetNextFile(self):
>         next = self.fileQueue[0]
>         self.fileQueue.pop(0)
>         return next
>
> ?
> -tom!

--
Carl J. Van Arsdall
[EMAIL PROTECTED]
Build and Release
MontaVista Software
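
A minimal sketch of the Queue.Queue approach suggested above, assuming
Python 2.x (the module is named queue in Python 3); the class and method
names mirror Tom's example but are purely illustrative, and the filename
in the usage lines is hypothetical:

import threading
import Queue   # named "queue" in Python 3

class IoThread(threading.Thread):
    # The requestor (UI) thread calls RequestFile(); the reads happen in
    # run().  Queue.Queue does its own locking, so neither side needs an
    # explicit lock.

    def __init__(self):
        threading.Thread.__init__(self)
        self.fileQueue = Queue.Queue()

    # called from the requestor thread
    def RequestFile(self, name):
        self.fileQueue.put(name)

    # runs in the IO thread
    def run(self):
        while True:
            name = self.fileQueue.get()   # blocks until a request arrives
            if name is None:              # None acts as a shutdown sentinel
                break
            try:
                f = open(name, 'rb')
                try:
                    data = f.read()
                finally:
                    f.close()
            except IOError:
                continue    # real code would report the failure to the UI
            # hand `data` back to the requestor however the application expects

worker = IoThread()
worker.start()
worker.RequestFile('some_file.dat')   # hypothetical request
worker.RequestFile(None)              # ask the IO thread to shut down
worker.join()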