On Sun, 25 Oct 2009, Gabriel Genellina wrote:

> En Sat, 24 Oct 2009 06:40:08 -0300, John O'Hagan <resea...@johnohagan.com>
> escribió:
>
> > I have several instances of the same generator function running
> > simultaneously, some within the same process, others in separate
> > processes. I want them to be able to share data (the dictionaries
> > passed to them as arguments), in such a way that instances designated
> > as "leaders" send their dictionaries to "follower" instances.
> >
> > I'm trying to use sockets to relay the dicts in pickled form, like this:
> >
> > from socket import socket
> >
> > PORT = 2050
> > RELAY = socket()
> > RELAY.bind(('', PORT))
> > RELAY.listen(5)
> >
> > PICKLEDICT = ''
> > while 1:
> >     INSTANCE = RELAY.accept()[0]
> >     STRING = INSTANCE.recv(1024)
> >     if STRING == "?":
> >         INSTANCE.send(PICKLEDICT)
> >     else:
> >         PICKLEDICT = STRING
> >
> > What I was hoping this would do is allow the leaders to send their dicts
> > to this socket and the followers to read them from it after sending an
> > initial "?", and that the same value would be returned for each such
> > query until it was updated.
> >
> > But clearly I have a fundamental misconception of sockets, as this logic
> > only allows a single query per connection, new connections break the old
> > ones, and a new connection is required to send in a new value.
>
> You may use sockets directly, but instead of building all the
> infrastructure yourself, use a ThreadingTCPServer (or ForkingTCPServer);
> they allow for simultaneous request processing. Even setting up a
> SimpleXMLRPCServer (plus either ThreadingMixIn or ForkingMixIn) is easy
> enough.
>
> > Are sockets actually the best way to do this? If so, how to set it up
> > to do what I want? If not, what other approaches could I try?
>
> See the wiki page on distributed systems:
> http://wiki.python.org/moin/DistributedProgramming

Thanks for that, I didn't realize this was such a complex problem until
reading the SocketServer docs and the above link. I think I'll aim for
using something like Pyro when I can get a handle on it; but this quote
from the socket how-to docs was interesting:

"If you need fast IPC between two processes on one machine, you should
look into whatever form of shared memory the platform offers. A simple
protocol based around shared memory and locks or semaphores is by far
the fastest technique."

So I'll also look into that approach - ICBW, but I get the feeling the
threaded client/server network model may be too much baggage for what is
really an IPC problem.
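To make sure I understood the ThreadingTCPServer suggestion, I had a go at
sketching what my little relay might look like on top of it. This is
completely untested and the names are my own; the one-recv()-per-connection
"protocol" ("?" to fetch, anything else to store) is as naive as in my
original version, but at least each connection gets its own handler thread,
so leaders and followers shouldn't trip over each other:

import threading
import SocketServer   # the stdlib module (renamed socketserver in Python 3)

PORT = 2050

class RelayHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # One small message per connection, as in my original attempt;
        # real code would need proper framing for larger pickles.
        data = self.request.recv(4096)
        with self.server.lock:          # handlers run in separate threads
            if data == "?":             # follower asking for the latest dict
                self.request.sendall(self.server.pickledict)
            else:                       # leader pushing a new pickled dict
                self.server.pickledict = data

if __name__ == '__main__':
    server = SocketServer.ThreadingTCPServer(('', PORT), RelayHandler)
    server.pickledict = ''              # latest pickled dict from a leader
    server.lock = threading.Lock()      # guards pickledict across threads
    server.serve_forever()

Does that look like roughly what you meant?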
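As for the shared-memory idea in that quote, I haven't yet worked out what
"whatever form of shared memory the platform offers" looks like from
Python, so as a stand-in I sketched the same leader/follower arrangement
with the multiprocessing module (new in 2.6). A Manager dict isn't true
shared memory (there's a helper process behind it) and it only covers
processes started from a common parent, so this is just to get a feel for
the shape of it, and it's untested:

import time
from multiprocessing import Process, Manager, Lock

def leader(shared, lock):
    # Stand-in for a "leader" instance: updates the shared dict.
    for i in range(5):
        with lock:                      # group updates atomically
            shared['beat'] = i
        time.sleep(0.5)

def follower(shared, lock):
    # Stand-in for a "follower" instance: reads the current state.
    for _ in range(5):
        with lock:
            snapshot = shared.copy()    # copy out the current values
        print snapshot
        time.sleep(0.5)

if __name__ == '__main__':
    manager = Manager()
    shared = manager.dict()             # proxy dict held by the manager process
    lock = Lock()
    procs = [Process(target=leader, args=(shared, lock)),
             Process(target=follower, args=(shared, lock))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()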
"If you need fast IPC between two processes on one machine, you should look into whatever form of shared memory the platform offers. A simple protocol based around shared memory and locks or semaphores is by far the fastest technique." So I'll also look into that approach - ICBW but I get the feeling the threaded client/server network model may be too much baggage for what is really an IPC problem. In fact, while looking into this I've just been using a simple temp file to share the data and that's working fine, although it's relatively slow. Thanks for the pointers, Regards, John -- http://mail.python.org/mailman/listinfo/python-list