I am a complete ignoramus and newbie when it comes to designing and coding networked clients (or servers, for that matter). I have a copy of Goerzen (Foundations of Python Network Programming) and, once pointed in the right direction, should be able to follow my nose and get things sorted... but I am not quite sure which is the best path to take and would be grateful for advice from networking gurus.
I am writing a program to display horse racing tote odds in a desktop client. I have access to an HTTP source of XML data (I open one of several URLs and get back an XML doc with some data... not XML-RPC), which I am able to parse and munge with no difficulty at all. I have written and successfully tested a simple command-line program which lets me repeatedly poll the server and parse the XML. Easy enough, but the real-world production complications are:

1) The data for the race about to start updates every (say) 15 seconds, while the data for earlier and later races updates only every (say) 5 minutes. There is no point in hammering the server with requests every 15 seconds for races after the upcoming one... I should query those perhaps every 150s to be safe. But for the upcoming race I must not miss any updates, so I should query every ~7s to be safe. So, in the middle of a race meeting the situation might be: race 1 (race done with, no longer querying), race 2 (race done with, no longer querying), race 3 (about to start, data on the server updating every 15s, my client querying every 7s), races 4-8 (data on the server updating every 5 minutes, my client querying every 2.5 minutes).

2) After a race has started and betting is cut off, there are no more tote updates for that race (an attribute in the XML data lets me determine precisely when this happens). At that point I need to stop querying (say) race 3 every 7s, remove race 4 from the 150s query group, and begin querying its data every 7s instead.

3) I need to dump this data (for all races, not just the current about-to-start race) to text files, store it as BLOBs in a DB *and* update a real-time display in a wxPython windowed client.

My initial thought was to have two threads for the two different polling cycles. In addition I would probably need another thread to handle the UI stuff, and perhaps another for the file/DB write-out. But I wonder if using Twisted is a better idea? I would still need to handle some threading myself, but (I think) only to keep wxPython happy by doing all this other work off the main thread, and perhaps also to persist the received data in yet another thread.

I have zero experience with these kinds of design choices and would be very happy if those with experience could point out the pros and cons of each approach (synchronous/multithreaded, or Twisted) for dealing with the two differing sample rates outlined above. I have appended rough, untested sketches of both approaches to show what I mean. Many TIA!
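First, the threaded approach. This is only a sketch of the polling part, and all the names are placeholders: BASE_URL, fetch_race_xml(), race_is_closed() and handle() stand in for my real URL scheme, XML parsing and file/DB/GUI code. The idea is one polling loop that keeps a last-polled timestamp per race and promotes the next race into the fast (7s) group when the current race closes:

import time
import urllib2   # Python 2; would be urllib.request on Python 3

BASE_URL = "http://www.example.com/tote?race=%d"   # made up
FAST, SLOW = 7, 150                                # poll intervals in seconds
RACES = range(1, 9)                                # races 1..8

def fetch_race_xml(race_no):
    return urllib2.urlopen(BASE_URL % race_no).read()

def race_is_closed(xml):
    return False   # placeholder: really inspect the attribute in the XML

def handle(race_no, xml):
    pass           # placeholder: text file dump, DB BLOB, GUI update

def poll_loop():
    current = 3                        # race about to start
    last = dict.fromkeys(RACES, 0.0)   # time of last poll for each race
    while current <= len(RACES):
        now = time.time()
        for r in RACES:
            if r < current:
                continue               # race already run: stop querying it
            interval = FAST if r == current else SLOW
            if now - last[r] >= interval:
                xml = fetch_race_xml(r)
                last[r] = now
                handle(r, xml)
                if r == current and race_is_closed(xml):
                    current += 1       # next race joins the 7s group
        time.sleep(1)

My vague plan is to run poll_loop() in a worker thread, have handle() hand results to the wxPython GUI with wx.CallAfter, and perhaps push the raw XML onto a Queue for a separate file/DB writer thread.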
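And this is the sort of shape I imagine the Twisted version taking (again just a sketch, with the same made-up placeholders). task.LoopingCall looks like a natural fit for the two polling intervals, with the blocking urllib2 fetch pushed into Twisted's thread pool via deferToThread:

from twisted.internet import reactor, task, threads
import urllib2

BASE_URL = "http://www.example.com/tote?race=%d"   # made up

def fetch_race_xml(race_no):
    return urllib2.urlopen(BASE_URL % race_no).read()

def handle(race_no, xml):
    pass   # placeholder: file/DB write-out and GUI update

def poll(race_no):
    # do the blocking HTTP fetch in Twisted's thread pool and handle the
    # result back in the reactor thread
    d = threads.deferToThread(fetch_race_xml, race_no)
    d.addCallback(lambda xml: handle(race_no, xml))
    return d

loops = {}
loops[3] = task.LoopingCall(poll, 3)      # race about to start: every 7s
loops[3].start(7)
for r in range(4, 9):                     # later races: every 150s
    loops[r] = task.LoopingCall(poll, r)
    loops[r].start(150)

reactor.run()

My (possibly wrong) understanding is that when race 3 closes I would call loops[3].stop() and restart loops[4] at the 7s interval, and that the wxPython side would go through twisted.internet.wxreactor or a reactor running in its own thread -- which is exactly the kind of thing I would like a sanity check on.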