Fredrik Lundh wrote:
> Steve Holden wrote:
>
> > You will need to import the socket module and then call
> > socket.setdefaulttimeout() to ensure that communication with
> > non-responsive servers results in a socket exception that you can
> > trap.
>
> or you can use asynchronous sockets, so your program can keep
> processing the sites that do respond at once while it's waiting for
> the ones that don't.  for one way to do that, see "Using HTTP to
> Download Files" here:
>
>     http://effbot.org/zone/effnews-1.htm
>
> (make sure you read the second and third article as well)

Dear Fredrik Lundh,

Thank you for the link. I checked it, but I have not found an answer to
my question. My problem is that I sometimes cannot finish downloading
all the pages. Sometimes my script freezes and I can do nothing but
restart it from the last successfully downloaded web page. There is no
error message saying that anything went wrong. I do not know why; maybe
the server is programmed to limit the number of connections, or there
may be other reasons.

So, my idea was two threads: one master supervising a slave thread that
would do the downloading; if the slave thread stopped, the master would
start another slave. Is it a good solution? Or is there a better one?

Thanks for your help,
Lad
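
Here is a minimal sketch of the timeout approach Steve Holden describes
above, assuming the pages are fetched with urllib2; the URLs and the
20-second value are placeholders:

    import socket
    import urllib2

    # every socket created after this call, including the ones urllib2
    # opens internally, gives up after 20 seconds of silence
    socket.setdefaulttimeout(20)

    urls = ["http://www.example.com/page1",   # placeholder URLs
            "http://www.example.com/page2"]

    for url in urls:
        try:
            data = urllib2.urlopen(url).read()
        except (urllib2.URLError, socket.timeout), e:
            print "giving up on %s: %s" % (url, e)
            continue
        # ... save data, record the page as done, move on ...

With a default timeout in place, a server that simply stops talking
raises an exception the script can trap instead of hanging it forever.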
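
And a rough sketch of the two-thread idea from the question, with the
same caveats (urllib2, placeholder URL, arbitrary limits). One thing to
keep in mind: Python has no way to kill a running thread, so the master
cannot really stop a stuck slave; it can only abandon it and start a
new one. Making the slave a daemon thread keeps an abandoned slave from
holding the process open at exit:

    import socket
    import threading
    import urllib2

    socket.setdefaulttimeout(20)

    def fetch(url, result):
        # slave: download the page and drop the data into a shared list
        try:
            result.append(urllib2.urlopen(url).read())
        except Exception:
            result.append(None)

    def supervised_fetch(url, limit=60):
        # master: give the slave `limit` seconds, then give up on it
        result = []
        slave = threading.Thread(target=fetch, args=(url, result))
        slave.setDaemon(True)    # abandoned slaves won't block exit
        slave.start()
        slave.join(limit)
        if slave.isAlive() or not result or result[0] is None:
            return None          # hung or failed; caller can retry
        return result[0]

    data = supervised_fetch("http://www.example.com/page1")
    if data is None:
        print "download failed or hung; retrying later"

With setdefaulttimeout() in effect as well, most hangs should already
turn into exceptions inside the slave, so the join() limit is only a
safety net for the cases the timeout does not catch.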