That's awesome. It's time I migrated to 3 :)

On Fri, Apr 8, 2011 at 11:29 PM, Raymond Hettinger <pyt...@rcn.com> wrote:
> On Apr 8, 12:25 am, Chris Angelico <ros...@gmail.com> wrote:
> > On Fri, Apr 8, 2011 at 5:04 PM, Abhijeet Mahagaonkar
> > <abhijeet.mano...@gmail.com> wrote:
> > > I was able to isolate that the major chunk of run time is eaten up in
> > > opening webpages, reading from them and extracting text.
> > > I wanted to know if there is a way to call the functions concurrently.
> >
> > So, to clarify: you have code that's loading lots of separate pages,
> > and the time is spent waiting for the internet? If you're saturating
> > your connection, then this won't help, but if they're all small pages
> > and they're coming over the internet, then yes, you certainly CAN
> > fetch them concurrently. As the Perl folks say, There's More Than One
> > Way To Do It; one is to spawn a thread for each request, then collect
> > up all the results at the end. Look up the 'threading' module for
> > details:
> >
> > http://docs.python.org/library/threading.html
>
> The docs for Python 3.2 have a nice example for downloading multiple
> webpages in parallel:
>
> http://docs.python.org/py3k/library/concurrent.futures.html#threadpoolexecutor-example
>
> Raymond
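For the archives, here's a minimal sketch of the pattern the linked
ThreadPoolExecutor example uses, assuming Python 3.2+. The URL list, worker
count, and timeout below are placeholders, not values from this thread.

    import concurrent.futures
    import urllib.request

    # Placeholder URLs -- substitute the pages you actually need to fetch.
    URLS = ['http://www.python.org/',
            'http://docs.python.org/',
            'http://pypi.python.org/']

    def load_url(url, timeout):
        # Runs in a worker thread; it blocks on network I/O, not the CPU,
        # which is why threads help even with the GIL.
        return urllib.request.urlopen(url, timeout=timeout).read()

    # Five workers is an arbitrary choice; tune it to your connection.
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Map each submitted future back to the URL it is fetching.
        future_to_url = {executor.submit(load_url, url, 60): url
                         for url in URLS}
        # as_completed yields futures as their downloads finish, in any order.
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r is %d bytes' % (url, len(data)))

The threading-module route Chris mentions works just as well; the executor
mainly saves you the bookkeeping of starting and joining the threads and
collecting their results yourself.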