On 4/28/19 6:26 PM, nathan tech wrote:
> Hello everyone,
>
> Most recently, I have started work using feedparser.
>
> I noticed, almost straight away, it's a bit slow.
>
> For instance:
>
>     url="http://feeds.bbci.co.uk/news/rss.xml"
>     f1=feedparser.parse(url)
>
> On some feeds, this can take a few seconds; on the Talk Python To Me
> feed, it takes almost 10! This, obviously, is not ideal when running a
> program which checks for updates every once in a while. Talk about
> slooooow!
>
> I tried using etag and modified, but none of the feeds seem to ever
> have them!
>
> Similarly, this doesn't seem to work:
>
>     f2=feedparser.parse(url, f.headers["date"])
>
> What am I doing wrong?
>
> Any help appreciated.
>
> A greatly frustrated Nate
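On the direct question: feedparser.parse() takes etag and modified as keyword
arguments, not as a bare positional header value, and you have to feed back the
values you got from the previous parse. Not every server supports them, but when
one does, an unchanged feed comes back as a 304 almost instantly. A minimal
sketch (whether a given server actually sends ETag/Last-Modified is up to that
server, so the attributes may simply not be there):

    import feedparser

    url = "http://feeds.bbci.co.uk/news/rss.xml"

    # First fetch: parse normally and remember the caching hints, if any.
    first = feedparser.parse(url)
    etag = getattr(first, "etag", None)          # only set if the server sent an ETag
    modified = getattr(first, "modified", None)  # only set if it sent Last-Modified

    # Later fetch: pass the hints back as keyword arguments.
    later = feedparser.parse(url, etag=etag, modified=modified)

    if getattr(later, "status", None) == 304:
        print("Feed unchanged -- nothing new to download or parse")
    else:
        print(len(later.entries), "entries fetched")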
This is just an aside: programs which depend on fetching things from the Web
are good candidates for various more advanced programming techniques.

One is to not write everything synchronously, where you ask for some data,
process the data, and present the results as if everything happened instantly.
Instead, techniques such as callbacks, multiple threads, multiprocessing, or
Python's asynchronous facilities can be employed; the documentation for several
of them uses web communication as its examples :) In other words, structure
your code so other work can happen while you wait for a particular response
(for example, firing off requests to the other feeds), and think of how your
update checking can happen in the background so the data is already there when
you want to look at it.

Another is that when you write your unit tests, you can mock the responses from
the internet servers so your tests don't suffer the same delays that interactive
use of the program will see.

Rough sketches of both ideas follow below.
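Here is a minimal sketch of the threads approach using the standard library's
concurrent.futures; apart from the BBC URL from your message the feed addresses
are made up, and the point is only that the slow network waits overlap instead
of adding up:

    import concurrent.futures
    import feedparser

    # Hypothetical list of feeds to check; only the BBC URL comes from the
    # original message, the others are placeholders.
    FEED_URLS = [
        "http://feeds.bbci.co.uk/news/rss.xml",
        "https://example.com/some/feed.xml",
        "https://example.org/another/feed.xml",
    ]

    def fetch(url):
        """Download and parse one feed (this is the slow, blocking part)."""
        return url, feedparser.parse(url)

    # Run the fetches in a small pool of threads so they all wait on the
    # network at the same time rather than one after another.
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
        for url, feed in pool.map(fetch, FEED_URLS):
            print(url, "->", len(feed.entries), "entries")

And a similarly rough sketch of the mocking idea with unittest.mock, so a test
of your update-checking code never touches the network. The my_reader module
and its check_feed function are invented for the example; the patch target
assumes my_reader does "import feedparser" and calls feedparser.parse itself:

    from unittest import mock

    import my_reader  # hypothetical module containing your update-checking code

    def test_check_feed_reports_new_entries():
        # Stand-in for whatever feedparser.parse would have returned.
        fake_feed = mock.Mock()
        fake_feed.entries = [{"title": "First post"}, {"title": "Second post"}]

        # Replace the real (slow) network call with the canned result.
        with mock.patch("my_reader.feedparser.parse", return_value=fake_feed):
            result = my_reader.check_feed("http://feeds.bbci.co.uk/news/rss.xml")

        assert len(result) == 2

The test then runs in milliseconds and gives the same answer whether or not the
BBC's servers are having a slow day.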