Haha! My mistake.
The error is that when a web server sends a page with chunked transfer
encoding, only the first chunk appears to be acquired by the
urllib2.urlopen call. If you check the headers, there is no
'Content-Length' (as expected) and instead there is
'Transfer-Encoding: chunked'. I am getting about the ...
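
For anyone hitting the same thing, here is a minimal sketch of the header
check described above, plus a read loop that does not trust a single
read() call to return the whole body. The URL is just a placeholder, and
whether the loop actually sidesteps the bug depends on your exact Python
version:

    import urllib2

    response = urllib2.urlopen('http://example.com/page')  # placeholder URL
    info = response.info()
    print info.getheader('Content-Length')     # None when the reply is chunked
    print info.getheader('Transfer-Encoding')  # 'chunked'

    # Read in a loop until read() returns '' instead of relying on
    # one read() call to hand back the entire page.
    parts = []
    while True:
        block = response.read(8192)
        if not block:
            break
        parts.append(block)
    body = ''.join(parts)
    response.close()
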
I am having errors which appear to be linked to a previous bug in
urllib2 (and urllib) in Python 2.4 and 2.5. Has this been fixed? Has
anyone established a standard workaround? I keep finding old posts
about it that basically give up and say "well, it's a known bug." Any
help would be greatly appreciated.
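
One workaround that comes up is to drop down to httplib, which does its
own chunked decoding. This is only a sketch, under the assumption that
the problem sits in the urllib2 layer rather than in httplib itself; the
host and path are placeholders:

    import httplib

    conn = httplib.HTTPConnection('example.com')  # placeholder host
    conn.request('GET', '/page')                  # placeholder path
    resp = conn.getresponse()
    print resp.status, resp.reason
    body = resp.read()   # httplib decodes chunked replies here
    conn.close()
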
I am fetching different web pages (never the same one) from a web
server. Does that make a difference with them trying to block me?
Also, if it were only that site blocking me, then why does the internet
not work in other programs when this happens in the script? It is
almost like something is seeing all the traffic and blocking my whole
connection.

I have a script that uses urllib2 to repeatedly look up web pages (in a
spider sort of way). It appears to function normally, but if it runs
too long I start to get 404 responses. If I try to use the internet
through any other programs (Outlook, Firefox, etc.), it will also fail.
If I stop the script, the connection comes back after a while.
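
For reference, a loop like the one described might look roughly like
this. It is a simplified sketch, not the actual script: the URL list and
the one-second delay are stand-ins, and the explicit close() is there
because leaking sockets over a long run is one common way a machine's
other network programs start failing too:

    import time
    import urllib2

    urls = ['http://example.com/a', 'http://example.com/b']  # stand-in list

    def fetch(url):
        # Close each response explicitly so its socket is released;
        # connections left open over a long run can starve the machine.
        response = urllib2.urlopen(url)
        try:
            return response.read()
        finally:
            response.close()

    for url in urls:
        page = fetch(url)
        time.sleep(1)  # stand-in politeness delay between requests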