> I didn't try looking at your example, but I think it's likely a bug
> both in that site's HTTP server and in httplib. If it's the same one
> I saw, it's already reported, but nobody fixed it yet.
>
> http://python.org/sf/1411097
>
>
> John
Thanks. I tried the example in the link you gave, and
Hello All,
I've run into this problem on several sites where urllib2 will hang
(using all the CPU) while trying to read a page. I was able to reproduce
it for one particular site. I'm using Python 2.4:
import urllib2
url = 'http://www.wautomas.info'
request = urllib2.Request(url)
opener = urllib2.build_opener()
# the hang (all the CPU) happens while reading the response:
page = opener.open(request).read()
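One generic workaround while the bug is open: put a hard wall-clock limit
on the call, so a fetch that hangs or spins forever gets interrupted. A
Unix-only sketch using SIGALRM (the `Timeout` class and
`call_with_timeout` helper are my own names, not part of urllib2):

```python
import signal

class Timeout(Exception):
    """Raised when the watched call exceeds its time limit."""
    pass

def _raise_timeout(signum, frame):
    raise Timeout()

def call_with_timeout(func, args=(), seconds=5):
    """Run func(*args), raising Timeout if it takes longer than
    `seconds`. Unix-only: relies on signal.alarm/SIGALRM, and must
    be called from the main thread."""
    old_handler = signal.signal(signal.SIGALRM, _raise_timeout)
    signal.alarm(seconds)          # schedule the alarm
    try:
        return func(*args)
    finally:
        signal.alarm(0)            # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

You would wrap the fetch like `call_with_timeout(opener.open, (request,),
30)`. Because the spin reported here is in Python-level httplib code, the
signal handler does get a chance to run and break the loop.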
> depending on your application, a bloom filter might be a good enough:
>
> http://en.wikipedia.org/wiki/Bloom_filter
>
Thanks (everyone) for the comments. I like the idea of the bloom
filter or using an md5 hash, since a rare collision will not be a
show-stopper in my case.
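For the md5 route, here is a minimal sketch of a mapping that stores the
16-byte md5 digest of each url in place of the full key string
(`HashKeyDict` is a made-up name; it assumes, as you say, that a rare
collision is acceptable, since two colliding urls would silently share a
slot):

```python
import hashlib

class HashKeyDict(object):
    """Dict-like mapping that keeps md5 digests of string keys
    instead of the keys themselves, trading the ability to list
    the original keys for a fixed 16 bytes per key."""

    def __init__(self):
        self._d = {}

    def _h(self, key):
        # 16-byte binary digest, regardless of how long the url is
        return hashlib.md5(key.encode('utf-8')).digest()

    def __setitem__(self, key, value):
        self._d[self._h(key)] = value

    def __getitem__(self, key):
        return self._d[self._h(key)]

    def __contains__(self, key):
        return self._h(key) in self._d

    def __len__(self):
        return len(self._d)
```

A Bloom filter saves even more memory, but only answers membership
questions; since you need to retrieve a value per key, a digest-keyed
dict like this is the closer fit.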
Hello,
I am using some very large dictionaries with keys that are long strings
(urls). For a large dictionary these keys start to take up a
significant amount of memory. I do not need access to these keys -- I
only need to be able to retrieve the value associated with a certain
key, so I do not w