On 14 Nov, 18:12, Steven D'Aprano <[EMAIL PROTECTED]
cybersource.com.au> wrote:
> On Fri, 14 Nov 2008 06:35:27 -0800, konstantin wrote:
> > Hi,
>
> > I wonder if there is a safe way to download a page with urllib2. I've
> > constructed the following method to catch all possible exceptions.
>
> See here:
> http://niallohiggins.com/2008/04/05/python-and-poor-documentation-urllib2urlopen-except
I mean I don't want to catch all unexpected errors with an empty
"except:" :).
--
http://mail.python.org/mailman/listinfo/python-list
Hi,
I wonder if there is a safe way to download a page with urllib2. I've
constructed the following method to catch all possible exceptions.
def retrieve(url):
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    headers = {'User-Agent': user_agent}
    request = urllib2.Request(url, headers=headers)
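A sketch of what catching the specific exceptions might look like, rather than a bare "except:". The (data, error) return shape, the timeout value, and the try/except Python 3 import fallback are my own choices, not anything from the thread (which targets Python 2's urllib2); the key point is that HTTPError must be caught before URLError, since it is a subclass of it:

```python
import socket

try:  # Python 2, as in the original post
    import urllib2 as request_mod
    from urllib2 import HTTPError, URLError
except ImportError:  # Python 3 moved these modules around
    import urllib.request as request_mod
    from urllib.error import HTTPError, URLError

def retrieve(url, timeout=10):
    """Fetch a URL; return (data, None) on success or (None, error message)."""
    headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
    request = request_mod.Request(url, headers=headers)
    try:
        response = request_mod.urlopen(request, timeout=timeout)
        return response.read(), None
    except HTTPError as e:      # server answered with a 4xx/5xx status
        return None, 'HTTP error: %d' % e.code
    except URLError as e:       # DNS failure, connection refused, bad scheme, ...
        return None, 'URL error: %s' % e.reason
    except socket.timeout:      # read stalled past the timeout
        return None, 'timed out'

data, err = retrieve('http://no-such-host.invalid/', timeout=5)
print(err)
```

A DNS lookup for the reserved .invalid TLD fails immediately, so the call above lands in the URLError branch instead of raising out of retrieve().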