In message <[EMAIL PROTECTED]>, John Nagle
wrote:

>     For some reason, Python's parser for "robots.txt" files
> doesn't like Wikipedia's "robots.txt" file:
> 
>  >>> import robotparser
>  >>> url = 'http://wikipedia.org/robots.txt'
>  >>> chk = robotparser.RobotFileParser()
>  >>> chk.set_url(url)
>  >>> chk.read()
>  >>> testurl = 'http://wikipedia.org'
>  >>> chk.can_fetch('Mozilla', testurl)
> False
>  >>>

    >>> chk.errcode
    403

Significant? A 403 on robots.txt makes robotparser set disallow_all, so
can_fetch() then returns False for every URL. Wikipedia is apparently
refusing the request itself (it's known to block urllib's default
"Python-urllib" User-Agent), not publishing a robots.txt that forbids you.
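
If that's the cause, one workaround (just a sketch, untested against the
live site) is to fetch robots.txt yourself with a browser-like User-Agent
and hand the text to parse() instead of calling read():

    import robotparser
    import urllib2

    url = 'http://wikipedia.org/robots.txt'

    # Fetch robots.txt with a browser-like User-Agent, bypassing the
    # default "Python-urllib" string that appears to be blocked.
    req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    data = urllib2.urlopen(req).read()

    chk = robotparser.RobotFileParser()
    chk.set_url(url)
    chk.parse(data.splitlines())   # parse the text we fetched ourselves
    print chk.can_fetch('Mozilla', 'http://wikipedia.org')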

-- 
http://mail.python.org/mailman/listinfo/python-list