In article <[EMAIL PROTECTED]>,
John Nagle <[EMAIL PROTECTED]> wrote:
> I asked over at Webmaster World, and over there, they recommend against
> using redirects on robots.txt files, because they questioned whether all
> of the major search engines understand that. Does a redirect for
> "foo
Nikita the Spider wrote:
> Hi John,
> Are you sure you're not confusing your sites? The robots.txt file at
> www.ibm.com contains the double-slashed path. The robots.txt file at
> ibm.com is different and contains this, which would explain why you
> think all URLs are denied:
>
> User-agent: *
> Disallow: /
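
That diagnosis is easy to check offline: a bare "Disallow: /" matches
every path, no parser bug required. A minimal sketch, assuming Python
3's urllib.robotparser (the thread predates it and used the old
robotparser module) and a made-up test URL:

    # Sketch: the rule Nikita quotes from ibm.com denies every path,
    # which would explain all URLs being reported as disallowed.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.parse(["User-agent: *", "Disallow: /"])
    print(rp.can_fetch("*", "http://ibm.com/any/page"))  # prints False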
In article <[EMAIL PROTECTED]>,
John Nagle <[EMAIL PROTECTED]> wrote:
Python's "robots.txt" file parser may be misinterpreting a
special case. Given a robots.txt file like this:
User-agent: *
Disallow: //
Disallow: /account/registration
Disallow: /account/mypro
Disallow: /account/myint
...
the python library "robo
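
To see what a given Python's parser actually does with the double-slash
line, you can feed those rules to it directly rather than fetching them
from IBM. A minimal sketch, assuming Python 3's urllib.robotparser, the
truncated /account entries kept as quoted, and example.com standing in
for the real host:

    # Sketch: hand the quoted rules straight to the parser and probe a
    # few paths to see whether "Disallow: //" acts as a literal
    # two-slash prefix or ends up matching everything.
    from urllib import robotparser

    rules = [
        "User-agent: *",
        "Disallow: //",
        "Disallow: /account/registration",
        "Disallow: /account/mypro",
        "Disallow: /account/myint",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(rules)

    for path in ("/", "/index.html", "//x", "/account/registration"):
        print(path, rp.can_fetch("*", "http://example.com" + path))

If "/" and "/index.html" come back fetchable, the parser is treating
"//" as an ordinary prefix; if everything comes back denied, you have
reproduced the behavior John describes.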