Problem with Python's "robots.txt" file parser in module robotparser

John Nagle

Python's "robots.txt" file parser may be misinterpreting a
special case. Given a robots.txt file like this:

User-agent: *
Disallow: //
Disallow: /account/registration
Disallow: /account/mypro
Disallow: /account/myint
...

the Python library "robotparser.RobotFileParser()" considers all pages of the
site to be disallowed. Apparently "Disallow: //" is being interpreted as
"Disallow: /". Even the home page of the site is locked out. This may be incorrect.

This is the robots.txt file for "http://ibm.com".
Some IBM operating systems treat filenames beginning with "//" as a
special case (roughly, a network root), so the entry may be there to
head off a problem like that.

The spec for "robots.txt", at

http://www.robotstxt.org/wc/norobots.html

says "Disallow: The value of this field specifies a partial URL that is not to
be visited. This can be a full path, or a partial path; any URL that starts with
this value will not be retrieved." That suggests that "//" should only disallow
paths beginning with "//".
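
Read literally, that is plain prefix matching, roughly this sketch
(not the library's code):

def disallowed(path, rules):
    # Spec wording: any URL that starts with a rule's value is blocked.
    return any(path.startswith(rule) for rule in rules)

print(disallowed("//foo/bar", ["//"]))  # True: begins with "//"
print(disallowed("/", ["//"]))          # False: "/" does not begin with "//"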

John Nagle
SiteTruth
 

Nikita the Spider

John Nagle said:
Python's "robots.txt" file parser may be misinterpreting a
special case. Given a robots.txt file like this:

User-agent: *
Disallow: //
Disallow: /account/registration
Disallow: /account/mypro
Disallow: /account/myint
...

the Python library "robotparser.RobotFileParser()" considers all pages of the
site to be disallowed. Apparently "Disallow: //" is being interpreted as
"Disallow: /". Even the home page of the site is locked out. This may be
incorrect.

This is the robots.txt file for "http://ibm.com".

Hi John,
Are you sure you're not confusing your sites? The robots.txt file at
www.ibm.com contains the double-slashed path. The robots.txt file at
ibm.com is different and contains this, which would explain why you
think all URLs are denied:
User-agent: *
Disallow: /
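
A quick way to see the difference is to fetch both files and print
what comes back (a sketch using urllib2; in Python 3 use
urllib.request instead):

import urllib2

for host in ("ibm.com", "www.ibm.com"):
    f = urllib2.urlopen("http://%s/robots.txt" % host)
    # geturl() reports the URL after any redirects.
    print("%s fetched from %s" % (host, f.geturl()))
    print(f.read())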

I don't see the bug to which you're referring.
I'll use this opportunity to shamelessly plug an alternate robots.txt
parser that I wrote to address some small bugs in the parser in the
standard library.
http://NikitaTheSpider.com/python/rerp/

Cheers
 

John Nagle

Nikita said:
Hi John,
Are you sure you're not confusing your sites? The robots.txt file at
www.ibm.com contains the double slashed path. The robots.txt file at
ibm.com is different and contains this which would explain why you
think all URLs are denied:
User-agent: *
Disallow: /
Ah, that's it. The problem is that "ibm.com" redirects to
"http://www.ibm.com", but but "ibm.com/robots.txt" does not
redirect. For comparison, try "microsoft.com/robots.txt",
which does redirect.
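
geturl() makes the redirect easy to see, since it reports the final
URL after any redirects (a sketch; same hosts as above):

import urllib2

for url in ("http://ibm.com/",
            "http://ibm.com/robots.txt",
            "http://microsoft.com/robots.txt"):
    print("%s -> %s" % (url, urllib2.urlopen(url).geturl()))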

John Nagle
 

Nikita the Spider

John Nagle said:
Ah, that's it. The problem is that "ibm.com" redirects to
"http://www.ibm.com", but but "ibm.com/robots.txt" does not
redirect. For comparison, try "microsoft.com/robots.txt",
which does redirect.

Strange thing for them to do, isn't it? Especially with two such
different robots.txt files.
 

John Nagle

Nikita said:
Strange thing for them to do, isn't it? Especially with two such
different robots.txt files.

I asked over at Webmaster World, and the advice there was to avoid
redirects on robots.txt files, since it's not clear that all of the
major search engines handle them. Does a redirect for
"foo.com/robots.txt" mean that the robots.txt file applies to the domain
being redirected from, or the domain being redirected to?
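
For what it's worth, robotparser itself follows any redirect silently
when you call read(), so whichever file the redirect lands on is the
one that gets applied to the URLs you ask about (a sketch, assuming
"foo.com" behaves as described):

import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://foo.com/robots.txt")
rp.read()  # urllib follows a redirect here without reporting it
# Whatever rules came back now answer for foo.com:
print(rp.can_fetch("*", "http://foo.com/some/page"))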

John Nagle
 

Nikita the Spider

John Nagle said:
I asked over at Webmaster World, and the advice there was to avoid
redirects on robots.txt files, since it's not clear that all of the
major search engines handle them. Does a redirect for
"foo.com/robots.txt" mean that the robots.txt file applies to the domain
being redirected from, or the domain being redirected to?

Good question. I'd guess the latter, but it's a little ambiguous. I
agree that redirecting a request for robots.txt is probably not a good
idea. Given that the robots.txt standard isn't as standard as it could
be, I think it's a good idea in general to apply the KISS principle when
dealing with things robots.txt-y.
 
