uses of robots.txt


Math

Hi,

There is something I really don't understand, and I would like your
advice...

1. Some websites (for instance news.google.fr) contain a
syndication feed (like http://news.google.fr/nwshp?topic=po&output=atom).

2. These websites have a robots.txt file preventing some robots
(identified by their user-agents) from indexing them.
For example, http://news.google.fr/robots.txt contains (extract):
User-agent: *
Disallow: /nwshp

3. I've developed a syndication aggregator, and I would like to
respect these robots.txt files. But as far as I can see and understand, my
user-agent isn't authorized to access /nwshp?topic=po&output=atom
because of this robots.txt...

So, is this normal? Are robots.txt files only for indexing robots?
To sum up: should my syndication aggregator respect these files or
not?
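For what it's worth, a quick check with Python's standard-library robots.txt parser (just a sketch; the user-agent string is invented) agrees that the rule blocks the feed URL:

```python
# Check the robots.txt extract above with Python's built-in parser.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
# Feed the rules in directly rather than fetching them over the network.
parser.parse([
    "User-agent: *",
    "Disallow: /nwshp",
])

# "Disallow: /nwshp" is a path-prefix rule, so it also covers the feed URL.
print(parser.can_fetch("MyAggregator/1.0",
                       "http://news.google.fr/nwshp?topic=po&output=atom"))
# -> False: the feed URL is disallowed for every user-agent
```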

Thanks.
 

Nikita the Spider


Hi Math,
It's hard to say, but if they prefer to keep this content from being
copied to other sites, robots.txt is the way to do it. In other words,
you can't assume they just want to keep indexing bots out; they might
want to keep all bots out.

If your aggregator is only being used by you and a few friends, then
probably Google et al wouldn't care if your bot visits them once per
hour or so. But if you want this aggregator to be used by lots of
people, then I'd say you need to respect robots.txt.
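In case it helps, respecting robots.txt before each fetch can be sketched like this with Python's standard library (the user-agent name and the helper function are my own invention, not anything standard):

```python
# Sketch of a polite feed fetch: consult the host's robots.txt first.
from urllib.parse import urlsplit, urlunsplit
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser

USER_AGENT = "MyAggregator/1.0"  # placeholder name for your bot

def robots_url_for(feed_url):
    """Build the robots.txt URL for the host serving feed_url."""
    scheme, netloc, _, _, _ = urlsplit(feed_url)
    return urlunsplit((scheme, netloc, "/robots.txt", "", ""))

def fetch_feed(feed_url):
    """Return the feed body, or None if robots.txt disallows our bot."""
    parser = RobotFileParser(robots_url_for(feed_url))
    parser.read()  # a 404 for robots.txt is treated as "allow everything"
    if not parser.can_fetch(USER_AGENT, feed_url):
        return None  # the site asked bots to stay away from this path
    request = Request(feed_url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        return response.read()
```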

BTW the closest thing there is to a standard for robots.txt is here:
http://www.robotstxt.org/wc/norobots-rfc.html

When describing robots, it focuses on indexing bots. But it was written
at a time when Web robots were less varied than they are now, so the
author may not have considered your case.

Good luck
 

Newsgroups

Thanks for your answers Nikita the Spider,

If your aggregator is only being used by you and a few friends,
Currently, yes ;-( but I developed it for anybody who wants to use
it. :)
But if you want this aggregator to be used by lots of
people, then I'd say you need to respect robots.txt.
The problem is: where is the limit between "few friends" and "lots of
people"...

When describing robots, it focuses on indexing bots. But it was written
at a time when Web robots were less varied than they are now, so the
author may not have considered your case.
Yes, I agree. It's another debate, and I'm not used to reading RFCs, so what
does "Expires June 4, 1997" mean on this RFC? Does it mean that comments are
not considered after this date? If not, I could comment on this RFC. :)
 

Ken Sims


I missed the original message because it was posted from Google
Groups, but my opinion is that *all* automated software should
retrieve and respect robots.txt. I enforce it on my server by
blocking the IP addresses of bad software at the router.
 

Nikita the Spider

Newsgroups said:

The problem is: where is the limit between "few friends" and "lots of
people"...

That's where it gets tricky. =) But consider this -- if you obey
robots.txt 100% from the start, you'll always be doing the right thing
no matter how many people use your aggregator.
Yes, I agree. It's another debate, and I'm not used to reading RFCs, so what
does "Expires June 4, 1997" mean on this RFC? Does it mean that comments are
not considered after this date? If not, I could comment on this RFC. :)

That RFC was only a draft and it expired before it was approved.
However, no other RFC governing the use of robots.txt has ever been
approved or even written as far as I know, so that RFC is the closest
thing we have to an official standard.
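As an aside, obeying robots.txt doesn't mean re-downloading it for every feed you poll. A common approach (this is just a sketch, with an arbitrary expiry and a test hook of my own devising) is to cache one parser per host and refresh it periodically:

```python
# Cache one robots.txt parser per host, refreshing stale entries.
import time
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

CACHE_TTL = 24 * 60 * 60  # re-fetch robots.txt after a day (arbitrary choice)
_cache = {}  # host -> (fetched_at, parser)

def allowed(user_agent, url, fetch=None, now=None):
    """Check url against its host's robots.txt, caching one parser per host.

    `fetch`, if given, is a callable returning the robots.txt lines for a
    URL (handy for testing); otherwise the file is read over the network.
    """
    now = time.time() if now is None else now
    parts = urlsplit(url)
    robots_url = "%s://%s/robots.txt" % (parts.scheme, parts.netloc)
    entry = _cache.get(parts.netloc)
    if entry is None or now - entry[0] > CACHE_TTL:
        parser = RobotFileParser(robots_url)
        if fetch is None:
            parser.read()  # fetch robots.txt from the site itself
        else:
            parser.parse(fetch(robots_url))
        entry = (now, parser)
        _cache[parts.netloc] = entry
    return entry[1].can_fetch(user_agent, url)
```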
 

Newsgroups


I agree; but if I obey robots.txt, my aggregator won't aggregate lots
of RSS feeds. Who wants to use an aggregator which does not aggregate? :)

For information: there are currently about 70 users using my
aggregator... It's difficult for me to recruit :) But I really want
to be 100% conformant with rules and standards...

Thanks for your help and opinion.
 
