Having problems with urlparser concatenation

I

i80and

I'm working on a basic web spider, and I'm having problems with the
urlparser.
This is the affected function:
------------------------------
def FindLinks(Website):
    WebsiteLen = len(Website)+1
    CurrentLink = ''
    i = 0
    SpliceStart = 0
    SpliceEnd = 0

    LinksString = ""
    LinkQueue = open('C:/LinkQueue.txt', 'a')

    while (i < WebsiteLen) and (i != -1):

        #Debugging info
        #print '-----'
        #print 'Length = ' + str(WebsiteLen)
        #print 'SpliceStart = ' + str(SpliceStart)
        #print 'SpliceEnd = ' + str(SpliceEnd)
        #print 'i = ' + str(i)

        SpliceStart = Website.find('<a href="', (i+1))
        SpliceEnd = (Website.find('">', SpliceStart))

        ParsedURL = urlparse((Website[SpliceStart+9:(SpliceEnd+1)]))
        robotparser.set_url(ParsedURL.hostname + '/' + 'robots.txt')
        robotparser.read()
        if (robotparser.can_fetch("*", (Website[SpliceStart+9:(SpliceEnd+1)])) == False):
            i = i - 1
        else:
            LinksString = LinksString + "\n" + (Website[SpliceStart+9:(SpliceEnd+1)])
            LinksString = LinksString[:(len(LinksString) - 1)]
            #print 'found ' + LinksString
            i = SpliceEnd

    LinkQueue.write(LinksString)
    LinkQueue.close()
------------------------------
Sorry if it's uncommented. When I run my program, I get this error:
-----
Traceback (most recent call last):
  File "C:/Documents and Settings/Andrew/Desktop/ScoutCode-0.09.py", line 120, in <module>
    FindLinks(Website)
  File "C:/Documents and Settings/Andrew/Desktop/ScoutCode-0.09.py", line 84, in FindLinks
    robotparser.read()
  File "C:\Program Files\Python25\lib\robotparser.py", line 61, in read
    f = opener.open(self.url)
  File "C:\Program Files\Python25\lib\urllib.py", line 190, in open
    return getattr(self, name)(url)
  File "C:\Program Files\Python25\lib\urllib.py", line 451, in open_file
    return self.open_local_file(url)
  File "C:\Program Files\Python25\lib\urllib.py", line 465, in open_local_file
    raise IOError(e.errno, e.strerror, e.filename)
IOError: [Errno 2] The system cannot find the path specified: 'en.wikipedia.org\\robots.txt'

Note the last line 'en.wikipedia.org\\robots.txt'. I want
'en.wikipedia.org/robots.txt'! What am I doing wrong?

If this has been answered before, please just give me a link to the
proper thread. If you need more contextual code, I can post more.
 
M

Marc 'BlackJack' Rintsch

    return self.open_local_file(url)
  File "C:\Program Files\Python25\lib\urllib.py", line 465, in open_local_file
    raise IOError(e.errno, e.strerror, e.filename)
IOError: [Errno 2] The system cannot find the path specified: 'en.wikipedia.org\\robots.txt'

Note the last line 'en.wikipedia.org\\robots.txt'. I want
'en.wikipedia.org/robots.txt'! What am I doing wrong?

You don't have that file on your local computer. :)

If you look at the messages above you'll see there's a function
`open_local_file()` involved. This function is chosen by `urllib` because
your path looks like a local file, i.e. it lacks the protocol information.
You don't want 'en.wikipedia.org/robots.txt', you want
'http://en.wikipedia.org/robots.txt'!
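
You can see the same distinction by parsing both forms. A minimal sketch in today's module layout (Python 3's urllib.parse; in 2.5 this lived in the urlparse module):

```python
from urllib.parse import urlparse

# Without a scheme, everything lands in `path` and netloc is empty,
# so urllib's opener machinery falls back to treating it as a local file.
bare = urlparse('en.wikipedia.org/robots.txt')
print(bare.netloc, repr(bare.path))  # '' 'en.wikipedia.org/robots.txt'

# With the scheme, the host is recognized as the network location.
full = urlparse('http://en.wikipedia.org/robots.txt')
print(full.netloc, full.path)        # en.wikipedia.org /robots.txt
```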

Ciao,
Marc 'BlackJack' Rintsch
 
G

Gabriel Genellina

I'm working on a basic web spider, and I'm having problems with the
urlparser.
[...]
SpliceStart = Website.find('<a href="', (i+1))
SpliceEnd = (Website.find('">', SpliceStart))

ParsedURL = urlparse((Website[SpliceStart+9:(SpliceEnd+1)]))
robotparser.set_url(ParsedURL.hostname + '/' + 'robots.txt')
-----
Traceback (most recent call last):
[...]
IOError: [Errno 2] The system cannot find the path specified:
'en.wikipedia.org\\robots.txt'

Note the last line 'en.wikipedia.org\\robots.txt'. I want
'en.wikipedia.org/robots.txt'! What am I doing wrong?

No, you don't want 'en.wikipedia.org/robots.txt'; you want
'http://en.wikipedia.org/robots.txt'.
urllib treats the former as a file: request, hence the \\ in the
normalized path.
You are parsing the link and then building a new URI using ONLY the
hostname part; that's wrong. Use urljoin() on the full link, e.g.
urljoin(link, '/robots.txt'), instead.
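
A sketch of that fix, written against today's Python 3 module layout (urllib.parse and urllib.robotparser; in 2.5 these were the urlparse and robotparser modules), using a made-up example link:

```python
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

link = 'http://en.wikipedia.org/wiki/Main_Page'  # a link found in the page

# urljoin keeps the scheme and host from the link, so the result is a
# complete URL rather than a bare hostname-plus-path string.
robots_url = urljoin(link, '/robots.txt')
print(robots_url)  # http://en.wikipedia.org/robots.txt

rp = RobotFileParser()
rp.set_url(robots_url)  # a full URL, so urllib won't treat it as a local file
# rp.read()                       # fetches robots.txt over the network
# allowed = rp.can_fetch('*', link)
```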

You might try Beautiful Soup for better HTML parsing.
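
For reference, even the standard library's HTML parser extracts href values more robustly than string splicing (Beautiful Soup wraps this kind of parsing in a friendlier API). A minimal Python 3 sketch with made-up sample markup:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attribute values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

extractor = LinkExtractor()
extractor.feed('<p><a href="/wiki/Python">Python</a> and '
               '<a href="/wiki/Web_crawler">crawlers</a></p>')
print(extractor.links)  # ['/wiki/Python', '/wiki/Web_crawler']
```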

--
Gabriel Genellina
Softlab SRL

 
I

i80and

Thank you! Fixed my problem perfectly!
 
