Can not get urllib.urlopen to work


Pater Maximus

I am trying to implement the recipe listed at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/211886

However, I can not get to first base. When I try to run

import urllib
fo=urllib.urlopen("http://www.dictionary.com/")
page = fo.read()

I get:

Traceback (most recent call last):
File "C:/Program Files/Python/Lib/idlelib/testurl", line 2, in -toplevel-
fo=urllib.urlopen("http://www.dictionary.com/")
File "C:\PROGRA~1\PYTHON\lib\urllib.py", line 76, in urlopen
return opener.open(url)
File "C:\PROGRA~1\PYTHON\lib\urllib.py", line 181, in open
return getattr(self, name)(url)
File "C:\PROGRA~1\PYTHON\lib\urllib.py", line 297, in open_http
h.endheaders()
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 712, in endheaders
self._send_output()
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 597, in _send_output
self.send(msg)
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 564, in send
self.connect()
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 548, in connect
raise socket.error, msg
IOError: [Errno socket error] (10061, 'Connection refused')
 

Pater Maximus

More information:

I am running Python 2.3.4, Windows 2000 Pro, along with Norton Internet
Security Professional. I do not get any messages from Norton.
 

Peter Hansen

Pater said:
I am trying to implement the recipe listed at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/211886

However, I can not get to first base. When I try to run

import urllib
fo=urllib.urlopen("http://www.dictionary.com/")
page = fo.read()

I get:
Traceback (most recent call last):
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 548, in connect
raise socket.error, msg
IOError: [Errno socket error] (10061, 'Connection refused')

Do you have any reason to think that this site is currently
hosting a web server?

Attempting to connect with IE or Telnet on port 80 results in
a similar response.

(Telnet is always a good thing to try, before posting about
server problems...)
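Peter's telnet test can also be scripted. Here is a minimal sketch in modern Python (the thread is using Python 2.3, where `socket.create_connection` did not yet exist and you would call `connect()` on a socket object instead); it checks exactly what `telnet host 80` checks, a bare TCP connection:

```python
import socket

def can_connect(host, port=80, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    Roughly what 'telnet host 80' tests, minus the interactive session.
    A refused connection (like errno 10061 above) makes this return False.
    """
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False
```

If this returns False for port 80 on the target host, the problem is below HTTP entirely (server down, or a firewall such as the one that turns out to be the culprit later in this thread), and no amount of fiddling with urllib will help.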

-Peter
 

Andrew Dalke

Pater said:
I am trying to implement the recipe listed at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/211886

However, I can not get to first base. When I try to run

import urllib
fo=urllib.urlopen("http://www.dictionary.com/")
page = fo.read()

I can't even connect to it with my web browser. Can you?
If you can, they are probably checking the user-agent sent
by urllib, to make it harder to do this sort of automated
screen scraping.

See the docs at
http://www.python.org/doc/current/lib/module-urllib.html

for an example of how to change the default user-agent.

Here's one for MSIE under Win2K

Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
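Andrew's suggestion, sketched in modern Python 3 (`urllib.request`): pass the browser-like User-Agent in a headers dict when building the request. In the thread's Python 2.3 `urllib` you would instead subclass `urllib.FancyURLopener` and set its `version` attribute, as the linked docs describe.

```python
import urllib.request

# The MSIE/Win2K user-agent string quoted above; any browser-like
# string serves the same purpose.
UA = "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)"

req = urllib.request.Request(
    "http://www.dictionary.com/",
    headers={"User-Agent": UA},
)
# urllib.request.urlopen(req) would now send the spoofed User-Agent
# instead of the default "Python-urllib/x.y".
```

Sites that block on user-agent alone will treat this request as coming from a browser.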

Andrew
(e-mail address removed)
 

Sean Berry

I am trying to implement the recipe listed at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/211886

However, I can not get to first base. When I try to run

import urllib
fo=urllib.urlopen("http://www.dictionary.com/")
page = fo.read()

I get:

Traceback (most recent call last):
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 548, in connect
raise socket.error, msg
IOError: [Errno socket error] (10061, 'Connection refused')

Connection refused is the key. Apparently dictionary.com figured out that
people were trying to use their resources without giving them credit. I
have done some research on how to accomplish this so that people cannot use
the cgi-bin programs (and others) that I write.

A few years back I needed to get information about the weather based on zip
code. I used to use weather.com but they fixed the hole. They made any
port 80 request forward to another address, which in turn forwarded you to
the original request. So if you went to www.weather.com, it would forward
you to www2.weather.com, then back to www.weather.com. The index page at
www.weather.com would then check the referrer to see if it came from
www2.weather.com. If the referrer was correct, then you got the page
content. If it was wrong, there was no page.

Similarly, www2.weather.com would check to see that the referrer was
www.weather.com... so there was no way around it.
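The check Sean describes can be sketched as a tiny server-side handler. This is a hypothetical illustration of the scheme, not weather.com's actual code; `serve_index` and its return values are made up for the sketch:

```python
def serve_index(headers):
    """Hypothetical sketch of www.weather.com's index-page check.

    Serve the page only when the request says it was referred by
    www2.weather.com; otherwise redirect the visitor there first.
    Returns an (HTTP status, body-or-header) pair.
    """
    referer = headers.get("Referer", "")
    if "www2.weather.com" in referer:
        return 200, "page content"
    # No (or wrong) referrer: bounce through the partner host.
    return 302, "Location: http://www2.weather.com/"
```

A plain `urllib.urlopen` call arrives with no Referer header at all, so it gets bounced forever, while a browser following the redirect chain picks up the right referrer along the way.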

I use another method to protect my programs... but it still does the same
thing ultimately... stops people from using my programs.

Like it has been mentioned, a good starting place is Telnet. I tried
telnetting when I first read this post and got a connection refused, but now
I can get through - weird. I also tried a source dump from lynx... which 10
minutes ago did not work, but now it does.

# telnet www.dictionary.com 80

# lynx -source -preparse -dump http://www.dictionary.com

These both work to show the source of the index page. And the python recipe
even works now.... go figure.
 

Sean Berry

By the way, this does not work as well as it could.

It only lists the definitions if they are presented in listed <LI> tags.

So for the word greedy... it works. But, for the word greed, it does not.

I am sure some better parsing of the pages could be done for those
definitions not listed in <LI> tags.

Maybe I will play around with it and see what I can come up with.
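The <LI>-only limitation Sean mentions comes down to how the page is parsed. A sketch of that kind of extraction with Python 3's `html.parser` (the Python 2.3 equivalent was the `HTMLParser` module; the recipe itself may parse differently):

```python
from html.parser import HTMLParser

class LIExtractor(HTMLParser):
    """Collect the text content of each <li> element."""

    def __init__(self):
        super().__init__()
        self.items = []
        self._in_li = False

    def handle_starttag(self, tag, attrs):
        if tag == "li":           # tags arrive lowercased, so <LI> matches too
            self._in_li = True
            self.items.append("")

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_li = False

    def handle_data(self, data):
        if self._in_li:
            self.items[-1] += data

parser = LIExtractor()
parser.feed("<ol><li>marked by greed</li><li>eager for more</li></ol>")
```

Definitions rendered as plain paragraphs rather than list items never trigger `handle_starttag("li", ...)`, which is exactly why "greed" falls through the cracks while "greedy" works.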
 

Erik Johnson

Sean said:
A few years back I needed to get information about the weather based on zip
code. I used to use weather.com but they fixed the hole. They made any
port 80 request forward to another address, which in turn forwarded you to
the original request. So if you went to www.weather.com, it would forward
you to www2.weather.com, then back to www.weather.com. The index page at
www.weather.com would then check the referrer to see if it came from
www2.weather.com. If the referrer was correct, then you got the page
content. If it was wrong, there was no page.

Similarly, www2.weather.com would check to see that the referrer was
www.weather.com... so there was no way around it.

Au contraire, monsieur... there is no reason I can't set the referrer
header (using httplib).
I use another method to protect my programs... but it still does the same
thing ultimately... stops people from using my programs.

It is a level of discouragement, and may be sufficient to stop the
simplest of abuses, but the reality is that, given enough work, one can have
their Python program recreate any HTTP transaction or series of transactions
needed. If you want to redirect me to a dynamically generated URL, fine...
I'll read the Location header and go there, if you want to set a cookie
there, fine, I'll read the Set-Cookie header, set it, then return it to you,
just as a browser does. If you want to check the User-Agent, I can send
headers that look exactly like what MSIE or any other browser would send.
You can make it difficult, no doubt, and doing so may be a wise thing to do,
but it is wrong to say that just because A sends you to B, and B refers you
back to A, there is no way around it.
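Erik's point, made concrete: the Referer header is just another line of text the client sends, so nothing stops a script from writing it itself. A sketch with a hypothetical helper that assembles the raw HTTP/1.0 request (the same lines you would type into a telnet session; with httplib you would use `putrequest`/`putheader` instead):

```python
def build_request(host, path="/", referer=None, user_agent=None):
    """Assemble a raw HTTP/1.0 GET request with optional spoofed headers."""
    lines = ["GET %s HTTP/1.0" % path, "Host: %s" % host]
    if referer:
        lines.append("Referer: %s" % referer)
    if user_agent:
        lines.append("User-Agent: %s" % user_agent)
    # A blank line terminates the header block.
    return "\r\n".join(lines) + "\r\n\r\n"

# Claim to have arrived via the redirect partner, defeating the
# referrer dance described above.
raw = build_request("www.weather.com", referer="http://www2.weather.com/")
```

Sent over a plain socket, this request is indistinguishable from one a browser produced after genuinely following the www2 redirect.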

I am currently doing exactly this sort of thing, but not to abuse
others' work. We pay for a service where data is published on a web site. We
need to get at this data several times an hour, 24x7. There is a login page,
and one or more HTML forms that must be filled out and submitted, and
cookies set and checked to get to the pages that contain the data we need.
It was not designed to be machine read, but that doesn't mean reading it in
an automated manner is "wrong" or abusive - we're paying for that data, they
just happen to present it in a format that's not all that conducive to
automated machine parsing. Fine, I'll do the extra work to get at it in that
manner.

I suppose you could meter the number of requests that a particular IP is
making or something on the server side, and there isn't much that can be
done on the requester's end. With access to several machines, though, even
this hurdle could be cleared.

So, anyway, if the OP wants to get more sophisticated about automated
surfing, check out the httplib module:
http://docs.python.org/lib/module-httplib.html

FYI... www.dictionary.com loads (for me, now) through a simple telnet
request:

ej@sand:~/src/python> telnet www.dictionary.com 80
Trying 66.161.12.81...
Connected to www.dictionary.com.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Wed, 27 Oct 2004 22:57:49 GMT
Server: Apache
Cache-Control: must-revalidate
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Last-Modified: Wed, 27 Oct 2004 22:58:07 GMT
Pragma: no-cache
Connection: close
Content-Type: text/html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/1999/REC-html401-19991224/loose.dtd">
<html>

<snip!>

HTH,
-ej
 

Steve Holden

Pater said:
I am trying to implement the recipe listed at
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/211886

However, I can not get to first base. When I try to run

import urllib
fo=urllib.urlopen("http://www.dictionary.com/")
page = fo.read()

I get:

Traceback (most recent call last):
File "C:\PROGRA~1\PYTHON\lib\httplib.py", line 548, in connect
raise socket.error, msg
IOError: [Errno socket error] (10061, 'Connection refused')

Suspect the action of a firewall or just assume you were unlucky and the
server was down when you hit it. I had no problem just now:

$ python
Python 2.3.4 (#1, Jun 13 2004, 11:21:03)
[GCC 3.3.1 (cygming special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
regards
Steve
 

Pater Maximus

Problem solved!!

First, thanks to all who responded.

It turns out the villain was "Norton Personal Firewall". Since it did not
warn me that it was blocking an attempt to reach the internet, I assumed it
was not the problem.

Bad assumption.

Everything worked when I configured Norton to allow pythonw.exe to contact
the internet.
 

Peter Hansen

Pater said:
Problem solved!!

First, thanks to all who responded.

It turns out the villain was "Norton Personal Firewall". Since it did not
warn me that it was blocking an attempt to reach the internet, I assumed it
was not the problem.

Bad assumption.

Everything worked when I configured Norton to allow pythonw.exe to contact
the internet.

While that might have been *another* of your problems,
there definitely was a problem connecting to that host
at the time you posted.

The most difficult problems to troubleshoot are those that
involve more than one cause. I can see how this one would
have been a real bitch to find. :)

-Peter
 
