begin to parse a web page not entirely downloaded

K

k0mp

Hi,

Is there a way to retrieve a web page and, before it is entirely
downloaded, begin testing whether a specific string is present, and if
so stop the download?
I believe that urllib.urlopen(url) will retrieve the whole page before
the program goes on to the next statement. I suppose I could do what I
want by using the socket module, but I'm sure there's a simpler way to
do it.
 
L

Leif K-Brooks

k0mp said:
Is there a way to retrieve a web page and, before it is entirely
downloaded, begin testing whether a specific string is present, and if
so stop the download?
I believe that urllib.urlopen(url) will retrieve the whole page before
the program goes on to the next statement.

Use urllib.urlopen(), but call .read() with a smallish argument, e.g.:
'<html><head> ...

foo.read(512) will return as soon as 512 bytes have been received. You
can keep calling it until it returns an empty string, indicating that
there's no more data to be read.
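
For what it's worth, here is a minimal sketch of that approach: read the
page in small chunks and stop as soon as the string shows up. The URL,
chunk size, and function name are just placeholders, and note that this
simple version can miss a match that straddles a chunk boundary (a point
raised later in the thread):

import urllib2

def contains_string(url, needle, chunksize=512):
    # Read the response chunk by chunk instead of all at once.
    f = urllib2.urlopen(url)
    try:
        while True:
            chunk = f.read(chunksize)
            if not chunk:        # empty string: the page is exhausted
                return False
            if needle in chunk:  # found it; stop downloading the rest
                return True
    finally:
        f.close()                # closing the response drops the connection

print contains_string('http://google.com', '<html>')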
 
K

k0mp

Use urllib.urlopen(), but call .read() with a smallish argument, e.g.:

'<html><head> ...

foo.read(512) will return as soon as 512 bytes have been received. You
can keep calling it until it returns an empty string, indicating that
there's no more data to be read.

Thanks for your answer :)

I'm not sure that read() works as you say.
Here is a test I've done :

import urllib2
import re
import time

CHUNKSIZE = 1024

print 'f.read(CHUNK)'
print time.clock()

for i in range(30) :
    f = urllib2.urlopen('http://google.com')
    while True: # read the page using a loop
        chunk = f.read(CHUNKSIZE)
        if not chunk: break
        m = re.search('<html>', chunk )
        if m != None :
            break

print time.clock()

print

print 'f.read()'
print time.clock()
for i in range(30) :
    f = urllib2.urlopen('http://google.com')
    m = re.search('<html>', f.read() )
    if m != None :
        break

print time.clock()


It prints that :
f.read(CHUNK)
0.1
0.31

f.read()
0.31
0.32


It seems to take more time when I use read(size) than just read().
I think in both cases urllib.urlopen retrieves the whole page.
 
L

Leif K-Brooks

k0mp said:
It seems to take more time when I use read(size) than just read().
I think in both cases urllib.urlopen retrieves the whole page.

Google's home page is very small, so it's not really a great test of
that. Here's a test downloading the first 512 bytes of an Ubuntu ISO
(beware of wrap):

$ python -m timeit -n1 -r1 "import urllib"
"urllib.urlopen('http://ubuntu.cs.utah.edu/releases/6.06/ubuntu-6.06.1-desktop-i386.iso').read(512)"
1 loops, best of 1: 596 msec per loop
 
B

Björn Steinbrink

Thanks for your answer :)

I'm not sure that read() works as you say.
Here is a test I've done :

import urllib2
import re
import time

CHUNKSIZE = 1024

print 'f.read(CHUNK)'
print time.clock()

for i in range(30) :
    f = urllib2.urlopen('http://google.com')
    while True: # read the page using a loop
        chunk = f.read(CHUNKSIZE)
        if not chunk: break
        m = re.search('<html>', chunk )
        if m != None :
            break

print time.clock()

print

print 'f.read()'
print time.clock()
for i in range(30) :
    f = urllib2.urlopen('http://google.com')
    m = re.search('<html>', f.read() )
    if m != None :
        break

A fair comparison would use "pass" here, or a while loop as in the
other case. As it is, it compares 30 rounds of read(CHUNKSIZE)
against a single read().

Björn
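
To make Björn's point concrete, the second timing loop could be rewritten
along these lines (a sketch only, mirroring the original test): time 30
full read() calls, just as the first loop times 30 chunked reads, instead
of breaking out of the outer loop after the first page.

print 'f.read()'
print time.clock()
for i in range(30) :
    f = urllib2.urlopen('http://google.com')
    m = re.search('<html>', f.read() )
    if m != None :
        pass    # 'break' here would end the outer loop after one page
print time.clock()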
 
K

k0mp

A fair comparison would use "pass" here, or a while loop as in the
other case. As it is, it compares 30 rounds of read(CHUNKSIZE)
against a single read().

Björn

That's right, my test was flawed. I've replaced http://google.com with
http://aol.com, and the 'break' in the second loop with 'continue'
(because when the string is found I don't want the rest of the page to
be parsed).

I obtain this :
f.read(CHUNK)
0.1
0.17

f.read()
0.17
0.23


f.read() is still faster than f.read(CHUNK)
 
K

k0mp

Google's home page is very small, so it's not really a great test of
that. Here's a test downloading the first 512 bytes of an Ubuntu ISO
(beware of wrap):

$ python -m timeit -n1 -r1 "import urllib"
"urllib.urlopen('http://ubuntu.cs.utah.edu/releases/6.06/ubuntu-6.06.1-desktop-i386.is...)"
1 loops, best of 1: 596 msec per loop

OK, you've convinced me. The fact that I didn't get better results in
my test with read(512) must be because most of the time is spent
waiting for the server's response, not transferring data over the
network.
 
M

MRAB

Thanks for your answer :)

I'm not sure that read() works as you say.
Here is a test I've done :

import urllib2
import re
import time

CHUNKSIZE = 1024

print 'f.read(CHUNK)'
print time.clock()

for i in range(30) :
    f = urllib2.urlopen('http://google.com')
    while True: # read the page using a loop
        chunk = f.read(CHUNKSIZE)
        if not chunk: break
        m = re.search('<html>', chunk )
        if m != None :
            break
[snip]
I'd just like to point out that the above code assumes that the
'<html>' is entirely within one chunk; it could in fact be split
across chunks.
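
A minimal way to guard against that, sticking with the chunked approach
from earlier in the thread (the URL, chunk size, and function name are
placeholders), is to keep the tail of the previous chunk and search
across the seam:

import urllib2

def contains_string(url, needle, chunksize=1024):
    # Keep the last len(needle) - 1 bytes of the previous chunk so that a
    # match straddling a chunk boundary is still found.
    f = urllib2.urlopen(url)
    tail = ''
    try:
        while True:
            chunk = f.read(chunksize)
            if not chunk:
                return False
            if needle in tail + chunk:
                return True
            tail = chunk[-(len(needle) - 1):]
    finally:
        f.close()

print contains_string('http://aol.com', '<html>')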
 
