Fetching a clean copy of a changing web page

John Nagle

I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle
 
Diez B. Roggisch

John said:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/" This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

Making them fix the obvious bug they have would be the best option, of course.

Apart from that - the only thing you could try is to apply a SAX parser
to the input stream immediately, so that if the XML is invalid because of
the way they serve it, you at least find out ASAP. But it will only
shave off a few moments.
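
A minimal sketch of that approach, assuming the standard library's xml.sax
and urllib (the empty ContentHandler does nothing; it just forces the stream
through a strict parser):

import urllib
import xml.sax

url = "http://data.phishtank.com/data/online-valid/"

try:
    # Stream the response straight through a strict SAX parser; a truncated
    # or interleaved copy will usually fail to be well-formed and raise here.
    xml.sax.parse(urllib.urlopen(url), xml.sax.ContentHandler())
except xml.sax.SAXParseException:
    pass  # bad copy -- refetch or report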

Diez
 
Miles

John said:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/". This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

John Nagle

Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.
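
On the publishing side, a hypothetical update script could make that swap
atomic along these lines (filenames and the helper name are illustrative;
os.rename is atomic on POSIX when source and target are on the same
filesystem):

import os

def publish(new_xml, target="online-valid.xml"):
    # Write the fresh feed to a temporary file first...
    tmp = target + ".tmp"
    f = open(tmp, "wb")
    try:
        f.write(new_xml)
    finally:
        f.close()
    # ...then rename it over the old file, so readers only ever see a
    # complete old copy or a complete new copy, never a mixture.
    os.rename(tmp, target)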

-Miles
 
Amit Khemka

John said:
I'm reading the PhishTank XML file of active phishing sites,
at "http://data.phishtank.com/data/online-valid/". This changes
frequently, and it's big (about 10MB right now) and on a busy server.
So once in a while I get a bogus copy of the file because the file
was rewritten while being sent by the server.

Any good way to deal with this, short of reading it twice
and comparing?

If you have:
1. A ballpark estimate of the size of the XML
2. Some footers or "last tags" in the XML

Maybe you can use the above to check the XML and catch the "bogus" ones!
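
A rough sketch of such a check (the size threshold is a guess, and the
closing tag is the one mentioned earlier in the thread; both are assumptions
about the feed rather than documented facts):

def looks_complete(data, min_size=5 * 1024 * 1024, last_tag="</output>"):
    # Cheap plausibility test: roughly the expected size, and the
    # document actually ends with the expected closing tag.
    return len(data) >= min_size and data.rstrip().endswith(last_tag)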

cheers,

--
----
Amit Khemka
website: www.onyomo.com
wap-site: www.owap.in
Home Page: www.cse.iitd.ernet.in/~csd00377

Endless the world's turn, endless the sun's Spinning, Endless the quest;
I turn again, back to my own beginning, And here, find rest.
 
Stefan Behnel

Diez said:
Apart from that - the only thing you could try is to apply a SAX parser
to the input stream immediately, so that if the XML is invalid because of
the way they serve it, you at least find out ASAP.

Sure, if you want to use lxml.etree, you can pass the URL right into
etree.parse() and it will throw an exception if parsing from the URL fails to
yield a well-formed document.

http://codespeak.net/lxml/
http://codespeak.net/lxml/dev/parsing.html

BTW, parsing and serialising it back to a string is most likely dominated by
the time it takes to transfer the document over the network, so it will not be
much slower than reading it using urlopen() and the like.
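
For instance, a minimal sketch (assuming lxml is installed; XMLSyntaxError
is the exception lxml raises for malformed documents):

from lxml import etree

url = "http://data.phishtank.com/data/online-valid/"

try:
    tree = etree.parse(url)  # lxml fetches the URL and parses as it reads
except etree.XMLSyntaxError:
    tree = None  # truncated or otherwise malformed copy -- refetch later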

Stefan
 
John Nagle

Miles said:
Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.

-Miles

Yes, they're updating it non-atomically.

I'm now reading it twice and comparing, which works.
Actually, it's read up to 5 times, until the same contents
appear twice in a row. Two tries usually work, but if the
server is updating, it may require more.

Ugly, and doubles the load on the server, but necessary to
get a consistent copy of the data.
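
A sketch of that retry loop (the limit of five attempts matches the
description above; the function name is illustrative):

import urllib

def fetch_stable(url, max_tries=5):
    # Fetch repeatedly until the same contents come back twice in a row.
    previous = None
    for attempt in range(max_tries):
        u = urllib.urlopen(url)
        try:
            current = u.read()
        finally:
            u.close()
        if current == previous:
            return current  # two identical copies in a row
        previous = current
    raise IOError("no two consecutive fetches of %s matched" % url)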

John Nagle
 
John Nagle

Miles said:
Sounds like that's the host's problem--they should be using atomic
writes, which is usually done by renaming the new file on top of the
old one. How "bogus" are the bad files? If it's just incomplete,
then since it's XML, it'll be missing the "</output>" and you should
get a parse error if you're using a suitable strict parser. If it's
mixed old data and new data, but still manages to be well-formed XML,
then yes, you'll probably have to read it twice.

The files don't change much from update to update; typically they
contain about 10,000 entries, and about 5-10 change every hour. So
the odds of getting a seemingly valid XML file with incorrect data
are reasonably good.

John Nagle
 
Steve Holden

John said:
The files don't change much from update to update; typically they
contain about 10,000 entries, and about 5-10 change every hour. So
the odds of getting a seemingly valid XML file with incorrect data
are reasonably good.

I'm still left wondering what the hell kind of server process will start
serving one copy of a file and complete the request from another. Oh, well.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
--------------- Asciimercial ------------------
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
----------- Thank You for Reading -------------
 
Carsten Haese

John said:
The files don't change much from update to update; typically they
contain about 10,000 entries, and about 5-10 change every hour. So
the odds of getting a seemingly valid XML file with incorrect data
are reasonably good.

Does the server return a reliable last-modified timestamp? If yes, you
can do something like this:

import urllib

prev_last_mod = None
while True:
    u = urllib.urlopen(theUrl)
    if prev_last_mod == u.headers['last-modified']:
        u.close()
        break
    prev_last_mod = u.headers['last-modified']
    contents = u.read()
    u.close()

That way, you only have to re-read the file if it actually changed
according to the time stamp, rather than having to re-read in any case
just to check whether it changed.

HTH,
 
star.public

Stefan said:
Sure, if you want to use lxml.etree, you can pass the URL right into
etree.parse() and it will throw an exception if parsing from the URL fails to
yield a well-formed document.

http://codespeak.net/lxml/
http://codespeak.net/lxml/dev/parsing.html

BTW, parsing and serialising it back to a string is most likely dominated by
the time it takes to transfer the document over the network, so it will not be
much slower than reading it using urlopen() and the like.

Stefan

xml.etree.ElementTree is in the standard lib now, too. Also,
xml.etree.cElementTree, which has the same interface but is blindingly
fast. (I'm working on a program which needs to read/recreate the
(badly designed, horrible, evil) iTunes Library XML, of which mine is
about 10 MB, and cElementTree parses it in under a second using 60 MB of RAM,
whereas minidom takes something like two minutes and 600+ MB to do the same
thing.)

(I mean really -- the playlists are stored as five megs of lists with
elements that are dictionaries of one element, all looking exactly
like this: <dict>\n<key>Track ID</key><integer>4521</integer>\n</dict>
\n --- </rant>)
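
For what it's worth, the usual import-with-fallback pattern for that, as a
small sketch (the filename is just an example):

try:
    import xml.etree.cElementTree as ET  # C accelerator, same API as ElementTree
except ImportError:
    import xml.etree.ElementTree as ET

tree = ET.parse("iTunes Music Library.xml")  # example filename
root = tree.getroot()
keys = root.findall(".//key")  # every <key> element anywhere in the tree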
 
