jdvolz
I have a script that uses urllib2 to repeatedly look up web pages (in a
spider sort of way). It appears to function normally, but if it runs
too long I start to get 404 responses. If I try to use the internet
through any other program (Outlook, Firefox, etc.), it also fails.
If I stop the script, internet access returns.
Has anyone observed this behavior before? I am relatively new to
Python and would appreciate any suggestions.
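One common cause of symptoms like this is a spider that never closes its responses, so sockets pile up until the machine (or a home router's NAT table) runs out and everything on the network starts failing. A minimal sketch of explicit cleanup, assuming urllib2-style usage (the `fetch` helper and its `opener`/`delay` parameters are hypothetical, for illustration only):

```python
try:
    from urllib2 import urlopen  # Python 2, as in the original script
except ImportError:
    from urllib.request import urlopen  # Python 3 equivalent

import contextlib
import time

def fetch(url, opener=urlopen, delay=1.0):
    """Fetch a page, always closing the connection, and pause briefly
    between requests so the spider does not flood the local network."""
    # contextlib.closing guarantees response.close() runs even on error,
    # releasing the socket immediately instead of waiting for GC.
    with contextlib.closing(opener(url)) as response:
        data = response.read()
    time.sleep(delay)  # illustrative politeness delay; tune as needed
    return data
```

The `opener` parameter also makes the helper easy to test with a stub instead of a live connection.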
Shuad