using urllib on a more complex site

Discussion in 'Python' started by Adam W., Feb 25, 2013.

  1. Adam W.

    Adam W. Guest

    I'm trying to write a simple script to scrape http://www.vudu.com/movies/#tag/99centOfTheDay/99c Rental of the day

    in order to send myself an email every day of the 99c movie of the day.

    However, using a simple command like (in Python 3.0):
    urllib.request.urlopen('http://www.vudu.com/movies/#tag/99centOfTheDay/99c%20Rental%20of%20the%20day').read()

    I don't get all the source I need; it's just the navigation buttons. Now I assume they are using some CSS/javascript witchcraft to load all the useful data later, so my question is: how do I make urllib "wait" and grab that data as well?
     
    Adam W., Feb 25, 2013
    #1

  2. Dave Angel

    Dave Angel Guest

    On 02/24/2013 07:02 PM, Adam W. wrote:
    > I'm trying to write a simple script to scrape http://www.vudu.com/movies/#tag/99centOfTheDay/99c Rental of the day
    >
    > in order to send myself an email every day of the 99c movie of the day.
    >
    > However, using a simple command like (in Python 3.0):
    > urllib.request.urlopen('http://www.vudu.com/movies/#tag/99centOfTheDay/99c%20Rental%20of%20the%20day').read()
    >
    > I don't get all the source I need; it's just the navigation buttons. Now I assume they are using some CSS/javascript witchcraft to load all the useful data later, so my question is: how do I make urllib "wait" and grab that data as well?
    >


    The CSS and the jpegs, and many other aspects of a web "page" are loaded
    explicitly, by the browser, when parsing the tags of the page you
    downloaded. There is no sooner or later. The website won't send the
    other files until you request them.

    For example, that site at the moment has one image (prob. jpeg)
    highlighted,

    <img class="gwt-Image" src="http://images2.vudu.com/poster2/179186-m"
    alt="Sex and the City: The Movie (Theatrical)">

    If you want to look at that jpeg, you need to download the file at the
    URL specified by the src attribute of that img element.

    Or perhaps you can just look at the 'alt' attribute, which is mainly
    there for browsers that don't do graphics, for example screen readers
    for the blind.
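
    As a minimal sketch of that approach: the static HTML that urllib does return can be mined for those src and alt attributes with the standard library's html.parser (the sample markup below is just the <img> element quoted above):

```python
from html.parser import HTMLParser

class ImgScraper(HTMLParser):
    """Collect the src and alt attributes of every <img> tag seen."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)  # attrs arrives as a list of (name, value) pairs
            self.images.append((d.get("src"), d.get("alt")))

# Sample markup: the <img> element quoted above.
html = ('<img class="gwt-Image" src="http://images2.vudu.com/poster2/179186-m" '
        'alt="Sex and the City: The Movie (Theatrical)">')
scraper = ImgScraper()
scraper.feed(html)
# scraper.images now holds (src, alt) pairs for every image found
```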

    Naturally, there may be dozens of images on the page, and there's no
    guarantee that the website author is trying to make it easy for you.
    Why not check if there's a defined api for extracting the information
    you want? Check the site, or send a message to the webmaster.

    No guarantee that tomorrow, the information won't be buried in some
    javascript fragment. Again, if you want to see that, you might need to
    write a javascript interpreter. It could use any algorithm at all to
    build webpage information, and the encoding could change day by day, or
    hour by hour.

    --
    DaveA
     
    Dave Angel, Feb 25, 2013
    #2

  3. Adam W.

    Adam W. Guest

    On Sunday, February 24, 2013 7:30:00 PM UTC-5, Dave Angel wrote:
    > <snip>

    The problem is, the image URL you found is not returned in the data urllib grabs. To be clear, I was aware of what urllib is supposed to do (i.e. not download image data when loading a page); I've used it before many times, just never had to jump through hoops to get at the content I needed.

    I'll look into figuring out how to find XHR requests in Chrome; I didn't know what that after-the-fact loading was called, so now my searching will be more productive.
     
    Adam W., Feb 25, 2013
    #3
  5. Adam W.

    Adam W. Guest

    On Sunday, February 24, 2013 7:27:54 PM UTC-5, Chris Rebert wrote:
    > On Sunday, February 24, 2013, Adam W. wrote:
    > > <snip>
    >
    > urllib isn't a web browser. It just requests the single (in this case, HTML) file from the given URL. It does not parse the HTML (indeed, it doesn't care what kind of file you're dealing with); therefore, it obviously does not retrieve the other resources linked within the document (CSS, JS, images, etc.), nor does it run any JavaScript. So, there's nothing to "wait" for; urllib is already doing everything it was designed to do.
    >
    > Your best bet is to open the page in a web browser yourself and use the developer tools/inspectors to watch what XHR requests the page's scripts are making, find the one(s) that have the data you care about, and then make those requests instead via urllib (or the `requests` 3rd-party lib, or whatever). If the URL(s) vary, reverse-engineering the scheme used to generate them will also be required.
    >
    > Alternatively, you could use something like Selenium, which lets you drive an actual full web browser (e.g. Firefox) from Python.
    >
    > Cheers,
    > Chris
    > --
    > http://rebertia.com


    Huzzah! Found it: http://apicache.vudu.com/api2/claim.../program/type/season/type/episode/type/bundle

    Thanks for the tip about XHR's
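
    Once an XHR endpoint like that has been spotted in the developer tools, it can be fetched and decoded with nothing but the standard library. A hedged sketch (the fetch_json helper is hypothetical, and the sample payload shape is invented for illustration; the real API's JSON will differ):

```python
import json
import urllib.request

def fetch_json(url):
    """Fetch a URL and decode the response body as JSON."""
    # Some sites reject the default urllib User-Agent, so send a browser-like one.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Offline illustration with a made-up payload of the sort such an API might return:
sample = '{"contents": [{"title": "Sex and the City: The Movie", "price": "0.99"}]}'
deal = json.loads(sample)["contents"][0]
summary = "{} for ${}".format(deal["title"], deal["price"])
```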
     
    Adam W., Feb 25, 2013
    #5
