How can I use lxml with win32com?

Michiel Overtoom

elca said:
http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=korea+times&x=0&y=0
That is a Korean portal site, and I searched it using the keyword 'korea times'.
I want to scrape the results to a text file named 'blogscrap_save.txt'.

Aha, now we're getting somewhere.

Getting and parsing that page is no problem, and doesn't need JavaScript
or Internet Explorer.

import urllib2
import BeautifulSoup

# Fetch the search-results page and parse it with BeautifulSoup.
doc = urllib2.urlopen("http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=korea+times&x=0&y=0")
soup = BeautifulSoup.BeautifulSoup(doc)


By analyzing the structure of that page you can see that the articles
are presented in an unordered list which has class "type01". The
interesting bit in each list item is encapsulated in a <dd> tag with
class "sh_news_passage". So, to parse the articles:

# Each search hit is an <li> in the <ul class="type01"> list; the snippet
# text sits in a <dd class="sh_news_passage">.
ul = soup.find("ul", "type01")
for li in ul.findAll("li"):
    dd = li.find("dd", "sh_news_passage")
    print dd.renderContents()
    print

This example prints them, but you could also save them to a file (or a
database, whatever).
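
If you want the passages in 'blogscrap_save.txt' (the filename from elca's question)
rather than on screen, here is a minimal, untested sketch along the same lines,
reusing the soup object built above:

# Sketch only: same parsing as above, but writing each passage to a file.
out = open("blogscrap_save.txt", "w")
ul = soup.find("ul", "type01")
for li in ul.findAll("li"):
    dd = li.find("dd", "sh_news_passage")
    if dd is not None:                     # skip list items without a passage
        out.write(dd.renderContents())     # renderContents() returns a byte string
        out.write("\n\n")
out.close()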

Greetings,
 
Dennis Lee Bieber

Pot. Kettle. Black.
comp.lang.python really is a Usenet newsgroup. There is a mailing list that mirrors the
newsgroup, though. And the mailing list is in turn also available via NNTP on Gmane as
gmane.comp.python.general...

comp.lang.python (via NNTP) <> mailing list (via SMTP/POP3) <> gmane.comp.python.general (via NNTP)


I'm deliberately not defining what Google does with it...
 
elca

motoom said:
<snip>


Hi, thanks for your help.
This thread is getting too long, so I will open a new post.
Thanks a lot.
