[SQL] Pick random rows from SELECT?


Gilles Ganault

Hello

I have a working Python script that SELECTs rows from a database to
fetch a company's name from a web-based database.

Since this list is quite big and the site is the bottleneck, I'd like
to run multiple instances of this script, and figured a solution would
be to pick rows at random from the dataset, check in my local database
if this item has already been taken care of, and if not, download
details from the remote web site.

If someone's done this before, should I perform the randomization in
the SQL query (SQLite using the APSW wrapper
http://code.google.com/p/apsw/), or in Python?

Thank you.

Here's some simplified code:

sql = 'SELECT id, label FROM companies WHERE activity=1'
rows = list(cursor.execute(sql))
for row in rows:
    id = row[0]
    label = row[1]

    print strftime("%H:%M")
    url = "http://www.acme.com/details.php?id=%s" % id
    req = urllib2.Request(url, None, headers)
    response = urllib2.urlopen(req).read()

    name = re_name.search(response)
    if name:
        name = name.group(1)
        sql = 'UPDATE companies SET name=? WHERE id=?'
        cursor.execute(sql, (name, id))
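For reference, doing the randomization in the SQL query is straightforward in SQLite: ORDER BY RANDOM() shuffles the result set inside the engine. A minimal, self-contained sketch (Python 3 with the stdlib sqlite3 module here; APSW accepts the same SQL, and the in-memory table is a throwaway stand-in for the real database):

```python
import sqlite3

# Throwaway in-memory table standing in for the real companies database.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE companies (id INTEGER PRIMARY KEY, label TEXT, activity INTEGER)")
con.executemany("INSERT INTO companies VALUES (?, ?, 1)",
                [(i, "company-%d" % i) for i in range(1, 101)])

# ORDER BY RANDOM() lets SQLite do the shuffling; LIMIT caps how many
# rows each script instance pulls per pass.
rows = con.execute(
    "SELECT id, label FROM companies WHERE activity=1 "
    "ORDER BY RANDOM() LIMIT 10").fetchall()
```

Note that separate script instances can still draw overlapping rows this way, which is the duplicate-work problem the replies below point out.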
 

Peter Otten

Gilles said:
> I have a working Python script that SELECTs rows from a database to
> fetch a company's name from a web-based database.
>
> Since this list is quite big and the site is the bottleneck, I'd like
> to run multiple instances of this script, and figured a solution would
> be to pick rows at random from the dataset, check in my local database
> if this item has already been taken care of, and if not, download
> details from the remote web site.
>
> If someone's done this before, should I perform the randomization in
> the SQL query (SQLite using the APSW wrapper
> http://code.google.com/p/apsw/), or in Python?
>
> Thank you.
>
> Here's some simplified code:
>
> sql = 'SELECT id, label FROM companies WHERE activity=1'
> rows = list(cursor.execute(sql))
> for row in rows:
>     id = row[0]
>     label = row[1]
>
>     print strftime("%H:%M")
>     url = "http://www.acme.com/details.php?id=%s" % id
>     req = urllib2.Request(url, None, headers)
>     response = urllib2.urlopen(req).read()
>
>     name = re_name.search(response)
>     if name:
>         name = name.group(1)
>         sql = 'UPDATE companies SET name=? WHERE id=?'
>         cursor.execute(sql, (name, id))

I don't think you need to randomize the requests. Instead you could control
a pool of worker processes using

http://docs.python.org/library/multiprocessing.html

Peter
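A minimal sketch of that idea, assuming Python 3 and the stdlib sqlite3 module in place of APSW; fetch_name() is a hypothetical placeholder for the real urllib download and regex scrape:

```python
import sqlite3
from multiprocessing import Pool

def fetch_name(company_id):
    # Placeholder for the real HTTP request + regex scrape.
    return company_id, "name-%d" % company_id

def main():
    con = sqlite3.connect(":memory:")
    con.execute(
        "CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT, activity INTEGER)")
    con.executemany("INSERT INTO companies (id, activity) VALUES (?, 1)",
                    [(i,) for i in range(1, 6)])

    # Only the slow network part is farmed out to the worker pool;
    # the database writes stay in the parent process.
    ids = [row[0] for row in con.execute(
        "SELECT id FROM companies WHERE activity=1 AND name IS NULL")]
    with Pool(4) as pool:
        for cid, name in pool.map(fetch_name, ids):
            con.execute("UPDATE companies SET name=? WHERE id=?", (name, cid))
    con.commit()
    return con

if __name__ == "__main__":
    main()
```

Keeping the SQLite writes in the parent sidesteps concurrent-write locking entirely; the workers only touch the network.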
 

Dennis Lee Bieber

> Since this list is quite big and the site is the bottleneck, I'd like
> to run multiple instances of this script, and figured a solution would
> be to pick rows at random from the dataset, check in my local database
> if this item has already been taken care of, and if not, download
> details from the remote web site.
You really think making MULTIPLE, overlapping requests to a web site
is going to be more efficient than just suffering the single transfer
time of one large query?
> If someone's done this before, should I perform the randomization in
> the SQL query (SQLite using the APSW wrapper
> http://code.google.com/p/apsw/), or in Python?
Pardon, I thought you implied the bottleneck is the web-site
database -- I'd worry about any web-site that exposes a file-server
based database to direct user access.
> Here's some simplified code:
>
> sql = 'SELECT id, label FROM companies WHERE activity=1'
> rows = list(cursor.execute(sql))
> for row in rows:
>     id = row[0]
>     label = row[1]
>
>     print strftime("%H:%M")
>     url = "http://www.acme.com/details.php?id=%s" % id
>     req = urllib2.Request(url, None, headers)
>     response = urllib2.urlopen(req).read()
>
>     name = re_name.search(response)
>     if name:
>         name = name.group(1)
>         sql = 'UPDATE companies SET name=? WHERE id=?'
>         cursor.execute(sql, (name, id))

Ah... You mean you are retrieving the names from a local database,
and then requesting web-site details based upon that name.

No matter how you look at it, you appear to want to process the
entire local list of companies... Multiple randomized local queries will
just add to the final run-time as you start to get duplicates -- and
have to reject each duplicate and query again for another random name.

I'd suggest either a pool of threads -- 5-10, each reading company
names from a shared QUEUE, which is populated by the main thread
(remember to commit() so that you don't block on database updates by the
threads). OR... determine how many companies there are, and start
threads feeding them <start> and <length> (length being #names /
#threads, rounded up -- start then being 0*length, 1*length, etc., since
OFFSET is zero-based) and use those in thread-specific selects using
"... LIMIT <length> OFFSET <start>"... This way each thread retrieves
its own limited set of companies (make sure to use the same sorting
criteria in every query).
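A minimal sketch of the shared-queue variant, assuming Python 3; scrape() is a hypothetical stand-in for the urllib fetch, and the results dict stands in for the database UPDATE:

```python
import queue
import threading

def scrape(company_id):
    # Placeholder for the real HTTP request + regex scrape.
    return "name-%d" % company_id

def worker(q, results, lock):
    while True:
        company_id = q.get()
        if company_id is None:          # sentinel: no more work
            break
        name = scrape(company_id)
        with lock:
            results[company_id] = name  # real code would UPDATE the db here
        q.task_done()

def run(ids, n_threads=5):
    q = queue.Queue()
    results, lock = {}, threading.Lock()
    threads = [threading.Thread(target=worker, args=(q, results, lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for cid in ids:                     # main thread populates the queue
        q.put(cid)
    for _ in threads:
        q.put(None)                     # one sentinel per thread
    for t in threads:
        t.join()
    return results
```

The LIMIT/OFFSET variant would instead hand thread k the query "... ORDER BY id LIMIT ? OFFSET ?" with offset k*length, so each thread owns a disjoint slice of the table.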
 

Gilles Ganault

> I'd suggest either a pool of threads -- 5-10, each reading company
> names from a shared QUEUE, which is populated by the main thread
> (remember to commit() so that you don't block on database updates by the
> threads). OR... determine how many companies there are, and start
> threads feeding them <start> and <length> (length being #names /
> #threads, round up -- start then being 0*length+1, 1*length+1, etc...)
> and use those in thread specific selects using "... limit <length>
> offset <start>"... This way each thread retrieves its own limited set of
> companies (make sure to use the same sorting criteria).

Thanks for the help :)
 
