shuffle the lines of a large file

Joerg Schuster

Hello,

I am looking for a method to "shuffle" the lines of a large file.

I have a corpus of sorted and "uniqed" English sentences that has been
produced with (1):

(1) sort corpus | uniq > corpus.uniq

corpus.uniq is 80 GB. The fact that every sentence appears only once
in corpus.uniq is important for the processes I apply to the corpus.
Yet the alphabetical order is an unwanted side effect of (1): very
often I do not want (or rather, do not have the computational
capacity) to apply a program to all of corpus.uniq, and any contiguous
series of lines from corpus.uniq is obviously a very lopsided sample
of English sentences.

So, it would be very useful to do one of the following things:

- produce corpus.uniq in such a way that it is not sorted in any way
- shuffle corpus.uniq > corpus.uniq.shuffled

Unfortunately, none of the machines that I may use has 80G RAM.
So, using a dictionary will not help.

Any ideas?

Joerg Schuster
 
Kent Johnson

Joerg said:
I am looking for a method to "shuffle" the lines of a large file.
[snip]

There was a thread a while ago about choosing random lines from a file without reading the whole
file into memory. Would that help? Instead of shuffling the file, shuffle the users. I can't find
the thread though...
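
One standard way to do that is reservoir sampling: keep k candidate
lines and let each new line displace a random slot with decreasing
probability, so a single pass over the file with O(k) memory gives a
uniform sample. A minimal sketch (the function name and parameters are
illustrative, not necessarily what that thread used):

import random

def sample_lines(filename, k):
    # Fill the reservoir with the first k lines, then let line i
    # replace a random slot with probability k/(i+1); afterwards
    # every line is equally likely to be in the sample.
    reservoir = []
    for i, line in enumerate(open(filename)):
        if i < k:
            reservoir.append(line)
        else:
            j = random.randint(0, i)
            if j < k:
                reservoir[j] = line
    return reservoir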

Kent
 
Eddie Corns

Joerg Schuster said:
I am looking for a method to "shuffle" the lines of a large file.
[snip]
Any ideas?

Instead of shuffling the file itself maybe you could index it (with dbm for
instance) and select random lines by using random indexes whenever you need a
sample.

Eddie
 
Heiko Wundram

Joerg said:
Any ideas?

The following program should do the trick (filenames are hardcoded, look at
top of file):

### shuffle.py

import random
import shelve

# Open external files needed for data storage.
lines = open("test.dat", "r")
lineindex = shelve.open("test.idx")
newlines = open("test.new.dat", "w")

# Create an index of all lines of the file in an external flat-file DB.
# This means that nothing actually remains in memory, but in an extremely
# efficient (g)dbm flat-file DB.
def makeIdx():
    i = 0L
    lastpos = 0L
    curpos = None
    while lines.readline():
        # This is after the (\r)\n, which will be stripped() and rewritten
        # by writeNewLines().
        curpos = long(lines.tell())
        lineindex[hex(i)[2:-1]] = "%s:%s" % (hex(lastpos)[2:-1],
                                             hex(curpos - lastpos)[2:-1])
        lastpos = curpos
        i += 1
    return i

maxidx = makeIdx()

# To shuffle the file, just shuffle the index. Problem being: there is no
# random number generator which even remotely has the possibility of yielding
# all possible permutations. Thus, for simplicity: just exchange every element
# in order 1..end with a random element from the rest of the file. This is
# certainly no perfect shuffle, and in case the shuffling is too bad, just
# rerun shuffleIdx() a couple of times.
def shuffleIdx():
    oldi = 0L
    # Use a while loop, as xrange doesn't work with longs.
    while oldi < maxidx:
        oi = hex(oldi)[2:-1]
        while True:
            ni = hex(long(random.randrange(maxidx)))[2:-1]
            if ni != oi:
                break
        lineindex[oi], lineindex[ni] = lineindex[ni], lineindex[oi]
        oldi += 1

shuffleIdx()

# Write out the shuffled file. Do this by just walking the index 0..end.
def writeNewLines():
    i = 0L
    # Use a while loop, as xrange doesn't work with longs.
    while i < maxidx:
        # Extract line offset and line length from the index file.
        lidx, llen = [long(x, 16) for x in lineindex[hex(i)[2:-1]].split(":")]
        lines.seek(lidx)
        line = lines.read(llen).strip()
        newlines.write(line + "\n")
        i += 1

writeNewLines()

### End shuffle.py

I don't know how fast this program will run, but at least, it does as
told... ;)

--
--- Heiko.

 
Heiko Wundram

Replying to oneself is bad, but although the program works, I never intended
to use a shelve to store the data. Better to use anydbm.

So, just replace:

import shelve

by

import anydbm

and

lineindex = shelve.open("test.idx")

by

lineindex = anydbm.open("test.idx","c")

Keep the rest as is.

--
--- Heiko.

 
Richard Brodie

Joerg said:
I am looking for a method to "shuffle" the lines of a large file.

Off the top of my head: decorate, randomize, undecorate.
Prepend a suitably large random number or hash to each
line and then use sort. You could prepend new line numbers
instead, but even storing the randomised indexes might use
too much memory.
 
Warren Postma

Joerg said:
Unfortunately, none of the machines that I may use has 80G RAM.
So, using a dictionary will not help.

Any ideas?

Why don't you index the file? I would store the byte offsets of the
beginning of each line in an index file. Then you can generate a
random number from 1 to whatever, look up that entry in the index
file, open your text file, seek to that position, read one line, and
close the file. Using this process you can extract a reasonably random
set of lines from your 'corpus' text file, as sketched below.
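
A minimal sketch of this scheme (the file names and the fixed-width
8-byte record format are illustrative choices):

import os
import random
import struct

OFFSET = struct.Struct("<q")  # one fixed-width 8-byte record per line

def build_index(corpus_path, index_path):
    # Record the byte offset of the start of every line.
    with open(corpus_path, "rb") as corpus, open(index_path, "wb") as index:
        pos = 0
        for line in corpus:
            index.write(OFFSET.pack(pos))
            pos += len(line)

def random_line(corpus_path, index_path):
    # Pick a random record, then seek straight to that line.
    n = os.path.getsize(index_path) // OFFSET.size
    with open(index_path, "rb") as index, open(corpus_path, "rb") as corpus:
        i = random.randrange(n)
        index.seek(i * OFFSET.size)
        (offset,) = OFFSET.unpack(index.read(OFFSET.size))
        corpus.seek(offset)
        return corpus.readline()

Because the index records are fixed width, each fetch costs two seeks
and two small reads, no matter how big the corpus is.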

You should probably also consider making a database of the file: keep
the raw text file for sure, but create a converted copy in bsddb or
pytables format.

Warren
 
gry

As far as I can tell, what you ultimately want is to be able to extract
a random ("representative"?) subset of sentences. Given the huge size
of the data, I would suggest not randomizing the file, but randomizing
accesses to the file. E.g. (sorry for the off-the-cuff pseudo-Python;
adjust 8192 == 2**13 to your disk block size):

import os
import random

f = open("corpus.uniq", "rb")
length_of_file = os.path.getsize("corpus.uniq")
while True:
    byteno = random.randint(0, length_of_file)
    # Align to a disk block boundary, to avoid unnecessary IO,
    # by zeroing out the bottom 13 bits.
    byteno = (byteno >> 13) << 13
    f.seek(byteno)                       # jump to a random position
    data = f.read(8192)                  # read one block
    sentences = data.splitlines()[1:-1]  # omit partial first/last lines
    do_something(sentences)              # your processing here

If you only need 1000 sentences, use only one sentence from each block;
if you need 1M, then use them all.
[I hope I understood your problem]

-- george
 
Christos TZOTZIOY Georgiou

Hello,

I am looking for a method to "shuffle" the lines of a large file.
[snip]

So, it would be very useful to do one of the following things:

- produce corpus.uniq in a such a way that it is not sorted in any way
- shuffle corpus.uniq > corpus.uniq.shuffled

Unfortunately, none of the machines that I may use has 80G RAM.
So, using a dictionary will not help.

To implement your 'shuffle' command in Python, you can use the
following algorithm, which rests on a couple of assumptions:


ASSUMPTION
----------

The total line count in your big file is less than sys.maxint.

The algorithm as given works for systems where eol is a single '\n'.


ALGORITHM
---------

Create a temporary filelist.FileList fl (see attached file) with
records of struct.calcsize("q") bytes each (struct.pack and the "q"
format string are your friends), to hold the offset of each line start
in big_file. fl[0] would be 0, fl[1] would be the length of the first
line including its '\n', and so on.

Read once through big_file, appending to fl the offset of each line as
you go (if you need help with this, let me know).
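
A minimal sketch of that pass (assuming fl exposes a list-like
append() alongside the indexing that random.shuffle needs, and that
big_file is opened in binary mode so the offsets are byte-exact):

offset = 0
for line in big_file:
    fl.append(struct.pack("q", offset))
    offset += len(line)
big_file.seek(0)  # rewind before the shuffled reads below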

random.shuffle(fl) # this is tested with the filelist.FileList as given

for offset_as_str in fl:
    offset = struct.unpack("q", offset_as_str)[0]
    big_file.seek(offset)
    sys.stdout.write(big_file.readline())

That's it. Redirect output to your preferred file. No promises for speed
though :)
 
Joerg Schuster

Thanks to all. This thread shows again that Python's best feature is
comp.lang.python.

Jörg
 
François Pinard

[Joerg Schuster]
I am looking for a method to "shuffle" the lines of a large file.

If speed and space are not a concern, I would be tempted to presume
that this can be organised without too much difficulty. However,
achieving speed on a big file, while keeping equiprobability of all
possible permutations, might be appreciably more difficult.

I vaguely remember having read something along these lines (not
shuffling as you mean it, but still, reorganising a lengthy file) in
Knuth's "Art of Computer Programming", in one of the exercises within
the chapter on Sorting methods (volume 3). That's long ago, but if I
remember well, Knuth did not consider this as an easy exercise.
 
Raymond Hettinger

[Joerg Schuster]
I am looking for a method to "shuffle" the lines of a large file.
[snip]
corpus.uniq is 80 GB.

Since the corpus is huge, the python portion should not pull it all
into memory. The best bet is to let the o/s tools take care of that
part: decorate each line with a random number,

from random import random

out = open('corpus.decorated', 'w')
for line in open('corpus.uniq'):
    print >> out, '%.14f %s' % (random(), line),
out.close()

then sort and undecorate:

sort corpus.decorated | cut -c 18- > corpus.randomized


Raymond Hettinger
 
Nick Craig-Wood

Raymond Hettinger said:
print >> out, '%.14f %s' % (random(), line),


sort corpus.decorated | cut -c 18- > corpus.randomized

Very good solution!

Sort is truly excellent at very large datasets. If you give it a file
bigger than memory then it divides it up into temporary files of
memory size, sorts each one, then merges all the temporary files back
together.

You can tune the memory sort uses for in-memory sorts with
--buffer-size. It's pretty good at auto-tuning, though.

You may also want to set --temporary-directory to keep it from filling
up your /tmp.
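
For example (the buffer size and scratch directory here are only
illustrative):

sort --buffer-size=2G --temporary-directory=/scratch corpus.decorated | cut -c 18- > corpus.randomized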

In a previous job I did a lot of stuff with usenet news and was
forever blowing up the server with scripts which used too much memory.
sort was always the solution!
 
Simon Brunning

If this is what's wanted, then perhaps some variation on this cookbook
recipe might do the trick:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/59865

I couldn't resist. ;-)

import random

def randomLines(filename, lines=1):
    selected_lines = list(None for line_no in xrange(lines))

    for line_index, line in enumerate(open(filename)):
        for selected_line_index in xrange(lines):
            if random.uniform(0, line_index) < 1:
                selected_lines[selected_line_index] = line

    return selected_lines

This has the advantage that every line has the same chance of being
picked regardless of its length. There is the chance that it'll pick
the same line more than once, though.
 
Heiko Wundram

Simon said:
This has the advantage that every line has the same chance of being
picked regardless of its length. There is the chance that it'll pick
the same line more than once, though.

Problem being: if the file the OP is talking about really is 80GB in
size, and you consider a sentence to have 80 bytes on average (it is
likely to have fewer), that makes 10^9 sentences in the file. Now
multiply that by the memory overhead of storing a list of 10^9
None(s), and reconsider whether that algorithm really works under the
posted conditions. I don't think any machine I have access to has even
nearly enough memory just to store this list... ;)

--
--- Heiko.

 
Simon Brunning

Heiko said:
[snip]
Now multiply that by the memory overhead of storing a list of 10^9
None(s), and reconsider whether that algorithm really works under the
posted conditions.

Ah, but that's the clever bit; it *doesn't* store the whole list -
only the selected lines.
 
Stefan Behnel

Simon Brunning said:
selected_lines = list(None for line_no in xrange(lines))

Just a short note on this line. If lines is really large, it's much
faster to use

from itertools import repeat
selected_lines = list(repeat(None, lines))

which only repeats None without having to create huge numbers of
integer objects as xrange does.

BTW, a list comprehension is usually faster than list(iterator), so

[None for no in xrange(lines)]

ends up somewhere between the two.

Proof (in 2.4):

# python -m timeit 'from itertools import repeat
a = [ None for i in range(10000) ]'
100 loops, best of 3: 3.68 msec per loop

# python -m timeit 'from itertools import repeat
a = [ None for i in xrange(10000) ]'
100 loops, best of 3: 3.49 msec per loop

# python -m timeit 'from itertools import repeat
a = list(repeat(None, 10000))'
1000 loops, best of 3: 308 usec per loop

There. Factor 10. That's what I call optimization...

Stefan
 
