Tokenizing a large file

Don Wood

I have a large file that I need to tokenize. The method I am using now
is fast, but eats up a ton of memory by reading in the entire file first
as a String. I would also like to reuse existing tokens for duplicates.
(I have no control over the file format, but this Regex works well for
what I need.)

Here is what I am doing today.

tokens = File.read(filename).scan(/'[^']*'|"[^"]*"|[:))]|[^:))\s]+/)

And here is what I would like to do.

tokens = []
File.open(filename) do |fh|
  fh.scan(/'[^']*'|"[^"]*"|[:))]|[^:))\s]+/) do |token|
    # push the existing object when this token has been seen before
    tokens << ((i = tokens.index(token)) ? tokens[i] : token)
  end
end

So what I would like to have is a scan method for File objects that
yields the tokens when called with a block, instead of returning an
array. (It would be nice if String#scan could do this as well.) This
isn’t a big issue; it just causes my machine to overflow to the swap
file periodically. I could easily fix that with a couple of DIMMs, but I
can’t help thinking that there should be a better way.
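A minimal streaming sketch of the same idea, assuming tokens never span
lines (an assumption the thread questions later): File.foreach avoids the
giant String, and a lookup hash replaces the linear Array#index search.

tokens = []
seen = {}  # token text => first String object seen with that text
File.foreach(filename) do |line|
  line.scan(/'[^']*'|"[^"]*"|[:))]|[^:))\s]+/) do |token|
    tokens << (seen[token] ||= token)
  end
end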
 
Eric Hodel

Don Wood wrote:

I have a large file that I need to tokenize. The method I am using now
is fast, but eats up a ton of memory by reading in the entire file first
as a String. I would also like to reuse existing tokens for duplicates. [...]


You should look at StringScanner in strscan.rb, it'll allow you to
intern your tokens like you want.
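For reference, a minimal StringScanner sketch of that interning pattern.
Note it still scans an in-memory String, which is exactly the issue Joel
raises next; treat it as a sketch of the pattern, not a memory fix.

require 'strscan'

TOKEN = /'[^']*'|"[^"]*"|[:))]|[^:))\s]+/
seen = Hash.new { |h, k| h[k.freeze] = k }  # interning table, see below
tokens = []

ss = StringScanner.new(File.read(filename)) # still slurps the whole file
until ss.eos?
  ss.skip(/\s+/)                            # skip separators between tokens
  tok = ss.scan(TOKEN) or break
  tokens << seen[tok]
end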
 
Joel VanderWerf

Eric Hodel said:

You should look at StringScanner in strscan.rb, it'll allow you to
intern your tokens like you want.


I was going to suggest that, but:

$ irb -r strscan
irb(main):001:0> StringScanner.new(File.open('tmp/t'))
TypeError: can't convert File into String
from (irb):1:in `initialize'
from (irb):1:in `new'
from (irb):1

Is there some way to use StringScanner with an open file?

(also, my ruby 1.8.6 only comes with ext/strscan, not lib/strscan.rb...
maybe we're talking about different things)
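One possible answer, sketched here rather than taken from the thread:
StringScanner#<< appends to the scan buffer, so the file can be fed in
chunks and scanned incrementally. The 64 KB chunk size and the 4 KB
maximum-token assumption below are illustrative, not requirements.

require 'strscan'

TOKEN = /'[^']*'|"[^"]*"|[:))]|[^:))\s]+/
tokens = []

File.open(filename) do |fh|
  ss = StringScanner.new("")
  until fh.eof?
    ss << fh.read(64 * 1024)       # append the next chunk to the buffer
    until ss.eos?
      ss.skip(/\s+/)
      # leave a possibly partial token at the buffer's end for the next chunk
      break if ss.rest_size < 4096 && !fh.eof?
      tok = ss.scan(TOKEN) or break
      tokens << tok
    end
    ss.string = ss.rest            # discard consumed text, reset position
  end
end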
 
Caleb Clausen

Don Wood wrote:

I have a large file that I need to tokenize. The method I am using now
is fast, but eats up a ton of memory by reading in the entire file first
as a String. [...] I can't help thinking that there should be a better way.


The sequence gem permits scanning a file directly with a regexp.
Something like this should work:

require 'rubygems'
require 'sequence'
require 'sequence/file'

tokens = []
fh = Sequence::File.new(open(filename))
until fh.eof?
  tokens << fh.scan(/'[^']*'|"[^"]*"|[:))]|[^:))\s]+/) # or yield the token up to the caller...
  fh.scan "\n"
end
fh.close

As I don't know your data format, I'm not sure if this is right. I'm
assuming that your tokens are separated by newlines, but if it's more
complicated than that, you will have to fiddle with the argument to
the 2nd scan. (As Sequence doesn't have String#scan's bump-a-long
behavior, you have to explicitly match the things between scanned
patterns yourself.)

Note that Sequence::File#scan will match patterns only up to a certain
size (4k bytes, I think). This is an inevitable consequence of using a
Regexp against a file; you wouldn't want arbitrary amounts of
backtracking in a 1GB+ file. Java had this restriction as well, last
time I knew (several years ago).

On the other hand, if you really do have one token per line, it will
be simpler and probably faster to use #readline to get tokens one by
one and no special library is needed.

Joel: I think the original Ruby implementation of strscan was replaced
by a C extension long ago.
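If the one-token-per-line case did apply, no extra library is needed; a
minimal sketch of the readline-style loop Caleb describes:

tokens = []
File.open(filename) do |fh|
  fh.each_line do |line|
    line.chomp!
    tokens << line unless line.empty?
  end
end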
 
Robert Klemme

2009/4/15 Don Wood said:
I have a large file that I need to tokenize. The method I am using now
is fast, but eats up a ton of memory by reading in the entire file first
as a String. I would also like to reuse existing tokens for duplicates.
 
Robert Klemme

2009/4/16 Ryan Davis said:
Converted to the block form:

def my_tokenize file
  tokens = Hash.new {|h,k| h[k.freeze] = k}

FYI:

% irb
h = {}
=> {}
h["key"] = 42
=> 42
h.keys.map { |k| k.frozen? }
=> [true]

hashes dupe and freeze string keys to prevent them from being mutated while
hash keys.

Only if they are not frozen yet.

irb(main):001:0> h = {}
=> {}
irb(main):002:0> s = "abc"
=> "abc"
irb(main):003:0> h[s] = s
=> "abc"
irb(main):004:0> s = "bar".freeze
=> "bar"
irb(main):005:0> h[s] = s
=> "bar"
irb(main):006:0> h
=> {"abc"=>"abc", "bar"=>"bar"}
irb(main):007:0> h.each {|kv| p kv.map {|x| x.object_id}}
[134954550, 134972840]
[134951170, 134951170]
=> {"abc"=>"abc", "bar"=>"bar"}

Do you now know why I did it the way I did?

Cheers

robert

-- 
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
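The point of the h[k.freeze] = k trick, restated as a sketch: a Hash dups
and freezes an unfrozen String key, but stores an already-frozen one as-is,
so freezing before insertion makes the key and the value the very same
object, and each distinct token is held in memory exactly once.

intern = Hash.new { |h, k| h[k.freeze] = k }

a = intern["foo"]       # first sight: stores frozen "foo" as key and value
b = intern["foo".dup]   # any later duplicate comes back as that same object
a.equal?(b)             # => true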
 
Don Wood

Caleb said:

The sequence gem permits scanning a file directly with a regexp. [...]
(As Sequence doesn't have String#scan's bump-a-long behavior, you have
to explicitly match the things between scanned patterns yourself.)

Note that Sequence::File#scan will match patterns only up to a certain
size (4k bytes, I think).

Thanks Caleb,

This looks like exactly what I needed. I'm not sure I understand the
point of the second scan though. The first scan should already ignore
unquoted whitespace, including "\n". (At least that is how it currently
works when I scan a string.) I don't think that I will get anywhere
near the per-token 4k limit.
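If arbitrary whitespace (not just "\n") separates the tokens, the
in-between match in Caleb's loop presumably becomes a whitespace pattern.
A sketch against the Sequence API exactly as shown above; how the gem
behaves on a non-matching scan is an assumption here, not verified:

require 'rubygems'
require 'sequence'
require 'sequence/file'

TOKEN = /'[^']*'|"[^"]*"|[:))]|[^:))\s]+/

tokens = []
fh = Sequence::File.new(open(filename))
until fh.eof?
  fh.scan(/\s+/)        # consume any run of separators between tokens
  break if fh.eof?
  tokens << fh.scan(TOKEN)
end
fh.close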
 
Don Wood

Robert said:
2009/4/16 Ryan Davis said:
hashes dupe and freeze string keys to prevent them from being mutated while
hash keys.

Only if they are not frozen yet. [...] Do you now know why I did it the
way I did?


Thanks Robert,

I see what you did there. This looks like the perfect solution for
finding duplicate strings quickly. I don't want to assume that tokens
don't span lines, but combining this with Caleb's suggestion of using
the sequence gem, I should have all I need to drastically cut my memory
footprint.
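Putting the two suggestions together, under the same caveats as the
Sequence sketch above:

require 'rubygems'
require 'sequence'
require 'sequence/file'

TOKEN = /'[^']*'|"[^"]*"|[:))]|[^:))\s]+/

# each distinct token string is stored once, as both key and value
intern = Hash.new { |h, k| h[k.freeze] = k }

tokens = []
fh = Sequence::File.new(open(filename))
until fh.eof?
  fh.scan(/\s+/)                    # assumed separator handling, as above
  break if fh.eof?
  tokens << intern[fh.scan(TOKEN)]  # duplicates map to one shared object
end
fh.close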
 
