Sebastian Krause
Hello,
I tried to read in some large ASCII files (200MB-2GB) in Python using
scipy.io.read_array, but it did not work as I expected. The whole idea
was to find a fast Python routine to read in arbitrary ASCII files, to
replace Yorick (which I use right now and which is really fast, but not
as general as Python). The problem with scipy.io.read_array was that it
is really slow, returns errors when trying to process large files, and
it also truncates the data (after scipy.io.read_array processed a 2GB
file, the result was only 64MB).
Can someone give me a hint on how to use Python to do this job correctly
and fast? (Maybe with another read-in routine.)
Thanks.
Greetings,
Sebastian
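[For reference, one possible approach, not from the original post: reading the file in chunks of lines keeps memory bounded and avoids loading a 2GB file at once. This sketch assumes a plain whitespace-delimited numeric ASCII file; the function name and chunk size are illustrative, not from any library.]

```python
import io
import itertools

import numpy as np


def read_ascii_chunked(path, chunk_lines=100_000):
    """Read a whitespace-delimited numeric ASCII file in chunks.

    Yields one 2-D NumPy array per chunk of `chunk_lines` lines,
    so memory usage stays bounded even for multi-GB files.
    (Illustrative sketch, not a scipy/numpy built-in.)
    """
    with open(path) as f:
        while True:
            # Pull the next batch of lines without reading the whole file.
            lines = list(itertools.islice(f, chunk_lines))
            if not lines:
                break
            # ndmin=2 keeps the shape consistent even for a 1-line chunk.
            yield np.loadtxt(io.StringIO("".join(lines)), ndmin=2)
```

The chunks can be processed one at a time, or stacked with `np.vstack` if the whole array is needed in memory after all.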