safely reading large files

Discussion in 'C++' started by byte8bits@gmail.com, May 21, 2008.

  1. Guest

    How does C++ safely open and read very large files? For example, say I
    have 1GB of physical memory and I open a 4GB file and attempt to read
    it like so:

    #include <iostream>
    #include <fstream>
    #include <string>
    using namespace std;

    int main () {
        string line;
        ifstream myfile ("example.txt", ios::binary);
        if (myfile.is_open())
        {
            while (! myfile.eof() )
            {
                getline (myfile,line);
                cout << line << endl;
            }
            myfile.close();
        }
        else cout << "Unable to open file";

        return 0;
    }

    In particular, what if a line in the file is longer than the amount of
    available physical memory? What would happen? It seems getline() would
    cause a crash. Is there a better way? Maybe check the amount of free
    memory, then use 10% or so of that amount for the read. So if 1GB of
    memory is free, then take 100MB for file IO. If only 10MB is free,
    then just read 1MB at a time. Repeat this step until the file has been
    read completely. Is something built into standard C++ to handle this?
    Or is there an accepted way to do this?
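
    Nothing in the standard library sizes its reads to the amount of free
    physical memory, but reading the file in fixed-size chunks keeps memory
    use bounded no matter how long any one line is. A minimal sketch of that
    chunked approach, using the same hypothetical "example.txt" (the 1MB
    buffer size is an arbitrary illustrative value):

    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main ()
    {
        // Fixed-size buffer: memory use stays bounded regardless of file size.
        const std::size_t chunk_size = 1024 * 1024;   // 1MB per read
        std::vector<char> buffer(chunk_size);

        std::ifstream in("example.txt", std::ios::binary);
        if (!in)
        {
            std::cerr << "Unable to open file\n";
            return 1;
        }

        while (in.read(&buffer[0], buffer.size()) || in.gcount() > 0)
        {
            // gcount() is how many bytes the last read() actually delivered;
            // the final chunk is usually shorter than chunk_size.
            std::cout.write(&buffer[0], in.gcount());
        }
        return 0;
    }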

    Thanks,

    Brad
    , May 21, 2008
    #1

  2. red floyd Guest

    wrote:
    > How does C++ safely open and read very large files? For example, say I
    > have 1GB of physical memory and I open a 4GB file and attempt to read
    > it like so:
    >


    Others have already answered your question, so I'm going to address
    something else.

    > #include <iostream>
    > #include <fstream>
    > #include <string>
    > using namespace std;
    >
    > int main () {
    >     string line;
    >     ifstream myfile ("example.txt", ios::binary);
    >     if (myfile.is_open())
    >     {


    This while loop does not do what you think it does. See FAQ 15.5
    (http://parashift.com/c++-faq-lite/input-output.html#faq-15.5)

    > while (! myfile.eof() )
    > {
    >     getline (myfile,line);
    >     cout << line << endl;
    > }
    > myfile.close();
    > }
    >
    > else cout << "Unable to open file";
    >
    > return 0;
    > }
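
    The form the FAQ recommends tests the stream after each read attempt, so
    the loop stops as soon as getline() fails at end of file (or on an
    error). A minimal sketch of the corrected loop:

    while (getline (myfile, line))
    {
        cout << line << endl;
    }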
    red floyd, May 21, 2008
    #2

  3. James Kanze Guest

    On May 21, 4:11 am, Victor Bazarov <> wrote:
    > wrote:
    > > How does C++ safely open and read very large files? For example, say I
    > > have 1GB of physical memory and I open a 4GB file and attempt to read
    > > it like so:


    > > [code snipped]


    > > In particular, what if a line in the file is longer than the
    > > amount of available physical memory? What would happen?
    > > It seems getline() would cause a crash. Is there a better way?
    > > Maybe check the amount of free memory, then use 10% or so of
    > > that amount for the read. So if 1GB of memory is free, then
    > > take 100MB for file IO. If only 10MB is free, then just read
    > > 1MB at a time. Repeat this step until the file has been read
    > > completely. Is something built into standard C++ to handle
    > > this? Or is there an accepted way to do this?


    > Actually, performing operations that can lead to running out
    > of memory is not a simple thing at all.


    I'm sure you don't mean what that literally says. There's
    certainly nothing difficult about running out of memory. Doing
    something reasonable (other than just aborting) when it happens
    is difficult, however.

    > Yes, if you can estimate the amount of memory you will need
    > over what you right now want to allocate and you know the size
    > of available memory somehow, then you can allocate a chunk and
    > operate on that chunk until done and move over to the next
    > chunk. In the good ol' days that's how we solved large
    > systems of linear equations, one piece of the matrix at a time
    > (or two if the algorithm called for it).


    And you'd manually manage overlays, as well, so that only part
    of the program was in memory at a time. (I once saw a PL/1
    compiler which ran in 16 KB real memory, using such techniques.
    Took something like three hours to compile a 500 line program,
    but it did work.)

    > Unfortunately there is no single straightforward solution. In
    > most cases you don't even know that you're going to run out of
    > memory until it's too late. You can write the program to
    > handle those situations using C++ exceptions. The pseudo-code
    > might look like this:


    > std::size_t chunk_size = 1024*1024*1024;
    > MyAlgorithm algo;
    >
    > do {
    >     try {
    >         algo.prepare_the_operation(chunk_size);
    >         // if I am here, the chunk_size is OK
    >         algo.perform_the_operation();
    >         algo.wrap_it_up();
    >     }
    >     catch (std::bad_alloc & e) {
    >         chunk_size /= 2; // or any other adjustment
    >     }
    > }
    > while (chunk_size > 1024*1024); // or some other threshold


    Shouldn't the condition here be "while ( operation not done )",
    something like:

    bool didIt = false ;
    do {
        try {
            // your code from the try block
            didIt = true ;
        }
        // ... your catch
    } while ( ! didIt ) ;
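
    Putting the quoted pseudo-code and that termination condition together, a
    self-contained sketch might look like the following. The std::vector
    allocation merely stands in for whatever step can throw std::bad_alloc;
    the names and sizes are illustrative, not taken from either post:

    #include <cstddef>
    #include <iostream>
    #include <new>
    #include <vector>

    int main ()
    {
        std::size_t chunk_size = 1024u * 1024u * 1024u;   // start at 1GB
        const std::size_t min_chunk = 1024u * 1024u;      // give up below 1MB
        bool done = false;

        while (!done && chunk_size >= min_chunk)
        {
            try {
                std::vector<char> buffer(chunk_size);     // may throw bad_alloc
                // ... process the data one buffer-full at a time ...
                done = true;
            }
            catch (std::bad_alloc &) {
                chunk_size /= 2;                          // retry with a smaller chunk
            }
        }

        if (!done)
            std::cerr << "could not allocate even " << min_chunk << " bytes\n";
        return done ? 0 : 1;
    }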

    > That way if your preparation fails, you just restart it using
    > a smaller chunk, until you either complete the operation or
    > your chunk is too small and you can't really do anything...


    Just a note, but that isn't always reliable. Not all OSs will
    tell you when there isn't enough memory: they'll return an
    address, then crash or suspend your program when you try to
    access it. (I've seen this happen on at least three different
    systems: Windows, AIX and Linux. At least in the case of AIX
    and Linux, and probably Windows as well, it depends on the
    version and some configuration parameters, but most Linux
    systems are still configured so that you cannot catch allocation
    errors: if the command "/sbin/sysctl vm.overcommit_memory"
    displays any value other than 2, then a reliably conforming
    implementation of C or C++ is impossible.)

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, May 21, 2008
    #3
  4. brad Guest

    wrote:
    > How does C++ safely open and read very large files?


    How about this mate? It's a start.

    > // read a file into memory
    > // read in chunks if the file is larger than 16MB
    > #include <iostream>
    > #include <fstream>
    >
    > using namespace std;
    >
    > int main ()
    > {
    >     const int max = 16384000;
    >
    >     ifstream is;
    >     is.open ("test.txt", ios::binary);
    >
    >     // get length of file:
    >     is.seekg (0, ios::end);
    >     int file_size = is.tellg();
    >     is.seekg (0, ios::beg);
    >
    >     if (file_size > max)
    >     {
    >         // allocate one chunk-sized buffer
    >         char* buffer = new char[max];
    >
    >         cout << file_size << " bytes... break up to read" << endl;
    >         while (is.read (buffer, max) || is.gcount() > 0)
    >         {
    >             // write only the bytes actually read; the last chunk
    >             // is usually shorter than max
    >             cout.write (buffer, is.gcount());
    >         }
    >         delete[] buffer;
    >     }
    >     else
    >     {
    >         // allocate memory for the whole file
    >         char* buffer = new char[file_size];
    >
    >         cout << file_size << " bytes" << endl;
    >         is.read (buffer, file_size);
    >         cout.write (buffer, is.gcount());
    >         delete[] buffer;
    >     }
    >
    >     is.close();
    >
    >     return 0;
    > }
    brad, May 21, 2008
    #4
  5. kwikius Guest

    "Victor Bazarov" <> wrote in message
    news:g11ujj$tfd$...

    <...>

    > I am still confused. What's non-portable if you use standard types?
    > Perhaps you're using some special meaning of the word "non-portable" (or
    > the word "portable")... Care to elaborate?


    Portability of a type is a function of the programming language, the
    hardware, and other parameters.

    In a contemporary language with a virtual machine, an integer will provide a
    guarantee on size and semantics of expressions.

    In C and C++, both quite old programming languages, neither size nor
    detailed semantics is guaranteed, but pointer semantics is usually well
    accommodated... hardware dominates.

    Short term, I'll bet on the virtual machine. Some hardware is going in
    that direction, but...

    Long term, the Von Neumann architecture is history too, a function of Moore's Law.

    Hence I coin the term "fluid virtual machine". Define your types, and the
    architecture will morph itself to their characteristics.

    In this semantic view, hardware is portable.

    software heaven....

    HTH

    regards
    Andy Little
    kwikius, May 22, 2008
    #5
  6. Sam wrote:
    >> In particular, what if a line in the file is more than the amount of
    >> available physical memory?

    >
    > The C++ library will fail to allocate sufficient memory, throw an
    > exception, and terminate the process.


    You wish. What will more likely happen is that the OS will start
    swapping like mad and your system will be next to unusable for the next
    half hour, while you try desperately to kill your program. (This will
    happen in most current OSes, including Windows and Linux.) I am talking
    from experience.

    Too many C++ programs out there carelessly create things like
    std::vectors or std::strings whose size depends on some user-given
    input, without any sanity check on the size. This is a bad idea for the
    reason mentioned above.
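
    A minimal illustration of the kind of sanity check meant here; the 64MB
    limit and the helper name are purely illustrative:

    #include <cstddef>
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    // Refuse obviously unreasonable sizes before asking the allocator.
    std::vector<char> make_buffer(std::size_t requested)
    {
        const std::size_t sane_limit = 64u * 1024u * 1024u;   // 64MB
        if (requested > sane_limit)
            throw std::length_error("requested buffer size is not plausible");
        return std::vector<char>(requested);
    }

    int main ()
    {
        try {
            std::vector<char> buf = make_buffer(4000000000u);  // bogus "user" input
        }
        catch (std::length_error & e) {
            std::cerr << "rejected: " << e.what() << '\n';
        }
        return 0;
    }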
    Juha Nieminen, May 22, 2008
    #6
  7. James Kanze Guest

    On May 22, 8:56 am, Juha Nieminen <> wrote:
    > Sam wrote:
    > >> In particular, what if a line in the file is more than the amount of
    > >> available physical memory?


    > > The C++ library will fail to allocate sufficient memory, throw an
    > > exception, and terminate the process.


    > You wish. What will more likely happen is that the OS will start
    > swapping like mad and your system will be next to unusable for the next
    > half hour, while you try desperately to kill your program. (This will
    > happen in most current OSes, including Windows and Linux.) I am talking
    > from experience.


    Hmmm. Sounds like Solaris 2.4. More recent Solaris doesn't
    have this problem. (Things do slow down when you start
    swapping, but the machine remains usable for simple
    things---like opening an xterm to do ps and kill.)

    In the case of Linux, of course, you're likely to get a program
    crash before you get your exception.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, May 22, 2008
    #7
  8. James Kanze Guest

    On May 21, 9:19 pm, Victor Bazarov <> wrote:
    > ltcmelo wrote:
    > > [..]
    > > Even assuming you have memory, there's another concern. If
    > > you're running on a 32-bit platform, you can have problems
    > > working with such large files. Basically, the file pointer
    > > might not be large enough to traverse, for example, a file
    > > with more than 4GB. (There are "non-standard" solutions for
    > > this.)


    > Why do you say it's non-standard? The Standard defines
    > 'streampos' for stream positioning, which, on any decent library
    > implementation on a system that allows its files to have more
    > than 32-bit size, should be the next larger integral type
    > (whatever that might be on that system). Also, there is
    > 'streamoff' for offsetting in a stream buffer. Both are
    > implementation-defined.


    The problem is that implementations don't want to break binary
    compatibility, that support for files larger than 4GB is often a
    more or less recent feature, and earlier implementations had a
    32-bit streampos. If you've written streampos values to a file
    in binary, you don't want their size to change, and there's
    probably more existing code which does this (even though it's
    really stupid) than new code which really needs files over 4GB.
    So depending on the implementation, you may not be able to
    access the end of the file. (streamoff doesn't help if all the
    implementation does is add it to the current position,
    maintained in a streampos.)
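
    A quick way to see what a given implementation offers is to look at the
    width of std::streamoff; a small diagnostic sketch:

    #include <ios>
    #include <iostream>

    int main ()
    {
        // If streamoff is only 32 bits wide, offsets beyond roughly 2GB
        // cannot be represented, and large-file access will be limited.
        std::cout << "streamoff:  " << sizeof(std::streamoff) * 8 << " bits\n";
        std::cout << "streamsize: " << sizeof(std::streamsize) * 8 << " bits\n";
        return 0;
    }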

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, May 22, 2008
    #8
  9. James Kanze wrote:
    > Hmmm. Sounds like Solaris 2.4. More recent Solaris doesn't
    > have this problem. (Things do slow down when you start
    > swapping, but the machine remains usable for simple
    > things---like opening an xterm to do ps and kill.)


    The few times I have accidentally allocated a 4-gigabyte vector
    because of a bug (or because of not validating input), I have really
    wished they did something to solve this problem in Linux. It really
    doesn't make much sense from a security point of view that a regular
    user can hinder the entire system so badly with a simple self-made program.

    It has also happened to me in Windows XP (a similar
    allocate-4GB-because-of-non-validated-input error), and the behavior was
    approximately the same.
    Juha Nieminen, May 22, 2008
    #9
  10. brad Guest

    Juha Nieminen wrote:
    > You wish. What will more likely happen is that the OS will start
    > swapping like mad and your system will be next to unusable the next half
    > hour, while you try desperately to kill your program.


    I can confirm this behavior. Tried to read a 9GB file all at once into
    4GB of RAM. I think my kernel ended up in swap space :)
    brad, May 22, 2008
    #10
  11. Juha Nieminen wrote:

    > It really
    > doesn't make much sense from a security point of view that a regular
    > user can hinder the entire system so badly with a simple self-made program.


    ulimit(1), setrlimit(2)

    Of course the design is somewhat broken since it only applies
    per-process limits, and you can't set a per-user limit. (Some other
    systems, for example VMS, had better resource control.)
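
    For what it's worth, a minimal POSIX/Linux sketch of capping a process's
    own address space with setrlimit(2), so that an oversized allocation
    fails with std::bad_alloc instead of dragging the machine into swap (the
    512MB cap and the 1GB test allocation are arbitrary illustrative values):

    #include <sys/resource.h>   // setrlimit, RLIMIT_AS (POSIX)
    #include <iostream>
    #include <new>
    #include <vector>

    int main ()
    {
        // Cap this process's total address space at 512MB.
        rlimit rl;
        rl.rlim_cur = 512u * 1024u * 1024u;
        rl.rlim_max = 512u * 1024u * 1024u;
        if (setrlimit(RLIMIT_AS, &rl) != 0)
        {
            std::cerr << "setrlimit failed\n";
            return 1;
        }

        try {
            std::vector<char> huge(1024u * 1024u * 1024u);   // try to grab 1GB
            std::cout << "allocation unexpectedly succeeded\n";
        }
        catch (std::bad_alloc &) {
            std::cout << "allocation refused, as intended\n";
        }
        return 0;
    }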
    Matthias Buelow, May 22, 2008
    #11
