maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

Discussion in 'C++' started by CMOS, May 15, 2007.

  1. CMOS

    CMOS Guest

    one of the projects I'm currently working on requires the use of
    ultra-large maps, lists, vectors, etc. (basically STL containers).
    Sizes might grow up to 1000 million entries. Since it is impossible to
    hold all this data in memory, I'm planning to implement these
    containers to hold data both in memory and on disk at the same time.
    I'm not sure whether this can be achieved using customer allocators,
    and I'm wondering if there are any such implementations.

    thank you
    CMOS, May 15, 2007
    #1
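
    For reference, the hooks a "custom allocator" actually provides: a
    std::map allocator only controls where the map's nodes are stored,
    through an interface along these lines. This is a minimal sketch
    using the reduced C++11 allocator requirements; DiskAllocator is a
    hypothetical name, and the disk-backing logic itself is the hard
    part that is left out:

        #include <cstddef>
        #include <functional>
        #include <map>
        #include <new>
        #include <utility>

        // Skeleton of the allocator interface std::map uses. A real
        // disk-backed version would hand out pointers into storage it
        // manages itself; this sketch just shows the available hooks.
        template <typename T>
        struct DiskAllocator {
            typedef T value_type;

            DiskAllocator() {}
            template <typename U>
            DiskAllocator(const DiskAllocator<U>&) {}

            T* allocate(std::size_t n) {
                // A disk-backed version would return a pointer into
                // file-backed storage (e.g. an mmap'ed region) here.
                return static_cast<T*>(::operator new(n * sizeof(T)));
            }
            void deallocate(T* p, std::size_t) {
                ::operator delete(p);
            }
        };

        template <typename T, typename U>
        bool operator==(const DiskAllocator<T>&, const DiskAllocator<U>&)
        { return true; }
        template <typename T, typename U>
        bool operator!=(const DiskAllocator<T>&, const DiskAllocator<U>&)
        { return false; }

        // The map obtains all of its nodes through the allocator.
        typedef std::map<int, long, std::less<int>,
                         DiskAllocator<std::pair<const int, long> > > BigMap;

    The catch, which the replies below come back to, is that allocate()
    must still return a real T* into addressable memory, so an allocator
    by itself cannot transparently page entries out to disk.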

  2. Re: maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

    * CMOS:
    > one of the projects I'm currently working on requires the use of
    > ultra-large maps, lists, vectors, etc. (basically STL containers).
    > Sizes might grow up to 1000 million entries. Since it is impossible to
    > hold all this data in memory, I'm planning to implement these
    > containers to hold data both in memory and on disk at the same time.
    > I'm not sure whether this can be achieved using customer allocators,
    > and I'm wondering if there are any such implementations.


    A few GBytes of data isn't that much, really, if you have the hardware
    to match. However, from your comment about "customer (sic) allocators",
    and simply from the fact that you're seeking advice here, I'm reasonably
    sure that this is not a million-dollar budget project, but rather a
    student project, and that the requirement of billions of entries stems
    from bad design, and is not an inherent requirement of the problem
    you're trying to solve. So do tell about the problem, not how you're
    envisioning solving it; perhaps we can suggest better ways.

    --
    A: Because it messes up the order in which people normally read text.
    Q: Why is it such a bad thing?
    A: Top-posting.
    Q: What is the most annoying thing on usenet and in e-mail?
    Alf P. Steinbach, May 15, 2007
    #2

  3. Ian Collins

    Ian Collins Guest

    Re: maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

    CMOS wrote:
    > one of the projects I'm currently working on requires the use of
    > ultra-large maps, lists, vectors, etc. (basically STL containers).
    > Sizes might grow up to 1000 million entries. Since it is impossible to
    > hold all this data in memory, I'm planning to implement these
    > containers to hold data both in memory and on disk at the same time.
    > I'm not sure whether this can be achieved using customer allocators,
    > and I'm wondering if there are any such implementations.
    >

    The short answer is yes, but are you sure you want to?

    --
    Ian Collins.
    Ian Collins, May 15, 2007
    #3
  4. CMOS

    CMOS Guest

    Noted: custom allocators, not "customer"; sorry for the typo.

    The problem is to index 10 billion records of a certain type using a
    given field; the field type might be a number, string, date, etc.,
    and to query the results for fast retrieval.

    thanks
    CMOS, May 15, 2007
    #4
  5. Ian Collins

    Ian Collins Guest

    Re: maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

    CMOS wrote:
    > Noted: custom allocators, not "customer"; sorry for the typo.
    >
    > The problem is to index 10 billion records of a certain type using a
    > given field; the field type might be a number, string, date, etc.,
    > and to query the results for fast retrieval.
    >

    Why not just use a database, which will have been optimised for this task?

    --
    Ian Collins.
    Ian Collins, May 15, 2007
    #5
  6. CMOS

    CMOS Guest

    A generic DB's performance will not be enough, and I do not need it to
    support data modifications, transactions, etc., which would slow down
    operation. The only requirement is to insert data and to query and
    delete records using keys. There is no need for an SQL interface
    either.
    CMOS, May 15, 2007
    #6
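
    To make those requirements concrete, here is a minimal sketch of the
    interface being asked for (insert, query, and delete by key, no SQL);
    KeyValueIndex and RecordId are illustrative names, not an existing
    library:

        #include <string>

        // Hypothetical identifier locating a record in external storage.
        typedef unsigned long long RecordId;

        // The stated requirements: insert, look up, and delete by key.
        // No transactions, no general updates, no SQL.
        class KeyValueIndex {
        public:
            virtual ~KeyValueIndex() {}

            // Associate a key with a record location.
            virtual void insert(const std::string& key, RecordId id) = 0;

            // Look up a record by key; returns false if not present.
            virtual bool find(const std::string& key, RecordId& id) const = 0;

            // Remove a key from the index.
            virtual void erase(const std::string& key) = 0;
        };

    Anything matching this shape -- an embedded database, a B-tree
    library, or custom code -- could sit behind it.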
  7. On May 15, 4:12 pm, CMOS <> wrote:
    > one of the projects I'm currently working on requires the use of
    > ultra-large maps, lists, vectors, etc. (basically STL containers).
    > Sizes might grow up to 1000 million entries. Since it is impossible to
    > hold all this data in memory, I'm planning to implement these
    > containers to hold data both in memory and on disk at the same time.
    > I'm not sure whether this can be achieved using customer allocators,
    > and I'm wondering if there are any such implementations.
    >
    > thank you


    http://www.sqlite.org/
    http://www.postgresql.org/

    Either one should do the job.
    Gianni Mariani, May 15, 2007
    #7
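
    For the SQLite route, a minimal sketch of what the index could look
    like through its C API (callable directly from C++); error handling
    is omitted and the table and column names are illustrative:

        #include <cstdio>
        #include <sqlite3.h>

        int main() {
            sqlite3* db = 0;
            sqlite3_open("records.db", &db);

            // One indexed key column mapping to the record's location.
            sqlite3_exec(db,
                "CREATE TABLE IF NOT EXISTS idx ("
                "  key TEXT PRIMARY KEY,"
                "  offset INTEGER)", 0, 0, 0);

            // Insert an entry (in practice, batched in a transaction).
            sqlite3_exec(db,
                "INSERT INTO idx VALUES ('key-42', 123456789)", 0, 0, 0);

            // Query by key: the PRIMARY KEY gives a B-tree lookup,
            // not a table scan.
            sqlite3_stmt* stmt = 0;
            sqlite3_prepare_v2(db,
                "SELECT offset FROM idx WHERE key = ?", -1, &stmt, 0);
            sqlite3_bind_text(stmt, 1, "key-42", -1, SQLITE_STATIC);
            if (sqlite3_step(stmt) == SQLITE_ROW)
                std::printf("offset: %lld\n",
                    (long long)sqlite3_column_int64(stmt, 0));
            sqlite3_finalize(stmt);

            sqlite3_close(db);
            return 0;
        }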
  8. Ian Collins

    Ian Collins Guest

    Re: maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

    CMOS wrote:

    Please quote enough context for your reply to make sense.

    > A generic DB's performance will not be enough, and I do not need it to
    > support data modifications, transactions, etc., which would slow down
    > operation. The only requirement is to insert data and to query and
    > delete records using keys. There is no need for an SQL interface
    > either.
    >

    Well, I'd still use one, unless I had a real performance issue. Even
    then, my first action would be to upgrade the hardware!

    --
    Ian Collins.
    Ian Collins, May 15, 2007
    #8
  9. meagar

    meagar Guest

    On May 15, 1:12 am, CMOS <> wrote:
    > one of the projects I'm currently working on requires the use of
    > ultra-large maps, lists, vectors, etc. (basically STL containers).
    > Sizes might grow up to 1000 million entries. Since it is impossible to
    > hold all this data in memory, I'm planning to implement these
    > containers to hold data both in memory and on disk at the same time.
    > I'm not sure whether this can be achieved using customer allocators,
    > and I'm wondering if there are any such implementations.
    >
    > thank you


    I don't think you appreciate how slow it will be to search a billion
    records without loading the bulk of them into RAM. You're going to be
    swapping the entire file in and out of RAM regardless, since the OS
    will be buffering the file in memory anyway.

    You might consider storing the records in a file, and then creating a
    separate index map which contains just the unique identifying fields
    from each record, mapped to a byte offset locating the record in the
    file.
    meagar, May 15, 2007
    #9
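
    A minimal sketch of the approach described above: records appended to
    a data file, with an in-memory std::map from key to byte offset.
    Fixed-length records are assumed for simplicity, and the file and
    struct names are illustrative:

        #include <fstream>
        #include <map>
        #include <string>

        // Fixed-size record, so a plain byte offset locates each one.
        struct Record {
            char key[32];
            char payload[96];
        };

        int main() {
            std::map<std::string, unsigned long> index;  // key -> offset

            // Append a record and remember where it landed.
            std::ofstream out("records.dat",
                              std::ios::binary | std::ios::app);
            Record r = {};
            std::string key = "key-42";
            key.copy(r.key, sizeof r.key - 1);
            out.write(reinterpret_cast<const char*>(&r), sizeof r);
            out.flush();
            index[key] =
                static_cast<unsigned long>(out.tellp()) - sizeof r;

            // Look up by key: one seek and one read, not a file scan.
            std::ifstream in("records.dat", std::ios::binary);
            std::map<std::string, unsigned long>::iterator it =
                index.find(key);
            if (it != index.end()) {
                Record found;
                in.seekg(it->second);
                in.read(reinterpret_cast<char*>(&found), sizeof found);
            }
            return 0;
        }

    The index stays small (key plus offset per record) even when the
    records themselves are far too large to hold in memory.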
  10. James Kanze

    James Kanze Guest

    On May 15, 9:04 am, CMOS <> wrote:
    > A generic DB's performance will not be enough, and I do not need it to
    > support data modifications, transactions, etc., which would slow down
    > operation. The only requirement is to insert data and to query and
    > delete records using keys. There is no need for an SQL interface
    > either.


    Using std::map with allocators for data on disk will *not*
    result in better performance than a commercial data base.
    Commercial data bases have invested hundreds of man-years in
    optimizing their accesses. At least one commercial vendor,
    Sybase, has a variant of their data base optimized for exactly
    this sort of application: updates only in batch, no
    transactions, but very fast read access for very, very large
    data sets. And all commercial data bases know how to maintain
    indexes for multiple fields, in a fashion optimized for disk (B+
    trees or hash tables, rather than the classical binary tree
    typically used by std::map). It may be possible to do better
    than the commercial data bases for a specialized application,
    but to do so will require very special custom code (and not just
    std::map with a special allocator), and probably upwards of ten
    man-years of development time.

    In answer to your question, however, I have my doubts as to
    whether it is even possible. The accessors to std::map return
    references, and these are required by the standard to be real
    references. This means that user code will have references
    into your in-memory data which you cannot track, which in turn
    means that you cannot know when you can release the in-memory
    data---any data, once accessed, must be kept in memory for
    all time.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, May 16, 2007
    #10
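
    To illustrate the pinning problem described above: once operator[]
    has handed out a reference, the standard guarantees it stays valid
    until the entry is erased, so a hypothetical paging map could never
    evict that entry behind the user's back.

        #include <map>
        #include <string>

        int main() {
            std::map<int, std::string> m;
            m[1] = "first";

            // Required to be a genuine reference into the map's
            // storage, valid until the entry is erased.
            std::string& pinned = m[1];

            // Even after arbitrarily many further insertions...
            for (int i = 2; i < 1000000; ++i)
                m[i] = "filler";

            // ...the reference must still be valid. A map that had
            // paged entry 1 out to disk and freed its node could not
            // know that 'pinned' is still live.
            pinned += " (still valid)";
            return 0;
        }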
  11. James Kanze

    James Kanze Guest

    On May 15, 8:42 am, Ian Collins <> wrote:
    > CMOS wrote:
    > > one of the projects I'm currently working on requires the use of
    > > ultra-large maps, lists, vectors, etc. (basically STL containers).
    > > Sizes might grow up to 1000 million entries. Since it is impossible to
    > > hold all this data in memory, I'm planning to implement these
    > > containers to hold data both in memory and on disk at the same time.
    > > I'm not sure whether this can be achieved using customer allocators,
    > > and I'm wondering if there are any such implementations.


    > The short answer is yes, but are you sure you want to?


    Are you sure? It's probably possible to do something so that
    parts of the map are loaded lazily, but functions like
    map<>::operator[] and map<>::iterator::operator* return
    references that are required to be real references, and are
    guaranteed to be valid as long as the corresponding entry has
    not been erased. I think that more or less pins any accessed
    entry in memory, at its original address. Which means that
    while you can load lazily (maybe), you cannot drop an entry
    from memory once it has been accessed.

    One solution that probably is possible, however, is to put the
    map in shared memory, backed by a file, using mmap (or its
    Windows equivalent). In theory, I think it is even possible to
    allow loading it at an arbitrary address; in practice, the one
    time I played this game, we loaded at a fixed address and left
    the pointer type a T*. We also designed the data structures so
    that they only contained PODs: char[] instead of std::string,
    for example. Of course, this still isn't optimized for disk; if
    your data set is significantly larger than real memory, and you
    start accessing randomly, you're going to page fault like crazy,
    and probably end up significantly slower than a classical data
    base (which optimizes for disk accesses, taking into account the
    difference in access times between real memory and disk).

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, May 16, 2007
    #11
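
    A minimal sketch of the file-backed mapping described above, assuming
    POSIX mmap and POD-only data (error handling and the fixed-address
    variant are omitted; the file name and struct are illustrative):

        #include <cstddef>
        #include <fcntl.h>     // open
        #include <sys/mman.h>  // mmap, munmap
        #include <unistd.h>    // ftruncate, close

        // POD entry: no pointers and no std::string, so the bytes on
        // disk are valid wherever the file happens to be mapped.
        struct Entry {
            char key[32];
            long value;
        };

        int main() {
            const std::size_t count = 1000;

            int fd = open("entries.bin", O_RDWR | O_CREAT, 0644);
            ftruncate(fd, count * sizeof(Entry));  // size the file

            // Map the file into the address space; the OS pages the
            // data in and out on demand.
            void* addr = mmap(0, count * sizeof(Entry),
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            Entry* entries = static_cast<Entry*>(addr);

            entries[0].value = 42;  // written back to the file by the OS

            munmap(addr, count * sizeof(Entry));
            close(fd);
            return 0;
        }

    As noted above, random access over a mapping much larger than real
    memory will page-fault heavily; the approach works best when the hot
    part of the data fits in RAM.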
  12. Ian Collins

    Ian Collins Guest

    Re: maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

    James Kanze wrote:
    > On May 15, 8:42 am, Ian Collins <> wrote:
    >> CMOS wrote:
    >>> one of the projects I'm currently working on requires the use of
    >>> ultra-large maps, lists, vectors, etc. (basically STL containers).
    >>> Sizes might grow up to 1000 million entries. Since it is impossible to
    >>> hold all this data in memory, I'm planning to implement these
    >>> containers to hold data both in memory and on disk at the same time.
    >>> I'm not sure whether this can be achieved using customer allocators,
    >>> and I'm wondering if there are any such implementations.

    >
    >> The short answer is yes, but are you sure you want to?

    >
    > Are you sure?


    I was thinking of the solution you propose in your second paragraph,
    and the drawbacks you mention were my reason for suggesting a database.

    > It's probably possible to do something so that
    > parts of the map are loaded lazily, but functions like
    > map<>::operator[] and map<>::iterator::operator* return
    > references that are required to be real references, and are
    > guaranteed to be valid as long as the corresponding entry has
    > not been erased. I think that more or less pins any accessed
    > entry in memory, at its original address. Which means that
    > while you can load lazily (maybe), you cannot drop an entry
    > from memory once it has been accessed.
    >
    > One solution that probably is possible, however, is to put the
    > map in shared memory, backed by a file, using mmap (or its
    > Windows equivalent). In theory, I think it is even possible to
    > allow loading it at an arbitrary address; in practice, the one
    > time I played this game, we loaded at a fixed address and left
    > the pointer type a T*. We also designed the data structures so
    > that they only contained PODs: char[] instead of std::string,
    > for example. Of course, this still isn't optimized for disk; if
    > your data set is significantly larger than real memory, and you
    > start accessing randomly, you're going to page fault like crazy,
    > and probably end up significantly slower than a classical data
    > base (which optimizes for disk accesses, taking into account the
    > difference in access times between real memory and disk).
    >


    --
    Ian Collins.
    Ian Collins, May 16, 2007
    #12
  13. CMOS

    CMOS Guest

    Thanks for all the suggestions. I'll be looking at something like
    Sybase while investigating the possibility of implementing a
    specialized DB.
    One other problem I'm facing in this project is having millions of
    files in the same directory; this might go up to billions
    (2000 million) as well.
    Does anyone have any experience with this type of thing?
    CMOS, May 17, 2007
    #13
  15. Re: maps to hold ultra large data sets using customer allocators to allocate disk space rather than main memory

    * CMOS:
    > Thanks for all the suggestions. I'll be looking at something like
    > Sybase while investigating the possibility of implementing a
    > specialized DB.
    > One other problem I'm facing in this project is having millions of
    > files in the same directory; this might go up to billions
    > (2000 million) as well.
    > Does anyone have any experience with this type of thing?


    Where do the files come from?

    You're leaving us guessing.

    I'd guess this is a design for storing collected measurements. Some
    sort of automated physical data acquisition. Is that right?

    By the way, you should really be asking in e.g. [comp.programming],
    since questions of design at that level are off-topic in clc++.

    Follow-ups set accordingly.

    --
    A: Because it messes up the order in which people normally read text.
    Q: Why is it such a bad thing?
    A: Top-posting.
    Q: What is the most annoying thing on usenet and in e-mail?
    Alf P. Steinbach, May 17, 2007
    #15
  16. James Kanze

    James Kanze Guest

    On May 17, 2:01 pm, CMOS <> wrote:
    > Thanks for all the suggestions. I'll be looking at something like
    > Sybase while investigating the possibility of implementing a
    > specialized DB.
    > One other problem I'm facing in this project is having millions of
    > files in the same directory; this might go up to billions
    > (2000 million) as well.
    > Does anyone have any experience with this type of thing?


    Yes, but it's very system dependent. At least on some earlier
    versions of Unix (and maybe still today---I'm not about to try
    it), access becomes very, very slow for anything over a couple
    of hundred files.

    More generally, I don't think any file system is designed with
    this kind of thing in mind. Anytime you need more than a couple
    of hundred elements in a flat structure, with rapid access, you
    should be thinking in terms of a data base.

    --
    James Kanze (Gabi Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
    James Kanze, May 17, 2007
    #16
