Maps to hold ultra-large data sets using custom allocators to allocate disk space rather than main memory

CMOS

One of the projects I'm working on currently requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to have all this data in memory, I'm planning to implement these containers to hold data both in memory and on disk at the same time. I'm not sure this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

thank you
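(For reference, the "custom allocator" hook in question is roughly the interface below. This is only a minimal sketch: the DiskBackedAllocator name is invented, and it simply forwards to malloc, marking the spot where a file-backed scheme would have to plug in.)

#include <cstddef>
#include <cstdlib>
#include <map>
#include <new>
#include <string>

template <typename T>
struct DiskBackedAllocator {
    using value_type = T;

    DiskBackedAllocator() = default;
    template <typename U>
    DiskBackedAllocator(const DiskBackedAllocator<U>&) {}

    T* allocate(std::size_t n) {
        // A real disk-backed scheme would carve space out of a
        // memory-mapped file here instead of the heap.
        void* p = std::malloc(n * sizeof(T));
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const DiskBackedAllocator<T>&, const DiskBackedAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const DiskBackedAllocator<T>&, const DiskBackedAllocator<U>&) { return false; }

int main() {
    // The allocator plugs into std::map like this; the container neither
    // knows nor cares where the memory comes from. The catch (discussed
    // below in the thread) is that allocate() must return ordinary pointers
    // that stay valid, so the allocator alone cannot page entries back out.
    std::map<int, std::string, std::less<int>,
             DiskBackedAllocator<std::pair<const int, std::string>>> m;
    m[42] = "example";
}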
 
Alf P. Steinbach

* CMOS:
One of the projects I'm working on currently requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to have all this data in memory, I'm planning to implement these containers to hold data both in memory and on disk at the same time. I'm not sure this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

A few GBytes of data isn't that much, really, if you have the hardware
to match. However, from your comment about "customer (sic) allocators",
and simply from the fact that you're seeking advice here, I'm reasonably
sure that this is not a million-dollar budget project, but rather a
student project, and that the requirement of billions of entries stems
from bad design, and is not an inherent requirement of the problem
you're trying to solve. So do tell about the problem, not how you're
envisioning solving it; perhaps we can suggest better ways.
 
Ian Collins

CMOS said:
One of the projects I'm working on currently requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to have all this data in memory, I'm planning to implement these containers to hold data both in memory and on disk at the same time. I'm not sure this can be achieved using customer allocators, and I'm wondering if there are any such implementations.
The short answer is yes, but are you sure you want to?
 
CMOS

Noted: custom allocators; sorry for the typo.

The problem is to index 10 billion records of a certain type on a given field (the field type might be a number, string, date, etc.) and to query the results quickly.

Thanks
 
Ian Collins

CMOS said:
Noted: custom allocators; sorry for the typo.

The problem is to index 10 billion records of a certain type on a given field (the field type might be a number, string, date, etc.) and to query the results quickly.
Why not just use a database, which will have been optimised for this task?
 
CMOS

A generic DB's performance will not be enough, and I do not need it to support data modifications, transactions, etc., which would slow down operation. The only requirement is to insert data and to query and delete records using keys. No need for an SQL interface either.
 
Gianni Mariani

One of the projects I'm working on currently requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to have all this data in memory, I'm planning to implement these containers to hold data both in memory and on disk at the same time. I'm not sure this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

thank you

http://www.sqlite.org/
http://www.postgresql.org/

Either one should do the job.
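(A rough sketch of what the database route looks like for pure insert/lookup by key, assuming SQLite's C API; the table and column names are made up, and you would link with -lsqlite3.)

#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("index.db", &db) != SQLITE_OK) return 1;

    // One indexed key column; the payload could be the record itself
    // or just an offset into a separate data file.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS records("
        "  key INTEGER PRIMARY KEY, payload BLOB)",
        nullptr, nullptr, nullptr);

    // Insert through a prepared statement (reused for bulk loads in practice).
    sqlite3_stmt* ins = nullptr;
    sqlite3_prepare_v2(db, "INSERT OR REPLACE INTO records VALUES(?, ?)",
                       -1, &ins, nullptr);
    const char payload[] = "example record";
    sqlite3_bind_int64(ins, 1, 42);
    sqlite3_bind_blob(ins, 2, payload, sizeof payload, SQLITE_STATIC);
    sqlite3_step(ins);
    sqlite3_finalize(ins);

    // Point query on the key.
    sqlite3_stmt* sel = nullptr;
    sqlite3_prepare_v2(db, "SELECT payload FROM records WHERE key = ?",
                       -1, &sel, nullptr);
    sqlite3_bind_int64(sel, 1, 42);
    if (sqlite3_step(sel) == SQLITE_ROW)
        std::printf("found %d bytes\n", sqlite3_column_bytes(sel, 0));
    sqlite3_finalize(sel);
    sqlite3_close(db);
}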
 
Ian Collins

CMOS wrote:

Please quote enough context for your reply to make sense.
A generic DB's performance will not be enough, and I do not need it to support data modifications, transactions, etc., which would slow down operation. The only requirement is to insert data and to query and delete records using keys. No need for an SQL interface either.
Well, I'd still use one, unless I had a real performance issue. Even then, my first action would be to upgrade the hardware!
 
meagar

One of the projects I'm working on currently requires ultra-large maps, lists, vectors, etc. (basically STL containers). Sizes might grow up to 1000 million entries. Since it is impossible to have all this data in memory, I'm planning to implement these containers to hold data both in memory and on disk at the same time. I'm not sure this can be achieved using customer allocators, and I'm wondering if there are any such implementations.

thank you

I don't think you appreciate how slow it will be to search a billion records without loading the bulk of them into RAM. You're going to be swapping the entire file in and out of RAM anyway, since the OS will be buffering the file in memory.

You might consider storing the records in a file and creating a separate index map which contains just the unique identifying fields from each object, mapped to a byte offset locating the record in the file.
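(A minimal sketch of that file-plus-index idea; the FixedRecord layout and the RecordStore name are invented for illustration. Note that with billions of keys even this in-memory index runs to tens of gigabytes.)

#include <cstdint>
#include <fstream>
#include <map>
#include <string>

struct FixedRecord {              // fixed-size on-disk record (illustrative)
    std::uint64_t key;
    char payload[248];
};

class RecordStore {
    std::fstream file_;
    std::map<std::uint64_t, std::streamoff> index_;  // key -> offset in file
public:
    explicit RecordStore(const std::string& path)
        // trunc: start with an empty data file for this sketch
        : file_(path, std::ios::in | std::ios::out |
                      std::ios::binary | std::ios::trunc) {}

    void insert(const FixedRecord& r) {
        file_.seekp(0, std::ios::end);
        std::streamoff off = file_.tellp();
        file_.write(reinterpret_cast<const char*>(&r), sizeof r);
        index_[r.key] = off;                 // only the index lives in RAM
    }

    bool find(std::uint64_t key, FixedRecord& out) {
        auto it = index_.find(key);
        if (it == index_.end()) return false;
        file_.seekg(it->second);
        file_.read(reinterpret_cast<char*>(&out), sizeof out);
        return static_cast<bool>(file_);
    }
};

int main() {
    RecordStore store("records.dat");
    FixedRecord r{};
    r.key = 42;
    store.insert(r);
    FixedRecord found{};
    return store.find(42, found) ? 0 : 1;
}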
 
James Kanze

A generic DB's performance will not be enough, and I do not need it to support data modifications, transactions, etc., which would slow down operation. The only requirement is to insert data and to query and delete records using keys. No need for an SQL interface either.

Using std::map with allocators for data on disk will *not*
result in better performance than a commercial data base.
Commercial data bases have invested hundreds of man years in
optimizing their accesses. At least one commercial vendor,
Sybase, has a variant of their data base optimized for exactly
this sort of application: updates only in batch, no
transactions, but very fast read access for very, very large
data sets. And all commercial data bases know how to maintain
indexes for multiple fields, in a fashion optimized for disk (B+
trees or hash tables, rather than the classical binary tree
typically used by std::map). It may be possible to do better
than the commercial data bases for a specialized application,
but to do so will require very special custom code (and not just
std::map with a special allocator), and probably something upwards of ten man-years of development time.

In answer to your question, however, I have my doubts as to
whether it is even possible. The accessors to std::map return
references, and these are required by the standard to be real
references. Which means that user code will have references into your in-memory data which you cannot track, which in turn means that you cannot know when you can release the in-memory data: any data, once accessed, must be maintained in memory for all time.
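(The pinning problem in miniature: the standard guarantees that these references remain valid until the element is erased, so a "paging" allocator could never know when it is safe to evict a node.)

#include <map>
#include <string>

int main() {
    std::map<int, std::string> m;
    m[1] = "first";
    std::string& ref = m[1];   // a real reference into the map's node
    m[2] = "second";           // inserting must not invalidate ref
    ref += " (still valid)";   // legal: the node stays pinned at its address
}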
 
James Kanze

The short answer is yes, but are you sure you want to?

Are you sure? It's probably possible to do something so that
parts of the map are loaded lazily, but functions like
map<>::operator[] and map<>::iterator::operator* return
references that are required to be real references, and are
guaranteed to be valid as long as the corresponding entry has
not been erased. I think that that more or less pins any
accessed entry in memory, at its original address. Which means
that while you can load lazily (maybe), you cannot drop an entry
from memory once it has been accessed.

One solution that probably is possible, however, is to put the
map in shared memory, backed by a file, using mmap (or its
Windows equivalent). In theory, I think it is possible to even
allow loading it at an arbitrary address; in practice, the one
time I played this game, we loaded at a fixed address, and left
the pointer type a T*. We also designed the data structures so
that they only contained PODs: char[] instead of std::string,
for example. Of course, this still isn't optimized for disk; if
your data set is significantly larger than real memory, and you
start accessing randomly, you're going to page fault like crazy,
and probably end up significantly slower than a classical data
base (which optimizes for disk accesses, taking into account the
difference in access times between real memory and disk).
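(A sketch of that mmap approach, POSIX only; the Record layout is illustrative, and this example lets the OS pick the mapping address rather than using a fixed one.)

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

struct Record {                   // PODs only: char[] instead of std::string
    std::uint64_t key;
    char          payload[56];
};

int main() {
    const std::size_t count = 1000;               // illustrative size
    const std::size_t bytes = count * sizeof(Record);

    int fd = open("records.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) return 1;
    if (ftruncate(fd, static_cast<off_t>(bytes)) != 0) return 1;

    void* base = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) return 1;

    // The mapped region is just an array of records; writes go to the file.
    Record* table = static_cast<Record*>(base);
    table[0].key = 42;
    std::strcpy(table[0].payload, "written straight through to disk");

    munmap(base, bytes);
    close(fd);
}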
 
Ian Collins

James said:
Are you sure?

I was thinking of the solution you propose in your second paragraph, and the drawbacks you mention were my reason for suggesting a database.
It's probably possible to do something so that
parts of the map are loaded lazily, but functions like
map<>::operator[] and map<>::iterator::operator* return
references that are required to be real references, and are
guaranteed to be valid as long as the corresponding entry has
not been erased. I think that that more or less pins any
accessed entry in memory, at its original address. Which means
that while you can load lazily (maybe), you cannot drop an entry
from memory once it has been accessed.

One solution that probably is possible, however, is to put the
map in shared memory, backed by a file, using mmap (or its
Windows equivalent). In theory, I think it is possible to even
allow loading it at an arbitrary address; in practice, the one
time I played this game, we loaded at a fixed address, and left
the pointer type a T*. We also designed the data structures so
that they only contained PODs: char[] instead of std::string,
for example. Of course, this still isn't optimized for disk; if
your data set is significantly larger than real memory, and you
start accessing randomly, you're going to page fault like crazy,
and probably end up significantly slower than a classical data
base (which optimizes for disk accesses, taking into account the
difference in access times between real memory and disk).
 
CMOS

Thanks for all the suggestions. I'll be looking at something like Sybase while investigating the possibility of implementing a specialized DB.
One other problem I'm facing in this project is having millions of files in the same directory; this might go up to billions (2000 million) as well.
Does anyone have any experience with this type of thing?
 
Alf P. Steinbach

* CMOS:
Thanks for all the suggestions. I'll be looking at something like Sybase while investigating the possibility of implementing a specialized DB.
One other problem I'm facing in this project is having millions of files in the same directory; this might go up to billions (2000 million) as well.
Does anyone have any experience with this type of thing?

Where do the files come from?

You're leaving us guessing.

I'd guess this is a design for storing collected measurements. Some
sort of automated physical data acquisition. Is that right?

By the way, you should really be asking in e.g. [comp.programming],
since questions of design at that level are off-topic in clc++.

Follow-ups set accordingly.
 
James Kanze

Thanks for all the suggestions. I'll be looking at something like Sybase while investigating the possibility of implementing a specialized DB.
One other problem I'm facing in this project is having millions of files in the same directory; this might go up to billions (2000 million) as well.
Does anyone have any experience with this type of thing?

Yes, but it's very system dependent. At least on some earlier versions of Unix (and maybe still today; I'm not about to try it), access becomes very, very slow for anything over a couple of hundred files.

More generally, I don't think any file system is designed with
this kind of thing in mind. Anytime you need more than a couple
of hundred elements in a flat structure, with rapid access, you
should be thinking in terms of a data base.
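(Not suggested in the thread, but the usual workaround when a flat directory has to hold that many files is to fan them out over hashed subdirectories; a purely illustrative sketch of the path scheme:)

#include <cstdint>
#include <cstdio>
#include <string>

// Map a record key to a two-level directory path such as "data/1a/2b/1234567",
// so no single directory ever holds more than a few thousand entries.
std::string path_for(std::uint64_t key) {
    std::uint64_t h = key * 0x9E3779B97F4A7C15ull;   // cheap multiplicative mix
    unsigned level1 = static_cast<unsigned>(h & 0xFF);
    unsigned level2 = static_cast<unsigned>((h >> 8) & 0xFF);
    char buf[64];
    std::snprintf(buf, sizeof buf, "data/%02x/%02x/%llu",
                  level1, level2,
                  static_cast<unsigned long long>(key));
    return buf;
}

int main() {
    std::printf("%s\n", path_for(1234567).c_str());
}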
 
