How much memory does an empty STL deque occupy?


R.Z.

I was wondering whether it pays off in terms of memory use to maintain lots
of empty deques (it would be convenient for my algorithms, but memory use is
more important). And does the size of a deque depend on the size of its
elements even when the deque is empty? Is there any way to check how
much memory my deque occupies? I've read that the sizeof operator cannot be
used with dynamically allocated arrays, so I figured it wouldn't give me
correct results with a deque (because there is some dynamic memory allocation
going on there, isn't there?)

best regards
remi
 

R.Z.

To further clarify what I mean:
I have a deque of deques, and I am wondering whether to keep the number of
deques in the outer deque constant (because it would make my code simpler) or
vary it by removing any empty member deques (to conserve memory).
 

David White

R.Z. said:
I was wondering whether it pays off in terms of memory use to
maintain lots of empty deques (it would be convenient for my
algorithms, but memory use is more important).

That can't be answered without a lot more information.

And does the size of a deque depend on the size of its elements even
when the deque is empty? Is there any way to check how much memory my
deque occupies? I've read that the sizeof operator cannot be used with
dynamically allocated arrays, so I figured it wouldn't give me correct
results with a deque (because there is some dynamic memory allocation
going on there, isn't there?)

Using sizeof will tell you the bare minimum number of bytes that a deque
occupies. This value will not change regardless of the number of items in the
deque. It is also unlikely to be affected by the item size, though I think
you'd have to test that on the compiler to be used.
As an example, in VS .NET 2003 an empty deque always appears to occupy 20
bytes, not counting its dynamic allocations. As for the dynamic
allocations, you would hope that it wouldn't do any until you added the
first item to the deque, but I think you'd have to examine the deque's data
members or its code to be sure. AFAIK, a given implementation of the
standard library could implement such details of the deque differently from
another implementation.
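
If you want to check the static footprint on the compiler you actually use,
a quick test is simply to print sizeof for deques of differently sized
element types. A minimal sketch (the struct Big is just an arbitrary large
element type, and the exact numbers are implementation-specific):

#include <deque>
#include <iostream>

struct Big { char data[256]; };  // arbitrary large element type

int main()
{
    // sizeof reports only the fixed, non-dynamic part of the object.
    std::cout << "sizeof(std::deque<char>) = "
              << sizeof(std::deque<char>) << '\n';
    std::cout << "sizeof(std::deque<Big>)  = "
              << sizeof(std::deque<Big>) << '\n';
    // On most implementations the two values are equal, but the standard
    // does not require that.
}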

DW
 

R.Z.

To further clarify what I mean:
I have a deque of deques (some of which are, or may become, empty during
program execution). I wonder whether I should keep only non-empty deques (to
conserve memory) or maintain all of them (to simplify coding).
 

John Harrison

R.Z. said:
I was wondering whether it pays off in terms of memory use to maintain lots
of empty deques (it would be convenient for my algorithms, but memory use is
more important). And does the size of a deque depend on the size of its
elements even when the deque is empty?

It might. It might also depend on how the deque has been used up to that
point.

Is there any way to check how much memory my deque occupies?

No, other than examining the code used by your implementation and working
it out for yourself.

I've read that the sizeof operator cannot be used with dynamically
allocated arrays, so I figured it wouldn't give me correct results with a
deque (because there is some dynamic memory allocation going on there,
isn't there?)

Right.

You might consider the swap trick, assuming you have a deque of type T
called mydeque.

deque<T>().swap(mydeque);

This creates a temporary empty deque and swaps it with mydeque. The
result is that mydeque becomes exactly like a newly constructed empty
deque. There are still no guarantees, but I would say that after doing
this it is more likely that mydeque occupies the minimum possible
amount of memory.
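
Applied to the original poster's deque-of-deques situation, a rough sketch
(the element type int and the function name are just placeholders) would be
to run the trick over every inner deque that has become empty:

#include <deque>

// Shrink every empty inner deque back to (hopefully) the footprint of a
// freshly constructed deque, using the swap trick.
void shrink_empty_inner_deques(std::deque<std::deque<int> >& outer)
{
    for (std::deque<std::deque<int> >::iterator it = outer.begin();
         it != outer.end(); ++it)
    {
        if (it->empty())
            std::deque<int>().swap(*it);  // swap with a temporary empty deque
    }
}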

Josuttis, in 'The C++ Standard Library', advises using a deque when 'It
is important that the container frees memory when it is no longer used
(however, the standard does not guarantee that this happens)'. I would
read this as implying that typically the size of an empty deque would
not depend on the size of its elements, but of course there are no
guarantees.

john
 

Mark P

R.Z. said:
I was wondering whether it pays off in terms of memory use to maintain lots
of empty deques (it would be convenient for my algorithms, but memory use is
more important). And does the size of a deque depend on the size of its
elements even when the deque is empty? Is there any way to check how
much memory my deque occupies? I've read that the sizeof operator cannot be
used with dynamically allocated arrays, so I figured it wouldn't give me
correct results with a deque (because there is some dynamic memory allocation
going on there, isn't there?)

best regards
remi

This is all implementation-dependent, and I've seen different library
implementations do things in different ways. What you can do is use
sizeof to find the size of the non-dynamically allocated portion of the
container. Then you can use a simple allocator that reports all
memory allocations to observe the dynamic allocations. Here's one such
allocator:

http://www.josuttis.com/libbook/memory/myalloc.hpp.html
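
Along the same lines, here is a minimal sketch of such a reporting allocator
(the name ReportingAllocator is mine, and this stripped-down form assumes a
C++11-or-later standard library, which fills in the remaining allocator
pieces through std::allocator_traits):

#include <cstddef>
#include <deque>
#include <iostream>
#include <new>

// Allocator that prints every allocation and deallocation it performs.
template <typename T>
struct ReportingAllocator
{
    typedef T value_type;

    ReportingAllocator() {}
    template <typename U>
    ReportingAllocator(const ReportingAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
        std::cout << "allocate " << n << " object(s) of "
                  << sizeof(T) << " bytes each\n";
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }

    void deallocate(T* p, std::size_t n)
    {
        std::cout << "deallocate " << n << " object(s)\n";
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const ReportingAllocator<T>&, const ReportingAllocator<U>&)
{ return true; }

template <typename T, typename U>
bool operator!=(const ReportingAllocator<T>&, const ReportingAllocator<U>&)
{ return false; }

int main()
{
    std::deque<int, ReportingAllocator<int> > d;  // does construction allocate?
    std::cout << "--- pushing the first element ---\n";
    d.push_back(42);                              // and what does this allocate?
}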

In my experience, some containers will allocate space for one element at
a time, while others will never allocate space for fewer than a certain
number of elements at a time (32 is a common value). From what I've
seen, STLport does one at a time and so does SGI; RogueWave does 32, but
this is controlled by a preprocessor definition and so can be altered. I've
never seen an implementation that dynamically allocates memory while the
container is still empty. (But of course one could.)

Mark
 

n2xssvv g02gfr12930

R.Z. said:
I was wondering whether it pays off in terms of memory use to maintain lots
of empty deques (it would be convenient for my algorithms, but memory use is
more important). And does the size of a deque depend on the size of its
elements even when the deque is empty? Is there any way to check how
much memory my deque occupies? I've read that the sizeof operator cannot be
used with dynamically allocated arrays, so I figured it wouldn't give me
correct results with a deque (because there is some dynamic memory allocation
going on there, isn't there?)

best regards
remi

The only way to be sure of how deque memory management works is to
provide your own allocator. I tend to use vector, which has a reserve
member function that allocates memory for a specified number of items,
thus avoiding reallocation later. Naturally this is only useful if you
know the algorithm's requirements in advance. Certainly for an algorithm
I developed it did speed things up, although reducing loop counts by only
considering relevant data was most effective (this required the data
to be sorted first). Unfortunately that is not always applicable and
is somewhat complex to implement (not good if you're short of time).
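
For example (the count of 1000 is just illustrative, and note that deque
itself has no reserve member; this applies to vector):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    v.reserve(1000);  // one allocation up front
    std::cout << "capacity after reserve: " << v.capacity() << '\n';

    for (int i = 0; i < 1000; ++i)
        v.push_back(i);  // no reallocation while size() <= capacity()

    std::cout << "capacity after filling: " << v.capacity() << '\n';
}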

Hope you find this useful

JFJB
 

Robbie Hatley

n2xssvv g02gfr12930 said:
The only way to be sure of how deque memory management works is to
provide your own allocator. I tend to use vector, which has a reserve
member function that allocates memory for a specified number of items,
thus avoiding reallocation later. Naturally this is only useful if you
know the algorithm's requirements in advance. Certainly for an algorithm
I developed it did speed things up, although reducing loop counts by only
considering relevant data was most effective (this required the data
to be sorted first). Unfortunately that is not always applicable and
is somewhat complex to implement (not good if you're short of time).

Perhaps the following idea is a bit far removed from the OP's topic,
but the OP (and others) may find it useful:

If one wants to "consider only relevant data", as "n2xssvv g02gfr12930"
(how do you pronounce that, anyway?) says, one way is to use a
std::multimap<> instead of a std::deque<>.

Consider my quandary when writing a duplicate-file remover. Take
a directory with 1000 files in it. Doing a byte-by-byte comparison
on all possible file pairs involves 1000! comparisons (where "!" means
factorial). This is about 4.0 x 10^2567. Even at 100 file compares
per second, this would take about 1.3 x 10^2558 years. (By comparison, the
entire duration of the universe, so far, is only about 10^10 years.)
Obviously not workable! (Well, obvious NOW. At the time, I was
wondering why the program was sitting there for hours hogging 100%
CPU time. When I realized the transuniversal enormity of my blunder,
I was suitably embarrassed.)

So, instead of using a std::deque<>, std::vector<>, or std::list<>,
I used a std::multimap<int, FileRecord> (where FileRecord is a struct
with file info like name, size, date, etc.). All files are then
automatically sorted into "equivalence groups" keyed by size. Since
only files the same size can be duplicates, this cuts the time of
execution from googol^25 years down to about 2 seconds, because the
app now only compares maybe 2 or 3 pairs of files instead of 1000!.
A significant improvement in efficiency. :)
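
In outline, the grouping step looks something like this (FileRecord here is
a cut-down stand-in for the real struct, and the other names are mine):

#include <map>
#include <string>
#include <utility>

// Cut-down stand-in for the real FileRecord (name, size, date, etc.).
struct FileRecord
{
    std::string name;
    unsigned long size;
};

// All files of equal size land in the same equivalence group automatically,
// and only files within one group can possibly be duplicates of each other.
typedef std::multimap<unsigned long, FileRecord> FilesBySize;

void add_file(FilesBySize& files, const FileRecord& rec)
{
    files.insert(std::make_pair(rec.size, rec));
}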

--
Cheers,
Robbie Hatley
Tustin, CA, USA
email: lonewolfintj at pacbell dot net
web: home dot pacbell dot net slant earnur slant
 

Mark P

Robbie said:
Consider my quandry when writing a duplicate-file remover. Consider
a directory with 1000 files in it. Doing a byte-by-byte comparison
on all possible file pairs involves 1000! comparisons (where "!" means
factorial).

Probably I'm misunderstanding what you did, but how do you get 1000!
pairs from 1000 files? There are 1000C2 = 1000*999/2 = 499,500 ways to
choose a pair from 1000 items. This is vastly less than 1000!.

Mark
 

Robbie Hatley

Mark P said:
Probably I'm misunderstanding what you did, but how do you get 1000!
pairs from 1000 files? There are 1000C2 = 1000*999/2 = 499,500 ways to
choose a pair from 1000 items. This is vastly less than 1000!.

My mistake. For each member of the sequence, I'm comparing
it to all files to its right, so the total number of compares is
SUM(999, 998, ..., 3, 2, 1), not PRODUCT(999, 998, ..., 3, 2, 1).
I don't know why my mind came up with the latter. 'Twas late
when I wrote the post, and my mind must have been fuzzier than
I thought.

Still, though, comparing files that way, just iterating through
a list or deque of 1000 FileRecord objects with nested for loops,
means 999*1000/2 = 499,500 separate file comparisons. That takes a long
time, even if you don't actually do byte-compares on files with non-equal
sizes, because you still have to stop and compare the sizes of each
possible pair of file records.

Whereas with std::multimap, typically you get 0 or 1 or 2 file
comparisons in a directory with 1000 files. Takes a second or so.
Huge difference. That's the advantage of "considering only the
relevant data" as one of the other posters mentioned. multimap
makes that easy by allowing one to group things into "equivalence
groups". That way you can visit just those keys that have more
than one value, and iterate through those equivalence groups using
"lower_bound" and "upper_bound".

Also, it further speeds things up if you make a list L of "relevant"
keys while loading a multimap M. Then iterate through L with
iterator i, looking only at equivalence groups in M keyed by (*i).
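
Roughly, the group-by-group traversal described above (without the extra
list-of-keys optimization) could look like the following sketch. The same
cut-down FileRecord as in the earlier sketch is used; same_contents stands
in for the real byte-by-byte comparison, and report_duplicates is just an
illustrative name:

#include <iterator>
#include <map>
#include <string>

struct FileRecord
{
    std::string name;
    unsigned long size;
};

typedef std::multimap<unsigned long, FileRecord> FilesBySize;

// Placeholder for the real byte-by-byte comparison of two files.
bool same_contents(const FileRecord&, const FileRecord&) { return false; }

void report_duplicates(const FilesBySize& files)
{
    FilesBySize::const_iterator group = files.begin();
    while (group != files.end())
    {
        // [lower, upper) is one equivalence group: all files of one size.
        FilesBySize::const_iterator lower = files.lower_bound(group->first);
        FilesBySize::const_iterator upper = files.upper_bound(group->first);

        if (std::distance(lower, upper) > 1)  // singletons need no comparison
        {
            for (FilesBySize::const_iterator a = lower; a != upper; ++a)
            {
                FilesBySize::const_iterator b = a;
                for (++b; b != upper; ++b)
                {
                    if (same_contents(a->second, b->second))
                    {
                        // a->second and b->second are duplicates; handle them
                    }
                }
            }
        }

        group = upper;  // jump straight to the next size key
    }
}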

--
Cheers,
Robbie Hatley
Tustin, CA, USA
email: lonewolfintj at pacbell dot net
web: home dot pacbell dot net slant earnur slant
 
