n2xssvv g02gfr12930 said:
The only way to be sure of how deque memory management works is to
provide your own allocator. I tend to use vector, which has a reserve
member function that allocates memory for a specified number of items,
thus avoiding reallocation later. Naturally this is only useful if you
know the algorithm's requirements in advance. Certainly for an
algorithm I developed it did speed things up, although reducing loop
counts by only considering relevant data was most effective (this
required the data to be sorted first). Unfortunately that is not
always applicable and is somewhat complex to implement (not good if
you're short of time).
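For anyone who hasn't used reserve(), the idea in a nutshell (the
element type and count here are made up for illustration):

#include <vector>

int main()
{
   std::vector<int> v;
   v.reserve(100000);               // one up-front allocation
   for (int i = 0; i < 100000; ++i)
      v.push_back(i);               // no reallocation in this loop
   return 0;
}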
Perhaps the following idea is a bit far removed from the OP's topic,
but the OP (and others) may find it useful:
If one wants to "consider only relevant data", as "n2xssvv g02gfr12930"
(how do you pronounce that, anyway?) says, one way is to use a
std::multimap<> instead of a std::deque<>.
Consider my quandary when writing a duplicate-file remover. Take a
directory with 1000 files in it. Doing a byte-by-byte comparison on
all possible file pairs involves "1000 choose 2" comparisons, which
is 1000*999/2 = 499,500. Even at 100 file compares per second, that's
nearly an hour and a half of solid grinding, and the work grows
quadratically with the number of files.
Obviously not workable! (Well, obvious NOW. At the time, I was
wondering why the program was sitting there for hours hogging 100%
CPU time. When I realized the enormity of my blunder, I was suitably
embarrassed.)
So, instead of using a std::deque<>, std::vector<>, or std::list<>,
I used a std::multimap<int, FileRecord> (where FileRecord is a struct
with file info like name, size, date, etc.). All files are then
automatically sorted into "equivalence groups" keyed by size. Since
only files of the same size can be duplicates, this cut the execution
time from over an hour down to about 2 seconds, because the app now
only compares maybe 2 or 3 pairs of files instead of 499,500.
A significant improvement in efficiency.
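In outline, the approach looks something like this (a simplified
sketch, not my actual code; FileRecord's fields and the function
names are illustrative):

#include <map>
#include <string>
#include <utility>

// Illustrative record; the real struct also holds date, etc.
struct FileRecord
{
   std::string name;
   int         size;
};

typedef std::multimap<int, FileRecord> FileMap;

// Inserting keyed by size puts same-size files into adjacent
// "equivalence groups" automatically:
void AddFile(FileMap & files, FileRecord const & rec)
{
   files.insert(std::make_pair(rec.size, rec));
}

// Only pairs within one group need the expensive byte-by-byte
// comparison:
void CompareGroup(FileMap const & files, int size)
{
   std::pair<FileMap::const_iterator, FileMap::const_iterator>
      group = files.equal_range(size);
   for (FileMap::const_iterator a = group.first; a != group.second; ++a)
   {
      FileMap::const_iterator b = a;
      for (++b; b != group.second; ++b)
      {
         // A byte-by-byte compare of a->second and b->second
         // would go here.
      }
   }
}

Since equal_range() hands back each equivalence group directly, the
quadratic pairwise comparison only ever runs within a group, which in
practice is tiny.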
--
Cheers,
Robbie Hatley
Tustin, CA, USA
email: lonewolfintj at pacbell dot net
web: home dot pacbell dot net slant earnur slant