Is there a faster way to fetch data from stl map?


jason.cipriani

Except that the loading takes place before he starts the tight
loop, so it can have no impact on the time in the loop.  (I presume
he has profiled the code, and established that the bottleneck is
the find.)

My comment was unclear. I meant to imply that the text file format
could be modified to make it easier and more efficient to generate
optimized data structures on load. I did not mean to imply that
improving load times would have a direct effect on his particular
bottleneck.

That's what he said.  Presumably based on profiler output.

I was acknowledging what he said. Now you are just picking on me. :)

Or have a second pass after loading which establishes them.

Given that there was not enough information to determine how many
times he went through his loop per run (and therefore per data load),
I took the approach of suggesting optimizations for both loading *and*
analysis. Increasing the load time by doing a second pass on load
(which already takes 10+ seconds) is only acceptable if the extra
time is less than the time saved during the analysis. If that's the
case, or if the text file can no longer be modified for some reason,
then sure, a second pass could make sense as well.
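If a second pass does turn out to pay for itself, one common shape
for it is sketched below. This is illustrative only: Payload, Entry,
and IdLess are made-up names, and it assumes the records get copied
into a vector sorted by id once, so that each lookup in the analysis
loop is a binary search over contiguous memory (which usually beats
std::map::find on locality alone).

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Placeholder payload type; the real one depends on the data file.
    struct Payload { long value; };

    typedef std::pair<int, Payload> Entry;   // (user id, payload)

    // Ordering by id, usable both for std::sort and std::lower_bound.
    struct IdLess {
        bool operator()(const Entry& a, const Entry& b) const { return a.first < b.first; }
        bool operator()(const Entry& a, int id) const { return a.first < id; }
        bool operator()(int id, const Entry& b) const { return id < b.first; }
    };

    // The second pass itself: sort the loaded entries once by id.
    void build_index(std::vector<Entry>& entries) {
        std::sort(entries.begin(), entries.end(), IdLess());
    }

    // Lookup in the analysis loop: binary search over contiguous memory.
    const Payload* find(const std::vector<Entry>& sorted, int id) {
        std::vector<Entry>::const_iterator it =
            std::lower_bound(sorted.begin(), sorted.end(), id, IdLess());
        return (it != sorted.end() && it->first == id) ? &it->second : 0;
    }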

If the set of user id's is more or less dense (say at least half
of the entries between 0 and the largest entry allocated), then
he can get the same effect by using std::vector instead of
std::map, without any extra set-up.  Maybe his best solution is
to ensure that the user id's are dense.

Yes, that is another good idea.
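A minimal sketch of that idea, just to make it concrete (MyData is a
placeholder for whatever is actually stored per user id, and the
assumption is that the ids are non-negative and reasonably dense):

    #include <cstddef>
    #include <map>
    #include <vector>

    // Placeholder for the per-user record.
    struct MyData { long count; };

    // std::map lookup: O(log n), with a pointer chase per tree level.
    const MyData* lookup(const std::map<int, MyData>& m, int id) {
        std::map<int, MyData>::const_iterator it = m.find(id);
        return it == m.end() ? 0 : &it->second;
    }

    // std::vector lookup when the ids are dense: size the vector to
    // (largest id + 1) at load time, and each lookup is one bounds
    // check plus one index.  Missing ids need a sentinel or a flag.
    const MyData* lookup(const std::vector<MyData>& v, int id) {
        return (id >= 0 && static_cast<std::size_t>(id) < v.size())
            ? &v[id] : 0;
    }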


Jason

James Kanze

tni wrote:
[...]

    struct hash {
        size_t operator()(int a) const {
            a = (a ^ 61) ^ (a >> 16);
            a = a + (a << 3);
            a = a ^ (a >> 4);
            a = a * 0x27d4eb2d;
            a = a ^ (a >> 15);
            return a;
        }
    };

Your hash function would be more portable if you used unsigned
int, since signed int can overflow in the expressions you use.

More to the point: how are the integer values distributed?  If
they're dense, then just using an std::vector would be even
better than a hash table. If they're more or less randomly
distributed, just returning the integral value, converted to
size_t, would be a lot faster than his code, and just as
effective. (His function is really a good example of how not to
implement a hash function.)
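For illustration only (mix_hash and identity_hash are made-up names,
and the key type is assumed to be unsigned int), the two alternatives
might look like this:

    #include <cstddef>

    // The same mixing steps, but on unsigned int: unsigned arithmetic
    // wraps, so none of the shifts or the multiplication can overflow.
    struct mix_hash {
        std::size_t operator()(unsigned int a) const {
            a = (a ^ 61u) ^ (a >> 16);
            a = a + (a << 3);
            a = a ^ (a >> 4);
            a = a * 0x27d4eb2du;
            a = a ^ (a >> 15);
            return a;
        }
    };

    // If the key values are already more or less randomly distributed,
    // simply converting them to size_t is just as effective and cheaper.
    struct identity_hash {
        std::size_t operator()(unsigned int a) const {
            return static_cast<std::size_t>(a);
        }
    };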
 
