lots of little mallocs or one big one? (somewhat naive)

Dave Stallard

So, I'm looking at some code that does 770K malloc calls in a row, of
varying size, paired with corresponding freads from a binary file to
initialize. In total, about 58 MB of data is allocated and initialized.
Unsurprisingly, this takes a good bit of time.

Question: Wouldn't it be a lot more efficient to allocate this 58 MB in
one big gulp, then fread in the 58 MB of data from the file to
initialize it, rather than a zillion small calls?

Also, can it ever happen that a malloc of N bytes would fail, yet an
equivalent number of small mallocs summing to N would succeed? What
about the fread?

thanks,
Dave
 

John Harrison

Dave said:
So, I'm looking at some code that does 770K malloc calls in a row, of
varying size, paired with corresponding freads from a binary file to
initialize. In total, about 58 MB of data is allocated and initialized.
Unsurprisingly, this takes a good bit of time.

Question: Wouldn't it be a lot more efficient to allocate this 58 MB in
one big gulp, then fread in the 58 MB of data from the file to
initialize it, rather than a zillion small calls?

My instinct says yes, but the only sure way would be to try it and see.
C++ doesn't impose any constraints on the efficiency of malloc so
different implementations are optimized for different situations.

I've read that malloc is typically optimized for C, which tends to make
fewer and larger allocations than C++, where you do often need to make a
lot of small allocations.

An alternative to one big allocation would be to use a small object
allocator, i.e. an allocator optimized for a lot of small allocations.
Try Google.
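For what it's worth, the simplest small-object strategy is a bump-pointer arena: grab one big block up front, then carve it into sub-blocks. This is just an illustrative sketch (the `arena_*` names are mine), not a production allocator:

```c
#include <stdlib.h>

/* Bump-pointer arena: one big malloc up front, then hand out
   sub-blocks by advancing an offset. Individual blocks are never
   freed; the whole arena is released in one call. */
typedef struct {
    unsigned char *base;
    size_t size;
    size_t used;
} arena_t;

int arena_init(arena_t *a, size_t size) {
    a->base = (unsigned char *) malloc(size);
    a->size = size;
    a->used = 0;
    return a->base != NULL;
}

void *arena_alloc(arena_t *a, size_t n) {
    /* Round up so every block starts on an 8-byte boundary,
       which keeps ints, doubles etc. safely aligned. */
    const size_t align = 8;
    n = (n + align - 1) & ~(align - 1);
    if (a->used + n > a->size)
        return NULL;             /* arena exhausted */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_free(arena_t *a) {
    free(a->base);
    a->base = NULL;
}
```

With this, 770K `arena_alloc` calls are just pointer bumps, so the whole load costs one real malloc plus one free, and the alignment rounding keeps the sub-blocks safe to use for built-in types.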

Also, can it ever happen that a malloc of N bytes would fail, yet an
equivalent number of small mallocs summing to N would succeed?

That is possible: a malloc of N bytes must return a contiguous block of
N bytes. It could be that N bytes are available for allocation, but not
in one contiguous block.

What about the fread?

I think there is likely to be little difference there. The O/S will
likely buffer all I/O itself, so there should be little difference in
doing lots of calls to fread reading a few bytes vs. one call to fread
reading a large number of bytes. But as usual, the only way to be sure is
to try it and time it.
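For trying it, a helper that reads the same file in chunks of any size makes the comparison easy. This is a sketch (names are mine); call it once with a tiny chunk and once with chunk equal to the file size, wrap the calls in clock(), and compare:

```c
#include <stdio.h>

/* Read up to dstsize bytes from the file at `path` into `dst`,
   `chunk` bytes per fread call. Returns the total bytes read.
   chunk == dstsize gives the one-big-gulp style; a small chunk
   gives the lots-of-little-reads style. */
size_t read_in_chunks(const char *path, unsigned char *dst,
                      size_t dstsize, size_t chunk) {
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t total = 0;
    while (total < dstsize) {
        size_t want = chunk < dstsize - total ? chunk : dstsize - total;
        size_t got = fread(dst + total, 1, want, f);
        if (got == 0)
            break;               /* EOF or read error */
        total += got;
    }
    fclose(f);
    return total;
}
```

Both call patterns end up reading the same bytes; any timing difference is the per-call overhead of fread itself, since the OS and stdio buffering are identical in both cases.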

john
 

Jim Langston

Dave Stallard said:
So, I'm looking at some code that does 770K malloc calls in a row, of
varying size, paired with corresponding freads from a binary file to
initialize. In total, about 58 MB of data is allocated and initialized.
Unsurprisingly, this takes a good bit of time.

Question: Wouldn't it be a lot more efficient to allocate this 58 MB in
one big gulp, then fread in the 58 MB of data from the file to initialize
it, rather than a zillion small calls?

Maybe. It would be faster to allocate 58MB in one call than in 770k calls
for smaller chunks, quite possibly by a large factor. Also, with the
smaller calls you will need 770k places to hold the pointers to the
allocated memory, versus just one pointer with the big call.

However, alignment can be an issue. Consider you want to read, say, a
character and an integer. On your system (for example) char has sizeof 1,
integer has sizeof 4. So you may allocate 5 bytes. Then read the character
then the integer. However, misaligned integers on some systems are slower
(mine) than an aligned one, and on some systems will cause a program crash.
Typically, a 4-byte integer needs to be aligned on a 4-byte boundary, that
is, an address evenly divisible by 4. On some systems the general rule
*may* apply that any given built-in type needs to be aligned on its size
(I doubt this is true for all/most systems).

So, code like this may get you into trouble (untested code, and I don't
use fread, so I'm kinda guessing at its syntax; note the cast on malloc,
which C++ requires):

unsigned char* buffer =
    (unsigned char*) malloc( sizeof( char ) + sizeof( int ) );
unsigned char* bufferpos = buffer;
fread( bufferpos, sizeof( char ), 1, myfile );
bufferpos += sizeof( char );
fread( bufferpos, sizeof( int ), 1, myfile );

Well, now our buffer contains (supposedly) a character in the first byte, an
integer in the next 4 bytes. Now you get the fun of pulling out the integer
without crashing your system. Which generally means you're going to have to
allocate an integer and move the bytes over one by one with some method to
be safe. If you had allocated an integer in the first place and read into
it, you wouldn't have to do this extra step.
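That byte-moving step is exactly what memcpy is for: copying into a properly aligned local is the portable way to pull a value out of an arbitrary buffer offset. A minimal sketch (the function name is mine):

```c
#include <string.h>

/* Extract an int from an arbitrary (possibly misaligned) offset in
   a byte buffer. memcpy into an aligned local avoids the
   misaligned-load trap; compilers typically turn this into a single
   load on platforms where that is legal anyway. */
int read_int_at(const unsigned char *buf, size_t offset) {
    int value;
    memcpy(&value, buf + offset, sizeof value);
    return value;
}
```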

So, in the long run, yes, the single malloc would be faster, but you're
going to go through extra steps to move your data into variables you can
use, which you are going to have to malloc anyway.
Also, can it ever happen that a malloc of N bytes would fail, yet an
equivalent number of small mallocs summing to N would succeed?

Depending on how efficient the library's malloc is, yes, with padding and
everything else being an issue.
What about the fread?

What about it?
 

Jacek Dziedzic

Dave said:
So, I'm looking at some code that does 770K malloc calls in a row, of
varying size, paired with corresponding freads from a binary file to
initialize. In total, about 58 MB of data is allocated and initialized.
Unsurprisingly, this takes a good bit of time.

Question: Wouldn't it be a lot more efficient to allocate this 58 MB in
one big gulp, then fread in the 58 MB of data from the file to
initialize it, rather than a zillion small calls?

Measure. Try timing the same thing, but with memory allocation
turned off, just fread() the data and ignore the results. Maybe
it will turn out that file I/O is the bottleneck here and you
won't have to worry about malloc?
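A crude harness for that experiment might look like this (a sketch; the workload function below is a stand-in for "just fread the data and ignore the results"):

```c
#include <stdio.h>
#include <time.h>

/* Run `fn` once and return the elapsed CPU time in seconds.
   Time each variant separately: mallocs plus freads as-is,
   freads alone, one big gulp, and compare the numbers. */
double time_it(void (*fn)(void)) {
    clock_t start = clock();
    fn();
    return (double) (clock() - start) / CLOCKS_PER_SEC;
}

/* Stand-in workload so the sketch is self-contained; replace with
   the real loading code. */
void busy_work(void) {
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 1000000UL; ++i)
        sum += i;
}
```

Then something like printf("%.3f s\n", time_it(busy_work)); reports each variant's cost.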
Also, can it ever happen that a malloc of N bytes would fail, yet an
equivalent number of small mallocs summing to N would succeed? What
about the fread?

Yes, what John Harrison said.

HTH,
- J.
 

Dave Stallard

John said:
That is possible: a malloc of N bytes must return a contiguous block of
N bytes. It could be that N bytes are available for allocation, but not
in one contiguous block.

Doh! Of course. Should have thought of that - thanks.

Dave
 

Alf P. Steinbach

* John Harrison:
The O/S will
likely buffer all I/O itself, so there should be little difference in
doing lots of calls to fread reading a few bytes vs. one call to fread
reading a large number of bytes. But as usual, the only way to be sure is
to try it and time it.

What I remember from the time of discussion about efficiency of old
versus new iostreams versus FILE* versus native, when I tested this, is
that at that time doing one big gulp was very much more efficient.

I don't expect that to have changed significantly.

But YMMV -- measure.
 

John Harrison

Alf said:
* John Harrison:

What I remember from the time of discussion about efficiency of old
versus new iostreams versus FILE* versus native, when I tested this, is
that at that time doing one big gulp was very much more efficient.

I don't expect that to have changed significantly.

But YMMV -- measure.

You are right. It is because the I/O is typically buffered that the way
you access that buffer, one big gulp or lots of little ones, can make a
big difference. The underlying I/O would be the same in both cases.

I think I've been guilty of fuzzy thinking on this issue until now.

john
 
