Utility to report performance characteristics of standard C/C++ library implementation?

Boris Dušek

Hello,

I would like to get more insight into how much some common operations
in C/C++ standard library cost in terms of performance, and also how
these costs differ between different platforms (like vc9 vs. vc10 beta
1 vs. g++-4.1.2/Linux-32 vs. g++-4.0.1/OS X). I started to write a
small utility for that with stuff I am currently interested in (like
the cost of malloc/free, the cost of "string str; copy(data, data + n,
back_inserter(str));" versus the cost of "string str(data, data + n);",
and similarly transform versus boost::transform_iterator). I also
output the allocation strategy for vector and string (and learned that
libstdc++ is strictly exponential with base 2, while Microsoft's string
implementation starts with minimal storage allocated and then grows
~exponentially, but not with base 2).

But I think I am probably reinventing the wheel. Isn't there a utility
that is portable and assesses such characteristics and outputs them
for study by an interested developer (possibly calibrating the results
w.r.t. CPU power/load)? I also guess such utility would be great for
developers of standard libraries, since they can both catch
performance regressions and compare with competition, so I hope there
is something available.

Thanks for any pointers,
Boris Dušek
 

Jorgen Grahn


> But I think I am probably reinventing the wheel. Isn't there a utility
> that is portable and assesses such characteristics and outputs them
> for study by an interested developer

There is. Under Unix, I use my shell's 'time' command, and a profiler
(gprof). That's what people here mean when they say "don't guess,
measure" when discussing optimization.
> (possibly calibrating the results
> w.r.t. CPU power/load)?

Probably not worth trying. There are too many factors involved (cache
characteristics, for example).
> I also guess such utility would be great for
> developers of standard libraries, since they can both catch
> performance regressions and compare with competition, so I hope there
> is something available.

They should be interested in performance, yes, but they are a tiny,
tiny minority of all C++ users. It's mostly ordinary users like you
and me who should use such tools, and when we hit a bottleneck it is
almost never a fixable bottleneck in the standard library but our own
fault.

/Jorgen
 
