Updated performance tests using Boost Serialization lib

Brian

I've recently updated the performance section of a comparison between
the Boost Serialization library and the C++ Middleware Writer --
http://webEbenezer.net/comparison.html#perf. The new tests were
done on Fedora 12 and Windows Vista. The previous version of that
file is here -- http://webEbenezer.net/comp138.html#perf.

The most dramatic change occurred on Windows. Previously the
Boost versions were around 2.7 times slower than the Ebenezer
versions; now they are between 3.7 and 4.0 times slower. I
believe some of that difference is due to our switching from
return codes to exceptions. I'm not sure why it shows up more
on Windows than on Linux.
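
The switch was roughly of this shape; the names below are made
up for illustration and are not our actual interface:

#include <cstdio>
#include <stdexcept>

// Old style: failures reported through a return code that the
// caller has to remember to check after every call.
int send_record_rc(int socket, const char* data)
{
    if (socket < 0)
        return -1;
    std::printf("sent %s\n", data);
    return 0;
}

// New style: failures reported by throwing, so the success path
// carries no per-call checks, but exception machinery is present.
void send_record_ex(int socket, const char* data)
{
    if (socket < 0)
        throw std::runtime_error("bad socket");
    std::printf("sent %s\n", data);
}

int main()
{
    if (send_record_rc(3, "abc") != 0)
        std::printf("send failed\n");

    try {
        send_record_ex(3, "abc");
    } catch (const std::runtime_error& e) {
        std::printf("send failed: %s\n", e.what());
    }
    return 0;
}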


Brian Wood
http://webEbenezer.net
 
Joshua Maurice

Well, exceptions, even when not thrown, introduce some overhead
on Windows with the Visual Studio compilers (2003 and later),
whereas gcc on Linux does not. However, from my minimal testing,
the overhead is small and will not account for such a drastic
difference.
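
For what it's worth, the sort of minimal test I mean looks
something like this (the workload is made up). The function can
throw but never does, so any measured cost comes from having the
machinery present; compare the timing against a build where the
throw is replaced with a return code:

#include <cstdio>
#include <ctime>
#include <stdexcept>

// Can throw, but never does for the inputs used below.
int step(int value)
{
    if (value < 0)
        throw std::runtime_error("negative input");
    return value % 7 + 1;
}

int main()
{
    long sum = 0;
    std::clock_t start = std::clock();
    for (int i = 0; i < 100000000; ++i)
        sum += step(i);                // machinery present, never used
    std::clock_t stop = std::clock();
    std::printf("%.0f ms (checksum %ld)\n",
                1000.0 * (stop - start) / CLOCKS_PER_SEC, sum);
    return 0;
}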
 
Brian

Several hours ago I received an email from the Boost author
asking for my help to build these tests on his machine. When I
went to reproduce them on my machine, I realized a problem in my
methodology. I had failed to erase the output file (on Windows),
and the existence of an output file from a previous execution
results in significantly better times than when the file doesn't
exist: the test is about 50% slower when the file doesn't exist
than when it does. When I tested more carefully, I found that
the Boost version is between 2.6 and 2.7 times slower than the
Ebenezer version, and I've updated this page to reflect that --
http://webEbenezer.net/comparison.html#perf.
My apologies to Robert Ramey for some sloppy testing that
resulted in an inaccurate claim.

On a side note, I have no idea why the existence of the
output file from a previous execution has so little effect on
the performance of the Boost version, but such a large
effect on the Ebenezer version.

And I have forgotten how to chain commands together on
Windows. On Linux I use a semicolon to separate erasing
the previous output file from the actual running of the test,
as shown below. If someone could remind me of the Windows
equivalent, I'd appreciate it.
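
With made-up file names, the Linux form is:

rm out.dat; ./perf_test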


Brian Wood
http://webEbenezer.net
 
robertwessel2

While OT: use an ampersand (&) to separate commands on the
command line in Windows. If you want the second command to be
conditional, use a double ampersand (&&) to run it only if the
first command succeeds, or a double bar (||) to run it only if
the first command fails.

http://www.microsoft.com/resources/...docs/en-us/ntcmds_shelloverview.mspx?mfr=true
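
For example, with made-up file names:

del out.dat & perf_test.exe      (runs perf_test.exe unconditionally)
del out.dat && perf_test.exe     (runs perf_test.exe only if del succeeds)
del out.dat || echo cleanup failed   (runs echo only if del fails)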
 
