Arguing efficiency. Arg!


Christopher

One of the tenured fellows I work with made a file with typedefs for time to long long types, defined his own NULL as max value of long long, and wrote a plethora of functions to manually perform conversion to and from XML string, to database strings, perform UTC and local conversions etc.

I am arguing for the use of boost::optional<ptime>

When the time is invalid, it is readily apparent that it is invalid! Arithmetic is correct, and we don't have to worry about a false NULL value, exceeding the maximum value, or wrapping past the minimum value. Initialization is more readable and maintainable, and there are a host of other arguments.

The counter argument is speed.

Well, he is correct that boost::optional<ptime> is slower. In the same way, he is correct that using stringstreams for conversions between strings and integral types is slower than using functions from the C runtime.

I already knew this before I wrote a performance test.

My problem with the way my performance test is being interpreted is that they will say, "Look! It takes 5 times as long over a million iterations!" Well, if one method takes 1 millisecond and another takes 2 milliseconds, isn't the difference that shows up just going to be one million times that 1 millisecond gap?

How do I argue that the difference in performance is negligible? Is it negligible? Or maybe I am just being hard-headed in my desire to get away from C-style source code?

What say you?
 

Ian Collins

Christopher wrote:

Please wrap your lines!
One of the tenured fellows I work with made a file with typedefs for
time to long long types, defined his own NULL as max value of long
long, and wrote a plethora of functions to manually perform
conversion to and from XML string, to database strings, perform UTC
and local conversions etc.

Sounds like an academic...
I am arguing for the use of boost::optional<ptime>

When the time is invalid, it is readily apparent that it is invalid!
Arithmetic is correct and we don't have to worry about a false NULL
value or exceeding max value, or flipping over minimum value.
Initialization is more readable and maintainable and a host of other
arguments.

The counter argument is speed.

It often is...
Well, He is correct in that boost::optional<ptime> is slower. In the
same way, he is correct that using stringstreams for conversions to
integral types from strings and from string to integral types is
slower than using functions from the C runtime.

I already knew this before I wrote a performance test.

So does it matter in the context of the application? Does shaving a few
cycles from a string conversion matter when the string is read and
written from a database or serialised as XML?
My problem with the way my performance test is being interpreted is
that they will say, "Look! it takes 5 times as long over a million
iterations!" Well, if one method takes 1 millisecond and another
takes 2 milliseconds, isn't the difference showing up going to be one
million * 1 millisecond?

Ask them to add a million database or XML writes to the test!
How do I argue that the difference in performance is negligible. Is
it negligible? Or maybe I am just being hard headed in my desire to
get away from C-style source code?

What constitutes negligible depends on the context. If the application
really is doing millions of conversions in a time critical loop, then
optimising those operations is worth the effort. If it isn't, the
effort is probably a waste of time and money, not to mention a possible
source of bugs and maintenance arse aches.
 

Öö Tiib

One of the tenured fellows I work with made a file with typedefs for time
to long long types, defined his own NULL as max value of long long, and
wrote a plethora of functions to manually perform conversion to and from
XML string, to database strings, perform UTC and local conversions etc.

Time (the Gregorian calendar and POSIX time) is a thing that every software developer should implement at least once in his life.
It is inevitable that he fails most miserably on the first attempt.
I am arguing for the use of boost::optional<ptime>

Nooo. Take Boost.Date_Time's posix_time and gregorian if you have Boost!!!
What say you?

Prove that his crap is not working, unlike Boost. It takes a few tests.
Trust me, it does not work.
 

Christopher

I'd say string to integral conversions, and back, probably occur around 50,000
times between the time data comes into the system and the time it goes out.

Time to string and string to time occur much less. Somewhere between 10 and 100
times.

Data is theoretically supposed to make its way through in less than a second.

Sorry about line formatting. It's been awhile since I was on a newsgroup.
I remember we had this problem before. I'll have to try and find a suitable
news client for work.
 

Ian Collins

Christopher said:
I'd say string to integral conversions, and back, probably do occur around 50,000
times from the time data comes into the system and the time it goes out.

Time to string and string to time occur much less. Somewhere between 10 and 100
times.

Data is theoretically supposed to make its way through in less than a second.

So does it? That's the test that matters.

There's nothing wrong with using the C standard library string
conversion functions if you know in advance which types you are using
and their sizes. iostreams win when you don't know the types or sizes,
in templates for example.
Sorry about line formatting. It's been awhile since I was on a newsgroup.
I remember we had this problem before. I'll have to try and find a suitable
news client for work.

Google do their utmost to make correct Usenet posting impossible!
 

Christopher

When you profile the application does it really spend a significant
amount of time converting to and from strings? If not, look where it
really spends the time. If so I'd start by asking why you have to
convert in the first place.

You _have_ profiled it, haven't you?

Andy


Absolutely
Performance is a huge problem; however, so is stability. I will always take stability over performance. Running fast is worthless when it doesn't run.

There are much larger problems, but those huge problems require huge overhauls. For instance, quite a bit of data is represented as text rather than more suitable primitive types and classes, which profiling showed takes 130x or worse to process. You know how these things go with supervisors. "It takes time?! Leave it alone!"

Arguments like streams vs C-style are something managers want to address, because it is a going-forward decision rather than a let's-go-back-and-change-the-code decision.
 
