Christopher
One of the tenured fellows I work with made a file of typedefs mapping time values to long long, defined his own NULL as the maximum value of long long, and wrote a plethora of functions to manually convert to and from XML strings and database strings, perform UTC/local conversions, and so on.
I am arguing for the use of boost::optional<ptime>.
When the time is invalid, it is readily apparent that it is invalid! Arithmetic is correct, and we don't have to worry about a false NULL value, exceeding the maximum value, or wrapping past the minimum value. Initialization is more readable and maintainable, and there are a host of other arguments.
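To make the comparison concrete, here is a minimal sketch of what I mean. The sentinel names (time_ll, TIME_NULL) are my own stand-ins for his typedefs, and parse_timestamp is a hypothetical helper, not code from his file:

```
#include <boost/optional.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <climits>
#include <iostream>
#include <string>

using boost::posix_time::ptime;

// The sentinel style (stand-in names): "no value" is smuggled in as LLONG_MAX,
// and every caller has to remember to check for it before doing arithmetic.
typedef long long time_ll;
const time_ll TIME_NULL = LLONG_MAX;

// The optional style: absence of a value is part of the type, so a missing
// time cannot silently take part in arithmetic or comparisons.
boost::optional<ptime> parse_timestamp(const std::string& s)
{
    try {
        // e.g. "2002-01-20 23:59:59.000"
        return boost::posix_time::time_from_string(s);
    } catch (...) {
        return boost::none;   // invalid input is visibly "not a time"
    }
}

int main()
{
    boost::optional<ptime> t = parse_timestamp("not a date");
    if (!t)
        std::cout << "invalid -- and the caller is forced to notice\n";
}
```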
The counter argument is speed.
Well, he is correct that boost::optional<ptime> is slower. In the same way, he is correct that using stringstreams to convert between strings and integral types is slower than using functions from the C runtime.
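For the string conversions, the two routes being compared look roughly like this. The function names are mine, purely for illustration:

```
#include <sstream>
#include <cstdlib>
#include <string>

// The stringstream route: type-safe and easy to extend, but it constructs a
// stream object and runs locale-aware extraction on every call.
long long from_stream(const std::string& s)
{
    std::istringstream iss(s);
    long long value = 0;
    iss >> value;
    return value;
}

// The C runtime route: leaner, but error handling (errno, end pointer,
// overflow) is entirely manual and easy to get wrong.
long long from_strtoll(const std::string& s)
{
    return std::strtoll(s.c_str(), 0, 10);
}
```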
I already knew this before I wrote a performance test.
My problem with the way my performance test is being interpreted is that they will say, "Look! It takes 5 times as long over a million iterations!" Well, if one method takes 1 millisecond and another takes 2 milliseconds, isn't the difference that shows up simply one million times that 1 millisecond gap?
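What I would rather report is the per-call cost alongside the total, along these lines. The timed loop body here is just a placeholder for the real conversion work, not my actual benchmark:

```
#include <boost/optional.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    const long iterations = 1000000;

    ptime begin = microsec_clock::universal_time();
    boost::optional<ptime> t;
    for (long i = 0; i < iterations; ++i)
        t = microsec_clock::universal_time();   // placeholder for the real work

    time_duration total = microsec_clock::universal_time() - begin;

    // A "5x slower" total can still be a sub-microsecond difference per call.
    std::cout << "total: " << total.total_microseconds() << " us, per call: "
              << static_cast<double>(total.total_microseconds()) / iterations
              << " us\n";
}
```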
How do I argue that the difference in performance is negligible? Is it negligible? Or am I just being hard-headed in my desire to get away from C-style source code?
What say you?