Preferred Style Question

Mike Copeland

What are some accepted ways to do the following calculation - and
avoid the compiler warnings that expose conversion data loss? (I'm
trying to compute the total time in seconds that a person has to
complete an event, based on a stated pace.) TIA

double dDistance = 6.214;
time_t tPace = 600;
time_t tEstTime;

tEstTime = tPace * dDistance;
 

Mike Copeland

Now you immediately see that the datatype for Pace is wrong - why
should it be time_t if it is not in seconds? It should be double as well.

No, it's a computed "seconds per mile", derived from a prompted value
that's in "minutes per mile". That is, the user is asked for "pace",
and the user responds in minutes. After that, the "seconds" value is
calculated and used for numerous calculations and displays.

OTOH, in this kind of code one can see immediately whether the units are
used correctly. Whenever you see something like estTime_s = distance_m *
pace_hour_km you know there is a bug.

That may be, but I must determine a value that is compared to a
presented scalar value that's in "seconds". The purpose of this derived
value is to place the user's estimate into one of 30 "slots" to
establish groupings. I want the processing to be precise (and not use
f/p comparisons), and I used to use "int". Now I'm trying to use a more
appropriate "time_t" for all "seconds-based" calculations and displays.
The conversion warning comes because you want to squeeze the floating-
point result into an integer, losing precision. If there are no other
prevailing reasons, I would say that precision loss should be avoided. As
we have now got rid of meaningless type prefixes, it is easy to change
the datatype:

double distance_mi = 6.214; // distance in miles
double pace_s_per_mi = 600; // pace, in seconds per mile
double estTime_s = distance_mi * pace_s_per_mi; // estimated time in seconds

Since I have to work in seconds throughout my processing, use of
"double" isn't going to help here. I really want to produce "time_t"
values and work from them.
 

Rui Maciel

Mike said:
What are some accepted ways to do the following calculation - and
avoid the compiler warnings that expose conversion data loss? (I'm
trying to compute the total time in seconds that a person has to
complete an event, based on a stated pace.) TIA

double dDistance = 6.214;
time_t tPace = 600;
time_t tEstTime;

tEstTime = tPace * dDistance;

As the type of dDistance is double and the type of tPace is time_t, the
expression "tPace * dDistance" evaluates to a value of type double.
Then, the result of that expression is assigned to tEstTime, which is
another object of type time_t.

The thing with time_t is that it is an implementation-defined arithmetic
type, commonly an integer type and on some systems an unsigned one. If
the expression "tPace*dDistance" yields a negative value, all sorts of
hell may break loose: converting a floating point value to an integer
type that cannot represent it is undefined behavior.

If, instead, the "tPace*dDistance" expression evaluates to a positive
number, then the assignment will truncate the resulting value towards
zero. There's not much that can be done about that, other than making
sure the value is rounded instead of truncated. There are some
simple[1] and convoluted[2] suggestions floating around, but, at least
since C99, it's possible to avoid all that with a simple call to
nearbyint(tPace*dDistance).

Another detail which is rarely mentioned is the possibility of a
floating point exception having been raised, which includes the
possibility of dDistance ending up storing an invalid value, such as a
NaN or an infinity from a divide-by-zero. Checking the floating point
exception flags can save some headaches.

So, data will be lost either way. You just need to make sure that dDistance
stores a valid positive value and then assign it to tEstTime. If you can
ensure that, then it's safe to assume that the only conversion loss that
can happen is the loss of the fractional part of your floating point
value. Having done that, you will probably get rid of those warnings
about conversion loss once you cast the result back to time_t. And
that's about it.


Hope this helps,
Rui Maciel


[1] http://c-faq.com/fp/round.html
[2] http://www.cs.tut.fi/~jkorpela/round.html
 

Luca Risolia

What are some accepted ways to do the following calculation - and
avoid the compiler warnings that expose conversion data loss? (I'm
trying to compute the total time in seconds that a person has to
complete an event, based on a stated pace.) TIA

double dDistance = 6.214;
time_t tPace = 600;
time_t tEstTime;

tEstTime = tPace * dDistance;

#include <chrono>
#include <iostream>

typedef long double miles;
miles operator"" _mi(const long double x) { return x; }
miles operator"" _km(const long double x) { return x * 0.621371; }

std::chrono::seconds operator"" _s_per_mi(unsigned long long x) {
    return std::chrono::seconds(x);
}

template<typename V, typename R>
std::ostream& operator << (std::ostream& o,
                           const std::chrono::duration<V, R>& d) {
    return o << d.count() << " ticks of "
             << R::num << '/' << R::den << " seconds";
}

int main() {
    auto distance = 6.214_mi;
    auto pace = 600_s_per_mi;
    auto estTime = pace * distance;
    std::cout << estTime << '\n';
    // 3728.4 ticks of 1/1 seconds (= 3728.4 seconds)
}
 

Nick Keighley

(e-mail address removed) says...

   No, it's a computed "seconds mile", derived from a prompted value
that's in "minutes per mile".  That is, the user is asked for "pace",
and the user responds in minutes.  After that, the "seconds" value is
calculated and used for numerous calculations and displays.


   That may be, but I must determine a value that is compared to a
presented scalar value that's in "seconds".  The purpose of this derived
value is to place the user's estimate into one of 30 "slots" to
establish groupings.  I want the processing to be precise (and not use
f/p comparisons),

this makes no sense. How does cramming the result into one of 30
integral bins make the result "precise"? There's a floating point
value in there so the result is going to be a floating point value.

   and I used to use "int".  Now I'm trying to use a more
appropriate "time_t" for all "seconds-based" calculations and displays.



   Since I have to work in seconds throughout my processing, use of
"double" isn't going to help here.  I really want to produce "time_t"
values and work from them.

why?
 

Nick Keighley

Actually, one cannot represent 0.1 in binary floating point with exact
precision - the binary version is a non-terminating, infinite expansion;
rounded to 24 bits, it's 0.100000001490116119384765625 in decimal.

actually you *can* represent 0.1 to at least 10 decimal places, which
is what he said. Which sounds plenty to me. Do runners really time
themselves to the nearest millisecond?
 

Fred Zwarts (KVI)

"Scott Lurndal" wrote in message news:[email protected]...
Actually, one cannot represent 0.1 in binary floating point with exact
precision - the binary version is a non-terminating, infinite expansion;
rounded to 24 bits, it's 0.100000001490116119384765625 in decimal.

Note that the approximation of 0.1 with a binary integer type is much
less accurate.
 

Fred Zwarts (KVI)

"Mike Copeland" wrote in message
I want the processing to be precise (and not use f/p comparisons),

I think that it is a misconception that floating point types are not
precise. They are as precise as integer types. One of the first
computers I worked with had no integer arithmetic in hardware, only
floating point arithmetic. Integer arithmetic operations in high level
languages were performed using floating point operations.

Take as an example an integer type using N bits (including sign bit). Also
take a floating point type using N bits for the mantissa (including sign
bit). Then each value that can be represented by the integer type can also
be represented EXACTLY by the floating point type. Further, the arithmetic
integer operations (+, -, *, /) can be mapped to floating point operations
that yield EXACTLY the same represented value (as long as the result can
still be represented with N bits). (The most complex one is the integer
divide operation, which maps to a floating point divide followed by a
truncation.)
So, if the floating point type can map EXACTLY all the integer type values
and all integer arithmetic operations, it is simply not true that floating
point types are less precise than integer types.

Why, then, do we use both integer and floating point types? Not because
of precision, but because of other features:

*) Integer types have other operations (like bit shifting and bit-wise
logical operations) that floating point types lack.

*) Floating point types yield rounded results when the result of an
operation cannot be represented exactly in N bits, whereas a signed
integer type overflows, with undefined results.

*) Floating point types have an automatic scale factor, to make it easier to
work with very big, or with very small numbers. (A problem is that this
scale factor is usually limited to powers of two.)

*) Many processors have hardware integer instructions that perform
better than the equivalent floating point instructions.

So, floating point types are as exact as integer types (given that there
are enough bits). The confusion arises because many people do not
understand the floating point representation and the corresponding
floating point operations very well. Some people think that floating
point numbers can be used to map the real numbers (in some languages
this is even suggested by the name of the type), but that is not true.
They are not even a very good mapping of the rational numbers.
 
