Erick-> said:
I've read some lines about the difference between the float and double
data types... but, in the real world, which is best? When should we
use float, and when double?
Which is best, a pickup truck, or a half-tonne truck?
float and double are defined in terms of the minimum precision allowed
for each. On any given platform, there need not be *any* difference
between the two: if the float data type meets the minimum requirements
that C imposes on the double data type, then the two could be exactly
the same.
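If you are curious what your own implementation actually provides, the
limits live in <float.h>. A minimal sketch (the output is
implementation-defined; C only guarantees the minimums):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Everything printed here is implementation-defined; C only
       guarantees minimums (e.g. FLT_DIG >= 6, DBL_DIG >= 10). */
    printf("float:  %2d decimal digits, %u bytes\n",
           FLT_DIG, (unsigned)sizeof(float));
    printf("double: %2d decimal digits, %u bytes\n",
           DBL_DIG, (unsigned)sizeof(double));
    return 0;
}

On a typical IEEE 754 machine you will see 6 digits / 4 bytes for float
and 15 digits / 8 bytes for double, but nothing in the standard
promises that.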
Traditionally, float was faster than double but offered less
precision. double never offers -less- precision than float, but
these days it is not uncommon to find computers on which double
is as fast as (or faster than) float. Also, float never occupies
-more- storage than double does, and sometimes memory size is the
biggest factor (though these days you usually just go out and buy
more memory if you need it).
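If speed is what you care about, the only trustworthy answer comes from
measuring on the machine in question. A crude timing sketch
(illustrative only; the volatile keeps the compiler from optimizing the
loops away, and the numbers will vary wildly with compiler, flags, and
CPU):

#include <stdio.h>
#include <time.h>

#define N 100000000L

int main(void)
{
    volatile float f = 0.0f;
    volatile double d = 0.0;
    clock_t t0;
    long i;

    t0 = clock();
    for (i = 0; i < N; i++)
        f += 1.0f;
    printf("float:  %.2f seconds\n",
           (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (i = 0; i < N; i++)
        d += 1.0;
    printf("double: %.2f seconds\n",
           (double)(clock() - t0) / CLOCKS_PER_SEC);
    return 0;
}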
There are also still computers which do not implement either float or
double in hardware, so from a speed perspective, sometimes both
are significantly worse than using integer arithmetic instead. But
there are also numerous computers these days on which integer arithmetic
is slower than double -- computers being sold into markets where
(say) 95% of the operations requiring speed are likely to be
floating point operations, so the development resources are spent
primarily on accelerating floating point. Thus, from a speed
perspective, you cannot trust that floating point will be either
slower or faster than integer arithmetic. But there -are- times
when using integer arithmetic can be absolutely crucial for
preserving required accuracy.
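The classic case is money: keep exact integer cents and the sum stays
exact; keep float dollars and the representation error of 0.01
compounds with every addition. Roughly:

#include <stdio.h>

int main(void)
{
    /* A million one-cent deposits: the integer total is exact,
       but the float total drifts, because 0.01 has no exact
       binary representation and the rounding errors compound. */
    long cents = 0;
    float dollars = 0.0f;
    long i;

    for (i = 0; i < 1000000L; i++) {
        cents += 1;
        dollars += 0.01f;
    }
    printf("integer cents: $%ld.%02ld\n", cents / 100, cents % 100);
    printf("float dollars: $%.2f\n", dollars);
    return 0;
}

The integer total comes out to exactly $10000.00; the float total
drifts noticeably.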
All of which is to say, "it depends" ;-)
Size, speed, precision: for any given task, any of them might be
the key factor. Speed is particularly variable: a CPU upgrade
without changing anything else might completely alter the speed factors.