Adding an int to a float

Frank Cisco

How do you get an accurate addition of a float and an int?

int i = 5454697;
float f = 0.7388774F;

float result = new Integer(i).floatValue()+f;

result=5454697.5 ??

Surely it should be 5454697.7388774? If I use double it's fine, but I need
to use float
 
Tim Slattery

Frank Cisco said:
How do you get an accurate addition of a float and an int?

int i = 5454697;
float f = 0.7388774F;

float result = new Integer(i).floatValue()+f;

result=5454697.5 ??

Surely it should be 5454697.7388774? If I use double it's fine, but I need
to use float

You're looking for an answer with 14 (decimal) digits of precision. A
single-precision floating-point number uses 23 bits for its mantissa;
that works out to 6 or 7 decimal digits of precision. You simply
cannot get the precision you want without using doubles. A double has
a 52-bit mantissa; that's something like 15 decimal digits.
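A quick demonstration of the difference, using only the numbers from
the original post:

```java
public class FloatPrecision {
    public static void main(String[] args) {
        int i = 5454697;
        float f = 0.7388774F;

        // float: ~24 significant bits; adjacent floats near 5454697
        // are 0.5 apart, so the fractional part collapses
        System.out.println(i + f);   // prints 5454697.5

        // double: ~53 significant bits; room for both parts
        double d = 0.7388774;
        System.out.println(i + d);   // prints 5454697.7388774
    }
}
```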
 
Mike Schilling

Patricia said:
There are situations in which it is very useful.

For example, some years ago I was doing some customer support work at
a geophysical consultancy. I saw they were doing some massive
calculations in float, and asked about it.

They had a numerical algorithms expert on staff who knew far more than
I'll ever know about numerical analysis and algorithm stability. Their
input data, seismic traces, was inherently imprecise. Similarly, they
only needed a few decimal digits in the output. The algorithms expert
had decided that, given the algorithms they were using, they would get
the digits they needed using float. Benchmarking showed a significant
performance difference, because using float halved the data volume.

If they had used double, they would have spent a lot of resources
storing and moving bits that were meaningless.

In their case, float was clearly the right choice.
 
markspace

Frank said:
cheers all! why on earth is float used at all then if it's so inaccurate?


The same reason people use int instead of long: speed and memory
requirements.

Patricia's post explains this very well, I just thought I'd add the
int/long analogy too.
 
Roedy Green

Surely it should be 5454697.7388774? If I use double it's fine, but I need
to use float

see http://mindprod.com/jgloss/floatingpoint.html

floats are only accurate to about 6 significant digits. You could try
double for better accuracy. If you want absolute, banker-grade
precision, you can't use floating point. You must use BigDecimal or
BigInteger.
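For instance, a minimal BigDecimal version of the original sum, using
the poster's numbers:

```java
import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        // BigDecimal keeps every decimal digit, so nothing is lost;
        // the fraction is passed as a String so it never goes
        // through a double first
        BigDecimal sum = new BigDecimal(5454697)
                .add(new BigDecimal("0.7388774"));
        System.out.println(sum); // prints 5454697.7388774
    }
}
```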
--
Roedy Green Canadian Mind Products
http://mindprod.com

"Many people tend to look at programming styles and languages like religions: if you belong to one, you cannot belong to others. But this analogy is another fallacy."
~ Niklaus Wirth (born: 1934-02-15 age: 75)
 
Roedy Green

cheers all! why on earth is float used at all then if it's so inaccurate?
If you are building a desk, whether it is 70.00000 cm deep or 70.00001
cm makes no difference.

see http://mindprod.com/jgloss/floatingpoint.html
 
Arne Vajhøj

Frank said:
cheers all! why on earth is float used at all then if it's so inaccurate?

Mostly historic reasons.

Back when a few MB of RAM cost hundreds of thousands of dollars, the
difference between an array of float and an array of double could be
a lot of money.

Today there is usually no reason at all.

Arne
 
Kevin McMurtrie

Frank Cisco said:
cheers all! why on earth is float used at all then if it's so inaccurate?

Sometimes you don't need a lot of accuracy, but you do need speed and
the ability to handle an extremely wide range of values. In data
processing, you might sum any quantity of Input(n) * LUT(n) products,
then divide by the sum of LUT(0..n) to normalize. A fixed-point int is
prone to having bits drop off one end or the other. A float holds its
precision for any range. The 22 or so bits coming out are plenty for
graphics, audio, data trending, performance metrics, etc.
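A sketch of that kind of normalization; the input and LUT values here
are made up for illustration:

```java
public class Normalize {
    public static void main(String[] args) {
        float[] input = {1.5f, 2.25f, 4.0f};  // hypothetical samples
        float[] lut   = {0.2f, 0.5f, 0.3f};   // hypothetical weights

        float weighted = 0f, total = 0f;
        for (int n = 0; n < input.length; n++) {
            weighted += input[n] * lut[n]; // sum of Input(n) * LUT(n)
            total    += lut[n];            // sum of LUT(0..n)
        }
        // dividing by the total keeps the result in range no matter
        // how large the inputs are, where fixed point would overflow
        System.out.println(weighted / total);
    }
}
```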


"float result = new Integer(i).floatValue()+f;"

That's just dumb. Did you even look at what the floatValue() method
does? Download src.jar and start reading.
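For the record, the boxing does nothing useful here; an int widens to
float on its own, with the identical result:

```java
public class Widening {
    public static void main(String[] args) {
        int i = 5454697;
        float f = 0.7388774F;
        // the int is widened to float implicitly; no boxing required
        float result = i + f;
        System.out.println(result == new Integer(i).floatValue() + f); // true
    }
}
```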
 
Tim Slattery

Arne Vajhøj said:
24*log10(2) is a bit more than 7.

Huh?? A single-precision float has a 23-bit mantissa. 2^23 is
8,388,608. That's seven digits, but only six that you can run all the
way from 0 to 9.
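Math.ulp shows the spacing directly: near the poster's value, adjacent
floats are 0.5 apart, which is exactly why the fraction collapsed
to .5:

```java
public class UlpDemo {
    public static void main(String[] args) {
        // gap between adjacent floats near 5,454,697
        System.out.println(Math.ulp(5454697f)); // prints 0.5
        // the gap for doubles at the same magnitude is far smaller
        System.out.println(Math.ulp(5454697d));
    }
}
```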
 
neuneudr

cheers all! why on earth is float used at all then if it's so inaccurate?

While there are good reasons, which were stated here,
it's probably safe to say that floating point -- both
float and double -- is way overused by people who
don't really understand how it works or when it
should be used.

A typical mistake is to represent a monetary amount
that has two significant digits after the decimal point
(e.g. $123.45) using floating-point numbers (one solution
being to think not in terms of 'dollars' but in terms of
'cents', and hence store 12345 directly).

People often fail to realize that using integer math could
ease their pain (as long as the range of values fits in an
int, a long, or a BigXXX).
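The cents idea in a couple of lines (the amounts are made up):

```java
public class Cents {
    public static void main(String[] args) {
        long price = 12345; // $123.45 held as cents
        long tip   =  1000; // $10.00
        long total = price + tip; // exact integer addition
        // format back to dollars only for display
        System.out.printf("$%d.%02d%n", total / 100, total % 100);
        // prints $133.45
    }
}
```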

You can go to great lengths and write complicated
compensated-summation algorithms to compute a variance... Or you
can preprocess your data, use integer math, and get the
result to the exact, provable precision you want, without
needing to fear accumulating errors.
 
markspace

use integer math, and get the
result to the exact, provable precision you want, without
needing to fear accumulating errors.


Eww.

Sorry, there might have been good reasons to do this at one time... like
1980... but there's no way I would spend the time or headache trying to
replace floating-point math with integer math today. If anything, I'd
say this sort of operation requires far more expertise than just using
the existing float and double primitives.
 
Martin Gregorie

Eww.

Sorry, there might have been good reasons to do this at one time... like
1980... but there's no way I would spend the time or headache trying to
replace floating-point math with integer math today. If anything, I'd
say this sort of operation requires far more expertise than just using
the existing float and double primitives.

If you're doing financial calculations you should always use integer
arithmetic because this avoids nasty surprises due to the inability of
floating point to accurately represent all possible values. This also
applies to currency conversion. All FX dealing institutions and markets
specify precisely how to handle the conversion: using floating point
doesn't figure in their rules.

All reputable financial software uses integers with monetary values held
in the smallest legal unit in the currency (cents in $US and Euros, pence
in the UK).

AFAIK use of floating point only started when people tried to handle
monetary amounts on the early 8-bit micros - the early BASICs would only
handle 16-bit signed integers, so people who should have known better
used floating point to handle values larger than $327.67 and managed to
blind themselves to the rounding errors and the limitation to 8
significant figures. Unfortunately, this habit has carried over into
spreadsheets, but there's no need to propagate it into Java as well.
 
John B. Matthews

[...]
AFAIK use of floating point only started when people tried to handle
monetary amounts on the early 8-bit micros - the early BASICs would
only handle 16-bit signed integers, so people who should have known
better used floating point to handle values larger than $327.67 and
managed to blind themselves to the rounding errors and limitation to
8 significant figures. Unfortunately, this habit has carried over
into spreadsheets but there's no need to propagate it into Java as
well.

I recall this. In that 8-bit era, students of Forth luxuriated in 32-bit
integers [1] and understood the virtue of fixed-point arithmetic [2].
Java offers a rich variety of numeric types [3] and excellent libraries
of derived types [4], but the need for understanding remains.

[1]<http://home.roadrunner.com/~jbmatthews/a2/proforth.html>
[2]<http://www.forth.com/starting-forth/sf5/sf5.html>
[3]<http://java.sun.com/javase/6/docs/api/java/lang/Number.html>
[4]<http://jscience.org/api/org/jscience/mathematics/number/Number.html>
 
Lew

Patricia said:
I would agree with this with two exceptions:

1. There are some financial calculations for which extreme exactness
does not matter, and having transcendental function approximations does.
For example, calculations of the form "How long will it take me to pay
off my home loan if I do X?".

Are there then integral methods for computing amortization schedules and the like?

My instinct for such, e.g., to calculate a monthly payment, is to calculate
the payment using double, convert to long, sum the payments over the term,
then add the rounding error to the final payment.

I did a contract for a major credit-card company a decade or so ago. They
used natural log formulas to calculate amortization. The formula they wanted
me to use included ln(x)^2 in the numerator and ln(x) in the denominator of a
fraction - same x. I simplified to just ln(x) in the numerator. My team lead
was seriously worried that I had screwed up the calculation, on which point I
assured her that I had. I forbore to mention they differed at the singularity.

They didn't use integer math to compute amortization quantities. I would bet
that they did accumulate the round-off error into the last payment. I have
received payment schedules from consumer credit wherein the payments varied
by a penny. I guess some companies do use the hybrid approach - use double to
get the payment, convert to long to actually work with it.
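A sketch of that hybrid, with an invented loan: the annuity formula
needs double, but the schedule itself runs in integer cents, and the
last payment absorbs the rounding remainder:

```java
public class Hybrid {
    public static void main(String[] args) {
        long balance = 20_000_00L;  // $20,000 in cents (invented)
        double rate = 0.005;        // 6% APR / 12 (invented)
        int months = 60;

        // monthly payment from the standard annuity formula, in double
        double raw = balance * rate / (1 - Math.pow(1 + rate, -months));
        long payment = Math.round(raw);

        // the schedule itself runs entirely in integer cents
        for (int m = 1; m <= months; m++) {
            balance += Math.round(balance * rate);        // month's interest
            balance -= (m == months) ? balance : payment; // last one absorbs rounding
        }
        System.out.println(balance); // prints 0: paid off exactly
    }
}
```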
 
Lew

Lew said:
My team lead was seriously worried that I had screwed up the
calculation, on which point I assured her that I had.

"Had not." That's "had not". No Freudian slip here. Nope. Not at all.
 
Arne Vajhøj

Lew said:
Are there then integral methods for computing amortization schedules and
the like?

My instinct for such, e.g., to calculate a monthly payment, is to
calculate the payment using double, convert to long, sum the payments
over the term, then add the rounding error to the final payment.

That may be necessary simply because interest * something
does not always give an integral number of cents.
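For example (numbers invented): even a simple rate times a balance
rarely lands on a whole cent, so some rounding rule is unavoidable:

```java
public class Fractional {
    public static void main(String[] args) {
        long balance = 1001;   // $10.01 in cents
        double rate = 0.05;    // 5% (invented)
        // 1001 * 0.05 = 50.05 cents: not an integral number of cents,
        // so a rounding rule has to be applied somewhere
        System.out.println(Math.round(balance * rate)); // prints 50
    }
}
```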

Arne
 
Martin Gregorie

Patricia said:
1. There are some financial calculations for which extreme exactness
does not matter, and having transcendental function approximations does.
For example, calculations of the form "How long will it take me to pay
off my home loan if I do X?".

I'm not sure that this counts as a monetary calculation, though it is
financial, because the answer is time (and will be rounded up to months).

The same classification probably applies to NPV and similar calculations,
since they're estimates of value rather than an actual monetary value.

Patricia said:
2. In some cases, the required rounding rules exactly match one of the
BigDecimal rounding schemes. In that case, I would consider BigDecimal
for convenience, with the scale factor equal to the correct number of
digits after the decimal point.

I agree. BigDecimal is an equally valid alternative to integers.
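For example, when the required rule is banker's rounding to two
places (the price and rate here are invented):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Rounding {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("123.45");
        BigDecimal rate  = new BigDecimal("0.0825");
        // scale 2 = two digits after the decimal point;
        // HALF_EVEN is the usual banker's-rounding scheme
        BigDecimal tax = price.multiply(rate)
                              .setScale(2, RoundingMode.HALF_EVEN);
        System.out.println(tax); // 10.184625 rounds to 10.18
    }
}
```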

IBM would agree too, since their smaller S/360 mainframes (and maybe the
earlier 1400s as well) couldn't handle binary integer values. They worked
entirely in BCD, which is a more or less exact equivalent of BigDecimal.
If your BCD arithmetic unit has an overflow capability it is cheap to
make, since its registers only need 4 bits plus Carry and yet can deal
with arbitrary-length numeric values. BCD values always had odd numbers
of digits because one nibble was reserved for the sign.
 
