Scientific notation - no rounding errors?

Joe Attardi

Hi all,

Math is not my strongest area so forgive me if I use some of the wrong
terminology.
It seems that scientific notation is immune to rounding errors. For
example:

(4.98 * 100) + 5.51 // returns 503.51000000000005, rounding error!
4.98e2 + 5.51 // returns 503.51, correct!

Why are scientific notation numbers not affected? And if this is true,
does this mean that scientific notation would be safe to use in a
floating-point addition function?

For example, 4.98 + 0.2, which comes out to 5.180000000000001
(incorrect!), would become (4.98e2 + 0.2e2) / 1e2, which comes out to
5.18 (correct!)

Any insight would be appreciated...
 
Evertjan.

Joe Attardi wrote on 16 May 2006 in comp.lang.javascript:
Math is not my strongest area so forgive me if I use some of the wrong
terminology.
It seems that scientific notation is immune to rounding errors. For
example:

(4.98 * 100) + 5.51 // returns 503.51000000000005, rounding error!
4.98e2 + 5.51 // returns 503.51, correct!

Why are scientific notation numbers not affected?

Wrong, it is untrue.
And if this is true,
does this mean that scientific notation would be safe to use in a
floating-point addition function?
Unanswerable.

For example, 4.98 + 0.2, which comes out to 5.180000000000001
(incorrect!), would become (4.98e2 + 0.2e2) / 1e2, which comes out to
5.18 (correct!)

A single negative example is enough to prove a point incorrect, while
any finite number of positive examples is not enough to prove it
correct.

Any insight would be appreciated...

Javascript arithmetic and storage use binary [=base 2] numbers.
All fractional numbers or results that are not exactly representable in
binary floating point can introduce rounding errors, seen from the
decimal world, or through overflow of the mantissa.

So decimal 0.5 and 5e-1 are both exactly binary 0.1 and will cause no
problems. 0.2 or 2e-1 [1/5] already gives me the creeps in binary.
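
A quick console check makes the difference visible (toFixed here just
exposes more digits of the stored approximation):

0.5 + 0.5 + 0.5 + 0.5 // 2, exact: 0.5 is binary 0.1
0.2 + 0.2 + 0.2 // 0.6000000000000001
(0.2).toFixed(20) // "0.20000000000000001110", the closest double to 1/5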
 
Joe Attardi

Evertjan. said:
Wrong, it is untrue.
If it is untrue, then why do the two examples come out with different
values?
A single negative example is enough to prove a point incorrect, while
any finite number of positive examples is not enough to prove it
correct.
I realize that. I'm not trying to do a proof here, I'm just asking for
advice.
Javascript arithmetic and storage use binary [=base 2] numbers.
All fractional numbers or results that are not exactly representable in
binary floating point can introduce rounding errors, seen from the
decimal world, or through overflow of the mantissa.
So what can I do to properly show the sum of an addition?
The point of trying to use scientific notation is to do what I
think is called scaled integer arithmetic: that is, 4.98 + 0.2 becomes
498 + 20, the addition is done as integer addition, and then the result
is divided back down for the correct result.

If using multiplication and division to move the decimal point won't
work, due to the floating-point inaccuracies, what about (I know this
sounds messy) converting them to strings, counting the decimal places
in the strings, removing the decimal points, converting back to
numbers, performing the calculation, and then inserting a decimal point
into the string of the result?

I know that sounds inefficient but it at least would get a proper
result, I think.
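
A rough sketch of that string idea (addDecimalStrings is just a
hypothetical helper name; it assumes plain positive decimal strings,
no exponents or signs):

function addDecimalStrings(a, b) {
  // Count the decimal places in each operand string.
  var aPlaces = (a.split('.')[1] || '').length;
  var bPlaces = (b.split('.')[1] || '').length;
  var scale = Math.pow(10, Math.max(aPlaces, bPlaces));
  // Scale both up to integers (Math.round repairs the slight
  // error the multiplication itself introduces), add, scale back.
  var sum = Math.round(parseFloat(a) * scale) +
            Math.round(parseFloat(b) * scale);
  return sum / scale;
}

addDecimalStrings('4.98', '0.2'); // 5.18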
 
Rob

This problem is not limited to JavaScript. You may want to bone up on
the subject in a general CS book. There are a number of solutions for
handling rounding errors, and I'm sure I don't know half of them. Your
solution of scaling the number, truncating to an integer and then
scaling back is one. The truncation hides the round-off error.

The best solution is to format the numbers to show only the number of
significant digits your application requires. This hides the rounding
errors. It's up to you to design your application so that these errors
do not accumulate until a noticeable error occurs. If you use a
spreadsheet, these same factors apply. They just don't show you all the
digits by default. That's how the errors are hidden in a spreadsheet.

Rob:-]
 
cwdjrxyz

Joe said:
Hi all,

Math is not my strongest area so forgive me if I use some of the wrong
terminology.
It seems that scientific notation is immune to rounding errors. For
example:

(4.98 * 100) + 5.51 // returns 503.51000000000005, rounding error!
4.98e2 + 5.51 // returns 503.51, correct!

Why are scientific notation numbers not affected? And if this is true,
does this mean that scientific notation would be safe to use in a
floating-point addition function?

For example, 4.98 + 0.2, which comes out to 5.180000000000001
(incorrect!), would become (4.98e2 + 0.2e2) / 1e2, which comes out to
5.18 (correct!)

Any insight would be appreciated...

Modern computers use number systems based on 2 or powers thereof, such
as 2 (binary), 4, 8 (octal), 16 (hex), 32, 64, 128 and so on. Most
western math is done in a base 10 system. Any time you convert from one
number system to another, some errors are going to be introduced, since
non-exact numbers result in some cases. If you use decimals and allow
division, this ensures some numbers cannot be represented exactly in
some systems. Consider that in the ordinary base 10 number system,
division of 1 by 3 gives the decimal number 0.33333... to infinity,
which cannot be written exactly.

The reason for scientific notation is that some huge and very small
numbers are often used, and you do not want to have to write a lot of
leading or trailing zeros to do calculations. Some scientific
calculations, if not very carefully programmed, will cause underflows
or overflows even if the computer can handle exp(100).

The "funny" roundoffs have been with computing for well over 50 years,
since digital computers based on a binary system were introduced, and
methods for handling them have been around just as long. People who did
money calculations soon started using cents rather than dollars in the
US, so that fractions were avoided in additions and subtractions and
"funny" numbers such as $1.00000001 would not disturb the bean
counters. In fact IBM had two different programming systems that were
widely used: Fortran was used for scientific calculations, and COBOL
was used for money calculations.
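
In JavaScript the cents trick might look like this (a small sketch;
the amounts are assumed to start life as integer cents):

var priceCents = 498; // $4.98, kept as a whole number of cents
var taxCents = 20; // $0.20
var totalCents = priceCents + taxCents; // 518, exact integer addition
// Format only at the edge, when displaying:
'$' + Math.floor(totalCents / 100) + '.' +
  ('0' + totalCents % 100).slice(-2); // "$5.18"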
 
Rob

// Note: toFixed and toPrecision both return strings, not numbers.
var num = 10;
var result = num.toFixed(2); // result will be the string "10.00"

num = 930.9805;
result = num.toFixed(3); // "930.981"

num = 500.2349;
result = num.toPrecision(4); // "500.2"

num = 5000.2349;
result = num.toPrecision(4); // "5000"

num = 555.55;
result = num.toPrecision(2); // "5.6e+2"
 
Joe Attardi

Rob said:
The best solution is to format the numbers to show only the number of
significant digits your application requires.

I basically need to compare a maximum to a computed total. The maximum
isn't known until runtime. So, to get the number of decimal places I
need, I could just count the number of decimal places in the maximum
and round the computed total to that many places?
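
Something like this might do it (decimalPlaces is a hypothetical
helper; it assumes the maximum prints as a plain decimal, not in
e-notation):

function decimalPlaces(value) {
  // Count the digits after the point in the number's string form.
  var parts = String(value).split('.');
  return parts.length > 1 ? parts[1].length : 0;
}

var max = 5.18; // known only at runtime
var total = 4.98 + 0.2; // 5.180000000000001
var rounded = parseFloat(total.toFixed(decimalPlaces(max))); // 5.18
rounded <= max; // true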
 
Dr John Stockton

JRS: In article <[email protected]>, dated Tue, 16 May 2006
10:16:56 remote, seen in news:comp.lang.javascript, Joe Attardi said:
Math is not my strongest area so forgive me if I use some of the wrong
terminology.
It seems that scientific notation is immune to rounding errors. For
example:

(4.98 * 100) + 5.51 // returns 503.51000000000005, rounding error!
4.98e2 + 5.51 // returns 503.51, correct!

But that is a simpler calculation, one operation instead of two.
4.98*100 does not show 498.

Why are scientific notation numbers not affected? And if this is true,
does this mean that scientific notation would be safe to use in a
floating-point addition function?

For example, 4.98 + 0.2, which comes out to 5.180000000000001
(incorrect!), would become (4.98e2 + 0.2e2) / 1e2, which comes out to
5.18 (correct!)

It does not come out to 5.18, since a Number, being an IEEE Double,
cannot represent 5.18 exactly. However, conversion to String, for
output, would give something so close to 5.18 that 5.18 would be shown.

Now 5, being an integer no bigger than 2^53, is held exactly;
and 5.18-5 gives 0.17999999999999971
and (5.18-5)*100-18 gives -2.842170943040401e-14

A value exact in cents but handled as euros.cents will generally be
represented inexactly, and arithmetic will generally introduce further
error. But if the work is done in cents, it will be done without error.

I think an exact number of cents divided by 100 and converted to String
will be shown as you would wish, except for suppression of trailing
zeroes; it would be better to convert the integer to String and insert
the decimal point by character manipulation (e.g. RegExp .replace).
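
A minimal sketch of that character manipulation (assuming a
non-negative integer number of cents):

function centsToString(cents) {
  var s = String(cents);
  // Pad to at least three digits so there is always a units digit,
  while (s.length < 3) s = '0' + s;
  // then insert the point before the last two digits.
  return s.replace(/(\d\d)$/, '.$1');
}

centsToString(518); // "5.18"
centsToString(500); // "5.00" - trailing zeroes preserved
centsToString(5);   // "0.05"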

Note that multiplication by 0.01 gives inexact results, since 0.01
cannot be held exactly -
try for (J=0; J<1111; J++) document.writeln(J / 100, '<br>')
Any insight would be appreciated...

Read the newsgroup FAQ, sections 4.7, 4.6; read
<URL:http://www.merlyn.demon.co.uk/js-maths.htm> and follow links
therein.
 
Lasse Reichstein Nielsen

Joe Attardi said:
If it is untrue, then why do the two examples come out with different
values?

Because 498 (aka. 4.98e2) can be represented exactly in the
representation used by Javascript's number type, but 4.98 cannot.

When you write 498 or 4.98e2 you get exactly that value.

The literal "4.98" does not give exactly the value 4.98, but rather
the representable value that is closest to that number. There is a
difference. When you then multiply by 100, the difference is also
multiplied, making it even more visible.
So what can I do to properly show the sum of an addition?

More importantly, how can you ensure that each operand of the
addition is exactly the number you expect? Generally, you can't.
The point of trying to use scientific notation is to do what I
think is called scaled integer arithmetic: that is, 4.98 + 0.2 becomes
498 + 20, the addition is done as integer addition, and then the result
is divided back down for the correct result.

I don't know the name, but the approach is sound (as long as you
don't get results so big that they lose precision, i.e., something
that cannot be represented with 53 bits of precision).
If using multiplication and division to move the decimal point won't
work, due to the floating-point inaccuracies, what about (I know this
sounds messy) converting them to strings, counting the decimal places
in the strings, removing the decimal points, converting back to
numbers, performing the calculation, and then inserting a decimal point
into the string of the result?

Workable. You could also start out with two integer literals, one
for the number without the decimal point and one for the position of
the point. Harder to read, but saves parsing the string.

Or you could create your own decimal number representation. Something
like:
---
function Dec(n, mag) {
  // The value represented is n * 10^mag, with n an integer.
  // mag defaults to 0 so that new Dec(5) means 5.0.
  this.n = Math.floor(n);
  this.mag = Math.floor(mag || 0);
}
Dec.prototype.add = function add(dec) {
  if (this.mag < dec.mag) {
    return dec.add(this);
  }
  // Rescale this.n down to the smaller magnitude so both
  // operands share a scale, then add the integers.
  var diff = this.mag - dec.mag;
  return new Dec(Math.pow(10, diff) * this.n + dec.n, dec.mag);
};
Dec.prototype.mult = function mult(dec) {
  // (n1 * 10^m1) * (n2 * 10^m2) = (n1 * n2) * 10^(m1 + m2)
  return new Dec(this.n * dec.n, this.mag + dec.mag);
};
Dec.prototype.toString = function toString() {
  var n = this.n;
  var mag = this.mag;
  var res = [];
  var sign = "";
  if (n < 0) {
    n = -n;
    sign = "-";
  }
  // Trailing zeros for positive magnitudes.
  while (mag > 0) {
    res.push("0");
    mag--;
  }
  // Emit digits, inserting the decimal point when mag crosses zero.
  while (n > 0) {
    res.push(String(n % 10));
    n = Math.floor(n / 10);
    mag++;
    if (mag == 0) {
      res.push(".");
    }
  }
  // Leading fractional zeros if the digits ran out before the point.
  while (mag < 0) {
    res.push("0");
    mag++;
    if (mag == 0) {
      res.push(".");
    }
  }
  res.push(sign);
  return res.reverse().join("");
};

var d1 = new Dec(5);    // 5.0
var d2 = new Dec(5,-1); // 0.5
var d3 = new Dec(5,1);  // 50.0
alert([d1.mult(d2),
       d1.mult(d3),
       d3.mult(d3),
       d2.mult(d2)]); // 2.5,250,2500,.25
 
