Double precision numbers.


Allagappan

Hello everyone,
Just check this code:

#include <stdio.h>

int main()
{
    double a, b;
    scanf("%lf", &a);
    scanf("%lf", &b);
    printf("\nNumber a: %0.16lf", a);
    printf("\nNumber b: %0.16lf", b);
    printf("\nDifference:%0.16lf\n", (a - b));
    return 0;
}

Output
------
[alagu@localhost decimal]$ ./decimal
12.3333
11.1111

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

Thanks in advance,
Regards,
Allagappan M
 

bhuwan.chopra

Actually, every floating-point number has a precision attached to it:
long double has more precision, while float has less.
To avoid the problem you are facing, you will have to devise your own
data structure to represent the precision you require.
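
To illustrate that first point, here is a small sketch (the digits printed
will vary by platform; FLT_DIG, DBL_DIG and LDBL_DIG come from <float.h>
and are commonly 6, 15 and 18):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float       f = 12.3333f;
    double      d = 12.3333;
    long double l = 12.3333L;

    /* each type holds the same constant with a different precision */
    printf("float       (%d digits): %.20f\n",  FLT_DIG,  f);
    printf("double      (%d digits): %.20f\n",  DBL_DIG,  d);
    printf("long double (%d digits): %.20Lf\n", LDBL_DIG, l);
    return 0;
}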

Regards,
Bhuwan Chopra
 

Chris Croughton

Allagappan said:
Hello everyone,
Just check this code:

#include<stdio.h>
int main()
{
double a,b;
scanf("%lf",&a);
scanf("%lf",&b);
printf("\nNumber a: %0.16lf",a);
printf("\nNumber b: %0.16lf",b);
printf("\nDifference:%0.16lf\n",(a-b));
return 0;
}

Output
------
[alagu@localhost decimal]$ ./decimal
12.3333
11.1111

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

Use COBOL or another language which supports exact fixed point decimal
numbers. Or see the FAQ, it's the first item in the section on floating
point numbers. Since numbers are held (in most machines) as binary,
precision is not infinite. Just as 1/3 is not representable exactly in
decimal (or binary or any other base not a multiple of 3), so 1/10 is
not representable in binary (or ternary or any other base not a multiple
of both 2 and 5).

It's easy to demonstrate with a much simpler program:

#include <stdio.h>

int main(void)
{
    double x = 0.1;
    printf("%0.16lf\n", x);
    return 0;
}

You could always use a lower precision for your output, like %0.12lf,
which would make it look correct even though internally the number is
incorrect in the last bit or so.
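
For example, a two-line sketch (the digits in the second line depend on the
platform's double format):

#include <stdio.h>

int main(void)
{
    printf("%.12f\n", 0.1);   /* 0.100000000000      - looks exact      */
    printf("%.17f\n", 0.1);   /* 0.10000000000000001 - the stored value */
    return 0;
}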

Chris C
 

Michael

Allagappan said:
[alagu@localhost decimal]$ ./decimal
12.3333
11.1111

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

Hi,
because numbers are stored in binary, decimal fractions that don't
end in 5 (or .0) generally cannot be stored exactly.
It's not a matter of precision. You can store 12.3333
as the integer 123333 and keep in mind that your number has to
be divided by 10,000.
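
A minimal sketch of that idea (the scale of 10,000 is the one mentioned
above; negative values and overflow are ignored for brevity):

#include <stdio.h>

int main(void)
{
    long a = 123333;                /* 12.3333 stored as 123333/10000 */
    long b = 111111;                /* 11.1111 stored as 111111/10000 */
    long d = a - b;                 /* exactly 12222, i.e. 1.2222     */
    printf("Difference: %ld.%04ld\n", d / 10000, d % 10000);
    return 0;
}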

Michael
 

Grumble

Allagappan said:
#include<stdio.h>
int main()
{
double a,b;
scanf("%lf",&a);
scanf("%lf",&b);
printf("\nNumber a: %0.16lf",a);
printf("\nNumber b: %0.16lf",b);
printf("\nDifference:%0.16lf\n",(a-b));
return 0;
}

Output
------
[alagu@localhost decimal]$ ./decimal
12.3333
11.1111

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

Do you understand how binary floating-point works?

http://www2.hursley.ibm.com/decimal/decifaq1.html

http://www2.hursley.ibm.com/decimal/
 

Dik T. Winter

> double a,b;
> scanf("%lf",&a);
> scanf("%lf",&b);
> printf("\nNumber a: %0.16lf",a);
> printf("\nNumber b: %0.16lf",b);
> printf("\nDifference:%0.16lf\n",(a-b)); ....
> [alagu@localhost decimal]$ ./decimal
> 12.3333
> 11.1111
>
> Number a: 12.3332999999999995
> Number b: 11.1111000000000004
> Difference:1.2221999999999991
> -----------------------------------
> We wanted a clean number like 12.3333 and not
> 12.3332999. What is the reason for this, and is there any way
> to avoid it? We want the decimal digits to be exact. How can we do that?

See the FAQ. The bottom line is that 12.3333 cannot be represented
exactly in a double. If you want exact results you need scaled
integers.
 

Old Wolf

Chris said:
It's easy to demonstrate with a much simpler program:

#include <stdio.h>

int main(void)
{
double x = 0.1;
printf("%0.16lf\n", x);
return 0;
}

Easy to demonstrate undefined behaviour :) (in C89 anyway).
%f (not %lf) is the correct printf specifier for 'double'.
 

CBFalconer

Allagappan said:
.... snip ...

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

Use integers.
 

Thomas Matthews

Allagappan said:
Hello everyone,
Just check this code:

#include<stdio.h>
int main()
{
double a,b;
scanf("%lf",&a);
scanf("%lf",&b);
printf("\nNumber a: %0.16lf",a);
printf("\nNumber b: %0.16lf",b);
printf("\nDifference:%0.16lf\n",(a-b));
return 0;
}

Output
------
[alagu@localhost decimal]$ ./decimal
12.3333
11.1111

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

Thanks in advance,
Regards,
Allagappan M

Try using a "%0.4f" specifier in the printf.
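
For example (a minimal sketch of that suggestion):

#include <stdio.h>

int main(void)
{
    double a = 12.3333, b = 11.1111;
    printf("Difference: %0.4f\n", a - b);   /* prints 1.2222 */
    return 0;
}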

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.comeaucomputing.com/learn/faq/
Other sites:
http://www.josuttis.com -- C++ STL Library book
http://www.sgi.com/tech/stl -- Standard Template Library
 

Tim Prince

Allagappan said:
Hello everyone,
Just check this code:

#include<stdio.h>
int main()
{
double a,b;
scanf("%lf",&a);
scanf("%lf",&b);
printf("\nNumber a: %0.16lf",a);
printf("\nNumber b: %0.16lf",b);
printf("\nDifference:%0.16lf\n",(a-b));
return 0;
}

Output
------
[alagu@localhost decimal]$ ./decimal
12.3333
11.1111

Number a: 12.3332999999999995
Number b: 11.1111000000000004
Difference:1.2221999999999991
-----------------------------------
We wanted a clean number like 12.3333 and not
12.3332999. What is the reason for this, and is there any way
to avoid it? We want the decimal digits to be exact. How can we do that?

The %.17g format is sufficient to express the full precision of an IEEE
64-bit double. Anything more is nearly guaranteed to produce "unclean"
output. Frequently you may have to go down to %.16g to force rounding.
In general, those decimal-digit limits can be calculated from information
present in <float.h>.
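
A short sketch of that (assuming IEEE 754 doubles; DBL_DIG comes from
<float.h> and is typically 15):

#include <stdio.h>
#include <float.h>

int main(void)
{
    double a = 12.3333;
    printf("DBL_DIG = %d\n", DBL_DIG);   /* typically 15               */
    printf("%.*g\n", DBL_DIG, a);        /* 12.3333                    */
    printf("%.17g\n", a);                /* 12.333299999999999 (or so) */
    return 0;
}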
 

Allagappan

Hello everyone,

Another snippet:

#include <stdio.h>

int main()
{
    double a = 144.623353166;
    double b = 144.623352166;
    printf("%0.17lg,%0.17lg\n", a, (a - b));
    printf("%0.17lf,%0.17lf\n", a, (a - b));
    return 0;
}

Output:
------------
144.62335316599999,9.9999999747524271e-07
144.62335316599998691,0.00000099999999748

As Tim says, %.17g should have given me 0.000001 (note the variation in
the sixth decimal digit of a and b), but that is not so.

Regards,
Allagappan
 

Tim Prince

Allagappan said:
Hello everyone,


snippet:

#include<stdio.h>
int main()
{
double a=144.623353166;
double b=144.623352166;
printf("%0.17lg,%0.17lg\n",a,(a-b));
printf("%0.17lf,%0.17lf\n",a,(a-b));
return 0;
}

Output:
------------
144.62335316599999,9.9999999747524271e-07
144.62335316599998691,0.00000099999999748

As Tim says, %.17g should have given me 0.000001 (note the variation in
the sixth decimal digit of a and b), but that is not so.

Regards,
Allagappan
When you subtract two numbers which are equal in the first 8 digits, and
whose fractional parts don't have an exact decimal-to-binary conversion,
you should expect only about 8 clean digits in the result.
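
A sketch that puts a number on that (assuming IEEE doubles; 1.0e-6 is what
the difference would be in exact arithmetic):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 144.623353166;
    double b = 144.623352166;
    double d = a - b;

    printf("difference     = %.17g\n", d);   /* ~9.9999999747524271e-07 */
    /* relative error ~2.5e-09, i.e. only about 8-9 digits survive */
    printf("relative error = %.2e\n", fabs(d - 1.0e-6) / 1.0e-6);
    return 0;
}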
 

Mark McIntyre

Chris Croughton said:
Use COBOL or another language which supports exact fixed point decimal
numbers.

Is it really possible that people can pass their CS exams without ever
reading Goldberg? I thought it was compulsory...

Heck, I came up from the trenches, and I learned about FP in my first
week at work...
 

Ben Pfaff

Mark McIntyre said:
Is it really possible that people can pass their CS exams without ever
reading Goldberg? I thought it was compulsory...

Heck, I came up from the trenches, and I learned about FP in my first
week at work...

Systems folk like me don't often encounter floating point. In an
operating system, the floating-point unit is just another set of
registers you have to save and restore.
 

Dik T. Winter

Lessee whether you understand it this way:
> double a=144.623353166;
> double b=144.623352166;
> printf("%0.17lg,%0.17lg\n",a,(a-b));

a and b are, when converted to binary and stored as a double precision, in
binary, followed by the difference:
a = 10010000.100111111001010000010010101101011101001111111
b = 10010000.100111111001010000000001111011101101110001011
a - b = 0.000000000000000000010000110001101111011110100

The exact decimal values for these numbers are:
a = 144.623353165999986913448083214461803436279296875
b = 144.623352165999989438205375336110591888427734375
a-b = 0.000000999999997475242707878351211547851562500
The next higher representable numbers are, in decimal:
a : 144.623353166000015335157513618469238281250000000
b : 144.623352166000017859914805740118026733398437500
You may note that both of these are further away from the decimal values
written in your source than the ones I gave above, so the stored values
are the correctly rounded ones.
> 144.62335316599999,9.9999999747524271e-07
> 144.62335316599998691,0.00000099999999748

Looks pretty much like the correctly rounded values.
> As Tim says,%.17g should have given me 0.000001(note the variation in
> sixth digit of a and b),but it is not that so.

Because you ask for 17 digits after the decimal point, and that is what
is given. If you had asked for 14 digits or fewer you would have gotten
what you appear to want.
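
One way to reproduce those exact stored values yourself (a sketch; %a
requires C99, and printing the full exact decimal expansion relies on the
C library doing exact conversion, as glibc does):

#include <stdio.h>

int main(void)
{
    double a = 144.623353166;
    double b = 144.623352166;

    /* %a shows the stored bits exactly, in hexadecimal floating form */
    printf("a   = %a\n", a);
    printf("b   = %a\n", b);
    printf("a-b = %a\n", a - b);

    /* asking for plenty of digits shows the exact decimal expansion of
       what is actually stored (extra places are padded with zeros)    */
    printf("a   = %.45f\n", a);
    printf("a-b = %.45f\n", a - b);
    return 0;
}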
 

Joe Wright

Allagappan said:
Hello everyone,


snippet:

#include<stdio.h>
int main()
{
double a=144.623353166;
double b=144.623352166;
printf("%0.17lg,%0.17lg\n",a,(a-b));
printf("%0.17lf,%0.17lf\n",a,(a-b));
return 0;
}

Output:
------------
144.62335316599999,9.9999999747524271e-07
144.62335316599998691,0.00000099999999748

As Tim says, %.17g should have given me 0.000001 (note the variation in
the sixth decimal digit of a and b), but that is not so.

Regards,
Allagappan

Hi Al,

Your expectations cannot be met. Most conversions between binary and
decimal are inexact due to the differences in the two number systems.
Please also note that 'real' numbers are stored in your computer in
binary, not decimal. When you do something like ..

float f = 1.2;

... constant 1.2 is converted to binary floating point and stored in f.
Something like this:

Bits:  00111111 10011001 10011001 10011010
Exp:   127 (biased), i.e. a shift of 1 (00000001)
Man:   .10011001 10011001 10011010
Value: 1.20000005e+00

The above is my explanation of IEEE FP. The Mantissa is a fraction (less
than one) and the Exponent (after applying an offset) is the number of
bits to shift the mantissa left to obtain the Value.

Note that the fractional part of 1.2 expressed in binary is a repeating
series 00110011 etc. in 23 bits but the last series, 0011 runs out of
bits and so is 'rounded' toward infinity as 0100. As a result, the float
is slightly larger than 1.2 and printf with "%.8e" will convert it back
to decimal and show it to you.

But it's still a conversion and prone to error. Converting float to
double is lossless. The double converted to decimal ..

1.2000000476837158e+00

Now if I convert 1.2 to double right away, the fractional 0011 series
has 52 bits to play itself out; rounding still happens, but the error is
far smaller, so converting the double with "%.16e" gives decimal ..

1.2000000000000000e+00

Go figure. Somebody should write a book.. :)
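
A small sketch of the comparison described above (assuming IEEE single and
double precision; the digits printed come from the stored binary values,
not from the decimal constant):

#include <stdio.h>

int main(void)
{
    float  f = 1.2f;
    double d = 1.2;

    printf("float  1.2          : %.8e\n",  f);          /* 1.20000005e+00         */
    printf("float  1.2 as double: %.16e\n", (double)f);  /* 1.2000000476837158e+00 */
    printf("double 1.2          : %.16e\n", d);          /* 1.2000000000000000e+00 */
    return 0;
}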
 

Chris Croughton

Mark McIntyre said:
> Is it really possible that people can pass their CS exams without ever
> reading Goldberg? I thought it was compulsory...

I don't think I ever read Goldberg, but binary fractional arithmetic was
introduced in our university class, along with why it wasn't possible to
represent decimal fractions exactly (as a maths student I already knew
that from using numbers in other bases).

> Heck, I came up from the trenches, and I learned about FP in my first
> week at work...

I have a feeling that it may have been mentioned in Daniel D.
McCracken's FORTRAN IV book, which was the first programming language
book I read, but I can't find my copy...

Chris C
 

Randy Howard

Chris Croughton said:
> I don't think I ever read Goldberg, but binary fractional arithmetic was
> introduced in our university class, along with why it wasn't possible to
> represent decimal fractions exactly (as a maths student I already knew
> that from using numbers in other bases).

When I was a younger pup, you had to take a numerical analysis course,
focused entirely upon how to perform floating point calculations
as accurately as possible, error propagation, etc., along with interpolation,
integration, difeqs, solving nonlinear equations, least squares,
singular value decomp, random numbers and Monte Carlo methods. You
wanted a degree, you passed the class.

The standard text was Forsythe, Malcolm and Moler's "Computer Methods
for Mathematical Computations", my copy is (C) 1977. Of course,
George Forsythe was the founder and chair of Stanford's CS department
when he died five years earlier, but Michael Malcolm and Cleve Moler
gave him top billing anyway, quite a noble gesture.

It's interesting to note that the book referred to "desk machines"
and "automatic computers". :)

Some of the computer systems and "desk machines" discussed with respect
to base, precision and exponent range early in the text include the
Univac 1108, Honeywell 6000, PDP-11, Control Data 6600, Cray-1, Illiac-
IV, SETUN, Burroughs B5500, Hewlett Packard HP-45, TI SR-5x, IBM 360
and 370, Telefunken TR440 and Maniac II. Extensive discussion of
the variations between the platforms, single and double precision
usage (and the tradeoffs on the various platforms) is discussed
early on, along with error minimization, then a lot of math and
algorithms to solve them accurately.

It covered a lot of ground that was required for all CS students
once upon a time. I don't remember the last time I met a new
CS grad that had a clue what "machine epsilon" even meant.
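
For anyone who hasn't met the term: machine epsilon is the gap between 1.0
and the next representable value. A quick sketch (the loop is only
illustrative; <float.h> already provides the value as DBL_EPSILON):

#include <stdio.h>
#include <float.h>

int main(void)
{
    volatile double sum;   /* volatile keeps extended-precision registers
                              from hiding the rounding on some machines  */
    double eps = 1.0;

    do {
        eps /= 2.0;
        sum = 1.0 + eps;
    } while (sum > 1.0);
    eps *= 2.0;            /* back up one halving */

    printf("computed epsilon: %g\n", eps);        /* typically 2.22045e-16 */
    printf("DBL_EPSILON     : %g\n", DBL_EPSILON);
    return 0;
}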

--
Randy Howard (2reply remove FOOBAR)
Life should not be a journey to the grave with the intention of arriving
safely in an attractive and well preserved body, but rather to skid in
sideways, chocolate in one hand, martini in the other, body thoroughly
used up, totally worn out and screaming "WOO HOO what a ride!!"
 

CBFalconer

Randy said:
.... snip ...

When I was a younger pup, you had to take a numerical analysis
course, focused entirely upon how to perform floating point
calculations as accurately as possible, error propagation, etc.
along with interpolation, integration, difeqs, solving nonlinear
equations, least squares, singular value decomp, random numbers
and monte carlo methods. You wanted a degree, you passed the
class.

The standard text was Forsythe, Malcolm and Moler's "Computer
Methods for Mathematical Computations", my copy is (C) 1977.
.... snip ...

No such thing when I was in school. I discovered Knuth about the
time I was creating my third floating point system, and had had my
nose rubbed in some of the inherent (and some not inherent)
problems. One of my important references was Margenau and Murphy's
"Mathematics of Physics and Chemistry", which predates computers.
I remember it was especially lucid on least-squares fits, so I could
apply it to things other than polynomials. Unfortunately some
cretin stole my copy.

--
Some informative links:
http://www.geocities.com/nnqweb/
http://www.catb.org/~esr/faqs/smart-questions.html
http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
 

Chris Croughton

Randy Howard said:
> When I was a younger pup, you had to take a numerical analysis course,
> focused entirely upon how to perform floating point calculations
> as accurately as possible, error propagation, etc., along with interpolation,
> integration, difeqs, solving nonlinear equations, least squares,
> singular value decomp, random numbers and Monte Carlo methods. You
> wanted a degree, you passed the class.

I took NA, but it didn't cover floating point at all (it was run as part
of the maths course, not the computing one).

> The standard text was Forsythe, Malcolm and Moler's "Computer Methods
> for Mathematical Computations", my copy is (C) 1977. Of course,
> George Forsythe was the founder and chair of Stanford's CS department
> when he died five years earlier, but Michael Malcolm and Cleve Moler
> gave him top billing anyway, quite a noble gesture.

Ah, if it was published in America in 1977 it would be very unlikely to
have made it to the UK in time for my graduation in 1978 <g>. (Still
available from the original publishers (Prentice-Hall) from Amazon and
others, I just looked.)

> It's interesting to note that the book referred to "desk machines"
> and "automatic computers". :)
>
> Some of the computer systems and "desk machines" discussed with respect
> to base, precision and exponent range early in the text include the
> Univac 1108, Honeywell 6000, PDP-11, Control Data 6600, Cray-1, Illiac-
> IV, SETUN, Burroughs B5500, Hewlett Packard HP-45, TI SR-5x, IBM 360
> and 370, Telefunken TR440 and Maniac II. Extensive discussion of
> the variations between the platforms, single and double precision
> usage (and the tradeoffs on the various platforms) is discussed
> early on, along with error minimization, then a lot of math and
> algorithms to solve them accurately.

And of course well prior to the IEEE standardisation of floating point.

> It covered a lot of ground that was required for all CS students
> once upon a time. I don't remember the last time I met a new
> CS grad that had a clue what "machine epsilon" even meant.

I don't think I've ever heard it called that, but it's pretty obvious
(to me at least) in context. But given the number of times it's asked
here (it really is a FAQ) I assume that it's no longer being taught...

Chris C
 
