problem with float


bala.pandu

Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

int main()
{
float f1;
printf("Enter a float value : \n");
scanf("%f",&f1);
printf ("float value entered is : %f\n",f1);
}

if i give 22.2 as input, i am getting 22.200001 as output.
if i give 22.22 as input, i am getting 22.219999 as output.

but i want 22.200000 and 22.220000 as output.

is there anyway to do it.

Thanks and Regards,
P.Balaji

P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.
 

Richard Heathfield

(e-mail address removed) said:
Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

int main()
{
float f1;
printf("Enter a float value : \n");

At this point, the behaviour of the program is undefined, because you
called a variadic function without a valid prototype in scope. To fix
this, add

#include <stdio.h>

at the top of your program.
scanf("%f",&f1);

How do you know this worked? You forgot to check the return value of
scanf.
printf ("float value entered is : %f\n",f1);
}

if i give 22.2 as input, i am getting 22.200001 as output.
if i give 22.22 as input, i am getting 22.219999 as output.

Neither of these values is capable of being stored precisely in a finite
number of bits, using pure binary representation. Therefore, some
imprecision is inevitable. The 22 part is easy, but 0.2 is impossible.

To demonstrate this, let's try to do the impossible.

Working in binary now...

0.1 (binary) is 1/2 which is too high.
0.01 (binary) is 1/4 which is too high.
0.001 (binary) is 1/8 which is too low.
0.0011 (binary) is 3/16 which is too low.
0.00111 (binary) is 7/32 which is too high.
0.001101 (binary) is 13/64 which is too high.
0.0011001 (binary) is 25/128 which is too low.
0.00110011 (binary) is 51/256 which is too low.
0.001100111 (binary) is 103/512 which is too high.
0.0011001101 (binary) is 205/1024 which is too high.
0.00110011001 (binary) is 409/2048 which is too low.
0.001100110011 (binary) is 819/4096 which is too low.
0.0011001100111 (binary) is 1639/8192 which is too high.
0.00110011001101 (binary) is 3277/16384 which is too high.
0.001100110011001 (binary) is 6553/32768 which is too low.
0.0011001100110011 (binary) is 13107/65536 which is too low.
__^^__^^__^^__^^

Are you seeing a pattern yet? Well, that pattern goes on forever.
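You can watch this happen from C by printing the nearest float to 0.2
with more digits than the default six. A small illustrative sketch (the
exact digits shown assume IEEE 754 single and double precision, which is
typical but not required by the Standard):

#include <stdio.h>

int main(void)
{
    float f = 0.2f;   /* the nearest float to 0.2, not 0.2 itself */
    double d = 0.2;   /* the nearest double to 0.2: closer, still not exact */

    /* Print far more digits than %f's default six, so the rounding
       becomes visible.  On an IEEE 754 system the float line
       typically shows 0.20000000298023223877.                      */
    printf("float : %.20f\n", f);
    printf("double: %.20f\n", d);
    return 0;
}
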
but i want 22.200000 and 22.220000 as output.

is there anyway to do it.

Use double instead of float to increase the precision (%f is still fine
in your printf, but for double you'll need %lf for the scanf format
specifier), and use %.6f to restrict the output to six decimal places.
If you find that the numbers are coming out very very slightly low, add
0.0000001 to them before printing.
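Putting those suggestions together (the #include, the return-value
check, double with %lf in the scanf, %.6f on output), a minimal sketch
might look like this:

#include <stdio.h>   /* valid prototypes for the variadic printf/scanf */

int main(void)
{
    double d;

    printf("Enter a value: ");
    if (scanf("%lf", &d) != 1)      /* %lf for double; check it worked */
    {
        printf("That wasn't a number.\n");
        return 1;
    }
    printf("value entered is : %.6f\n", d);  /* six decimal places */
    return 0;
}
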
P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.

Why? float is mostly an anachronism. There is very little reason to use
it at all, and no reason whatsoever for learners to use it.
 

Lew Pitcher

Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave. [snip]
P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.

Hi, Bala

You /really/ need to read the article "What Every Computer Scientist Should
Know About Floating-Point Arithmetic" originally published in the March, 1991
issue of "Computing Surveys" (the journal of the ACM). It explains why you
never get absolutely accurate results from a floating-point number.

http://docs.sun.com/source/806-3568/ncg_goldberg.html

--
Lew Pitcher

Master Codewright & JOAT-in-training | Registered Linux User #112576
http://pitcher.digitalfreehold.ca/ | GPG public key available by request
---------- Slackware - Because I know what I'm doing. ------
 

Ernie Wright

Richard said:
(e-mail address removed) said:


Why? float is mostly an anachronism. There is very little reason to
use it at all,

Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used in
special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.
and no reason whatsoever for learners to use it.

They should start out using double, I agree, but at some point they
should learn the tradeoffs. More importantly, they should be told that
using double is *not* a cure for finite precision.

- Ernie http://home.comcast.net/~erniew
 

Richard Heathfield

Ernie Wright said:
Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used in
special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.

Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.
They should start out using double, I agree, but at some point they
should learn the tradeoffs. More importantly, they should be told
that using double is *not* a cure for finite precision.

Quite so. I hope I made that clear in my original reply.
 

Keith Thompson

Ernie Wright said:
Actually, float is more popular than ever in some domains, where its
smaller precision and range compared to double are either irrelevant or
less important than space and speed. It's also much more widely used in
special-purpose hardware, e.g. graphics processors and the SIMD CPU
extensions.

Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.
 

Walter Roberson

Keith Thompson said:
Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.

It may even be slower, especially if the processor does the
calculations themselves in double precision and then has to take
an extra step to step them down to single precision. (If you
are supposedly working with a lower precision number, then
it can be an error to carry around too much precision internally.)
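
As an aside, C99 lets you ask the implementation whether it does that:
the FLT_EVAL_METHOD macro in <float.h> is 0 if operations are evaluated
in the operand type, 1 if float operations are carried out as double,
and 2 if everything is carried out as long double. A trivial check
(assuming a C99 implementation):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* 0: evaluate in the operand type, 1: float ops done as double,
       2: everything done as long double, -1: indeterminable.        */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}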
 

Eric Sosman

Walter Roberson wrote on 05/23/07 16:05:
It may even be slower, especially if the processor does the
calculations themselves in double precision and then has to take
an extra step to step them down to single precision. (If you
are supposedly working with a lower precision number, then
it can be an error to carry around too much precision internally.)

<off-topic reason="C don' need no steenkin' speeds">

Even on CPUs where float-to-internal-and-back takes
extra processing, float is often faster than double by
virtue of being smaller. Contemporary CPUs spend a large
amount of time sitting idle, waiting for memory to disgorge
some data to be crunched. If you can deliver twice as many
items per cache line fill or per page fault or whatever,
you'll reduce the amount of time the CPU wastes twiddling
its silicon thumbs. That will usually far outweigh a few
cycles spent converting between formats.
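
A rough and decidedly unscientific way to see that effect is to stream
over a large array of each type and time it. This is only an
illustrative sketch; the array size and the use of clock() are arbitrary
choices, and the timings will vary wildly between machines:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 16000000UL   /* arbitrary: big enough to spill out of cache,
                          small enough that a float can sum it exactly */

int main(void)
{
    unsigned long i;
    clock_t t0, t1, t2;
    float  fsum = 0.0f;
    double dsum = 0.0;
    float  *f = malloc(N * sizeof *f);
    double *d = malloc(N * sizeof *d);

    if (f == NULL || d == NULL)
        return 1;

    for (i = 0; i < N; i++) { f[i] = 1.0f; d[i] = 1.0; }

    t0 = clock();
    for (i = 0; i < N; i++) fsum += f[i];   /* reads 4 bytes per item */
    t1 = clock();
    for (i = 0; i < N; i++) dsum += d[i];   /* reads 8 bytes per item */
    t2 = clock();

    printf("float  sum %.0f took %.2fs\n", fsum, (t1 - t0) / (double)CLOCKS_PER_SEC);
    printf("double sum %.0f took %.2fs\n", dsum, (t2 - t1) / (double)CLOCKS_PER_SEC);
    free(f);
    free(d);
    return 0;
}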

That's assuming good data locality, of course: Nice,
orderly matrix operations or FFTs or convolutions or that
kind of thing. If you're bouncing all over memory at the
mercy of a linked data structure and the whim of a random
number generator, the cache will thrache anyhow.

(Can I claim a coinage for "thrache?" Google finds 523
hits, but few seem computer-related.)

</off-topic>
 

Ernie Wright

Keith said:
Using float rather than double will *probably* save space (though I've
used systems where both are 64 bits), but it won't necessarily give
you greater speed. It will on some systems, but on others float
calculations may be no faster than double calculations.

Most of the speed gains are because more floats will fit in a given
amount of cache, or more can be read from disk with a given bandwidth,
and so on, and this is very broadly applicable. The actual calculations
tend to take exactly the same amount of time with modern hardware.

- Ernie http://home.comcast.net/~erniew
 

Ernie Wright

Richard said:
Ernie Wright said:



Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.

Well, that's a step away from anachronism, but I still wouldn't agree.
Doubles should be used by beginners and in "all else being equal" cases.
Beyond that, the choice depends on what you're doing. The use of floats
isn't and shouldn't be confined to esoteric domains.

"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short. It's often the advice for every numerical
ill, and it's usually wrong. Like casts, double precision can hide
problems temporarily, but it isn't a cure for them. Much better to
actually understand the numerical behavior of your code.

- Ernie http://home.comcast.net/~erniew
 

Richard Heathfield

Ernie Wright said:

"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short.

I prefer to think of it as a "lie-to-students" - that is, it's a
strategy that works perfectly well until they know enough about
programming in general and floating-point in particular to realise that
there are circumstances in which it *doesn't* work, by which time
they'll also know enough to deal with those circumstances appropriately
and effectively.

It's a bit like the claim that main always returns int. This is
*sufficiently* true to be a good "lie-to-students". By the time they
realise there are circumstances in which it isn't true, they will also
realise *why* they were led down that particular garden path,
appreciate that the reasoning of the "liar-to-students" was valid, and
understand when and why main sometimes doesn't return int (and indeed
may not even be the entry point to the program).
 

Joe Wright

Lew said:
Hello Everyone,

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave. [snip]
P.S : The above is not true for all the values. for example
if i give 22.1 i am getting 22.100000. And if i use double instead of
float, i can get the desired result but i want to use only float.

Hi, Bala

You /really/ need to read the article "What Every Computer Scientist Should
Know About Floating-Point Arithmetic" originally published in the March, 1991
issue of "Computing Surveys" (the journal of the ACM). It explains why you
never get absolutely accurate results from a floating-point number.

http://docs.sun.com/source/806-3568/ncg_goldberg.html

The "never" is overstated. The value 6.75 will be represented absolutely
accurately, and so will 5.5, 1.25 and so on, because those values have
'binary' fraction parts: 1/2, 1/4, 1/8, etc.

01000000 11011000 00000000 00000000
Exp = 129 (biased), i.e. 3 (00000011)
Man = .11011000 00000000 00000000
Val = 6.75000000e+00
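
An easy way to convince yourself that 6.75 is stored exactly is to print
it with far more digits than it could need, next to a value that has no
finite binary expansion (the output shown is what an IEEE 754 float
typically gives):

#include <stdio.h>

int main(void)
{
    float exact   = 6.75f;  /* 2^2 + 2^1 + 2^-1 + 2^-2: a finite binary fraction */
    float inexact = 6.7f;   /* no finite binary representation */

    printf("%.20f\n", exact);    /* 6.75000000000000000000           */
    printf("%.20f\n", inexact);  /* 6.69999980926513671875 (typical) */
    return 0;
}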
 

Francine.Neary

Well, that's a step away from anachronism, but I still wouldn't agree.
Doubles should be used by beginners and in "all else being equal" cases.
Beyond that, the choice depends on what you're doing. The use of floats
isn't and shouldn't be confined to esoteric domains.

"Always use double" has a cargo cult tinge to it. I'm not implying
that's what you said, just that the distance from it to what you said
seems uncomfortably short. It's often the advice for every numerical
ill, and it's usually wrong. Like casts, double precision can hide
problems temporarily, but it isn't a cure for them. Much better to
actually understand the numerical behavior of your code.

It's true that the difference in space taken up by a double and a
float may be tiny, but a huge number multiplied by a tiny number can
still end up as a large number - if you have very large arrays then
float is probably the sensible choice.
 

JimS

Ernie Wright said:

Yes, but those are very little reasons. Important within their domain,
sure, but *in general* (i.e. outside those specialist domains) it still
makes sense to use double rather than float.

I understand why you're saying this, but surely the golden rule is to
use the sharpest tool for the job?

Another little reason I have to use float instead of double is that I
have a variety of CPUs with only a 32-bit FPU, so the double type is
entirely emulated.

Jim
 

Joe Wright

It's true that the difference in space taken up by a double and a
float may be tiny, but a huge number multiplied by a tiny number can
still end up as a large number - if you have very large arrays then
float is probably the sensible choice.

The difference in size of float and double objects is not 'tiny' and is
usually a factor of two.

What do you mean by huge and tiny numbers in terms of floating point?
 

CBFalconer

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

floats (and doubles) are not exact. They are closest
approximations. They can usually represent exactly only some
limited range of integers, and a few other specific values.

You can round the printed result to a specified number of decimal
places by giving a precision in the printf conversion specification.
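
For instance, a two-line sketch of that; the value stored in the float
is unchanged, only the text output is rounded:

#include <stdio.h>

int main(void)
{
    float f = 22.22f;    /* stored as the nearest float, not exactly 22.22 */

    printf("%f\n", f);   /* default six decimal places, e.g. 22.219999 */
    printf("%.2f\n", f); /* rounded on output to two places: 22.22     */
    return 0;
}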

--
If you want to post a followup via groups.google.com, ensure
you quote enough for the article to make sense. Google is only
an interface to Usenet; it's not Usenet itself. Don't assume
your readers can, or ever will, see any previous articles.
More details at: <http://cfaj.freeshell.org/google/>
 

CBFalconer

Eric said:
.... snip ...

(Can I claim a coinage for "thrache?" Google finds 523
hits, but few seem computer-related.)

You can also claim 'steenkin' if you wish. At least as far as I am
concerned. Enjoy.

--
If you want to post a followup via groups.google.com, ensure
you quote enough for the article to make sense. Google is only
an interface to Usenet; it's not Usenet itself. Don't assume
your readers can, or ever will, see any previous articles.
More details at: <http://cfaj.freeshell.org/google/>
 

CBFalconer

Lew said:
.... snip ...

You /really/ need to read the article "What Every Computer
Scientist Should Know About Floating-Point Arithmetic" originally
published in the March, 1991 issue of "Computing Surveys" (the
journal of the ACM). It explains why you never get absolutely
accurate results from a floating-point number.

http://docs.sun.com/source/806-3568/ncg_goldberg.html

That statement is wrong (the 'never'). Most systems will handle
some range of integers with perfect accuracy.

--
If you want to post a followup via groups.google.com, ensure
you quote enough for the article to make sense. Google is only
an interface to Usenet; it's not Usenet itself. Don't assume
your readers can, or ever will, see any previous articles.
More details at: <http://cfaj.freeshell.org/google/>
 

Charlton Wilbur

RH> I prefer to think of it as a "lie-to-students" - that is, it's
RH> a strategy that works perfectly well until they know enough
RH> about programming in general and floating-point in particular
RH> to realise that there are circumstances in which it *doesn't*
RH> work, by which time they'll also know enough to deal with
RH> those circumstances appropriately and effectively.

When I engage in a lie-to-students this way -- and for several years I
taught music theory, which is founded on several major
lies-to-students[1] -- I made a point of mentioning that it was, in
fact, a lie-to-students. They seemed to appreciate the honesty, and
when they came up with a counterexample that didn't fit the lie, it
was much easier to say "remember that I told you I was going to lie to
you about some things? That's one of them, and you'll probably be
talking about how it *really* works this time next year" than "well,
yes, I know that Bach wrote that, and no, I'm not saying he's wrong,
but I'd have to fail him if he turned in an exercise like that!"

(Or, famously, "When *you* are Beethoven, then you, too, can write
like that.")

Charlton


[1] OT, but at least not about C++ or Windows: The biggest lie is that
there are pitch combinations that are objectively consonant and
dissonant. If you limit your study only to European and American
music from AD 1000 to the present, you can see that this is not the
case: consonance and dissonance are elements of style. But you have
to pick a starting place, and if you hedge too much, you don't get
anywhere and the students don't have the grounding to understand it
anyway, and so freshman music theory students are told that some pitch
combinations are objectively consonant and dissonant.
 

Jean-Marc Bourguet

When i am assigning a float value and try to print the same,
i found the value stored is not the exact value which i gave.

The floating point numbers are a subset of the real numbers.

This subset contains all integers whose absolute value is small enough.

As it is a subset, it should be evident that the result of an operation on
one or two floating point numbers is usually not a floating point number,
so the exact result has to be rounded to stay in the subset. So when you
compute with floating point numbers, most operations introduce a rounding
error, and the combination of these errors can become significant.

Usually the subset is based on a binary representation (*) and so most
non-integer numbers written in decimal notation are not in the set, even
such a simple number as 0.1 (this is similar to the fact that simple
numbers like 1/3 or 1/7 have no finite representation in decimal
notation). So when you use decimal notation to give a floating point
number, you can't expect that a rounding error won't be introduced
during the conversion to floating point.
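
The classic demonstration of the combined effect, assuming a binary
float format: add 0.1 to itself ten times and compare with 1.0. The
test typically fails, because the stored 0.1 and every intermediate sum
have each been rounded:

#include <stdio.h>

int main(void)
{
    float sum = 0.0f;
    int i;

    for (i = 0; i < 10; i++)
        sum += 0.1f;      /* each 0.1f and each partial sum is rounded */

    if (sum == 1.0f)
        printf("sum is exactly 1\n");
    else
        printf("sum is %.9f, not 1\n", sum);
    return 0;
}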

(*) Some are pushing for the use of decimal floating point, which would
solve this second problem but increase the previous one and also increase
the difficulty of the analysis needed to get precise estimations of the
rounding error. IBM just announced a processor with hardware support for
decimal floating point. The C committee has a Technical Report in
progress describing extensions to handle them.
 
