Large numbers, floating point number questions


Noob

Ben said:
[ Using a float as a loop index is not a good idea. ]

I emphatically agree.
An integer type is preferable. I don't think there is anything actually
wrong with the above as far as C is concerned (because of the guarantees
C makes about floating point arithmetic) but you'll make everyone read
the code several times just to be sure.

Considering that the condition (f == f+1) is true for (relatively) small
values of f, I would argue that using a float variable as a loop index
is a disaster waiting to happen.

#include <stdio.h>

void foo(unsigned long n)
{
    float f;
    for (f = 0; f < n; ++f)
    {
        float g = f + 1;
        if (f == g) break;
    }
    printf("f=%f n=%lu\n", f, n);
}

int main(void)
{
    foo(10*1000*1000);
    foo(20*1000*1000);
    return 0;
}

$ gcc -Wall -Wextra tutu.c
$ ./a.out
f=10000000.000000 n=10000000
f=16777216.000000 n=20000000

Regards.
 

Chad

[...]
How many numbers would 1e300 produce? What can I say, my math isn't up
to par.

How many numbers?

1e300 is one number.

It has 301 decimal digits, if that's what you mean: a 1 followed by 300
0s.  And you'd need 997 bits to represent it exactly.

--

How do you figure that you need 997 bits to represent 301 decimal
digits exactly?
 

Keith Thompson

Chad said:
How do you figure that you need 997 bits to represent 301 decimal
digits exactly?

The base 2 log of 1e300 is approximately 996.58.

To put it another way, 10**300 is approximately 1.49 * 2**996 (where
"**" denotes exponentiation).
 

Chad

Chad said:
[...]

--

<off topic>
I've noticed I will sometimes forget things that I learned 6 or even 9
months ago because the material never quite *sunk in* the first couple
of times around.
</off topic>
 

Chad

On 4/20/2010 5:24 PM, Chad wrote:
The following program is my attempt to calculate the formula at the
following url...
[... code that needs >525-bit integers ...]
      I haven't studied the code to find why it "breaks," because
the fundamental approach simply isn't going to work with the
native types on today's machines.  Find a different formula, or
find a multiple-precision arithmetic package.
What other formula do you have in mind?

     None.  What are you trying to calculate?  Can you calculate
its logarithm, say, instead of the quantity itself?  (That's how
I got to the ">525-bit" estimate, for example: by calculating
lgamma(101) instead of attempting 100 factorial.)

--

I guess the answer would be I don't know because this was just
something I did as a programming exercise.
 

Chad

If your 'unsigned long' is, say, 32 bits, then the largest factorial it can
store is 12!

13! is 6,227,020,800, which needs 33 bits; if you try to compute it using
32 bits, it will give funny results.

Summing the factorials has similar problems; integers just have too narrow
a range.

Floats have a much wider range: typically a float might go up to around
1e38, and a double up to about 1e300. But a double only gives some 16
digits of precision.

Simplify your program to *only* show factorials from 1 to 100 (forget
command line input, just use a loop), and you will
see where it goes wrong. Then put in doubles, and see the difference.

--

I think I'm missing the broader concept here. Given the following....

#include <stdio.h>
#include <stdlib.h>

double fact(double n)
{
    double acc = 1.0;
    unsigned long i;

    for (i = 1; i <= n; i++)
        acc *= i;

    return acc;
}

int main(void)
{
    int i;

    for (i = 0; i < 150; i++) {
        printf("%d! = %.4e\n", i, fact(i));
    }

    return 0;
}


How can my 32 bit computer calculate numbers like 100! if it
can only store 32 bits?
 

Keith Thompson

Chad said:
How can my 32 bit computer calculate numbers like 100! if it
can only store 32 bits?

Floating-point gives up some precision to get greater range.
The larger the number, the less precisely it can represent it.

Consider a small, simplified version of floating point where a
number can be represented as:

X.XXXeXX

That's 4 digits of precision and 2 digits of exponent (decimal,
not binary, just for this example) (and I'm ignoring signs).
With a 6-digit integer, you can only represent numbers up to 999999.
With this simplified floating-point representation, you can represent
much bigger numbers, up to 9.999e99 -- but you can't represent 999999
exactly. The nearest representable values are 9.999e05 and 1.000e06.

Picture the real number line. Mathematically, it's continuous and
infinitely long. Numbers representable in an integer type are spaced
evenly, one unit apart, over a finite range. Numbers representable
in a floating-point type are spaced unevenly, more tightly near
zero and more widely far from zero.

By the way, these questions are mostly not about C, but about computer
numeric representations in general. An appropriate Google search
or a decent book will probably give you more and better information.

The classic paper on the topic is David Goldberg's "What Every Computer
Scientist Should Know About Floating-Point Arithmetic". It might be too
advanced for your current level of knowledge, but you should take a look
and decide for yourself.
 

BruceS

I think I'm missing the broader concept here. [...]

How can my 32 bit computer calculate numbers like 100! if it
can only store 32 bits?

The trick is that it's not storing 100! in 32 bits---it's storing a
number *like* 100! for some definition of "like". Floating point
formats split up the bits into mantissa and exponent (and sign, but
let's leave that out for now). It's easy to see if you think of the
"scientific notation", which does the same with base-10. Imagine you
have two decimal digits for the mantissa and two for the exponent, so
99000 would be represented as 9.9E+04. You can count by one to a
hundred, but once you hit 100, you start counting by 10, since you
don't have a third digit in your mantissa (e.g. 1.0E+2, 1.1E+2, 1.2E+2
are 100, 110, 120). This skims right past the sign, biasing of the
exponent, etc., but should give you the general idea. Once you hit
1000, you start counting by 100's. With this scheme, you could
represent huge numbers (like googol), using just those four decimal
digits. You just lose detail with larger numbers. Now imagine the
same thing using e.g. 11 digits of exponent, 20 digits of mantissa,
and all the digits being binary, and you have the idea. Below a
certain threshold you can represent every integer; above it, you
represent some of them, getting lower "density" as the magnitude
increases. It's possible that you get 100! exactly with a 32-bit
floating point representation, but if so you won't get 100! + 1.
That's why people were asking how accurate the result had to be.

Does that make it clear?
 

Chad

[...]

The classic paper on the topic is David Goldberg's "What Every Computer
Scientist Should Know About Floating-Point Arithmetic".  It might be too
advanced for your current level of knowledge, but you should take a look
and decide for yourself.

I tried to read David Goldberg's "What Every Computer Scientist Should
Know About Floating-Point Arithmetic". I was totally lost after the
first paragraph.
 

Eric Sosman

I think I'm missing the broader concept here. [...]

How can my 32 bit computer calculate numbers like 100! if it
can only store 32 bits?

How can *you* express a hundred-digit number if you can
store only five decimal digits?


Da da da da da da daaah,

Da da da da daah, da-da-da-da-da,

Da da da da da da daaah,

Da! da-da da da da-da.




Da da da da da da daaah,

Da da da da daah, da-da-da-da-da,

Da da da da da da daaah,

Da! da-da da da ... da ... da.



Hint: What does 123e97 mean in a C source module?

Extra credit: When you use your five digits this way to
get an enormous range, what do you sacrifice by comparison to
using them in the "ordinary" way? There ain't no such thing
as a free lunch, after all.
 

bartc

double fact(double n)

Actually, n could be an int, since it only needs to hold fairly small
whole numbers.
{
double acc = 1.0;
unsigned long i;

for (i = 1; i <= n; i++)

A floating point n *might* cause trouble in some cases (but OK in this case,
as n will be a whole number).
How can my 32 bit computer calculate numbers like 100! if it
can only store 32 bits?

Floating point numbers are like that. Magic. They sacrifice some of the
32-bits to form an exponent, to increase the range, in the same way my
12-digit Casio calculator uses 2-digits for the exponent, leaving 10 for the
significant figures.

Anyway 'double' is likely 64-bits. Change double to float (usually 32-bits),
and the program will go wrong after 34!, as 32-bit floats can only represent
up to about 1e38.

As it is, it will go wrong after 170! (change the 150 to 180): 170! is
about 7.3e306, but 171! exceeds the double maximum of roughly 1.8e308.
 

Tim Streater

Chad said:
[...]

I tried to read David Goldberg's "What Every Computer Scientist Should
Know About Floating-Point Arithmetic". I was totally lost after the
first paragraph.

You might do better looking up floating point on Wikipedia.
 

Keith Thompson

Dann Corbit said:
{snip}

You can't use a native data type all by itself. You need an array of
objects.

Not if you don't need complete accuracy (which the OP apparently
doesn't).

On the other hand, 32-bit float typically doesn't have the range to
represent 100!, though 64-bit double typically does. 100! is about
2**525, which means you need slightly more than 10 bits of exponent.

[snip]
 

Chad

     How can *you* express a hundred-digit number if you can
store only five decimal digits?

     [...]

     Hint: What does 123e97 mean in a C source module?

     Extra credit: When you use your five digits this way to
get an enormous range, what do you sacrifice by comparison to
using them in the "ordinary" way?  There ain't no such thing
as a free lunch, after all.


Can you give me another 4 years to answer that? The thing is, I was
having similar issues 4 years ago...

http://groups.google.com/group/comp...q=cdalten+range+vs+precision#ab39f1b0632812d2
 
