How are JS numbers represented internally?

bobalong

Hi

I'm having some problems understanding how JS numbers are represented
internally.

Take this code for an example of weirdness:

var biggest = Number.MAX_VALUE;
var smaller = Number.MAX_VALUE - 1;

alert(biggest > smaller);
alert(biggest == smaller);

This outputs "false" then "true", not "true" then "false" as I'd expect!

What's going on here? Is this to do with precision?

What I'm looking for is the largest possible integer representable by
javascript, but I want it in non-exponential form, i.e.
123456789012345678901234567890 NOT 1.234e+123.

Thx
 
Lasse Reichstein Nielsen

I'm having some problems understanding how JS numbers are represented
internally.

They are specified to work as 64 bit IEEE floating point numbers.
Take this code for an example of weirdness:

var biggest = Number.MAX_VALUE;
var smaller = Number.MAX_VALUE - 1;

alert(biggest > smaller);
alert(biggest == smaller);

This outputs "false" then "true", not "true" then "false" as I'd expect!

What's going on here? Is this to do with precision?

Yes. The first integer that cannot be represented by a 64 bit
floating point number is 2^52+1. This is because the number is
represented as 52 bits of mantissa (+ 1 sign bit) and 10 bit exponent
(+ 1 sign bit). You can at most have 52 significant bits in this way,
and 2^52+1 is binary
10000000000000000000000000000000000000000000000000001
which needs 53 bits of precision.
What I'm looking for is the largest possible integer representable by
javascript, but I want it in non-exponential form, i.e.
123456789012345678901234567890 NOT 1.234e+123.

The number is (2^52-1)*2^(2^10-52). It is this number that Javascript
typically outputs as 1.7976931348623157e+308 (which is not exact,
but does suggest that you need 309 decimal digits to write it :)

It's easy to do in binary: 52 "1"'s followed by 972 "0"'s.
In decimal it's:

179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520

(look out for line breaks :)
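
A rough way to see that expansion without typing it, assuming the engine's
toString(radix) prints every binary digit (the usual behaviour, though
strictly implementation-defined), is:

var bin = Number.MAX_VALUE.toString(2);
alert(bin.length);  // 1024: the binary expansion is never shown in exponential form
alert(bin);         // the raw 1...10...0 pattern described above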

/L
 
Dr John Stockton

JRS: In article <[email protected]>, dated Fri, 17 Mar 2006
21:44:55 remote, seen in news:comp.lang.javascript, Lasse Reichstein
Nielsen said:
(e-mail address removed) writes:
Yes. The first integer that cannot be represented by a 64 bit IEEE
floating point number is 2^52+1. This is because the number is
represented as 52 bits of mantissa (+ 1 sign bit) and 10 bit exponent
(+ 1 sign bit).

Strictly, not quite. The exponent is 11-bit offset binary, rather than
sign-and-10-bit-magnitude. My js-misc0.htm#CDC code shows that; you may
recall the question here that prompted the work.


Strings can be used to represent integers, so the largest possible is
probably two or four gigabytes of nines. If that's too small, use a
base higher than 10. If one restricts it to a javascript Number, the
answer is about 10^308 and the answer to the probably-intended question
is about 9x10^15, both as given by LRN.
 
bobalong

What I'm looking for is the largest possible integer representable by
The number is (2^52-1)*2^(2^10-52). It is this number that Javascript
typically outputs as 1.7976931348623157e+308 (which is not exact,
but does suggest that you need 309 decimal digits to write it :)

It's easy to do in binary: 52 "1"'s followed by 972 "0"'s.
In decimal it's:

179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520

(look out for line breaks :)

Thank you, that's great!

Do you know of a way to output the above number (or any arbitrary
number) in javascript as a string?

Number.MAX_VALUE.toString() just gives me the exponential form.

I guess it's got something to do with manipulating the binary number
directly and converting it into decimal form using bitwise shifts and
iteration (??), but I have no clue as to where to start (not used to
working directly with binary numbers). Could you point me in the right
direction? Thanks!
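
On engines that support BigInt (a language feature added long after this
thread was written), the exact decimal form of any integer-valued Number can
be had in one line; this is only a sketch of that later approach:

alert(BigInt(Number.MAX_VALUE).toString());
// prints the full 309-digit decimal form; BigInt(x) throws if x has a fractional part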
 
Randy Webb

(e-mail address removed) said the following on 3/17/2006 8:00 PM:
Thank you, that's great!

Do you know of a way to output the above number (or any arbitrary
number) in javascript as a string?

var
maxValue="179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520";

Now, it's a string :)
Number.MAX_VALUE.toString() just gives me the exponential form.

Due to its precision limits.
I guess it's got something to do with manipulating the binary number
directly and converting it into decimal form using bitwise shifts and
iteration (??), but I have no clue as to where to start (not used to
working directly with binary numbers).

Has nothing to do with that.
Could you point me in the right direction? Thanks!

See above.
 
bobalong

Do you know of a way to output the above number (or any arbitrary
var
maxValue="179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520";

Now, it's a string :)


Due to its precision limits.

Thanks, but I'm really looking for a way to do this for any *arbitrary*
number that's too long to be represented in standard decimal form.

So for example if given an integer between 1 and 1000 (x), how could I
output the decimal (not exponential) form of the following:

Number.MAX_VALUE - x

Basically I need an algorithm for how you obtained the long form above,
but for any integer, not just Number.MAX_VALUE.

Thanks for your help so far!
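
One way to get that long form for any finite, non-negative, integer-valued
Number using only plain Number arithmetic (a sketch in the spirit of the
"shifts and iteration" idea; the function name is made up here): halve the
value until it fits exactly below 2^53, then double a decimal digit array
once per halving.

// Exact decimal string for a finite, non-negative, integer-valued Number.
function toFullDecimalString(x) {
  if (!isFinite(x) || x < 0 || x !== Math.floor(x)) {
    throw new Error("expects a finite non-negative integer value");
  }
  var exp = 0;
  while (x >= 9007199254740992) {  // 2^53: above this, integer doubles are all even
    x /= 2;                        // exact: only the binary exponent changes
    exp++;
  }
  var s = String(x);               // below 2^53 this prints in plain decimal
  var digits = [];                 // least significant digit first
  for (var i = s.length - 1; i >= 0; i--) {
    digits.push(+s.charAt(i));
  }
  for (var d = 0; d < exp; d++) {  // multiply the digit array by 2, exp times
    var carry = 0;
    for (var j = 0; j < digits.length; j++) {
      var v = digits[j] * 2 + carry;
      digits[j] = v % 10;
      carry = (v - digits[j]) / 10;
    }
    if (carry) {
      digits.push(carry);
    }
  }
  return digits.reverse().join("");
}

alert(toFullDecimalString(Number.MAX_VALUE));        // the 309-digit form quoted earlier
alert(toFullDecimalString(Number.MAX_VALUE - 1000)); // same digits: the - 1000 is absorbed by rounding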
 
VK

Thanks, but I'm really looking for a way to do this for any *arbitrary*
number that's too long to be represented in standard decimal form.

So for example if given an integer between 1 and 1000 (x), how could I
output the decimal (not exponential) form of the following:

Number.MAX_VALUE - x

Basically I need an algorithm for how you obtained the long form above,
but for any integer, not just Number.MAX_VALUE.

Doesn't answer your question directly, but a good point to consider:

The biggest JavaScript/JScript integer still returned by the toString
method "as is" is 999999999999999930000 or around that. Bigger integers
will be brought into exponential form.

But long before that number the built-in math will stop working
properly (from the human point of view, of course). Say
999999999999999930000 and 999999999999999900000 have the same string
form 999999999999999900000, so your error may be up to 50000 or even
higher, which is doubtfully acceptable :)

Usually (unless custom BigMath libraries are used) on 32bit platforms
like Windows you can work reliably only with numbers up to 0xFFFFFFFF
(decimal 4294967295). After this "magic border" you are already dealing
not with real numbers, but with machine fantasies.

As 0xFFFFFFFF and lesser values are not converted into exponential form
by the toString method, your problem has a simple solution: do not go over
0xFFFFFFFF; there is nothing useful there anyway.
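
For reference, the plain-versus-exponential switchover in toString is pinned
by ECMA-262 at 10^21, which matches the ~999999999999999930000 figure above;
a quick check:

alert((1e20).toString());  // "100000000000000000000" - still plain decimal
alert((1e21).toString());  // "1e+21" - exponential from here on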
 
bobalong

Usually (unless custom BigMath libraries are used) on 32bit platforms
like Windows you can work reliably only with numbers up to 0xFFFFFFFF
(decimal 4294967295). After this "magic border" you are already dealing
not with real numbers, but with machine fantasies.

As 0xFFFFFFFF and lesser values are not converted into exponential form
by the toString method, your problem has a simple solution: do not go over
0xFFFFFFFF; there is nothing useful there anyway.

Thank you, that'll be absolutely fine for what I'm doing. Makes perfect
sense as well... and I don't like the sound of machine fantasies too much
:)

Thanks everyone.
 
Thomas 'PointedEars' Lahn

VK said:
Doesn't answer your question directly, but a good point to consider:

The biggest JavaScript/JScript integer still returned by the toString
method "as is" is 999999999999999930000 or around that. Bigger integers
will be brought into exponential form.

But long before that number the built-in math will stop working
properly (from the human point of view, of course). Say
999999999999999930000 and 999999999999999900000 have the same string
form 999999999999999900000, so your error may be up to 50000 or even
higher, which is doubtfully acceptable :)

Usually (unless custom BigMath libraries are used) on 32bit platforms
like Windows you can work reliably only with numbers up to 0xFFFFFFFF
(decimal 4294967295). After this "magic border" you are already dealing
not with real numbers, but with machine fantasies.

Utter nonsense.

1. It is only a secondary matter of the operating system. It is rather
a matter of Integer arithmetic (with Integer meaning the generic
machine data type), which can only be performed if there is a processor
register that can hold the input and output value of that operation.
On a 32-bit platform, with a 32 bits wide data bus, the largest
register is also 32 bits wide, therefore the largest (unsigned)
integer value that can be stored in such a register is 2^32-1
(0..4294967295, 0x0..0xFFFFFFFF hexadecimal).

2. If the input or output value exceeds that value, floating-point
arithmetic has to be used, through use or emulation of a Floating-Point
Unit (FPU); such a unit is embedded in the CPU since the Intel
80386DX/486DX and Pentium processor family. Using an FPU inevitably
involves a potential rounding error in computation, because the number
of bits available for storing numbers is still limited, and so the
value is no longer displayed as a sequence of bits representing the
decimal value in binary, but as a combination of bits representing the
mantissa, and bits representing the exponent of that floating-point
value.

3. ECMAScript implementations, such as JavaScript, use IEEE-754
(ANSI/IEEE Std 754-1985; IEC-60559) double-precision floating-point
(doubles) arithmetics always. That means they reserve 64 bits for
each value, 52 for the mantissa, 11 bits for the exponent, and 1 for
the sign bit. Therefore, there can be no true representation of an
integer number above a certain value; there are just not enough bits
left to represent it as-is.

There is no magic and no fantasy involved here, it is all pure mechanical
logic implemented in hardware (FPU) and software (in this case: the OS and
applications built for the platform, and any ECMAScript implementation
running on that platform and within the application's environment).


PointedEars
 
bobalong

Thanks but I need to work within a precision of 1:

alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.

I need to replace Number.MAX_VALUE in the above with the *highest
integer capable of making the expression evaluate to false*.

I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is
a 32-bit number and someone else said that numbers are represented as
64-bit internally. Can you confirm this or am I safest working within
the 32-bit limits?

Thanks
 
VK

Thanks but I need to work within a precision of 1:

alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.

I need to replace Number.MAX_VALUE in the above with the *highest
integer capable of making the expression evaluate to false*.

I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is
a 32-bit number and someone else said that numbers are represented as
64-bit internally. Can you confirm this or am I safest working within
the 32-bit limits?

There's going to be a lot of excited advice here very soon (I think).
So you better just spit on everyone (including myself) and check the
precision borders by yourself. You may start with the numbers in my
post and play with other numbers on either side (up and down).

My bye-bye hint: a number cannot be "represented 64-bit internally" on a
32bit platform for the same reason that a double-byte Unicode character
cannot be sent "as is" in an 8bit TCP/IP stream or a 4-dimensional
tesseract drawn on a flat sheet of paper: there is no "unit" to hold
it. Everything has to be emulated by the available units: the Unicode char
brought into an 8bit sequence, the 64bit number split into 32bit parts.

I have not yet looked at this part of the ECMA specs, but if it indeed says
"represented 64-bit _internally_" then it's just a clueless statement.
 
bobalong

I played with integers around 0xFFFFFFFF and I seem to be able to add
and subtract integers from that number no problem with no loss of
precision, but I'm not sure if this behaviour will be consistent on all
machines.

PointedEars' post was very informative (thanks) but not that
practically useful due to my experiment.

I also need to be able to determine the minimum integer value that can
be represented to a precision of 1, and again I added/subtracted
integers to -0xFFFFFFFF and it worked ok too.

Like I said before it's not (yet) necessary for my application to work
with signed integers outside the range +/- 0xFFFFFFFF, but I'd like to
find out what these across-the-board limits are, out of interest.

Cheers
 
Thomas 'PointedEars' Lahn

I played with integers around 0xFFFFFFFF and I seem to be able to add
and subtract integers from that number no problem with no loss of
precision,

Of course.
but I'm not sure if this behaviour will be consistent on all machines.

Of course it will.
PointedEars' post was very informative (thanks)

You are welcome.
but not that practically useful due to my experiment.

Well, it is rather a matter of understanding ...
I also need to be able to determine the minimum integer value that can
be represented to a precision of 1, and again I added/subtracted
integers to -0xFFFFFFFF and it worked ok too.

Of course it did. You have not read thoroughly enough. VK was right
about the precision limit for integer values, but his explanation was
wrong/gibberish. At first, I said there is a "potential rounding error"
when floating-point arithmetic is done; that should read as a possibility,
not a necessity. Second, I said that ECMAScript implementations use
IEEE-754 doubles always, so the 32-bit Integer border does not really
matter here. If you follow the specified algorithm defined by the
latter international standard, the representation of

n = 4294967295 (or 2^32-1)

can be computed as follows (unless specified otherwise with "(digits)base",
all values are decimal):

1. Convert the number n to binary.

,---------- M: 32 bits --------. e
N := (11111111111111111111111111111111)2 * 2^0

2. Let the mantissa m be 1 <= m < 2.

,---------- 31 bits ----------. e
N := (1.1111111111111111111111111111111)2 * 2^31

(e := 31)

3. Ignore the 1 before the point (normalization, allows for greater
precision), and round the mantissa to 52 bits (since we needed
less than 52 bits for n, rounding it merely fills the remaining
bits with zeroes).

,---------------------- 52 bits -------------------. e
N := (1111111111111111111111111111111000000000000000000000)2 * 2^31

or, IOW: M := (1111111111111111111111111111111000000000000000000000)2
e := 31

4. Add the bias value 1023 (for double precision) to the value of e.

e := 31 + 1023 = 1054 = (10000011110)2 =: E

5. n is a positive number, so the sign bit S of N is 0.

6. n is stored as

S ,--- E ---. ,---------------------- M -------------------------.
|0|10000011110|1111111111111111111111111111111000000000000000000000|
`-11 bits-' `------------------- 52 bits ----------------------'

As you can see, there are plenty of bits left for greater precision
(greater integer numbers, or more decimals). No wonder you do not
experience any problems with as "small" and "imprecise" a number
as 2^32-1 (and neighbors). Likewise for -(2^32-1).
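
For the curious, the same layout can be read back out of a live Number.
The sketch below uses DataView/ArrayBuffer, which did not exist when this
was written, and the helper name is made up; treat it as illustration only:

function doubleBits(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);               // big-endian by default
  var bits = "";
  for (var i = 0; i < 8; i++) {
    var b = view.getUint8(i).toString(2);
    while (b.length < 8) b = "0" + b;  // pad each byte to 8 bits
    bits += b;
  }
  // split into sign (1 bit), exponent (11 bits) and mantissa (52 bits)
  return bits.charAt(0) + " " + bits.substr(1, 11) + " " + bits.substr(12);
}

alert(doubleBits(4294967295));
// "0 10000011110 1111111111111111111111111111111000000000000000000000"
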
Like I said before it's not (yet) necessary for my application to work
with signed integers outside the range +/- 0xFFFFFFFF, but I'd like to
find out what these across-the-board limits are, out of interest.

Reversing the (above) algorithm with extremal input/output values is left
as an exercise to the reader. Bear in mind that there are special values:
denormalized numbers, NaN, -Infinity, and Infinity.

See also <URL:http://en.wikipedia.org/wiki/IEEE_floating-point_standard>

(I had expected you to find this and similar Web resources by yourself,
now that you had been given so many hints.)


HTH

PointedEars
 
Lasse Reichstein Nielsen

Thanks but I need to work within a precision of 1:

alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.
I need to replace Number.MAX_VALUE in the above with the *highest
integer capable of making the expression evaluate to false*.

That would be 2^53 (not 2^52 as I said earlier - IEEE floating point
numbers are smart and add an implicit 1 in some cases, so you can
get 53 bits of precision (they are pretty complicated, so if you
want to understand them in detail, read Dr. Stockton's link and/or
the IEEE 754 specification, I'm sure to have forgotten details)).

Actually, since rounding is downwards, 2^53+2 will satisfy your equation,
but only because (2^53+2)-1 evaluates to 2^53. A better comparison would
be
MAXNUMBER == MAXNUMBER + 1
and your MAXNUMBER is the lowest number satisfying this, i.e., one
below the first integer that cannot be represented.
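
A direct way to find that number empirically (a small probe, not from the
original post): keep doubling until adding 1 no longer changes the value.

var p = 1;
while (p + 1 !== p) {  // stops at the first power of two whose successor collapses onto it
  p = p * 2;
}
alert(p);              // 9007199254740992, i.e. 2^53
alert(p - 1 === p);    // false: p - 1 is still represented exactly
alert(p + 1 === p);    // true: 2^53 + 1 rounds back to 2^53
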
I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is
a 32-bit number and someone else said that numbers are represented as
64-bit internally. Can you confirm this or am I safest working within
the 32-bit limits?

If you need to do bit-operations (shifts, and/or/xor), you're restricted
to 32 bit numbers. Otherwise, you can stay in the range [-2^53..2^53]
where all integers can be represented exactly.
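
To see the difference (the variable name is just for the example):

var big = 0x10000000005;  // 1099511627781, i.e. 2^40 + 5, well above the 32-bit range
alert(big + 1);           // 1099511627782: plain arithmetic is still exact below 2^53
alert(big | 0);           // 5: bitwise operators push the value through ToInt32 first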

/L
 
Dr John Stockton

JRS: In article <[email protected]>, dated Sat, 18 Mar 2006
13:52:30 remote, seen in news:comp.lang.javascript:
Thanks but I need to work within a precision of 1:

alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.

I need to replace Number.MAX_VALUE in the above with the *highest
integer capable of making the expression evaluate to false*.

I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is
a 32-bit number and someone else said that numbers are represented as
64-bit internally. Can you confirm this or am I safest working within
the 32-bit limits?

Please read the newsgroup FAQ on how to construct Usenet responses in
Google.
 
Dr John Stockton

JRS: In article <[email protected]>, dated Sat, 18 Mar 2006
10:42:52 remote, seen in news:comp.lang.javascript, VK said:
Usually (unless custom BigMath libraries are used) on 32bit platforms
like Windows you can work reliably only with numbers up to 0xFFFFFFFF
(decimal 4294967295). After this "magic border" you are already dealing
not with real numbers, but with machine fantasies.

You are inadequately informed.

Current Delphi has 64-bit integers.

For over a decade, at least, the standard PC FPU has supported,
directly, a 64-bit integer type, called "comp" in Borland Pascal and
Delphi.

It is never _necessary_ to use a library, since one can always write the
corresponding code in the main body of the program.

The OP needs to read up about floating-point formats and properties.
 
Dr John Stockton

JRS: In article <[email protected]>, dated Sat, 18 Mar 2006
22:50:04 remote, seen in news:comp.lang.javascript, Thomas
'PointedEars' Lahn said:
Utter nonsense.

1. It is only a secondary matter of the operating system. It is rather
a matter of Integer arithmetic (with Integer meaning the generic
machine data type), which can only be performed if there is a processor
register that can hold the input and output value of that operation.
On a 32-bit platform, with a 32 bits wide data bus, the largest
register is also 32 bits wide, therefore the largest (unsigned)
integer value that can be stored in such a register is 2^32-1
(0..4294967295, 0x0..0xFFFFFFFF hexadecimal).

Incorrect. For example, Turbo Pascal runs on 16-bit machines, and does
not need (though can use) 32-bit registers and/or an FPU. But, since
1988 or earlier, it has provided the 32-bit LongInt type. LongInt
addition, for example, is provided by two 16-bit ops and a carry.

Note that integer multiplication frequently involves the use of a
register pair for the result.

2. If the input or output value exceeds that value, floating-point
arithmetic has to be used, through use or emulation of a Floating-Point
Unit (FPU); such a unit is embedded in the CPU since the Intel
80386DX/486DX and Pentium processor family. Using an FPU inevitably
involves a potential rounding error in computation, because the number
of bits available for storing numbers is still limited, and so the
value is no longer displayed as a sequence of bits representing the
decimal value in binary, but as a combination of bits representing the
mantissa, and bits representing the exponent of that floating-point
value.

Insufficiently correct. The 64-bit "comp" type is implemented *exactly*
in the FPU (and is 2's complement IIRC). It has integer values.

Also, longer arithmetic can be implemented outside the FPU; floating-
point is not necessary.

Your use of the word "decimal" is superfluous and potentially
misleading.

3. ECMAScript implementations, such as JavaScript, use IEEE-754
(ANSI/IEEE Std 754-1985; IEC-60559) double-precision floating-point
(doubles) arithmetics always. That means they reserve 64 bits for
each value, 52 for the mantissa, 11 bits for the exponent, and 1 for
the sign bit. Therefore, there can be no true representation of an
integer number above a certain value; there are just not enough bits
left to represent it as-is.

Incorrect. 2^99 is an integer, and it is represented exactly. I know
what you have in mind; but your words do not express it.
 
VK

Thomas said:
Of course it did. You have not read thoroughly enough. VK was right
about the precision limit for integer values, but his explanation was
wrong/gibberish. At first, I said there is a "potential rounding error"
when floating-point arithmetic is done; that should read as a possibility,
not a necessity. Second, I said that ECMAScript implementations use
IEEE-754 doubles always, so the 32-bit Integer border does not really
matter here. If you follow the specified algorithm defined by the
latter international standard, the representation of

n = 4294967295 (or 2^32-1)

can be computed as follows (unless specified otherwise with "(digits)base",
all values are decimal):

When it's asked "how to retrieve a form element value", are we also
starting with the form definition, the history of the Web, the CGI
standard etc., leaving the OP's question to be answered independently? ;-)

That was a clearly stated question: "From what point does JavaScript/JScript
math for integers get too inaccurate to be useful?".

The answer:
IEEE-754 reference in ECMA is gibberish: it was a "reserved for future
use" statement. In reality JavaScript still has relatively
weak math which mainly emulates IEEE behavior but in its precision and
"capacity" stays below many other known languages, even below VBA
(Visual Basic for Applications).
That was one of the main improvements planned in JavaScript 2.0, but the
project never seems to have come to a successful end.

In application to positive integers there are three main borders anyone
has to be aware of:

1) 0x0 - 0xFFFFFFFF (0 - 4294967295)
"Level of reality". Here we are dealing with regular "human" math
where for example
( x > (x-1) ) is always true.
Another important feature of this range is that we can apply both
regular math operations and bitwise operations w/o
losing/transforming/converting the nature of the involved number.
A no less important feature of this range is that these numbers can be
handled by 32bit systems natively, thus with the maximum speed.
Unless you are using Itanium or another 64bit environment (or unless you
really have to) it is always wise to stay within this range. One has
to admit that it is big enough for the majority of the most common
tasks :)

2) 0x100000000 - 0x38D7EA4C67FFF (4294967296 - 999999999999999)
"Level of fluctuations"
Primitive math is still mainly working, so say ( x > (x-1) ) is still
*mainly* true, but all kinds of implementation differences may take
effect in math-intensive expressions.
Also these numbers do not fit into 32 bits, so bitwise operations are
their killers.
Also on 32bit systems all of them have to be emulated by 32bit numbers,
so you take a serious hit on performance.

3) 0x38D7EA4C68000 - 0x2386F26FC10000 (999999999999999 -
9999999999999999)
"Twilight zone"
Spit over your shoulder before any operation - and do not take the
results too seriously. Say ( x > (x-1) ) will very rarely be true -
but it may happen once, with good weather conditions.

4) 0x16345785D8A0000 - Number.MAX_VALUE (100000000000000000 -
Number.MAX_VALUE)
"Crazy Land"
IEEE emulators are still working, so you will continue to get different
cool-looking numbers. But none of it has any correlation with
human math, and a one-time error can be anywhere from 10,000 to 100,000.

P.S. A "rule of thumb": the Crazy Land in JavaScript starts guaranteed
for any number containing 17 digits or more. It is absolutely
irrelevant to the number value: only amout of digits used to write this
number is important. So if you are wondering is you can do anything
useful with some long number, just count its digits.

P.P.S. Math specialists are welcome to scream now. But before doing so,
one may want to test and to read the Web a bit.
 
Thomas 'PointedEars' Lahn

VK said:
When it's asked "how to retrieve a form element value", are we also
starting with the form definition, the history of the Web, the CGI
standard etc., leaving the OP's question to be answered independently? ;-)

Troll elsewhere.
That was a clearly stated question: "From what point does JavaScript/JScript
math for integers get too inaccurate to be useful?".

To be able to answer this question, one must first understand how
numbers work in JavaScript/JScript. Making wild assumptions based
on misconceptions and flawed testing, as you do, does not help.
The answer:
IEEE-754 reference in ECMA is gibberish:

Nonsense. It works in practice as it is specified in theory, you are just
unable to draw meaning from technical language. And it is the _ECMAScript_
specification, with the ECMA being the standardization body that issued it.
This is about the ... uh ... tenth time you have been told this.


PointedEars
 
VK

Thomas said:
Troll elsewhere.

Troll? I'm answering the OP's question. The border numbers were collected
from different math-related articles and the described behavior was checked
on IE, FF, and Opera before posting. There is always a place for adjustments
and clarifications, of course.
From the developer point of view, IMHO it is important to know exactly
the border after which, say, ((x-1) == x) is true, or alert(x) displays
a value which is 50,000 (fifty thousand) less than the actual value.

It is of course great to also know why it is correct and expected for a
given value by IEEE standards, but that is already a secondary question
for math savvies.

That may not make any sense - but it sounds rather reasonable to
my twisted mind. :)
 
