Thomas said:
Of course it did. You have not read thoroughly enough. VK was right
about the precision limit for integer values, but his explanation was
wrong/gibberish. First, I said there is a "potential rounding error"
when floating-point arithmetic is done; that should be read as a
possibility, not a necessity. Second, I said that ECMAScript
implementations always use IEEE-754 doubles, so the 32-bit integer
border does not really matter here. If you follow the algorithm
specified by that international standard, the representation of
n = 4294967295 (or 2^32-1)
can be computed as follows (unless specified otherwise with "(digits)base",
all values are decimal):
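[The quoted computation itself is cut off above. As a quick sketch in plain JavaScript (nothing assumed beyond standard Number semantics), the point being made is that 2^32-1 fits exactly into an IEEE-754 double:]

```javascript
// 2^32 - 1 needs only 32 significant bits, well within the 53-bit
// significand of an IEEE-754 double, so it is represented exactly.
var n = 4294967295;                     // 2^32 - 1
console.log(n === Math.pow(2, 32) - 1); // true
console.log(n - (n - 1));               // 1 -- no rounding error at this size
```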
When someone asks "how to retrieve a form element value", do we also
start with the definition of forms, the history of the Web, the CGI
standard, etc., leaving the OP's question to be answered independently? ;-)
That was a clearly stated question: "From what point does
JavaScript/JScript math for integers get too inaccurate to be useful?"
The answer:
The IEEE-754 reference in ECMA is gibberish: it was a "reserved for
future use" statement. In reality JavaScript still has relatively
weak math which mainly emulates IEEE behavior but, in precision and
"capacity", stays below many other well-known languages, even below
VBA (Visual Basic for Applications).
That was one of the main improvements planned for JavaScript 2.0, but
the project never seems to have come to a successful end.
In application to positive integers there are four main borders anyone
has to be aware of:
1) 0x0 - 0xFFFFFFFF (0 - 4294967295)
"Level of reality". Here we are dealing with regular "human" math
where, for example,
( x > (x-1) ) is always true.
Another important feature of this range is that we can apply both
regular math operations and bitwise operations without
losing or transforming the nature of the involved number.
A no less important feature is that these numbers can be handled by
32-bit systems natively, thus with maximum speed.
Unless you are using Itanium or another 64-bit environment (or unless
you really have to go higher), it is always wise to stay within this
range. One has to admit that it is big enough for the majority of the
most common tasks.
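A small sketch of the bitwise claim above (plain JavaScript, no library assumptions), with one caveat worth adding: bitwise operators reinterpret the value as a *signed* 32-bit integer, so the upper half of this range comes back negative unless you re-normalize with `>>> 0`:

```javascript
var small = 0x7FFFFFFF;               // 2^31 - 1: survives bitwise ops unchanged
console.log((small | 0) === small);   // true

var big = 0xFFFFFFFF;                 // 2^32 - 1: | 0 gives the signed view
console.log(big | 0);                 // -1
console.log((big >>> 0) === big);     // true: >>> 0 restores the unsigned value
```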
2) 0x100000000 - 0x38D7EA4C67FFF (4294967296 - 999999999999999)
"Level of fluctuations"
Primitive math still works here: every integer below 2^53 has an
exact double representation, so ( x > (x-1) ) is in fact still true,
though implementation differences may show up in math-intensive
floating-point expressions.
These numbers do not fit into 32 bits, however, so bitwise operations
are their killers.
And on 32-bit systems they cannot be held in a single native register,
so there can be a real impact on performance.
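To illustrate the "killer" remark, a minimal sketch: the ToInt32 conversion keeps only the low 32 bits of the number, so anything at or above 2^32 is silently truncated before the bitwise operator even runs:

```javascript
var x = 4294967296;          // 2^32: one bit too wide for ToInt32
console.log(x | 0);          // 0 -- the only set bit (bit 32) is discarded
console.log((x + 1) | 0);    // 1 -- only the low 32 bits survive
```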
3) 0x38D7EA4C68000 - 0x2386F26FC10000 (1000000000000000 -
10000000000000000)
"Twilight zone"
Spit over your shoulder before any operation - and do not take the
results too seriously. The real breaking point is 2^53 =
9007199254740992, which sits inside this range: below it
( x > (x-1) ) still holds, while above it the gap between adjacent
representable numbers grows to 2 and the comparison fails about half
the time.
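The breaking point inside this range can be shown directly; a minimal sketch (standard JavaScript numbers only) pinning it at 2^53:

```javascript
var LIMIT = Math.pow(2, 53);          // 9007199254740992
console.log(LIMIT - 1 > LIMIT - 2);   // true: integers below 2^53 are exact
console.log(LIMIT + 1 === LIMIT);     // true: 2^53 + 1 rounds back to 2^53
console.log(LIMIT + 1 > LIMIT);       // false: this is where x > (x-1) dies
```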
4) 0x16345785D8A0000 - Number.MAX_VALUE (100000000000000000 -
Number.MAX_VALUE)
"Crazy Land"
IEEE emulators are still working, so you will continue to get
different cool-looking numbers. But little of it has any correlation
with human math: the gap between adjacent representable integers is
already 16 at 10^17, and it keeps doubling all the way up to
Number.MAX_VALUE.
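To put a concrete number on the error in this range, a short sketch: at 10^17 adjacent representable doubles are 16 apart, so small additions either vanish or snap to the next step:

```javascript
// At 1e17 the gap between adjacent representable doubles is 16.
console.log(1e17 + 5 === 1e17);        // true: +5 rounds back down
console.log(1e17 + 9 === 1e17 + 16);   // true: +9 snaps up to the next double
```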
P.S. A "rule of thumb": the danger zone in JavaScript is guaranteed to
start for any number containing 17 digits or more, because any
17-digit integer is already above 2^53. It is absolutely irrelevant
what the number's value is: only the amount of digits used to write
the number matters. So if you are wondering whether you can do
anything useful with some long number, just count its digits.
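The digit-counting rule can be checked directly; a small sketch (using `Number.MAX_SAFE_INTEGER`, the modern name for 2^53-1):

```javascript
// Two different 17-digit literals collapse onto the same double.
console.log(12345678901234567 === 12345678901234568); // true
// The largest integer with guaranteed exact math has only 16 digits.
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
```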
P.P.S. Math specialists are welcome to scream now. But before
screaming, one may want to test things and read the Web a bit.