FAQ Topic - How do I convert a Number into a String with exactly 2 decimal places?

FAQ server

-----------------------------------------------------------------------
FAQ Topic - How do I convert a Number into a String with
exactly 2 decimal places?
-----------------------------------------------------------------------

When formatting money, for example, how do you format 6.57634 as
6.58, 6.5 as 6.50, and 6 as 6.00?

Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly. See section 4.7 for Rounding issues.

N = Math.round(N*100)/100 only converts N to a Number of value
close to a multiple of 0.01; but document.write(N) does not give
trailing zeroes.
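
For example (a minimal illustration; the output shown is just the
default ECMAScript Number-to-String conversion):

var N = 6.5;
N = Math.round(N * 100) / 100;  // N is still the Number 6.5
document.write(N);              // writes "6.5", not "6.50"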

ECMAScript Ed. 3.0 (JScript 5.5 [but buggy] and JavaScript 1.5)
introduced N.toFixed; the main problem with this is the bugs in
JScript's implementation.

Most implementations fail with certain numbers, for example 0.07.
The following works successfully for M>0, N>0 (N being the number of
decimal places and M the minimum number of digits before the decimal
point):

function Stretch(Q, L, c) {
  var S = Q;
  if (c.length > 0) { while (S.length < L) { S = c + S } }
  return S;
}
function StrU(X, M, N) { // X >= 0.0
  var T, S = new String(Math.round(X * Number("1e" + N)));
  if (S.search && S.search(/\D/) != -1) { return '' + X }
  with (new String(Stretch(S, M + N, '0')))
    return substring(0, T = (length - N)) + '.' + substring(T);
}
function Sign(X) { return X < 0 ? '-' : ''; }
function StrS(X, M, N) { return Sign(X) + StrU(Math.abs(X), M, N) }

Number.prototype.toFixed = new Function('n', 'return StrS(this, 1, n)');
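
For example, with the functions above in place (a quick check against
the money-formatting cases from the question):

StrS(6.57634, 1, 2);   // "6.58"
StrS(6.5, 1, 2);       // "6.50"
StrS(6, 1, 2);         // "6.00"
(6.57634).toFixed(2);  // "6.58", via the replacement toFixed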

http://www.merlyn.demon.co.uk/js-round.htm

http://msdn.microsoft.com/library/d...html/b5f03400-865e-4ab2-818c-f734c0f6d6f0.asp


===
Postings such as this are automatically sent once a day. Their
goal is to answer repeated questions, and to offer the content to
the community for continuous evaluation/improvement. The complete
comp.lang.javascript FAQ is at http://jibbering.com/faq/index.html.
The FAQ workers are a group of volunteers.
 
VK

Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly.

Another thing to fix - together with the rounding proc after mine is
done.

1.035 is happily stored in IEEE-754 DP-FP without bit loss.

Same for say 1.055 - with a bit loss on IEEE-754 single-precision but
it is irrelevant for IEEE-754 topics.

If "exactly" is used in some other special meaning then please could
anyone explain? AFAICT it is some legacy errouneus results
interpretation to be corrected.
See section 4.7 for Rounding issues.

That is really an interesting question to solve before a robust
rounding algorithm is released.

To check the results one needs any version of IE installed; others
will have to trust me :)

By the specs both JavaScript and JScript implement IEEE-754 DP-FP, and
the same should go for VBScript Double numbers (?) I'm not totally sure
about VBScript, but for JavaScript/JScript it is taken as a given.

Now let's take two addition operations made by IEEE-754 DP-FP rules.
I'm taking values from FAQ 4.7 and around them. Sorry for unwanted
line breaks if anyone has them: the matter requires rather long
sequences. You may want to re-adjust your news reader settings then.
So:

Addition 1 : 0.05 + 0.01

A   + 1.1001100110011001100110011001100110011001100110011010 * 2^-5 = 0.05
B   + 1.0100011110101110000101000111101011100001010001111011 * 2^-7 = 0.01

Alignment Step
A   + 1.1001100110011001100110011001100110011001100110011010|000 * 2^-5
B   + 0.0101000111101011100001010001111010111000010100011110|110 * 2^-5
A+B + 1.1110101110000101000111101011100001010001111010111000|110 * 2^-5

Postnormalization Step
A+B + 1.1110101110000101000111101011100001010001111010111000|11 * 2^-5

Possible outcome by the implicit rounding rule:

Round to Zero
A+B + 1.1110101110000101000111101011100001010001111010111000 * 2^-5 = 0.06

Round to Nearest Even
A+B + 1.1110101110000101000111101011100001010001111010111001 * 2^-5 = 0.060000000000000005

Round to Plus Infinity
A+B + 1.1110101110000101000111101011100001010001111010111001 * 2^-5 = 0.060000000000000005

Round to Minus Infinity
A+B + 1.1110101110000101000111101011100001010001111010111000 * 2^-5 = 0.06

Actual JS outcome:  0.060000000000000005
Actual VBS outcome: 0.06

----------------------------

Addition 2 : 0.06 + 0.01

A   + 1.1110101110000101000111101011100001010001111010111000 * 2^-5 = 0.06
B   + 1.0100011110101110000101000111101011100001010001111011 * 2^-7 = 0.01

Alignment Step
A   +  1.1110101110000101000111101011100001010001111010111000|000 * 2^-5
B   +  0.0101000111101011100001010001111010111000010100011110|110 * 2^-5
A+B + 10.0011110101110000101000111101011100001010001111010110|110 * 2^-5

Postnormalization Step
A+B + 1.0001111010111000010100011110101110000101000111101011|11 * 2^-4

Possible outcome by the implicit rounding rule:

Round to Zero
A+B + 1.0001111010111000010100011110101110000101000111101011 * 2^-4 = 0.06999999999999999

Round to Nearest Even
A+B + 1.0001111010111000010100011110101110000101000111101100 * 2^-4 = 0.07

Round to Plus Infinity
A+B + 1.0001111010111000010100011110101110000101000111101100 * 2^-4 = 0.07

Round to Minus Infinity
A+B + 1.0001111010111000010100011110101110000101000111101011 * 2^-4 = 0.06999999999999999

Actual JS outcome:  0.06999999999999999
Actual VBS outcome: 0.07
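
For anyone without IE to hand, the JavaScript halves of both additions
can be checked with a couple of alerts (a minimal sketch; the VBScript
results still need IE):

alert( 0.05 + 0.01 );  // 0.060000000000000005
alert( 0.06 + 0.01 );  // 0.06999999999999999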

-----------------------------

By simple comparison it is obvious that neither J(ava)Script nor
VBScript conforms to the IEEE-754 DP-FP Round to Nearest Even rule -
which is supposed to be the default internal rounding mode per the
IEEE-754 specs. Moreover it is difficult to say _what_ rounding rule is
the default one in either case. I have the impression that there is
some extra run-time logic added atop pure IEEE-754.
It is also possible that I have misinterpreted the results.

-----------------------------

The test results were obtained with the two nearly identical pages
below. Type support in VBScript is pretty much bastardized in
comparison to VBA and other more powerful Basic dialects, so to ensure
that there is no hidden downcasting I used a round-about check via
VarType. In any case the JavaScript results alone are rather strange -
again, if I'm right about the production schema.

-----------------------------

<html>
<head>
<title>Addition 1</title>
<meta http-equiv="Content-Type"
content="text/html; charset=iso-8859-1">
<script type="text/javascript">
function init() {
var MyForm = document.forms[0];
MyForm.output.value += 'JS:\n' +
'0.05 + 0.01 = ' + (0.05 + 0.01) +
'\nIEEE-754 Double-Precision Floating-Point number';
}

window.onload = init;
</script>

<script type="text/vbscript">
Sub Window_OnLoad

Dim MyForm
Dim Result, ResultType
Dim NL

Set MyForm = Document.forms.item(0)

Result = 0.05 + 0.01

If VarType(Result) = 5 Then
ResultType = "(IEEE-754 ?) Double-Precision Floating-Point number"
Else
ResultType = "a under-precision type"
End If

NL = vbNewLine

MyForm.output.value = MyForm.output.value & _
NL & NL & "VBS:" & NL & "0.05 + 0.01 = " & _
Result & NL & ResultType

End Sub
</script>
</head>

<body>
<form action="">
<fieldset>
<legend>Output</legend>
<textarea name="output" cols="64" rows="8"></textarea>
</fieldset>
</form>
</body>
</html>

-----------------------------

<html>
<head>
<title>Addition 2</title>
<meta http-equiv="Content-Type"
content="text/html; charset=iso-8859-1">
<script type="text/javascript">
function init() {
var MyForm = document.forms[0];
MyForm.output.value += 'JS:\n' +
'0.06 + 0.01 = ' + (0.06 + 0.01) +
'\nIEEE-754 Double-Precision Floating-Point number';
}

window.onload = init;
</script>

<script type="text/vbscript">
Sub Window_OnLoad

Dim MyForm
Dim Result, ResultType
Dim NL

Set MyForm = Document.forms.item(0)

Result = 0.06 + 0.01

If VarType(Result) = 5 Then
ResultType = "(IEEE-754 ?) Double-Precision Floating-Point number"
Else
ResultType = "a under-precision type"
End If

NL = vbNewLine

MyForm.output.value = MyForm.output.value & _
NL & NL & "VBS:" & NL & "0.06 + 0.01 = " & _
Result & NL & ResultType

End Sub
</script>
</head>

<body>
<form action="">
<fieldset>
<legend>Output</legend>
<textarea name="output" cols="64" rows="8"></textarea>
</fieldset>
</form>
</body>
</html>
 
Richard Cornford

VK said:
Another thing to fix - together with the rounding proc after
mine is done.

1.035 is happily stored in IEEE-754 DP-FP without bit loss.

If you are going to disagree with everyone about that, the least you could
do is post some sort of demonstration of whatever it was that resulted in
your making that conclusion. Then someone could tell you which of your
misconceptions resulted in your erroneous conclusion.
Same for say 1.055 - with a bit loss on IEEE-754 single-precision but
it is irrelevant for IEEE-754 topics.

If "exactly" is used in some other special meaning then please
could anyone explain? AFAICT it is some legacy errouneus results
interpretation to be corrected.

Exactly means exactly. What you need to do is explain why you think the
statement is incorrect.

By simple comparison it is obvious that neither J(ava)Script
nor VBScript conforms to the IEEE-754 DP-FP Round to Nearest
Even rule - which is supposed to be the default internal
rounding mode per the IEEE-754 specs. Moreover it is difficult to say
_what_ rounding rule is the default one in either case.
I have the impression that there is some extra run-time logic
added atop pure IEEE-754. It is also possible that I have
misinterpreted the results.
<snip>

It is damn near certain you misinterpreted the results. Start by looking
at how javascript transforms a numeric literal in its source code into an
IEEE double precision floating-point number (as I told you last time; it
is in ECMA 262, 3rd Ed. Section 7.8.3). Where you find similar details
for VBScript is a different matter, and JScript is not without its own
bugs, but you will not be able to say anything useful about operations
performed upon IEEE double precision floating-point numbers until you
know what numbers you are really handling to start with. Until then all
this noise from you is a waste of everyone's time.

Richard.
 
VK

If you are going to disagree with everyone about that, the least you could
do is post some sort of demonstration of whatever it was that resulted in
your making that conclusion.

I have no intention to disagree with "everyone". I have no intention
to argue with IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

Can you point where exactly a bit loss may occur?

More probably someone just couldn't find the leading 1 in mantissa -
because it is _implied_ but not presented with a non-zero exponent:
and from this a false "precision loss" conclusion was drawn.
 
RobG

I have no intention to disagree with "everyone". I have no intention
to argue with IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

Can you point where exactly a bit loss may occur?

When doing certain mathematical operations that are commonly used in
simple rounding algorithms:

var x = 1.035;

// Typical problematic rounding algorithm
var y = Math.round(x*100)/100;

// Confusion is caused by...
alert('x = ' + x + '\n'
+ '100x = ' + (100*x)
+ '\ny = ' + y);


In Firefox I see:

x = 1.035
100x = 103.49999999999999 (or 103.49999999999998 in IE)
y = 1.03


Whereas most would expect y = 1.04

Confusion arises because the apparent anomaly occurs for only certain
cases.
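
The same anomaly can be restated as a direct comparison (a minimal
sketch; it relies on 103.5 itself being exactly representable, which it
is, being 207/2):

var x = 1.035;
alert( x * 100 < 103.5 );      // true, so...
alert( Math.round(x * 100) );  // 103, hence y = 1.03 rather than 1.04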
 
Dr J R Stockton

In comp.lang.javascript message <[email protected]
I have no intention to disagree with "everyone". I have no intention
to argue with IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

The exponent is 15 bits, not 11.

The EXACT value of that IEEE Double is not 1.035, but
1.0349999999999999200639422269887290894985198974609375

In Javascript,
String(1.035 - 1.0) => "0.03499999999999992"
String(1.035 ) => "1.035"
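
Both can be true at once because String() produces only as many digits
as are needed to read the same Double back. One way to peek past that
default conversion (a sketch; it assumes a toFixed free of the JScript
bugs mentioned at the top of the thread):

alert( (1.035).toFixed(20) );  // the stored Double to 20 decimal places,
                               // matching the exact value quoted above
alert( String(1.035 - 1.0) );  // "0.03499999999999992", as above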
 
Richard Cornford

I have no intention to disagree with "everyone". I have no intention
to argue with IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

What makes you think that? But even if it does, that bit pattern does
not precisely represent 1.035; it is smaller.
Can you point where exactly a bit loss may occur?

In the bit pattern above (assuming you have reported it accurately). In
the mantissa above, the bits set (counting the left-most as one and
increasing to the right) are 5, 9, 10, 11, 12, 14, 16, 17, 18, 23, 25,
29, 30, 31, 32, 34, 36, 37, 38, 43, 45, 49, 50, 51 and 52. The set bit
implied to the left of the mantissa (and so of the binary point) provides
the value 1, and so these set bits account for the fractional part of
the value, which you are asserting would be 0.035.

The value contributed to the total by each bit is (1/Math.pow(2,
Bit)), where 'Bit' is the number of the bit from the left starting at
one. Thus the first bit would contribute 1/2 to the value if it were
set [(1/Math.pow(2, 1)) or (1/2)], and the next bit ¼ if set.

The bits actually set contribute:-

Bit | 1/Math.pow(2, Bit) |
------------------------------------------------------------
5 -> 1/32 == 140737488355328/4503599627370496
9 -> 1/512 == 8796093022208/4503599627370496
10 -> 1/1024 == 4398046511104/4503599627370496
11 -> 1/2048 == 2199023255552/4503599627370496
12 -> 1/4096 == 1099511627776/4503599627370496
14 -> 1/16384 == 274877906944/4503599627370496
16 -> 1/65536 == 68719476736/4503599627370496
17 -> 1/131072 == 34359738368/4503599627370496
18 -> 1/262144 == 17179869184/4503599627370496
23 -> 1/8388608 == 536870912/4503599627370496
25 -> 1/33554432 == 134217728/4503599627370496
29 -> 1/536870912 == 8388608/4503599627370496
30 -> 1/1073741824 == 4194304/4503599627370496
31 -> 1/2147483648 == 2097152/4503599627370496
32 -> 1/4294967296 == 1048576/4503599627370496
34 -> 1/17179869184 == 262144/4503599627370496
36 -> 1/68719476736 == 65536/4503599627370496
37 -> 1/137438953472 == 32768/4503599627370496
38 -> 1/274877906944 == 16384/4503599627370496
43 -> 1/8796093022208 == 512/4503599627370496
45 -> 1/35184372088832 == 128/4503599627370496
49 -> 1/562949953421312 == 8/4503599627370496
50 -> 1/1125899906842624 == 4/4503599627370496
51 -> 1/2251799813685248 == 2/4503599627370496
52 -> 1/4503599627370496 == 1/4503599627370496
----------------------------------------------------------
Total:- 157625986957967/4503599627370496

- and the total of these represents the fraction part of the number
represented. So:-

fractionalPart == 157625986957967/4503599627370496

- and therefore:-

fractionalPart * 4503599627370496 == 157625986957967

To avoid having to work with fractions here, both sides can be
multiplied by 1000:-

(fractionalPart * 1000) * 4503599627370496 == (157625986957967 * 1000)

- to give:-

(fractionalPart * 1000) * 4503599627370496 == 157625986957967000

The fractional part of 1.035 is 0.035, and multiplying that by 1000
gives 35.

If the bit pattern above precisely represents the number 1.035 then:-

35 * 4503599627370496 == 157625986957967000

But (35 * 4503599627370496) is actually 157625986957967360, which
differs from 157625986957967000 by 360. The bit pattern presented
above _is_not_ the value 0.035, and the value 0.035 cannot be
precisely represented as an IEEE 754 double precision floating point
number (the next bigger number that this bit pattern can represent is
greater than 0.035).
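
The same cross-check can be run mechanically. The sketch below uses
modern BigInt arithmetic (not available at the time) purely so that the
products stay exact; the bit positions and totals are the ones listed
above:

var setBits = [5, 9, 10, 11, 12, 14, 16, 17, 18, 23, 25, 29, 30, 31, 32,
               34, 36, 37, 38, 43, 45, 49, 50, 51, 52];

// numerator = sum of 2^(52 - bit) for each set mantissa bit
var numerator = 0n;
for (var i = 0; i < setBits.length; i++) {
  numerator += 1n << BigInt(52 - setBits[i]);
}

alert( numerator );          // 157625986957967
alert( numerator * 1000n );  // 157625986957967000
alert( 35n * (1n << 52n) );  // 157625986957967360
// The last two differ, so 157625986957967 / 2^52 is not 35/1000: the bit
// pattern is the closest Double to 1.035, not 1.035 itself.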
More probably someone just couldn't find the leading 1 in mantissa -
because it is _implied_ but not presented with a non-zero exponent:
and from this a false "precision loss" conclusion was drawn.

Given the evidence I don't think you should be assuming any failure to
understand on the part of anyone else, or any superior understanding
on your part. You are wasting everyone's time wittering on about a
subject that is clearly beyond you.

Richard.
 
VK

Thanks for the helpful, profound analysis - though your binary-to-
decimal conversion seems a bit too labor intensive. The conventional
IEEE <=> decimal pattern would show that I'm wrong in many fewer
steps.

Yes, 1.035 is not representable as a dyadic fraction - thus it cannot
be represented by a finite binary sequence.

Namely the sentence "Rounding of x.xx5 is uncertain, as such numbers
are not represented exactly" states that any decimal number in the form
n.nn5 is not representable as a dyadic fraction. More formally it
could be spelled out as:

Any decimal floating-point number having the form x.xx5 cannot be
represented as a rational number X/2^Y (X divided by 2 to the power of Y)
where:
X is an integer and
Y is a natural number in the CS sense, thus including 0

I'm not sure what it would be - an axiom or a theorem - it seems like a
theorem to me, but I'm clueless about the proof pattern.

This way the sentence from the FAQ is formally correct - and I was
factually wrong. Everyone is allowed to enjoy :)
On the other hand - and how else ;-) - it is semi-misleading IMO, as
it fixates on a very narrow subset of non-dyadic fractions. Placed at
the top of the FAQ it makes you think that floats "ending in 5" are
the main source of evil and rounding errors in IEEE-754.
In fact some innocent-looking decimals such as 0.01 or 9.2 are not
dyadic fractions either - and with much nastier rounding outcomes than,
say, 0.035. Overall, a decimal float representable as a dyadic rational
is more of a happy coincidence than something to expect on a daily
basis. This is why the implied internal rounding is a vital part of the
IEEE-754 specs.

Note: whoever doesn't like the term "happy coincidence" is welcome to
study the topological group of the dyadic solenoid. I'm humbly passing
on this deadly business :)

This way all decimal integers are representable as dyadic rationals
with 2 to the power of 0, so they can be represented by a finite binary
sequence: INT == INT/1 == INT/2^0.
By contrast, the prevailing majority of floats are "fragile stuff",
being continuously rounded and adjusted by internal algorithms.

The icing on the cake is that the given dyadic-fraction definition
applies to abstract math - the binary sequence merely has to be
finite in a formally infinite space. On IEEE-754-DP-FP systems we have
the very finite space of 52 bits in the mantissa part, with an implied
msb set to 1 as the 53rd "hidden" bit.

That calls for adding a new kind of dyadic fraction, called "53-dyadic"
or "PC-dyadic" or "IEEE-dyadic" or whatever. If an official term already
exists then I will gladly switch to it.

A 53-dyadic fraction would be one which is fully representable as a
binary sequence where - going from msb to lsb - the distance from the
first set bit to the end of the sequence is less than or equal to 53.

Otherwise, dyadic or not, it will be stored with precision loss in
IEEE-754-DP-FP.
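
As a sketch of that "53-dyadic" test (the function name, the string
parsing and the use of modern BigInt are illustrative assumptions;
exponent range and subnormals are ignored): a decimal written as
n / 10^k is exactly representable as an IEEE Double iff 5^k divides n
and the odd part of n / 5^k needs at most 53 bits.

function isExactDouble(decimalString) { // non-negative decimals only
  var parts = decimalString.split('.');
  var frac = parts[1] || '';
  var k = BigInt(frac.length);
  var n = BigInt(parts[0] + frac);           // the value is n / 10^k
  var five = 5n ** k;
  if (n % five !== 0n) return false;         // not a dyadic rational at all
  var m = n / five;                          // now the value is m / 2^k
  while (m !== 0n && m % 2n === 0n) m /= 2n; // odd part of the numerator
  return m.toString(2).length <= 53;         // fits in the 53-bit mantissa
}

isExactDouble('1.035'); // false
isExactDouble('0.125'); // true
isExactDouble('9.2');   // false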

The final note: with all the "mess" above, as a general rule
IEEE-754-DP-FP numbers are ambiguous. Once a value is stored in
IEEE-754-DP-FP format it is not technically possible to determine
whether, say, 0.6999999999999999 is a case of "IEEE's sh** happens" or
something intended.
That gives plenty of freedom to decide what rounding to use when
representing final results - that is, when transforming them into
string values.
 
VK

By simple comparison it is obvious that neither J(ava)Script nor
VBScript conforms to the IEEE-754 DP-FP Round to Nearest Even rule -
which is supposed to be the default internal rounding mode per the
IEEE-754 specs. Moreover it is difficult to say _what_ rounding rule is
the default one in either case. I have the impression that there is
some extra run-time logic added atop pure IEEE-754.

Eric Lippert remains my hero ! :)

<http://blogs.msdn.com/ericlippert/archive/2005/01/26/361041.aspx>

Once again I see I have a talent for making right conclusions without
having the necessary information - and even, sometimes, based on wrong
reasoning. IMO it is a sure sign of a genius mind.

P.S. :) / a.k.a. joking above /

P.P.S. I'm currently going through the entire IEEE-related blog
series; it took some time to get them all together. For whoever is
interested in the matter, the links are:

1 <http://blogs.msdn.com/ericlippert/archive/2005/01/10/350108.aspx>
2 <http://blogs.msdn.com/ericlippert/archive/2005/01/13/352284.aspx>
3 <http://blogs.msdn.com/ericlippert/archive/2005/01/17/354658.aspx>
4 <http://blogs.msdn.com/ericlippert/archive/2005/01/18/355351.aspx>
5 <http://blogs.msdn.com/ericlippert/archive/2005/01/20/357407.aspx>
 
Randy Webb

VK said the following on 2/13/2007 12:29 PM:

This way the sentence from the FAQ is formally correct - and I was
factually wrong. Everyone is allowed to enjoy :)

It is nice to see you finally realized what everybody else already knew.
On both accounts.
 
Dr J R Stockton

In comp.lang.javascript message <[email protected]
Yes, 1.035 is not representable as a dyadic fraction - thus it cannot
be represented by a finite binary sequence.

Namely the sentence "Rounding of x.xx5 is uncertain, as such numbers
are not represented exactly" states that any decimal number in the form
n.nn5 is not representable as a dyadic fraction. More formally it
could be spelled out as:

Any decimal floating-point number having the form x.xx5 cannot be
represented as a rational number X/2^Y (X divided by 2 to the power of Y)
where:
X is an integer and
Y is a natural number in the CS sense, thus including 0

I'm not sure what it would be - an axiom or a theorem - it seems like a
theorem to me, but I'm clueless about the proof pattern.

This way the sentence from the FAQ is formally correct - and I was
factually wrong. Everyone is allowed to enjoy :)


It's interesting that you claim to have shown that the sentence in the
FAQ is factually correct, because it is actually not completely correct.

To be accurate, it should say "Rounding of x.xx5 is generally uncertain,
as most such numbers are not represented exactly." (addition of
generally & most).

Until the integer part x gets very large, x.125 x.375 x.625 x.875 are
each represented exactly in an IEEE Double. If you had sufficiently
studied what the FAQ links to, you might have been aware of that.
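
Those fractional parts are exact eighths - dyadic with denominator
2^3 - which a quick toString(2) check illustrates (a minimal sketch):

(0.125).toString(2);  // "0.001"
(0.375).toString(2);  // "0.011"
(0.625).toString(2);  // "0.101"
(0.875).toString(2);  // "0.111"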

Those numbers will always round in the manner which a fairly [*] simple-
minded appreciation of the situation would lead you to believe.

[*] Having read ISO 16262 15.8.2.15.
 
VK

The exponent is 15 bits, not 11.

On IEEE-754-DP-FP the exponent is 11 bits, biased by 1023 to represent
both positive and negative powers, so the actual value to apply is
ExpValue - 1023.

01111111111 == 1023, thus the actual value is 1023 - 1023 = 0, i.e. a
zero exponent.
The EXACT value of that IEEE Double is not 1.035

Yes, I have already admitted my mistake.
In Javascript,
String(1.035 - 1.0) => "0.03499999999999992"
String(1.035 ) => "1.035"

The first is the rounded result of the actual value. The second one
is a "convenience cheat" added into the calculator logic atop the
IEEE-754-DP-FP rules. Otherwise the life of a regular user would
become a really confusing hell.
 
Richard Cornford

VK said:
Thanks for the helpful, profound analysis - though your
binary-to-decimal conversion seems a bit too labor
intensive.

It's your not understanding why that virtually guarantees your much-
vaunted next rounding function will be as big a joke as your last effort.
The conventional IEEE <=> decimal pattern would show that
I'm wrong in many fewer steps.

So taking a few simple steps beforehand might have saved you from
wasting everyone's time posting your fictions in this thread, and if
taken before posting in response to my request that you justify your
nonsense assertion, you could have avoided compounding your error with
repetition.
Yes, 1.035 is not representable as a dyadic fraction - thus
it cannot be represented by a finite binary sequence.
<snip>

Hasn't someone already mentioned that to you a dozen times or so by now?

Richard.
 
VK

It's interesting that you claim to have shown that the sentence in the
FAQ is factually correct, because it is actually not completely correct.

To be accurate, it should say "Rounding of x.xx5 is generally uncertain,
as most such numbers are not represented exactly." (addition of
generally & most).

Until the integer part x gets very large, x.125 x.375 x.625 x.875 are
each represented exactly in an IEEE Double. If you had sufficiently
studied what the FAQ links to, you might have been aware of that.

In my free time I have been studying IEEE papers and related math
topics. I have not had so much pure math since my bachelor's tortures
well over a decade ago :) I first started for the FAQ arguments, then
simply got curious. I have not had to read so much incomplete or
erroneous junk since I was studying the prototype matter in javascript.

Neither 1.035 nor 1.375 nor 1.625 can be represented as a dyadic
fraction, so by definition they cannot be stored as finite binary
sequences. So in order to argue with the quoted statement one has to do
either of two things:

-1-
Prove wrong the "1st VK's theorem" which is - and I quote -

Any decimal floating-point number having the form x.xx5 cannot be
represented as a rational number X/2^Y (X divided by 2 to the power of Y)
where:
X is an integer and
Y is a natural number in the CS sense, thus including 0

-2-
Prove wrong the underlying lemma (not mine!) that "Any non-dyadic
fraction cannot be represented as a finite binary sequence". I really
hope the 2nd will not happen, as in that case a good part of current
math will go to the trash can :)


P.S. I also asked at comp.arch.arithmetic as my mind - however
brilliant it may be :) - still needs an extra check. See:
<http://groups.google.com/group/comp.arch.arithmetic/browse_frm/thread/d27030f7c099d140>
 
Richard Cornford

In my free time I have been studying IEEE papers and related math topics.

Just looking at the documents does not qualify as 'studying'.
I have not had so much pure math since my bachelor's tortures
well over a decade ago :)

In the past you have frequently demonstrated an inability to do basic
addition.
I first started for the FAQ arguments, then simply got curious.
I have not had to read so much incomplete or erroneous junk
since I was studying the prototype matter in javascript.

So once again everyone else is wrong because the VK understanding must
be correct?
Neither 1.035 nor 1.375 nor 1.625 can be represented as a dyadic
fraction, so by definition they cannot be stored as finite binary
sequences. So in order to argue with the quoted statement one has to
do either of two things:

-1-
Prove wrong the "1st VK's theorem" which is - and I quote -

A theorem is proven wrong when one single example is demonstrated to
contradict it.
Any decimal floating-point number having the form x.xx5

Try - 0.125 - as it has that form.
cannot be represented as a rational number X/2^Y
(X divided by 2 to the power of Y)
where:
X is an integer

The integer - 1 - will do in this case.
and
Y is a natural number in the CS sense, thus including 0

Is - 3 - natural enough for you?

1/(2 to the power of 3) is 1/8, which is also 0.125, and 0.125 has the
form x.xx5.

Now can you give up wasting people's time with this stream of
inaccurate and uninformed posts and either write and post the rounding
function or admit the task is beyond you.

P.S. I also asked at comp.arch.arithmetic as my mind - however
brilliant it may be :) - still needs an extra check. See:
<http://groups.google.com/group/comp.arch.arithmetic/browse_frm/thread/d27030f7c099d140>

You didn't manage to learn anything from that exchange because your
statements about what is happening in ECMAScript were false (and/or
incoherent), so the response you elicited is no more than an
impression based upon false data.

Richard.
 
VK

A theorem is proven wrong when one single example is demonstrated to
contradict it.

Correct: that is not strict but often the quickest way of proving.
Try - 0.125 - as it has that form.
OK


The integer - 1 - will do in this case.
OK


Is - 3 - natural enough for you?
Perfect

1/(2 to the power of 3) is 1/8, which is also 0.125 and 0.125 has the
form x.xx5.

Congratulations! You just successfully constructed a valid "negating
case" for the 1st VK's theorem. Alas the Academy of Science hasn't set
a prize for this theorem yet, so nothing but verbal congratulations so
far.
:)

This way Dr. Stockton's correction should go into production in the
next scheduled update:
is:
"Rounding of x.xx5 is uncertain, as such numbers are not represented
exactly."
should be:
"Rounding of x.xx5 is often uncertain, as the majority of such numbers
is not represented exactly." (can be a better wording of course).
Now can you give up wasting people's time with this stream of
inaccurate and uninformed posts and either write and post the rounding
function or admit the task is beyond you.

First I want to disambiguate what "is God's and what is Caesar's" -
that is, I want to define what comes out of the core of IEEE-754-DP-FP
and what is a per-implementation heuristic added atop it. Some may be
happy with patching black-box outcomes; I do not feel comfortable with
it.
You didn't manage to learn anything from that exchange because your
statements about what is happening in ECMAScript were false (and/or
incoherent), so the response you elicited is no more than an
impression based upon false data.

You didn't follow the thread I guess. My question was not about my
"1st VK's theorem" but about 1.035 and the possibility to get back the
original value despite its not being stored exactly, namely:
var probe = 1.035;
window.alert(1.035); // "1.035"

btw the fact that no one pointed to the 0.125 case suggests that the
respondents might not be attentive or knowledgeable enough, so the
point needs a bit more study.
 
VK

This way Dr. Stockton's correction should go into production in the
next scheduled update:
is:
"Rounding of x.xx5 is uncertain, as such numbers are not represented
exactly."
should be:
"Rounding of x.xx5 is often uncertain, as the majority of such numbers
is not represented exactly." (can be a better wording of course).

The funny thing is that the JRS correction was initially right by
itself - despite the stated rationale behind it being completely
wrong.

Yet another proof that such things happen in real life.
 
Richard Cornford

VK said:
The funny thing is that the JRS correction was initially
right by itself

Not as humorous as your not being able to tell that he was correct from
what he wrote, and instead deciding to embarrass yourself even more with
yet another nonsense post.
- despite the stated rationale behind it being
completely wrong.

Dr Stockton's post included no 'rationale', only a statement of
self-evident facts. If you see one, and see it as wrong, then that is
probably just another symptom of your deluded mind.
Yet another proof that such things happen in real life.

You are always happiest to think that you know something that someone
else doesn't. Regardless of the fact that whenever you manage to put any
of these notions into statements that can be understood they are promptly
demonstrated to be false statements, as is happening here repeatedly.

Richard.
 
Richard Cornford

VK said:
Correct: that is not strict

It is absolutely strict. Any clear, non-metaphysical statement
contradicted by a valid empirical test is a false statement.
but often the quickest way of proving.

It is not a way of proving anything; it is a way of disproving things.
Congratulations!

Hardly, the number 1.125 was listed in Dr Stockton's post so it should
have been obvious to anyone (else) that if that number could be
precisely represented then 0.125 also could be, along with many other
numbers in the form x.xx5.
You just successfully constructed a valid "negating
case" for the 1st VK's theorem. Alas the Academy of
Science hasn't set a prize for this theorem yet,

Well of course not, it was just more "off the top of your head" bullshit.

First I want to disambiguate what "is God's and what is Caesar's" -
that is, I want to define what comes out of the core of IEEE-754-DP-FP
and what is a per-implementation heuristic added atop it.

The "per-implementation heuristic added atop" is a figment of your
imagination, so you are wasting your time trying to attribute anything
to it.
Some may be happy with patching black-box outcomes; I do not
feel comfortable with it.

Some will also read what the language specification has to say about
turning numeric literals into numbers, strings to numbers and numbers
into strings. But you do not feel comfortable with that either.
You didn't follow the thread I guess.

And exactly what do you base that assumption upon?
My question was not about my "1st VK's theorem"

Did I say it was?
but about 1.035 and the possibility to get back the
original value despite its not being stored exactly, namely:
var probe = 1.035;
window.alert(1.035); // "1.035"

Which is just misdirection. You don't have a number to start with, you
have a numeric literal, and you don't have a number in the end, you have
a string. There is no "original value" and you never get back to it.
btw the fact that no one pointed to the 0.125 case suggests
that the respondents might not be attentive or knowledgeable
enough,

My impression of the respondents in that thread was that they were
bewildered by the incoherence and irrelevance of it.
so the point needs a bit more study.

Just remember that if those studies are successful you will finally know
as much as the people you have been disagreeing with here. In the
meanwhile we don't need to hear any more about the misconceptions and
false conclusions you come to along the way.

Richard.
 
