complex calculation, possible?


Richard Cornford

VK said:
On Feb 11, 4:01 pm, Richard Cornford wrote:
<snip crap>

The point of posting references to examples of your code that demonstrate
you making basic mistakes that no programmer would ever make was to
render my assertion that you are no programmer objective. Anyone who
cares to can take a look and see for themselves that you really don't
understand what you are attempting to do. Your delusion prevents you from
seeing that simple truth for yourself, but others may appreciate being
put in the picture.
The last two weeks were very intensive for me, so I have not had
enough time for that. As you may guess, posting at clj is
not my primary job ;-)

The code of this kind is by itself pretty much trivial

It is not as trivial as the code needed to round representations of
numbers in string form, and you spectacularly failed at that when you
tried:-

<URL:
http://groups.google.com/group/comp.lang.javascript/msg/2f869add6d8dfcad >
- but clear answers have to be found for:
1) what rounding rule to make the default one.
2) define the exact borders where IEEE-754 DP-FP rounding makes
sense, so as to say clearly "from here and from here go to hell
or use BigMath".

Nonsense. You don't have the considerations of a specific application of
rounding to worry about. So just make a decision, state what the
decision was, and implement it.
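As a hedged illustration of that point - pick a rule, state it, implement it - here is a minimal sketch, assuming round-half-away-from-zero as the stated rule. The helper name `roundTo` is hypothetical, not from the thread, and the sketch still inherits the usual binary-representation caveats near boundary inputs:

```javascript
// A minimal sketch: round to a fixed number of decimal places with an
// explicitly stated rule - ties (exactly .5) round away from zero.
// Inputs near representation boundaries (e.g. 1.005) still suffer from
// binary floating point, which is exactly the thread's caveat.
function roundTo(value, places) {
  var factor = Math.pow(10, places);
  var scaled = value * factor;
  // Math.round rounds halves toward +Infinity; negate and re-negate
  // so that negative halves round away from zero symmetrically.
  var rounded = value < 0 ? -Math.round(-scaled) : Math.round(scaled);
  return (rounded / factor).toFixed(places);
}
```

For example, `roundTo(1.25, 1)` gives `"1.3"` and `roundTo(-1.25, 1)` gives `"-1.3"`.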
I really hope to find some time this week.

I don't expect anyone will be holding their breath.

Richard.
 

VK

Self-correction:
When pushed into a corner by big math lovers :) it may take an extra
bit from the exponent, so with some simplification - I would be killed
for this in Berkeley for sure - one can say "normally 52 bits,
sometimes 53 bits".

Still over-simplified IMO. The mantissa (aka significand aka
coefficient) physically takes 52 bits no matter what. But in
normalized form the radix point is put after the first non-zero
digit, so 150 would be 1.5 * 10^2. In base two the only possible
non-zero digit is 1, which allows the storage space to be optimized
by storing only the part after the radix point, with the leading
"1." always assumed ("implicit bit").
Denormalized numbers override the meaning of the implicit bit by
setting the exponent to all zeros.
So OK: the mantissa has 52 physical bits of storage space plus one
"virtual" bit provided by the IEEE-754 algorithm; this way 53 bits
in total are used in operations.
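The split between normalized numbers (implicit leading 1) and denormalized numbers (exponent field all zeros, no implicit bit) can be poked at directly from JavaScript; a small sketch:

```javascript
// Smallest positive denormal: only the lowest fraction bit is set and
// the exponent field is all zeros, so there is no implicit leading 1.
console.log(Number.MIN_VALUE);                      // 5e-324
// Halving it falls below the smallest denormal and underflows to 0.
console.log(Number.MIN_VALUE / 2 === 0);            // true
// The smallest *normalized* double is 2^-1022, well above MIN_VALUE.
console.log(Math.pow(2, -1022) > Number.MIN_VALUE); // true
```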

P.S. JScript uses the VBScript math module, and so inherits its
slightly different production of denormalized numbers - those 53-bit
outlaws I mentioned just before. It allows a math-based browser
sniffing that is unique of its kind, like:

if (new Number(999999999999999934469).toString().indexOf('e') == -1) {
  // Internet Explorer
} else {
  // all others
}

The given explanation doesn't seem satisfactory anymore. I'll try MSDN
support first. With some luck I may get a useful answer instead of the
regular bot's "Thank-you-for-your-interest-blah-blah-get-off-
sucka'" :)
 

Richard Cornford

VK said:
Self-correction:


Still over-simplified IMO.

Inarticulate gibberish in mine.
Mantissa (aka significand aka coefficient) physically takes
52 bits no matter what. But in normalized form the radix point
is put after the first non-zero digit, like 150 would be
1.5 * 10^2. But in base two the only possible non-zero digit
is 1. Which allows to optimize the storage space by storing only
the part after the radix point with the "1." part always assumed
("implicit bit").

What would be the point of attempting to "optimise the storage space"?
You would need to store extra information to explain the meaning of the
bits stored (increasing the space required in many cases) and the
overheads in converting any other form stored into the bit pattern
expected by the FPU would be counter-productive.
Denormalized numbers override the meaning of the implicit bit
by setting the exponent to all zeros.
So OK: mantissa has 52 physical bits of storage space and one
"virtual" bit provided by the IEEE-754 algorithm, this way 53
bits in total are used in operations.
<snip>

It is incredible. You will go off and concoct the most fantastic
explanations for the simplest things.

IEEE double precision floating point numbers can represent all integers
in the range +(2 to the power of 53) to -(2 to the power of 53)
(including both +0 and -0).

In the same way as 3 bits can represent integers from zero to ((2 to the
power of 2) + (2 to the power of 1) + (2 to the power of 0)), or 7 (== 4 +
2 + 1), 52 bits can represent numbers from zero to ((2 to the power of
52) + (2 to the power of 51) + (2 to the power of 50) + ... + (2 to the
power of 1) + (2 to the power of 0)), or 9007199254740991 (==
4503599627370496 + 2251799813685248 + 1125899906842624 + ... + 2 + 1).
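That sum can be checked directly; every partial sum is an integer below 2^53 and therefore exact:

```javascript
// 2^0 + 2^1 + ... + 2^52 === 2^53 - 1, the largest integer that
// needs all 53 significand bits. Each partial sum stays exact.
var sum = 0;
for (var i = 0; i <= 52; i++) {
  sum += Math.pow(2, i);
}
console.log(sum);                         // 9007199254740991
console.log(sum === Math.pow(2, 53) - 1); // true
```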

The next number in a continuous sequence of integers is (2 to the power
of 53) and while it cannot be represented with 52 bits it happens to be
((2 to the power of 52) * 2), or (4503599627370496 * 2). So when you have
a binary exponent you can achieve ((2 to the power of 52) * 2) by taking
the representation of (2 to the power of 52) and adding one to its
exponent. The result is still precise in the same way as (1 * (10 to the
power of 2)), or 10e2, is precisely 100.
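In JavaScript terms, 2^53 is obtained exactly as 2^52 doubled, i.e. by incrementing only the binary exponent:

```javascript
// (2^52) * 2 only bumps the exponent; the significand is unchanged,
// so the result is exact, just as 1e2 is exactly 100.
var p52 = Math.pow(2, 52);                // 4503599627370496
console.log(p52 * 2);                     // 9007199254740992, i.e. 2^53
console.log(p52 * 2 === Math.pow(2, 53)); // true
```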

This characteristic of mantissa + exponent representations of numbers may
be illustrated with a very simplified example. Suppose a number is to be
represented by a one decimal digit mantissa and a one decimal digit
exponent, both are signed and the decimal digit is assumed to be to the
left of the exponent digit. Thus:-

+1e+0 [or, +0.1 * (10 to the power of 0)] is the number +0.1
+1e+1 [or, +0.1 * (10 to the power of 1)] is the number +10
+2e-2 [or, +0.2 * (10 to the power of -2)] is the number +0.002
- and so on

The range of the continuous sequence of integers that can be represented
with such a format is +10 to -10, but the mantissa can only accommodate
the digits 0 to 9, where 9 is +9e+1 and -9 is -9e+1:

+9e+1 == 9
+8e+1 == 8
+7e+1 == 7
+6e+1 == 6
+5e+1 == 5
+4e+1 == 4
+3e+1 == 3
+2e+1 == 2
+1e+1 == 1
+0e+1 == 0
-1e+1 == -1
-2e+1 == -2
-3e+1 == -3
-4e+1 == -4
-5e+1 == -5
-6e+1 == -6
-7e+1 == -7
-8e+1 == -8
-9e+1 == -9

- but the continuous sequence of integers that can be represented can be
extended beyond that which can be accommodated in the mantissa because
+10 and -10 (the next steps in either direction) can be represented as
+1e+2 and -1e+2 respectively.

The next integers in the possible sequence, +11 and -11, cannot be
represented in this format at all, and the next integers greater than +10
and less than -10 that can be represented are +20 and -20 respectively.
The nearest available number to +11 in this number representation is +10.
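The toy format above can be brute-forced in a few lines; the helper names here are mine, not the thread's:

```javascript
// Toy format: value = (0.d) * 10^e with one mantissa digit d (0..9)
// and one signed exponent digit e (-9..9). Computed as d * 10^(e-1)
// so that every representable integer comes out exact.
function toyValue(sign, digit, exp) {
  return sign * digit * Math.pow(10, exp - 1);
}
// Nearest representable toy number to x, by exhaustive search.
function nearestToy(x) {
  var best = 0, bestErr = Math.abs(x);
  for (var d = 1; d <= 9; d++) {
    for (var e = -9; e <= 9; e++) {
      var v = toyValue(x < 0 ? -1 : 1, d, e);
      var err = Math.abs(x - v);
      if (err < bestErr) { best = v; bestErr = err; }
    }
  }
  return best;
}
console.log(nearestToy(9));  // 9: representable as +9e+1
console.log(nearestToy(11)); // 10: +11 does not exist in this format
```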

And that is how it is with IEEE double precision floating-point numbers
also; ((2 to the power of 53) + 1) cannot be represented at all. Instead
it is approximated to the nearest number that can be represented, which
is (2 to the power of 53). And so the continuous sequence of integers
that can be represented comes to an end at (2 to the power of 53).
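That end of the continuous sequence is easy to observe in JavaScript:

```javascript
var p53 = Math.pow(2, 53);    // 9007199254740992
// (2^53 + 1) is not representable; it rounds to the nearest double,
// which is 2^53 itself.
console.log(p53 + 1 === p53); // true
// Above 2^53 the spacing between adjacent doubles is 2, so only even
// integers survive.
console.log((p53 + 2) - p53); // 2
```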

Richard.
 

Richard Cornford

Richard Cornford wrote:
The result is still precise in the same way as (1 * (10 to the
power of 2)), or 10e2, is precisely 100.
^^^^
Should be 1e2

+1e+1 [or, +0.1 * (10 to the power of 1)] is the number +10
<snip> ^^^

Should be +1

Richard.
 

Richard Cornford

Richard Cornford wrote:
... and the decimal digit is assumed to be to the left of the exponent
<snip> ^^^^^ ^^^^^^^^

- and that should be "decimal point" and "left of the mantissa".

Richard.
 

VK

Denormalized numbers override the meaning of the implicit bit
<snip>

It is incredible. You will go off and concoct the most fantastic
explanations for the simplest things.

IEEE double precision floating point numbers can represent all integers
in the range +(2 to the power of 53) to -(2 to the power of 53)
(including both +0 and -0).

Nice try, but still a long way to go. Your homework to do - do not
worry, I'm still doing mine as well, though this part is already
passed for me -

1) What is "integer number" and "mantissa" and how do they correlate?

2) What is the difference in the resolving algorithm for IEEE-754
DP-FP numbers like (as S E F sequences)
0 00000000000 100...0
and
0 00000000001 100...0
Note: the question is not about the stored values, but about the
resolving algorithm change.
Hint: what are normalized numbers, denormalized numbers and assumed
leading 1 (aka implicit bit) in IEEE-754 ?

3) What kind of number is zero (0) in IEEE-754?

4) Can zero (0) be represented exactly in IEEE-754?
Hint: Why not?
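For what it's worth, the zero questions can be answered experimentally: zero is represented exactly (exponent and fraction fields all zero, the degenerate case of the denormal encoding), and it is signed:

```javascript
// +0 and -0 are both exact IEEE-754 values. They compare equal with
// ===, but dividing by them reveals the sign bit.
console.log(0 === -0); // true
console.log(1 / 0);    // Infinity
console.log(1 / -0);   // -Infinity
```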
 

Isaac Schlueter

C programmers (used to) do that all the time.
You know without doubt how big "unsigned int" is and that
there isn't any engine to try to correct anything.

That's why we had to memorize the sizeof() results from the type table
in my CSC 110 class in college :)

--i
 

Randy Webb

VK said the following on 2/12/2007 1:49 PM:
Self-correction:


Still over-simplified IMO.

It could be as simplified as possible and you would still f**k it up.
 

Dr J R Stockton

In comp.lang.javascript message <[email protected]
The given explanation doesn't seem satisfactory anymore. I'll try MSDN
support first. With some luck I may get a useful answer instead of the
regular bot's "Thank-you-for-your-interest-blah-blah-get-off-
sucka'" :)

Did you want an accurate answer, or something that you can understand?

How's your rounding code doing? Fit for use yet?

It's a good idea to read the newsgroup and its FAQ. See below.
 
