Floating point bug?

Dennis Lee Bieber

By that logic, we should see this:

'8'

Why? len() is a function that /counts/ the elements of the argument
-- said count remains an integral value (I presume we don't have a
1.5-character-long string).
And rightly rejected by many other programming languages, including
modern Python, not to mention calculators, real mathematics and common
sense.

I know of no calculator that has "integers" for normal math -- and
the HP50 even emphasizes this by putting a decimal point into "integer"
quantities. Heck -- most calculators work in BCD floats. Most merely
suppress the decimal point if the trailing digits are all 0s

Existing Python follows the rule I mentioned.
0            integer/integer => integer
0.25         integer/float promotes to float => float
0.5          float/integer promotes to float * integer promoted to float => float
0.0          integer/integer => integer, promotes to float * float => float
0.125        integer promotes to float / (integer promotes to float * float) => float
(0.25+0j)    integer/complex promotes to complex
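
For illustration, a minimal sketch of expressions that give results like
those under classic (pre-3.0) division; the specific expressions are my
own guesses for the example, not taken from the quoted session:

# Classic (Python 2) division semantics; expressions chosen for illustration.
print 1 / 2          # 0          integer/integer => integer
print 1 / 4.0        # 0.25       integer/float => float
print 1.0 / 4 * 2    # 0.5        float/integer => float, then * int => float
print 1 / 2 * 1.0    # 0.0        int/int truncates to 0 first, then * float
print 1 / (2 * 4.0)  # 0.125      int promoted to float / (int * float)
print 1 / (4 + 0j)   # (0.25+0j)  integer/complex => complex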

--
Wulfraed Dennis Lee Bieber KD6MOG
HTTP://wlfraed.home.netcom.com/
HTTP://www.bestiaria.com/
 
Terry Reedy

| Ross Ridge wrote:
| > You're just going to have to accept that there is no
| > consensus on this issue and there never was.
|
| >But that's not true. The consensus, across the majority of people (both
| >programmers and non-programmers alike) is that 1/2 should return 0.5.
|
| You're deluding yourself.

As a major participant in the discussion, who initially opposed the change,
I think Steven is right.

| If there were a consensus on this issue then
| it wouldn't be so controversial.

The controversy was initially inflamed by issues that did not directly bear
on the merit of the proposal. Worst was its cloaking in a metaphysical
argument about the nature of integers. It also did not help that Guido
initially had trouble articulating the *practical*, Pythonic reason for the
proposal.

To me, the key is this (very briefly): the current overloading of '/' was
copied from C. But Python is crucially different from C in that
expressions can generally be generic, with run-time rather than
compile-time typing of variables. Yet no one ever presented a practical
use case for expr_a / expr_b having two different numerical values,
given fixed numerical values for expr_a and expr_b, depending on the
number types of the two expressions.
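
To illustrate the point, a minimal sketch assuming classic (Python 2)
division semantics; the function name is made up for the example:

# Under classic division, the numerical result of a/b depends on the
# operand types, not only on their numerical values.
def ratio(a, b):
    return a / b

# ratio(1, 4)    -> 0      (int / int truncates)
# ratio(1.0, 4)  -> 0.25   (float / int gives true division)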

Beyond the change itself, another issue was its timing. When I proposed
that the version making 1/2=.5 the default be called 3.0, and Guido agreed,
many who agreed with the change in theory but were concerned with stability
of the 2.x series agreed that this would make it more palatable.

A third issue was the work required to make the change. The __future__
import mechanism eased that, and the 2to3 conversion program will also
issue warnings.
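
For reference, this is how a Python 2 module opts in to the new
semantics ahead of time:

from __future__ import division

print 1 / 2    # 0.5  (true division)
print 1 // 2   # 0    (floor division, the old int/int result)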

Terry Jan Reedy
 
Terry Reedy

| What screws me is that I'm going to have to type p//q in the future.

When I compare that pain to the gain of not having to type an otherwise
extraneous 'float(...)', and the gain of disambiguating the meaning of a/b
(for builtin numbers at least), I think there will be a net gain for the
majority.

tjr
 
Arnaud Delobelle

| What screws me is that I'm going to have to type p//q in the future.

When I compare that pain to the gain of not having to type an otherwise
extraneous 'float(...)', and the gain of disambiguating the meaning of a/b
(for builtin numbers at least), I think there will be a net gain for the
majority.

You may be right. I can see the rationale for this change (although
many aspects feel funny, such as doing integral arithmetic with
floats, e.g. 3.0//2.0).
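
For reference, floor division is indeed defined on floats, and the
result stays a float:

>>> 3.0 // 2.0
1.0
>>> -3.0 // 2.0
-2.0
>>> 7.0 // 2.5
2.0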

Perhaps it'll be like when I quit smoking six years ago. I didn't
enjoy it although I knew it was good for me... And now I don't regret
it even though I still have the occasional craving.
 
Lie

More examples:

x = 1
y = len(s) + x

=> ok, decides that x is an int

x = 1
y = x + 3.0

=> ok, decides that x is a float

x = 1
y = x + 3.0
z = len(s) + x

=> forbidden, x cannot be an int and float at the same time.


This is how Haskell works and I don't notice many complaints about it.

Ok, that means the line "y = x + 3.0" has a side effect of "x =
float(x)"? I think I would say that is an implicit behavior.


What makes you say they weren't? Calculating machines that handled
floating point are older than Python by far.

But much younger than the first programming language (if it could be
called a language) that set the convention that int op int should
result in int, because the hardware couldn't handle floats.

Can you name an example of a calculating machine that both:
2) says 1/2 = 0.5 ?

Any pocket or scientific calculator. Except perhaps my calculator,
which is a fraction scientific calculator: it stores 1/2 as a rational
and has a separate division operator for which 1/2 == 0.5.
 
Dan Bishop

Why? len() is a function that /counts/ the elements of the argument
-- said count remains an integral value (I presume we don't have a
1.5-character-long string).

The relevant point is that len() is a function that returns a
DIFFERENT type than its argument, and nobody ever complains about it.
I know of no calculator that has "integers" for normal math -- and
the HP50 even emphasizes this by putting a decimal point into "integer"
quantities. Heck -- most calculators work in BCD floats. Most merely
suppress the decimal point if the trailing digits are all 0s

My TI-89 treats them differently: 1.0/2.0 is 0.5, while 1/2 is the
symbolic expression 1/2.
 
Steve Holden

Dan said:
The relevant point is that len() is a function that returns a
DIFFERENT type than its argument, and nobody ever complains about it.
A language that restricted its functions to returning the same types as
its arguments wouldn't be very much use, would it? And what about
functions that have multiple arguments of different types?
My TI-89 treats them differently: 1.0/2.0 is 0.5, while 1/2 is the
symbolic expression 1/2.

And you don't even have to import anything from __future__!

regards
Steve
 
Terry Reedy

| Perhaps it'll be like when I quit smoking six years ago. I didn't
| enjoy it although I knew it was good for me... And now I don't regret
| it even though I still have the occasional craving.

In following the development of Py3, there have been a few decisions that I
wish had gone otherwise. But I agree with more than the majority and am
not going to deprive myself of what I expect to be an improved experience
without giving Py3 a fair trial.

tjr
 
Arnaud Delobelle

| Perhaps it'll be like when I quit smoking six years ago.  I didn't
| enjoy it although I knew it was good for me... And now I don't regret
| it even though I still have the occasional craving.

In following the development of Py3, there have been a few decisions that I
wish had gone otherwise.  But I agree with more than the majority and am
not going to deprive myself of what I expect to be an improved experience
without giving Py3 a fair trial.

I realise that my comment can be easily misunderstood! I wasn't
implying that I would be quitting Python, rather that I would have to
adjust (with some discomfort) to a new division paradigm.

I know from experience that I end up as a staunch defender of most of
the design decisions in Python, particularly those that I disagreed
with at the start!
 
Marc 'BlackJack' Rintsch

Ok, that means the line "y = x + 3.0" has a side effect of "x =
float(x)"? I think I would say that is an implicit behavior.

But the type of `x` must be specialized somehow. `x` doesn't start as
`Int` or `Integer` but as the very generic and AFAIK abstract type class `Num`.

After seeing the second line the compiler finds an implementation for `+`
and the type class `Fractional` for both operands and now thinks `x` must
be a `Fractional`, a subclass of `Num`.

Then comes the third line with `length` returning an `Int` and the
`Fractional` `x` but there is no implementation for a `+` function on
those types.

Ciao,
Marc 'BlackJack' Rintsch
 
Lie

But the type of `x` must be specialized somehow.  `x` doesn't start as
`Int` or `Integer` but as the very generic and AFAIK abstract type class `Num`.

After seeing the second line the compiler finds an implementation for `+`
and the type class `Fractional` for both operands and now thinks `x` must
be a `Fractional`, a subclass of `Num`.

Then comes the third line with `length` returning an `Int` and the
`Fractional` `x` but there is no implementation for a `+` function on
those types.

I see, but the same argument still holds true: the second line has
an implicit side effect of redefining x's type into a Fractional type.
If I were the designer of the language, I'd leave x's type as it is
(as Num) and coerce x for the current calculation only.
 
Paul Rubin

Lie said:
I see, but the same argument still holds true: the second line has
an implicit side effect of redefining x's type into a Fractional type.
If I were the designer of the language, I'd leave x's type as it is
(as Num) and coerce x for the current calculation only.

Num in Haskell is not a type, it is a class of types, i.e. if all you
know about x is that it is a Num, then its actual type is
indeterminate. Types get inferred, they do not get coerced or
"redefined". The compiler looks at expressions referring to x, to
deduce what x's type actually is. If it is not possible to satisfy
all constraints simultaneously, then the compiler reports an error.
 
Lie

Num in Haskell is not a type, it is a class of types, i.e. if all you
know about x is that it is a Num, then its actual type is
indeterminate.  Types get inferred, they do not get coerced or
"redefined".  The compiler looks at expressions referring to x, to
deduce what x's type actually is.  If it is not possible to satisfy
all constraints simultaneously, then the compiler reports an error.

So basically they refused to satisfy everything that is still possible
individually but would conflict if done together. (I know Haskell no
more than just its name, so I don't understand the rationale behind the
language design at all.) But I'm interested in how it handles these
cases:

x = 1
a = x + 1 << decides it's an int
b = x + 1.0 << error? or redefine to be float?
c = x + 1 << would this cause error while it worked in line 2?

A slightly obfuscated example:
l = [1, 1.0, 1]
x = 1
for n in l:
    c = x + n
 
Lie

I didn't say they were. Please parse my sentence again.

In the widest sense of the terms computer and programming language,
calculators and real mathematics actually are programming languages.
A programming language is a way to communicate problems to a computer.
A computer is anything that does computation (and that includes a
calculator and a room full of people doing calculations with pencil
and paper[1]). The expressions we write on a calculator are a (very
limited) programming language, while mathematical convention is a
language for communicating a mathematician's problems to the
computers[2] and to other mathematicians.

[1] Actually, the term computer was first used to refer to such people.
[2] The computers in this sense are the people doing the computation.
 
Paul Rubin

Lie said:
So basically they refused to satisfy everything that is still possible
individually but would conflict if done together.

I can't understand that.
x = 1
a = x + 1 << decides it's an int

No, so far a and x are both Num (indeterminate)
b = x + 1.0 << error? or redefine to be float?

This determines that a, b, and x are all floats. It's not "redefined"
since the types were unknown prior to this.

Actually, I'm slightly wrong, 1.0 is not a float, it's a "Fractional"
which is a narrower class than Num but it could still be Float, Double,
or Rational. Nums support addition, subtraction, multiplication, but
not necessarily division. So int/int is an error. Fractionals support
division.
c = x + 1 << would this cause error while it worked in line 2?

No, c is also a float (actually Fractional)
A slightly obfuscated example:
l = [1, 1.0, 1]

This is the same as l = [1.0, 1.0, 1.0]. In Haskell, all elements
of a list must have the same type, so the 1.0 determines that l is
a list of fractionals.
x = 1
for n in l:
    c = x + n

Haskell does not have loops, but if it did, all these values would be
fractionals.
 
Paul Rubin

Steven D'Aprano said:
def mean(data): return sum(data)/len(data)

That does the right thing for data, no matter what it consists of:
floats, ints, Decimals, rationals, complex numbers, or a mix of all of
the above.
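
For reference, under Python 3 (or __future__) division the quoted
mean() does behave that way for those types; a small sketch:

from fractions import Fraction
from decimal import Decimal

def mean(data):
    return sum(data) / len(data)

print(mean([1, 2, 3, 4]))                      # 2.5     (ints -> float)
print(mean([Fraction(1, 3), Fraction(2, 3)]))  # 1/2     (stays a Fraction)
print(mean([Decimal("1.5"), Decimal("2.5")]))  # 2       (stays a Decimal)
print(mean([1 + 2j, 3 + 4j]))                  # (2+3j)  (stays complex)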

One of those types is not like the others: for all of them except int,
the quotient operation actually is the inverse of multiplication.
So I'm unpersuaded that the "mean" operation above does the "right
thing" for ints. If the integers being averaged were prices
in dollars, maybe the result type should even be decimal.

For this reason I think // is a good thing and I've gotten accustomed
to using it for integer division. I can live with int/int=float but
find it sloppy and would be happier if int/int always threw an error
(convert explicitly if you want a particular type result).
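
A sketch of that explicit style (the variable names are made up for
the example): use // when a truncated integer result is meant, and
convert when a float or Decimal mean is wanted:

from decimal import Decimal

prices = [3, 4, 4]                                  # dollars, as ints

floor_mean   = sum(prices) // len(prices)           # 3, truncated
float_mean   = sum(prices) / float(len(prices))     # 3.666..., explicit float
decimal_mean = Decimal(sum(prices)) / len(prices)   # Decimal('3.666...')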
 
Lie

Lie said:
So basically they refused to satisfy everything that is still possible
individually but would conflict if done together.

I can't understand that.
x = 1
a = x + 1 << decides it's an int

No, so far a and x are both Num (indeterminate)
b = x + 1.0 << error? or redefine to be float?

This determines that a, b, and x are all floats. It's not "redefined"
since the types were unknown prior to this.

Actually, I'm slightly wrong, 1.0 is not a float, it's a "Fractional"
which is a narrower class than Num but it could still be Float, Double,
or Rational. Nums support addition, subtraction, multiplication, but
not necessarily division. So int/int is an error. Fractionals support
division.
c = x + 1 << would this cause error while it worked in line 2?

No, c is also a float (actually Fractional)
A slightly obfuscated example:
l = [1, 1.0, 1]

This is the same as l = [1.0, 1.0, 1.0]. In Haskell, all elements
of a list must have the same type, so the 1.0 determines that l is
a list of fractionals.
x = 1
for n in l:
    c = x + n

Haskell does not have loops, but if it did, all these values would be
fractionals.

That's quite complex and restrictive, but probably it's because my
mind is not tuned to Haskell yet. Anyway, I don't think Python should
work that way, because Python has a plan for unifying its numeric
types, which would make all numerical types appear as a single type,
and that requires removing such limitations on the operators.

One of those types is not like the others: for all of them except int,
the quotient operation actually is the inverse of multiplication.
So I'm unpersuaded that the "mean" operation above does the "right
thing" for ints. If the integers being averaged were prices
in dollars, maybe the result type should even be decimal.

In __future__ Python or Python 3.0, that mean function would work for
all types. And division on ints is also the inverse of multiplication,
just like subtraction is the inverse of addition:
from __future__ import division
a = 10
b = 5
c = a / b
if c * b == a: print 'multiplication is inverse of division'
 
Paul Rubin

Lie said:
That's quite complex and restrictive, but probably it's because my
mind is not tuned to Haskell yet.

That aspect is pretty straightforward; other parts, like only being
able to do I/O in functions having a special type, are much more confusing.
Anyway, I don't think Python should
work that way, because Python has a plan for unifying its numeric
types, which would make all numerical types appear as a single type,
and that requires removing such limitations on the operators.

Well I think the idea is to have a hierarchy of nested numeric types,
not a single type.
from __future__ import division
a = 10
b = 5
c = a / b
if c * b == a: print 'multiplication is inverse of division'

Try with a=7, b=25
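
For reference, a quick check of that counterexample (any values whose
quotient is not exactly representable in binary floating point will do):

from __future__ import division

a = 7
b = 25
c = a / b          # 0.28 is not exactly representable as a binary float
print c * b == a   # False on typical IEEE-754 doubles: c * b rounds to
                   # 7.000000000000001, not 7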
 
