int/long unification hides bugs


kartik

Andrew Dalke said:
My guess is the not unusual case of someone who works mostly alone
and doesn't have much experience in diverse projects nor working
with more experienced people.

Hey, that's right.

I've seen some similar symptoms working with, for example,
undergraduate students who are hotshot programmers ... when
compared to other students in their non-CS department but not
when compared to, say, a CS student, much less an experienced
developer.

Oops. I'm a grad student, and I certainly don't have as much
experience as a professional developer

-kartik
 

Steve Holden

kartik said:
i don't care what mathematical properties are satisfied; what matters
is to what extent the type system helps me in writing bug-free code

.... and the point most of your respondents are trying to make is that an
arbitrary restriction - ANY arbitrary restriction - on the upper limit
of integers is unhelpful, and that's precisely why it's been removed
from the language. [...] However, the limit n could be
anything, so fixing it at, say, 2**31 - 1 is almost always useless.


i dont think so. if it catches bugs that cause numbers to increase
beyond 2**31, that's valuable.

But only if an increase beyond 2**31 IS a bug, which for many problem
domains it isn't. Usually the required upper limit is either above or
below 2**31, which is why that limit (or 2**63, or 2**7) is useless and
unhelpful.

on what basis do u say that

A few more years' programming experience will teach you the truth of
this assertion. It appears that no amount of good advice will change
your opinion in the meantime.

regards
Steve
 

Steve Holden

kartik said:
i was inaccurate. what i meant was that overflow errors provide a
certain amount of sanity checking in the absence of explicit testing -
& do u check every assignment for bounds?

If limiting the range of integers is critical to a program's function
then I will happily make range assertions. Frankly I can't remember when
the last overflow error came up in my code.

2)no test (or test suite) can catch all errors, so language support 4
error detection is welcome.

Yes, but you appear to feel that an arbitrary limit on the size of
integers will be helpful [...] Relying on hardware overflows as error
detection is pretty poor, really.

i'm not relying on overflow errors to ensure correctness. it's only a
mechanism that sometimes catches bugs - & that's valuable.

But your original assertion was that the switch to unbounded integers
should be reversed because of this. Personally I think it's far more
valuable to be able to ignore the arbitrary limitations of supporting
hardware.

agreed, but do u test your code so thoroughly that u can guarantee
your code is bug-free. till then, overflow errors help.

No they don't.

regards
Steve
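Steve's preference for explicit range assertions can be sketched in a few lines. This is an illustration, not code from the thread; the `set_age` helper and the bound of 150 are invented for the example:

```python
# A domain-specific range assertion: the meaningful bound for an age
# is nowhere near 2**31, so a hardware overflow would never catch this.
def set_age(person, age):
    assert 0 <= age <= 150, "age %r out of range" % (age,)
    person["age"] = age

p = {}
set_age(p, 42)        # within the domain's real bounds
try:
    set_age(p, 2000)  # caught at once, long before any overflow
except AssertionError as e:
    print(e)
```

Like any `assert`, the check disappears under `python -O`, so checks that must always run belong in an explicit `if`/`raise`.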
 

Steve Holden

kartik said:
Hey, that's right.

[...]

Oops. I'm a grad student, and I certainly don't have as much
experience as a professional developer

-kartik

You're certainly not short on arrogance, though.

regards
Steve
 

kartik

Peter Hansen said:
I feel the need to point out in the above the parallel (and equally
mistaken) logic with your comments in the rest of the thread.

In the thread you basically are saying "I want high quality
code, but I refuse to do the thing that will give it to me
(writing good tests) as long as a tiny subset of possible bugs
are caught by causing overflow errors at an arbitrary limit".

Above you are basically saying "I want to be understood,
but I refuse to do the thing that will make it easy for me
to be understood (using proper grammar and spelling) as
long as it's possible for people to laboriously decipher
what I'm trying to say".

Sorry
 

kartik

If the many analogies, arguments, and practical examples that have been
offered to help you see why, help you accept the fact, good.

They have. Thank you.
-kartik
 

Michael Hoffman

kartik said:
> [Andrew Dalke]
My guess is the not unusual case of someone who works mostly alone
and doesn't have much experience in diverse projects nor working
with more experienced people.

Hey, that's right.

I'm sorry for questioning whether you were a troll. Like I said before,
I spend waaaaay too much time hanging out on troll-infested fora and it
means certain behaviors cause me to automatically dismiss posters. You
seem to have recognized and stopped some of these behaviors.
 

kartik

Steve Holden said:
You're certainly not short on arrogance, though.

I didn't mean to (except in a couple of posts where I got a little
pissed off). Sorry for that.

-kartik
 

kartik

Michael Hoffman said:
I'm sorry for questioning whether you were a troll. Like I said before,
I spend waaaaay too much time hanging out on troll-infested fora and it
means certain behaviors cause me to automatically dismiss posters. You
seem to have recognized and stopped some of these behaviors.

No problem (at least compared to some of the other comments ;) ). It's
nice to know that my posts have not been completely useless!

-kartik
 

Jeremy Fincher

there seems to be a serious problem with allowing numbers to grow in a
nearly unbounded manner, as int/long unification does: it hides bugs.

I think one important clarification needs to be made to this
statement: it hides bugs in code that depends on the boundedness of
integers, written before int/long unification.

The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError raised at that boundary
to ensure that our computations were bounded, we're left high and dry
now that ints and longs are unified. We must either drop down to C and
write a bounded integer type, or we're stuck with code that no longer works.

I'm by no means claiming that int/long unification is bad, only that
it leaves a hole in Python's toolbox where none existed before. I
challenge anyone here to write a pure-Python class that does bounded
integer arithmetic, without basically reimplementing all of integer
arithmetic in Python.

Jeremy
 

Andrew Dalke

Jeremy said:
The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError raised at that boundary
to ensure that our computations were bounded, we're left high and dry
now that ints and longs are unified. We must either drop down to C and
write a bounded integer type, or we're stuck with code that no longer works.

What's wrong with the example Number wrapper I posted a couple days
ago to this thread? Here are the essential parts


class RangedNumber:
    def __init__(self, val, low = -sys.maxint-1, high = sys.maxint):
        if not (low <= high):
            raise ValueError("low(= %r) > high(= %r)" % (low, high))
        if not (low <= val <= high):
            raise ValueError("value %r not in range %r to %r" %
                             (val, low, high))
        self.val = val
        self.low = low
        self.high = high
    ....

    def _get_range(self, other):
        if isinstance(other, RangedNumber):
            low = max(self.low, other.low)
            high = min(self.high, other.high)
            other_val = other.val
        else:
            low = self.low
            high = self.high
            other_val = other

        return other_val, low, high

    def __add__(self, other):
        other_val, low, high = self._get_range(other)
        x = self.val + other_val
        return RangedNumber(x, low, high)


...

and some of the example code:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 39, in __add__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 101 not in range 0 to 100

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "spam.py", line 67, in __rmul__
    return RangedNumber(x, low, high)
  File "spam.py", line 8, in __init__
    raise ValueError("value %r not in range %r to %r" %
ValueError: value 110 not in range 0 to 100
Andrew
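The fragment above elides the remaining operator methods; a minimal self-contained sketch of the same idea, updated to use `sys.maxsize` (the Python 3 spelling of `sys.maxint`) and showing only `__add__`, runs as follows:

```python
import sys

# Minimal sketch of Dalke's RangedNumber: a value plus its allowed range.
class RangedNumber:
    def __init__(self, val, low=-sys.maxsize - 1, high=sys.maxsize):
        if not (low <= high):
            raise ValueError("low(=%r) > high(=%r)" % (low, high))
        if not (low <= val <= high):
            raise ValueError("value %r not in range %r to %r" % (val, low, high))
        self.val = val
        self.low = low
        self.high = high

    def _get_range(self, other):
        # Combining two ranged numbers narrows the range to the intersection.
        if isinstance(other, RangedNumber):
            return other.val, max(self.low, other.low), min(self.high, other.high)
        return other, self.low, self.high

    def __add__(self, other):
        other_val, low, high = self._get_range(other)
        return RangedNumber(self.val + other_val, low, high)

x = RangedNumber(90, 0, 100)
y = x + 5          # fine: 95 is within 0..100
try:
    y + 10         # 105 escapes the range, so construction fails
except ValueError as e:
    print(e)
```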
 

Josiah Carlson

I think one important clarification needs to be made to this
statement: it hides bugs in code that depends on the boundedness of
integers, written before int/long unification.

The problem with int/long unification is that there is no simple
pure-Python alternative for those of us who need a bounded integer
type. If our code depended on the OverflowError raised at that boundary
to ensure that our computations were bounded, we're left high and dry
now that ints and longs are unified. We must either drop down to C and
write a bounded integer type, or we're stuck with code that no longer works.

And as others have said more than once, it is not so common that the
boundedness of one's integers falls on the 32-bit signed integer
boundary. Ages and money were given as examples.

I'm by no means claiming that int/long unification is bad, only that
it leaves a hole in Python's toolbox where none existed before. I
challenge anyone here to write a pure-Python class that does bounded
integer arithmetic, without basically reimplementing all of integer
arithmetic in Python.

And what is so wrong with implementing all of integer arithmetic in
Python? About all I can figure is speed, in which case, one could do
the following...

class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            # body for creating new bounded int object...
            # only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            # optimized version is a plain integer
            return val
    del f; del a  # clean out the namespace
    # insert code for handling bounded integer arithmetic here

Which uses plain integers when Python is run with the -O option, but
bounds them during "debugging" without the -O option.

- Josiah
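The assert-based probe can be seen in isolation. The sketch below uses an invented `bounded` helper rather than a full `__new__`: under `python -O` the `assert` is stripped, the probe never runs, and the unchecked version is selected.

```python
# Detect whether asserts are active: under "python -O" the assert below
# is stripped, so the probe never fires and the flag stays False.
_flag = [False]

def _probe(flag):
    flag[0] = True
    return True

assert _probe(_flag)

if _flag[0]:
    # Checked version: used when Python runs without -O.
    def bounded(val, lower, upper):
        if not (lower <= val <= upper):
            raise ValueError("%r not in [%r, %r]" % (val, lower, upper))
        return val
else:
    # Optimized version: a plain, unchecked value.
    def bounded(val, lower, upper):
        return val

print(bounded(5, 0, 10))
```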
 

Jeff Epler

class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            # body for creating new bounded int object...
            # only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            # optimized version is a plain integer
            return val
    del f; del a  # clean out the namespace
    # insert code for handling bounded integer arithmetic here

Is there a reason you didn't use 'if __debug__' here?

class BoundedInt(object):
    ...
    if __debug__:
        def __new__ ...
    else:
        def __new__ ...

Jeff

 

Antoon Pardon

On 2004-10-28, Andrew Dalke said:
What's wrong with the example Number wrapper I posted a couple days
ago to this thread? Here are the essential parts

What I think is wrong with it is that it distributes its constraints
too far. The fact that you want constraints on a number doesn't imply
that all operations done on that number are bound by the same
constraints.

The way I see myself using constraints, I would need them on
a name, not on an object.
 

Andrew Dalke

Antoon said:
The way I see myself using constraints, I would need them on
a name, not on an object.

Then you'll have to use the approach Bengt Richter used
with his LimitedIntSpace solution, posted earlier in this thread.
Variable names are just references. Only attribute names will
do what you want in Python.

Andrew
 

Jeremy Fincher

Andrew Dalke said:
What's wrong with the example Number wrapper I posted a couple days
ago to this thread?

How long will this take to run?

I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.

Jeremy
 

Jeremy Fincher

Josiah Carlson said:
And as others have said more than once, it is not so common that the
boundedness of one's integers falls on the 32-bit signed integer
boundary. Ages and money were given as examples.

I'm not quite sure how this is relevant. My issue is with the
unboundedness of computations, not the unboundedness of the numbers
themselves.

And what is so wrong with implementing all of integer arithmetic in
Python?

It's a whole lot of extra effort when a perfectly viable such
"battery" existed in previous versions of Python.

Jeremy
 

Jeremy Fincher

Jeff Epler said:
class BoundedInt(object):
    a = [0]
    def f(a):
        a[0] = 1
        return 1
    assert f(a)
    if a[0]:
        def __new__(cls, val, lower, upper):
            # body for creating new bounded int object...
            # only gets called if Python _is not run_ with -O option.
    else:
        def __new__(cls, val, lower, upper):
            # optimized version is a plain integer
            return val
    del f; del a  # clean out the namespace
    # insert code for handling bounded integer arithmetic here

Is there a reason you didn't use 'if __debug__' here?

__debug__ can be re-assigned. It has no effect on asserts (anymore;
this formerly was not the case, and I much preferred it that way) but
reassignments to it are still visible to the program.

Jeremy-Finchers-Computer:~/src/my/python/supybot/plugins jfincher$
python -O
Python 2.3 (#1, Sep 13 2003, 00:49:11)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
 

Josiah Carlson

How long will this take to run?

Considering that 2**31 is already a long int, you wouldn't get the
overflow error that is being argued about anyways. You would eventually
get a MemoryError though, as the answer is, quite literally, 2**(31 +
2**31).

Certainly if one were to implement the standard binary exponentiation
algorithm in Python, it fails quite early due to violating the range
constraint.
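To make the binary-exponentiation point concrete, here is a standard square-and-multiply sketch (not code from the thread): the repeated squaring of `base` escapes any fixed range well before the final result is assembled, so a bounded integer type would reject the intermediate values even when the final answer is in range.

```python
# Standard square-and-multiply exponentiation. With a ranged integer
# type, the "base *= base" step is what violates the bound early.
def binpow(base, exp):
    result = 1
    while exp:
        if exp & 1:
            result *= base
        base *= base   # intermediate values grow far past the result's range
        exp >>= 1
    return result

print(binpow(3, 10))
```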

I think our inability to write a RangedNumber that piggybacks on
Python's integers should be obvious.

I don't quite follow what you mean. The provided RangedNumber uses
Python integers to store information as attributes of the RangedNumber
instances.


- Josiah
 
