is None or == None ?


Carl Banks

Note that the object implementation's complexity doesn't have to affect any
other code since it's trivial to provide abstract accessors (even macros), i.e.,
this isn't part of a trade-off unless the original developer(s) had limited
resources  --  and if so then it wasn't a trade-off at the language design level
but a trade-off of getting things done then and there.

I totally disagree with this; it would be like squaring the
implementation complexity. It is far from "trivial" as you claim.
Even if it were just a matter of accessor macros (and it isn't),
they don't write themselves, especially when you're focused on speed,
so that's a non-trivial complexity increase already. But besides
writing code you now have reading code (which is now cluttered with
all kinds of ugly accessor macros, as if the Python API weren't ugly
enough), debugging code, maintaining code, understanding semantics and
nuances, and handling all the extra corner cases. To say it's trivial
is absurd.

Don't know about the implementation of C#, but whatever it is, if it's bad in
some respect then that has nothing to do with Python.

C# is a prototypical example of a language that does what you were
suggesting (also it draws upon frameworks like COM, which you
mentioned) so it is a basis of comparison of the benefits versus
drawbacks of the two approaches.


Carl Banks
 

Stefan Behnel

mk, 06.11.2009 15:32:
Err, I don't want to sound daft, but what is wrong in this example? It
should work as expected:

>>> class Foo:                      # class name reconstructed; the original session was clipped
...     def __eq__(self, other):
...         return other == None
...
>>> Foo() == None
True

Yes, and it shows you that things can compare equal to None without being None.

Or perhaps your example was supposed to show that I should test for
identity with None, not for value with None?

Instead of "value" you mean "equality" here, I suppose. While there are
certain rare use cases where evaluating non-None objects as equal to None
makes sense, in normal use, you almost always want to know if a value is
exactly None, not just something that happens to return True when
calculating its equality to None, be it because of a programmer's conscious
consideration or buggy implementation.
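To make the point concrete, here is a minimal sketch (the class name is made
up for illustration) of an object that compares equal to None without being None:

```python
class AlwaysEqual:
    """Hypothetical class whose __eq__ claims equality with anything."""
    def __eq__(self, other):
        return True  # equal to everything, including None

x = AlwaysEqual()
print(x == None)   # True  -- equality says yes
print(x is None)   # False -- identity says no
```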

Stefan
 

Steven D'Aprano

Using "x is y" with integers
makes no sense and has no guaranteed behaviour AFAIK

Of course it makes sense. `x is y` means *exactly the same thing* for
ints as it does with any other object: it tests for object identity.
That's all it does, and it does it perfectly.

Python makes no promise whether x = 3; y = 3 will use the same object for
both x and y or not. That's an implementation detail. That's not a
problem with `is`, it is a problem with developers who make unjustified
assumptions.
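For example (a sketch; the exact caching behaviour is a CPython implementation
detail, not a language guarantee):

```python
# Build ints at runtime so the compiler can't fold them into one constant.
a = int("7")
b = int("7")
print(a is b)      # True in CPython: small ints are cached and shared

c = int("1000")
d = int("1000")
print(c is d)      # False in CPython: two distinct objects, equal values
print(c == d)      # True: equality is the comparison you actually want
```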
 

Terry Reedy

Steven said:
Of course it makes sense. `x is y` means *exactly the same thing* for
ints as it does with any other object: it tests for object identity.
That's all it does, and it does it perfectly.

Python makes no promise whether x = 3; y = 3 will use the same object for
both x and y or not. That's an implementation detail. That's not a
problem with `is`, it is a problem with developers who make unjustified
assumptions.

Which is to say, it normally makes no sense to write 'm is n' for m, n ints.

The *exception* is when one is exploring implementation details, either
to discover them or to test that they are as intended. So, last I
looked, the test suite for ints makes such tests. If the implementation
changes, the test should change also.

The problem comes when newbies use 'is' without realizing that they are
doing black-box exploration of otherwise irrelevant internals.
(White-box exploration would be reading the code, which makes it plain
what is going on ;-).

Terry Jan Reedy
 

sturlamolden

As I understand it, 'is' will always work and will always be efficient (it just
checks the variable's type), while '==' can depend on the implementation of
equality checking for the other operand's class.

'==' checks for logical equality. 'is' checks for object identity.

None is a singleton of type NoneType. Since None evaluates to True
only when compared against itself, it is safe to use both operators.
 

sturlamolden

Dynamic allocation isn't hare-brained, but doing it for every stored integer
value outside a very small range is, because dynamic allocation is (relatively
speaking, in the context of integer operations) very costly even with a
(relatively speaking, in the context of general dynamic allocation) very
efficient small-objects allocator - here talking order(s) of magnitude.

When it matters, we use NumPy and/or Cython.
 

sturlamolden

But wow. That's pretty hare-brained: dynamic allocation for every stored value
outside the cache range, needless extra indirection for every operation.

First, integers are not used the same way in Python as they are in
C++. E.g. you typically don't iterate over them in a for loop, but
rather iterate on the container itself. Second, if you need an array
of integers or floats, that is usually not done with a list: you would
use numpy.ndarray or array.array, and values are stored compactly.

A Python list is a list, it is not an array. If you were to put
integers in dynamic data structures in other languages (Java, C++),
you would use dynamic allocation as well. Yes a list is implemented as
an array of pointers, amortized to O(1) for appends, but that is an
implementation detail.

Python is not the only language that works like this. There are also
MATLAB and Lisp. I know you have a strong background in C++, but when
you are using Python you must unlearn that way of thinking.

Finally: if none of these helps, we can always resort to Cython.

In 99% of cases where integers are bottlenecks in Python, it is
indicative of bad style. We very often see this from people coming
from C++ and Java backgrounds, and subsequent claims that "Python is
slow". Python is not an untyped Java. If you use it as such, it will
hurt. Languages like Python, Perl, Common Lisp, and MATLAB require a
different mindset from the programmer.
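A rough sketch of the difference using only the standard library (the exact
sizes are CPython-specific):

```python
import array
import sys

n = 1000
lst = list(range(n))               # list of pointers to boxed int objects
arr = array.array('i', range(n))   # compact buffer of C ints

# The list's own pointer buffer, plus one int object per element:
list_total = sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)
print(list_total > sys.getsizeof(arr))  # True: the array is far more compact
```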
 

Steven D'Aprano

'==' checks for logical equality. 'is' checks for object identity.

So far so good, although technically == merely calls __eq__, which can be
over-ridden to do (nearly) anything you like:

>>> class Funny:                    # reconstructed; the original session was clipped
...     def __init__(self, payload):
...         self.payload = payload
...     def __eq__(self, other):
...         return self.payload + other
...
>>> Funny(5) == 10
15


None is a singleton of type NoneType. Since None evaluates to True only
when compared against itself,

That's wrong. None never evaluates to True, it always evaluates as None,
in the same way that 42 evaluates as 42 and [1,2,3] evaluates as [1,2,3].
Python literals evaluate as themselves, always.

Perhaps you mean that *comparisons* of None evaluate to True only if both
operands are None. That's incorrect too:

>>> None != None
False

You have to specify the comparison. It would be a pretty strange language
if both None==None and None!=None returned True.


it is safe to use both operators.

Only if you want unexpected results if somebody passes the wrong sort of
object to your code.

>>> class Sneaky:                   # reconstructed; the original session was clipped
...     def __eq__(self, other):
...         if other is None: return True
...         return False
...
>>> Sneaky() == None
True

You should use == *only* if you want to test for objects which are equal
to None, *whatever that object may be*, and is if you want to test for
None itself.
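In practice that advice usually looks like the familiar default-argument
idiom (a sketch):

```python
def append_to(item, target=None):
    # Identity test: robust even if someone passes an object whose
    # __eq__ claims equality with None.
    if target is None:
        target = []
    target.append(item)
    return target

print(append_to(1))        # [1]
print(append_to(2, [1]))   # [1, 2]
```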
 

Hrvoje Niksic

Alf P. Steinbach said:
Speedup would likely be more realistic with normal implementation (not
fiddling with bit-fields and stuff)

I'm not sure I understand this. How would you implement tagged integers
without encoding type information in bits of the pointer value?
 

Alf P. Steinbach

* Hrvoje Niksic:
I'm not sure I understand this. How would you implement tagged integers
without encoding type information in bits of the pointer value?

A normal tag field, as illustrated in code earlier in the thread.


Cheers & hth.,

- Alf
 

Hrvoje Niksic

Alf P. Steinbach said:
* Hrvoje Niksic:

A normal tag field, as illustrated in code earlier in the thread.

Ah, I see it now. That proposal effectively doubles the size of what is
now a PyObject *, meaning that lists, dicts, etc., would also double
their memory requirements, so it doesn't come without downsides. On the
other hand, tagged pointers have been used in various Lisp
implementations for decades, nothing really "baroque" (or inherently
slow) about them.
 

Alf P. Steinbach

* Hrvoje Niksic:
Ah, I see it now. That proposal effectively doubles the size of what is
now a PyObject *, meaning that lists, dicts, etc., would also double
their memory requirements, so it doesn't come without downsides.

Whether it increases memory usage depends on the data mix in the program's
execution.

For a program primarily handling objects of atomic types (like int) it saves
memory, since each value (generally) avoids a dynamically allocated object.

Bit-field fiddling may save a little more memory, and is nearly guaranteed to
save memory.

But memory usage isn't an issue except to the degree it affects the OS's virtual
memory manager.

Slowness is an issue -- except that keeping compatibility is IMO a more
important issue (don't fix, at cost, what works).

On the
other hand, tagged pointers have been used in various Lisp
implementations for decades, nothing really "baroque" (or inherently
slow) about them.

Unpacking of bit fields generally adds overhead. The bit fields need to be
unpacked for (e.g.) integer operations.

Lisp once ran on severely memory constrained machines.


Cheers & hth.,

- Alf
 

Terry Reedy

Alf said:
* Hrvoje Niksic:

Whether it increases memory usage depends on the data mix in the
program's execution.

For a program primarily handling objects of atomic types (like int) it
saves memory, since each value (generally) avoids a dynamically
allocated object.

Bit-field fiddling may save a little more memory, and is nearly
guaranteed to save memory.

But memory usage isn't an issue except to the degree it affects the OS's
virtual memory manager.

Slowness is an issue -- except that keeping compatibility is IMO a
more important issue (don't fix, at cost, what works).

I believe the use of tagged pointers has been considered and so far
rejected by the CPython developers. And no one else that I know of has
developed a fork for that. It would seem more feasible with 64 bit
pointers where there seem to be spare bits. But CPython will have to
support 32 bit machines for several years.

Terry Jan Reedy
 

Grant Edwards

I've seen that mistake made twice (IBM 370 architecture (probably 360 too,
I'm too young to have used it) and ARM2/ARM3). I'd rather not see it a
third time, thank you.

MacOS applications made the same mistake on the 68K. They
reserved the high-end bits in a 32-bit pointer and used them to
contain meta-information. After all, those bits were "extra" --
nobody could ever hope to actually address more than 16 MB of
memory, right? Heck, those address lines weren't even brought
out of the CPU package.

Guess what happened?

It wasn't the decades-long global debacle that was the MS-DOS
memory model, but it did cause problems when CPUs came out that
implemented those address lines and RAM became cheap enough
that people needed to use them.
 

Marco Mariani

Grant said:
MacOS applications made the same mistake on the 68K.

An awful lot of Amiga software did the same, with the same 24/32-bit CPU.

I did it too, every pointer came with 8 free bits so why not use them?

It wasn't the decades-long global debacle that was the MS-DOS
memory model, but it did cause problems when CPUs came out that
implemented those address lines and RAM became cheap enough
that people needed to use them.

I suppose that's the reason many games didn't work on the 68020+.
 

Grant Edwards

And and awful lot of the Amiga software, with the same 24/32
bit CPU.

I did it too, every pointer came with 8 free bits so why not
use them?

TANSTAFB ;)

I should probably add that MacOS itself used the same trick
until System 7.

I suppose that's the reason many games didn't work on the 68020+.

Probably. IIRC, it took a while for some vendors to come out
with "32-bit clean" versions of products.

http://en.wikipedia.org/wiki/Mac_OS_memory_management#32-bit_clean
 

Steven D'Aprano

MacOS applications made the same mistake on the 68K. They reserved the
high-end bits in a 32-bit pointer and used them to contain
meta-information.


Obviously that was their mistake. They should have used the low-end bits
for the metadata, instead of the more valuable high-end.


High-end-is-always-better-right?-ly y'rs,
 

Vincent Manis

MacOS applications made the same mistake on the 68K. They
reserved the high-end bits
At the time the 32-bit Macs were about to come on the market, I saw an internal confidential document that estimated that over 80% of the applications the investigators had looked at (including many from that company named after a fruit, whose head office is on Infinite Loop) were not 32-bit clean. This in spite of the original edition of Inside Mac (the one that looked like a telephone book) specifically saying always to write 32-bit clean apps, as 32-bit machines were expected in the near future.

It's not quite as bad as the program I once looked at that was released in 1999 and was not Y2K compliant, but it's pretty close.

--v
 

Steven D'Aprano

At the time the 32-bit Macs were about to come on the market, I saw an
internal confidential document that estimated that at least over 80% of
the applications that the investigators had looked at (including many
from that company named after a fruit, whose head office is on Infinite
Loop) were not 32-bit clean. This in spite of the original edition of
Inside Mac (the one that looked like a telephone book) that specifically
said always to write 32-bit clean apps, as 32-bit machines were expected
in the near future.

That is incorrect. The original Inside Mac Volume 1 (published in 1985)
didn't look anything like a phone book. The original Macintosh's CPU (the
Motorola 68000) already used 32-bit addresses internally, but the high
eight bits were ignored since the CPU physically lacked the address pins
corresponding to them.

In fact, in Inside Mac Vol II, Apple explicitly gives the format of
pointers: the low-order three bytes are the address, the high-order byte
is used for flags: bit 7 was the lock bit, bit 6 the purge bit and bit 5
the resource bit. The other five bits were unused.

By all means criticize Apple for failing to foresee 32-bit apps, but
criticizing them for hypocrisy (in this matter) is unfair. By the time
they recognized the need for 32-bit clean applications, they were stuck
with a lot of legacy code that were not clean. Including code burned into
ROMs.
 

Grant Edwards

By all means criticize Apple for failing to foresee 32-bit
apps, but criticizing them for hypocrisy (in this matter) is
unfair. By the time they recognized the need for 32-bit clean
applications, they were stuck with a lot of legacy code that
were not clean. Including code burned into ROMs.

They did manage to climb out of the hole they had dug and fix
things up -- something Microsoft has yet to do after 25 years.

Maybe it's finally going to be different this time around with
Windows 7...
 
