What is the semantic meaning of 'object'?

A

Antoon Pardon

On 23-06-13 18:35, Steven D'Aprano wrote:
Please don't. This is false economy. The time you save will be trivial,
the overhead of inheritance is not going to be the bottleneck in your
code, and by ignoring super, you only accomplish one thing:

- if you use your class in multiple inheritance, it will be buggy.

Which is why I don't understand why the Python standard library still
contains that kind of code. At least it did in 3.2, and I saw nothing
in the 3.3 release notes that would make me suspect this has changed.
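The bug being alluded to shows up in a diamond hierarchy. A minimal sketch (the class names Base, Left, Right, and Child are illustrative, not from the thread): with explicit parent calls the shared base's __init__ runs twice, whereas cooperative super() would run each method in the MRO exactly once.

```python
calls = []

class Base:
    def __init__(self):
        calls.append("Base")

class Left(Base):
    def __init__(self):
        Base.__init__(self)   # explicit parent call, bypassing the MRO

class Right(Base):
    def __init__(self):
        Base.__init__(self)

class Child(Left, Right):
    def __init__(self):
        Left.__init__(self)
        Right.__init__(self)

Child()
print(calls)   # ['Base', 'Base'] -- the shared initializer ran twice
```

If Base.__init__ did anything stateful (opened a file, registered the instance somewhere), running it twice is a real bug, which is the point of the complaint above.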
 
R

Rick Johnson

If you're worried about efficiency, you can also
explicitly name the superclass in order to call the method
directly, like:

A.__init__(self, arg)

I'm NOT worried about efficiency, I'm worried about
readability, and using super (when super is NOT absolutely
required) is folly.

What you've done here is correct; the only flaw is that you
chose to do this for all the wrong reasons.

....GET YOUR PRIORITIES IN ORDER!
 
R

Rick Johnson

For what it's worth, I never bother to inherit from object
unless I know there's something I need from new style
classes. Undoubtedly, this creates a disturbance in The
Force, but such is life.

Well, in Python 3000, if you don't explicitly inherit from
"object", Python (aka: Guido's Master Control Program) is
going to implicitly do it for you; so I would suggest you be
explicit for the sake of the reader.

PS: I love how people say so confidently:

"In Python 3, inheriting from object is optional!"

Obviously they don't understand the definition of optional.
 
S

Steven D'Aprano

Well, as James Knight points out in the "Super Considered Harmful"
article, the equivalent in Dylan is called "next-method", which isn't a
valid identifier in Python but seems like a reasonable starting point.


I don't believe it is. Dylan's next-method is an implicit, automatically
generated method parameter (a little like Python's "self") which holds
the current value of the next method in the inheritance chain. Unlike
super, it's only defined inside methods, because it is an implicit
parameter to each method. It does not return a proxy object like Python's
super, it's merely an alias to the next method in the chain (or Dylan's
equivalent of False, when there is no such method), so you can't use it
for arbitrary attribute lookups like you can with super.

I am not an expert on Dylan, but I'm pretty sure the above is correct.
Here are some definitive references:

http://opendylan.org/books/drm/Method_Dispatch#HEADING-50-32

http://opendylan.org/documentation/intro-dylan/objects.html


next-method and super can be used for similar things, but they work in
completely different ways, and next-method is quite limited compared to
super. But suppose super had been named "next_method", as you suggest.
Given a class C and an instance c, if I say `x = next_method(C, c)`,
which method is x the next method of?

That's a trick question, of course. x is not a method at all, it is a
proxy object such that when you do attribute lookup on it, it will return
the appropriate attribute.
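The proxy behaviour described above is easy to see directly. A small sketch (classes A and B are illustrative): the object super() returns is not a method but a proxy whose attribute lookups resume after the given class in the instance's MRO.

```python
class A:
    def greet(self):
        return "hello from A"

class B(A):
    def greet(self):
        return "hello from B"

b = B()
s = super(B, b)          # a proxy object, not a method
print(type(s).__name__)  # 'super'
print(s.greet())         # 'hello from A' -- lookup resumes after B in b's MRO
```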
 
S

Steven D'Aprano

Well, mro_lookup() would have been a better choice. Super() has an
obvious meaning, which just happens to be wrong.

This "obvious but wrong" meaning isn't the least bit obvious to me. Care
to give me a hint? The only thing I can think of is:

- if you are familiar with single inheritance;

- but unfamiliar with multiple inheritance;

- and you make the incorrect assumption that there can be only one
superclass of a given class;

- then you might assume that super means "return the superclass of this
class" (or possibly instance).

I don't think that counts as "obvious". Or at least not "intuitive" :)


In any case, I don't think that the name mro_lookup is appropriate. It's
misleading because it suggests that you pass something to be looked up,
like a class, or perhaps an attribute name:

mro_lookup(starting_class, target_class)

mro_lookup(starting_class, 'method_name')
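The reason "the superclass" is the wrong mental model can be shown concretely. In this sketch (class names are illustrative), the "next" method that super() finds in B depends on the MRO of the *instance's* class, not on B's own base class:

```python
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B " + super().who()

class C(A):
    def who(self):
        return "C " + super().who()

class D(B, C):
    def who(self):
        return "D " + super().who()

print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().who())  # 'D B C A' -- B's super() reaches C, not B's base class A
```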
 
R

Roy Smith

Steven D'Aprano said:
This "obvious but wrong" meaning isn't the least bit obvious to me. Care
to give me a hint? The only thing I can think of is:

- if you are familiar with single inheritance;

True.

- but unfamiliar with multiple inheritance;

False. Although, I'm pretty sure that all the times I've used MI (in
both Python and C++), it was of the mix-in variety.

- then you might assume that super means "return the superclass of this
class" (or possibly instance).

That's exactly what I assumed. And, since you correctly surmised that
that's what I would assume, I would suggest that it was pretty obvious
to you too. Of course, given that assumption, it was not at all clear
what it would do in a class with multiple ancestors.

I don't think that counts as "obvious". Or at least not "intuitive" :)

Obvious is in the mind of the observer.
 
R

Rotwang

[...]

Can you elaborate or provide a link? I'm curious to know what other
reason there could be for magic methods to behave differently from
normal methods in this regard.

It's an efficiency optimization. I don't quite get the details, but when
you run something like "a + b", Python doesn't search for __add__ using
the normal method lookup procedure. That allows it to skip checking the
instance __dict__, as well as __getattribute__ and __getattr__.

It's not just an efficiency optimisation, it's actually necessary in
cases where a dunder method gets called on a type. Consider what happens
when one calls repr(int), for example - if this tried to call
int.__repr__() by the normal lookup method, it would call the unbound
__repr__ method of int with no self argument:
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    int.__repr__()
TypeError: descriptor '__repr__' of 'int' object needs an argument

By bypassing the instance-first lookup and going straight to the
object's type's dictionary, repr(int) instead calls type.__repr__(int),
which works:
"<class 'int'>"

This is explained here:

http://docs.python.org/3.3/reference/datamodel.html#special-lookup
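The special-lookup behaviour that link describes can be demonstrated in a few lines (class C is illustrative): an ordinary attribute access finds an instance-level shadow, but the implicit lookup done by repr() skips the instance dictionary entirely.

```python
class C:
    def __repr__(self):
        return "class repr"

c = C()
c.__repr__ = lambda: "instance repr"  # shadow the method on the instance

print(c.__repr__())  # 'instance repr' -- normal lookup sees the instance dict
print(repr(c))       # 'class repr'   -- special lookup goes straight to type(c)
```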
 
S

Steven D'Aprano

[...]

Can you elaborate or provide a link? I'm curious to know what other
reason there could be for magic methods to behave differently from
normal methods in this regard.

It's an efficiency optimization. I don't quite get the details, but
when you run something like "a + b", Python doesn't search for __add__
using the normal method lookup procedure. That allows it to skip
checking the instance __dict__, as well as __getattribute__ and
__getattr__.

It's not just an efficiency optimisation, it's actually necessary in
cases where a dunder method gets called on a type. Consider what happens
when one calls repr(int), for example - if this tried to call
int.__repr__() by the normal lookup method, it would call the unbound
__repr__ method of int with no self argument:


I don't know about *necessary*, after all, classic classes manage just
fine in Python 2.x:

py> class OldStyle:
...     def __repr__(self):
...         return "Spam"
...
py> repr(OldStyle())
'Spam'
py> repr(OldStyle)
'<class __main__.OldStyle at 0xb7553e0c>'


I daresay that there are good reasons why new-style classes don't do the
same thing, but the point is that had the Python devs been
sufficiently interested in keeping the old behaviour, and willing to pay
whatever costs that would require, they could have done so.

But your point is well taken. It's not just purely a speed optimization.



Nice link, thank you.
 
S

Steven D'Aprano

Roy Smith said:
False. Although, I'm pretty sure that all the times I've used MI (in
both Python and C++), it was of the mix-in variety.

Mixins are such a limited version of MI that it's often not even counted
as MI, and even when it is, being familiar with mixins is hardly
sufficient to count yourself as familiar with MI. That's kind of like me
saying I'm familiar with life in Italy on the strength of a three-week
holiday back in 1982 :)

If you still think of "the" superclass, then you haven't done enough MI
to learn better :)

That's exactly what I assumed. And, since you correctly surmised that
that's what I would assume, I would suggest that it was pretty obvious
to you too. Of course, given that assumption, it was not at all clear
what it would do in a class with multiple ancestors.

That's exactly why it *isn't* obvious. Too many assumptions need to be
made, and questions left unanswered, for the conclusion to be obvious.
Just because some people might jump to an unjustified conclusion, doesn't
mean that the conclusion is obvious. That's like saying that it's
"obvious" that the sun goes around the earth, because that's what it
looks like. What would it look like if it was the other way around?


Obvious is in the mind of the observer.

Well that's obvious :)
 
R

Roy Smith

Steven D'Aprano said:
Mixins are such a limited version of MI that it's often not even counted
as MI, and even when it is, being familiar with mixins is hardly
sufficient to count yourself as familiar with MI.

OK, fair enough.
That's exactly why it *isn't* obvious. Too many assumptions need to be
made, and questions left unanswered, for the conclusion to be obvious.

I think we're using different definitions of "obvious". I'm using it to
mean, "What you would conclude from a first look at a problem". The
fact that it is proven to be wrong upon closer examination doesn't
change the fact that it's obvious.
That's like saying that it's "obvious" that the sun goes around the
earth, because that's what it looks like. What would it look like if
it was the other way around?

Well, it is obvious. It's just wrong, based on our current
understanding. Humans have been theorizing about how the heavenly
bodies work for thousands of years. It's only in the past 400 that
they've figured out how the solar system works.

So, to bring this back to Python, the goal of designing
easy-to-understand things is that the obvious explanation also happens
to be the correct one. By choosing the name it did for super(), Python
failed at this.
 
R

Rotwang

[...]

Can you elaborate or provide a link? I'm curious to know what other
reason there could be for magic methods to behave differently from
normal methods in this regard.

It's an efficiency optimization. I don't quite get the details, but
when you run something like "a + b", Python doesn't search for __add__
using the normal method lookup procedure. That allows it to skip
checking the instance __dict__, as well as __getattribute__ and
__getattr__.

It's not just an efficiency optimisation, it's actually necessary in
cases where a dunder method gets called on a type. Consider what happens
when one calls repr(int), for example - if this tried to call
int.__repr__() by the normal lookup method, it would call the unbound
__repr__ method of int with no self argument:


I don't know about *necessary*, after all, classic classes manage just
fine in Python 2.x:

py> class OldStyle:
...     def __repr__(self):
...         return "Spam"
...
py> repr(OldStyle())
'Spam'
py> repr(OldStyle)
'<class __main__.OldStyle at 0xb7553e0c>'

Point taken. It's also possible to override the __repr__ method of an
old-style instance and have the change recognised by repr, so repr(x)
isn't simply calling type(x).__repr__(x) in general.

I daresay that there are good reasons why new-style classes don't do the
same thing, but the point is that had the Python devs been
sufficiently interested in keeping the old behaviour, and willing to pay
whatever costs that would require, they could have done so.

Sure, though the above behaviour was probably easier to achieve with
old-style classes than it would have been with new-style classes because
all instances of old-style classes have the same type. But I don't doubt
that you're correct that they could have done it if they wanted.
 
M

Mark Janssen

Mostly I'm saying that super() is badly named.
What else would you call a function that does lookups on the current
object's superclasses?

^. You make a symbol for it. ^__init__(foo, bar)
 
I

Ian Kelly

Sure, though the above behaviour was probably easier to achieve with
old-style classes than it would have been with new-style classes because all
instances of old-style classes have the same type. But I don't doubt that
you're correct that they could have done it if they wanted.

It seems to me that the important difference with new-style classes is
that they suddenly have metaclasses and are themselves just ordinary
objects, and so it is important that they consistently resolve calls
in the same way that all other objects do.
 
I

Ian Kelly

^. You make a symbol for it. ^__init__(foo, bar)

On the one hand, eww.

On the other hand, with the changes to super in Python 3 to make it
more magical, it might as well be syntax.
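The "magic" in question is visible at the compiler level: a zero-argument super() call works because the compiler notices the bare name "super" in a method body and plants a hidden __class__ cell there, so super() can reconstruct its arguments at runtime. A quick sketch (classes A and B are illustrative):

```python
class A:
    def f(self):
        return "A.f"

class B(A):
    def f(self):
        # The compiler adds a hidden __class__ closure cell to this method
        # because it contains a bare super() call.
        return super().f()

print(B.f.__code__.co_freevars)  # ('__class__',)
print(B().f())                   # 'A.f'
```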
 
I

Ian Kelly

On 23-06-13 18:35, Steven D'Aprano wrote:


Which is why I don't understand that the python standard library still
contains that kind of code. At least it did in 3.2 and I saw nothing
in the 3.3 release notes that would make me suspect this has changed.

This bothers me as well. If you look at Raymond Hettinger's "super()
considered super" article, he includes the (correct) advice that
super() needs to be used at every level of the call chain. At the end
of the article, he offers this example to show how "easy" multiple
inheritance can be:

from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first seen'
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)

oc = OrderedCounter('abracadabra')

Which is pretty cool in its simplicity, but here's the rub (which I
have previously noted on this list): OrderedDict doesn't use super.
Counter does, but not cooperatively; it just calls super().__init__()
with no arguments. So the fact that this example works at all is
basically luck.
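For contrast, Hettinger's article also shows what genuinely cooperative classes look like: every __init__ accepts **kwargs, keeps the arguments it owns, and forwards the rest, so the chain works under any MRO. A sketch along those lines (the Root/Shape/ColoredShape names follow the article's example; this is an illustration, not stdlib code):

```python
class Root:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # object.__init__ raises if kwargs remain

class Shape(Root):
    def __init__(self, *, shapename, **kwargs):
        self.shapename = shapename
        super().__init__(**kwargs)

class ColoredShape(Shape):
    def __init__(self, *, color, **kwargs):
        self.color = color
        super().__init__(**kwargs)

cs = ColoredShape(color="red", shapename="circle")
print(cs.color, cs.shapename)  # red circle
```

Neither OrderedDict nor Counter follows this discipline, which is exactly why the OrderedCounter example only works by accident.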
 
M

Mark Janssen

This bothers me as well. If you look at Raymond Hettinger's "super()
considered super" article, he includes the (correct) advice that
super() needs to be used at every level of the call chain. At the end
of the article, he offers this example to show how "easy" multiple
inheritance can be:
[...]
oc = OrderedCounter('abracadabra')

Which is pretty cool in its simplicity, but here's the rub (which I
have previously noted on this list): OrderedDict doesn't use super.
Counter does, but not cooperatively; it just calls super().__init__()
with no arguments. So the fact that this example works at all is
basically luck.

Ah, and here we see the weakness in the object architecture that has
evolved in the past decade (not just in Python, note). It hasn't
really ironed out what end is what. Here's a proposal: the highest,
most "parental", most general object should be in charge, not
subclasses calling specific parent's init methods
(Parent.__init__(myparams)), etc. -- ***THIS IS WHERE WE WENT
WRONG***.

After the "type/class unification", python tried to make the most
generic, most useless class be the parent of *all of them*, but
there's been no use whatsoever in that. It was a good idea in the
beginning, so pure as it was, but it has not panned out in practice.
Sorry...

I'm trying to start a recovery plan at the wikiwikiweb
(http://c2.com/cgi/wiki?WikiWikiWeb) and I don't want to hear any more
smarmy comments about it. The confusion is deeper than Python.
 
C

Chris Angelico

This bothers me as well. If you look at Raymond Hettinger's "super()
considered super" article, he includes the (correct) advice that
super() needs to be used at every level of the call chain. At the end
of the article, he offers this example to show how "easy" multiple
inheritance can be:

from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first seen'
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)

oc = OrderedCounter('abracadabra')

Which is pretty cool in its simplicity, but here's the rub (which I
have previously noted on this list): OrderedDict doesn't use super.
Counter does, but not cooperatively; it just calls super().__init__()
with no arguments. So the fact that this example works at all is
basically luck.

The main problem is getting to the top/end of the call chain. Classic
example is with __init__, but the same problem can also happen with
other calls. Just a crazy theory, but would it be possible to
construct a black-holing object that, for any given method name,
returns a dummy function that ignores its args? (Other forms of
attribute lookup aren't going to be a problem, I think, so this can be
just methods/functions.) Then you just subclass from that all the
time, instead of from object itself, and you should be able to safely
call super's methods with whatever kwargs you haven't yourself
processed. Would that work?

Caveat: I have not done much with MI in Python, so my idea may be
complete balderdash.

ChrisA
 
I

Ian Kelly

The main problem is getting to the top/end of the call chain. Classic
example is with __init__, but the same problem can also happen with
other calls. Just a crazy theory, but would it be possible to
construct a black-holing object that, for any given method name,
returns a dummy function that ignores its args? (Other forms of
attribute lookup aren't going to be a problem, I think, so this can be
just methods/functions.) Then you just subclass from that all the
time, instead of from object itself, and you should be able to safely
call super's methods with whatever kwargs you haven't yourself
processed. Would that work?

class BlackHole(object):
    def __getattr__(self, attr):
        return lambda *args, **kwargs: None

There's no way to restrict it to just methods, because there's no
fundamental difference in Python between methods and other attributes,
and at the point that you're looking it up you have no way of knowing
whether the result is about to be called or not.

And even if there were, this would be an excellent way to hide bugs.
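The bug-hiding is easy to demonstrate: any misspelled attribute on a BlackHole subclass silently yields a do-nothing callable instead of raising AttributeError. A quick sketch (Widget is a hypothetical subclass):

```python
class BlackHole(object):
    def __getattr__(self, attr):
        # __getattr__ only runs when normal lookup fails, so real
        # methods are unaffected -- but every typo is swallowed too.
        return lambda *args, **kwargs: None

class Widget(BlackHole):
    def render(self):
        return "rendered"

w = Widget()
print(w.render())  # 'rendered'
print(w.rendr())   # None -- the typo returns None instead of raising
```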
 
M

Mark Janssen

The main problem is getting to the top/end of the call chain. Classic
example is with __init__, but the same problem can also happen with
other calls. Just a crazy theory, but would it be possible to
construct a black-holing object that, for any given method name,
returns a dummy function that ignores its args? (Other forms of
attribute lookup aren't going to be a problem, I think, so this can be
just methods/functions.) Then you just subclass from that all the
time, instead of from object itself, and you should be able to safely
call super's methods with whatever kwargs you haven't yourself
processed. Would that work?

Caveat: I have not done much with MI in Python, so my idea may be
complete balderdash.

Here's how it *should* be made: the most superest, most badassed
object should take care of its children. New instances should
automatically call up the super chain (and not leave it up to the
subclasses), so that the parent classes can take care of the chil'en.
When something goes wrong the parent class has to look in and see
what's wrong.

In other words, this habit of specializing a Class to make up for the
weaknesses of its parent are THE WRONG WAY. Instead, let the
specialization start at the machine types (where it doesn't get more
specialized), and work UPWARDS.

Let the standard library make the grouping (or collection types) to
point to the standard way of data structuring, and then everything
else becomes little mini-apps making a DataEcosystem.

--mark
 