Attack a sacred Python Cow


alex23

What was "suggested and rejected" on the thread you pointed me to was
not what I suggested. Not even close. Get it, genius?

*sigh* Clearly I don't have better things to do right now than waste
my time.
So why not allow something like this?:
class MyClass:
    def func( , xxx, yyy):
        .xxx = xxx
        local = .yyy
The "self" argument is replaced with nothing, but a comma is used as a
placeholder.

Philip Eby suggested in the thread I linked to:
def .aMethod(arg1, arg2):
    return .otherMethod(arg1*2+arg2)
In other words, 'self' here is uniformly replaced by an empty string.

So you honestly see no similarity between your suggestion and the
latter?

Or do you seriously think that placing an errant comma in the argument
list is somehow substantively different from placing a period before
the function name?

Or are you trying to say that '"self" argument is replaced with
nothing' is in no way the same suggestion as "'self' here is uniformly
replaced by an empty string"?

And do you -really- believe that Guido's rejection reasons of
* "you're proposing to hide a fundamental truth in Python, that
methods are "just" functions whose first argument can be supplied
using syntactic sugar"
* "that's a lot of new syntax for no gain in readability. You just
need to get your head around the fundamental truth"
....somehow don't apply to your suggestion?

Did you even read the thread?
 

Bruno Desthuilliers

Russ P. a écrit :
The issue here has nothing to do with the inner workings of the Python
interpreter.

Oh yes, really ???
The issue is whether an arbitrary name such as "self"
needs to be supplied by the programmer.

Neither I nor the person to whom you replied here (as far as I can
tell) is suggesting that Python adopt the syntax of Java or C++, in
which member data or functions can be accessed the same way as local
variables. Any suggestion otherwise is a red herring.

I didn't talk about the implicit self/this reference in methods, but
about Java/C++/PHP/etc. method signatures vs. Python's *function* signatures.
All I am suggesting is that the programmer have the option of
replacing "self.member" with simply ".member", since the word "self"
is arbitrary and unnecessary.

It is arbitrary, indeed. And FWIW, you can use any other (legal)
arbitrary identifier you like. But if you still assert it is
unnecessary, then either you didn't read the above explanation of how
and why the whole thing works in Python, or you didn't understand it.
Otherwise, everything would work
*EXACTLY* the same as it does now. This would be a shallow syntactical
change with no effect on the inner workings of Python, but it could
significantly unclutter code in many instances.

The fact that you seem to think it would change the inner functioning
of Python just shows that you don't understand the proposal.

<metoo>
The fact that you seem to think it would *not* change the inner working
of Python just shows that you don't understand how Python works.
</metoo>

We're not going very far with such arguments. Sorry but I perfectly
understood the proposition, and explained why this would require changes
to how Python actually implements methods.
 

Bruno Desthuilliers

Russ P. a écrit :
It just occurred to me that Python could allow the ".member" access
regardless of the name supplied in the argument list:

class Whatever:

    def fun(self, cat):
        .cat = cat
        self.cat += 1

This allows the programmer to use ".cat" and "self.cat"
interchangeably. If the programmer intends to use only the ".cat"
form, the first argument becomes arbitrary. Allowing him to use an
empty argument or "." would simply tell the reader of the code that
the ".cat" form will be used exclusively.

When I write a function in which a data member

Python has nothing like "data member" (nor "member function" etc). It
has class attributes and instance attributes, period. *Python is not
C++*. So please stop using C++ terms which have nothing to do with
Python's object model.
will be used several
times, I usually do something like this:

data = self.data

so I can avoid the clutter of repeated use of "self.data". If I could
just use ".data", I could avoid most of the clutter without the extra
line of code renaming the data member.

The main reason to alias an attribute is to avoid the overhead of
attribute lookup.
A bonus is that it becomes
clearer at the point of usage that ".data" is member data rather than
a local variable.

I totally disagree. The dot character is less obvious than the 'self.'
sequence, so your proposition is bad at least wrt/ readability (it's
IMHO bad for other reasons too but I won't continue beating that poor
dead horse...)
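
(FWIW, the lookup-overhead claim is easy to check with timeit; a rough
sketch, with the class and names invented for the demo:)

from timeit import Timer

setup = """
class C(object):
    def __init__(self):
        self.data = [0] * 10
obj = C()
"""
# repeated attribute lookups in a loop...
print Timer("for i in xrange(100): obj.data", setup).timeit(10000)
# ...versus one lookup cached in a local name
print Timer("d = obj.data\nfor i in xrange(100): d", setup).timeit(10000)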
 

Steven D'Aprano

Cutting to the crux of the discussion...

I want something where "if x" will do but a simple explicit test won't.

Explicit tests aren't simple unless you know what type x is. If x could
be of any type, you can't write a simple test. Does x have a length? Is
it a number? Maybe it's a fixed-length circular buffer, and the length is
non-zero even when it's empty? Who knows? How many cases do you need to
consider?

Explicit tests are not necessarily simple for custom classes. Testing for
emptiness could be arbitrarily complex. That's why we have __nonzero__,
so you don't have to fill your code with complex expressions like (say)

if len(x.method()[x.attribute]) > -1

Instead you write it once, in the __nonzero__ method, and never need to
think about it again.

In general, you should write "if x" instead of an explicit test whenever
you care whether or not x is something (true), as opposed to nothing
(false), but you don't care what the type-specific definition of
something vs. nothing actually is.

To put it another way... using "if x" is just a form of duck-typing. Let
the object decide itself whether it is something or nothing.
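
(To make that concrete, a minimal sketch - the Inbox class is invented
for illustration; in Python 2.x the hook is __nonzero__:)

class Inbox(object):
    def __init__(self):
        self.messages = []
    def __nonzero__(self):
        # this type's own definition of something vs. nothing
        return len(self.messages) > 0

for x in (42, "spam", [1, 2], Inbox()):
    if x:   # one test; each object decides for itself
        print "something:", x
    else:
        print "nothing:", x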
 

Steven D'Aprano

Do you realize what an insult that is to everyone else who has posted
here in the past week?

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"
 

member thudfoo

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"

It is difficult to not offend the insult-sensitive.
 

Kay Schluehr

Oh, gosh, that is so clever. What a bunch of crap.


Do you realize what an insult that is to everyone else who has posted
here in the past week?

Nothing glues a community together so well as a common enemy. Or even
better: two enemies, i.e. Perl and Java in Python's case. On the other
hand, some enemies have to be ignored or declared to be not an enemy
( Ruby ), although oneself is clearly an enemy for them. The same
antisymmetry holds for Python and Java: Java is an enemy for Python,
but Python is not worth having as an enemy for Java as long as it can be
ignored. C++ and Java are enemies for each other. The same holds for Java
and C#.
 

castironpi

In auxmeth, self would refer to the B instance. In get_auxclass, it
would refer to the A instance. If you wanted to access the A instance
in auxmeth, you'd have to use

class A:
    def get_auxclass(b, c):
        a_inst = self
        class B:
            def auxmeth(d, e):
                self     # the B instance
                a_inst   # the A instance
        return B

This seems pretty natural to me (innermost scope takes precedence),
and AFAIR this is also how it is done in Java.

True. Net keystrokes are down in this method. Consider this:

class A:
    def get_auxclass(b, c):
        a_inst = self
        class B:
            @staticmethod   #<--- change
            def auxmeth(d, e):
                self     # -NOT- the B instance
                a_inst   # the A instance
        return B

What are the semantics here? An error, "no 'self' allowed in staticmethod-
wrapped functions"? Or the A instance, just like a_inst?

Do you find no advantage to being able to give 'self' different names
in different cases?
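
(For reference, the first version is already expressible in today's
Python with explicit names - a runnable sketch:)

class A(object):
    def get_auxclass(self, b, c):
        a_inst = self              # alias the outer instance explicitly
        class B(object):
            def auxmeth(self, d, e):
                return (self, a_inst)   # the B instance, the A instance
        return B

B = A().get_auxclass(1, 2)
print B().auxmeth(3, 4)   # (<B instance>, <A instance>)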
 

castironpi

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"

No insult was intended. The writer stated that where Java minimizes
bad, Python maximizes good. This is a non-trivial truth, and a non-
trivial observation. Also, clever. I agreed and said so, and
compliments go a long way. Do you?

Arf, arf.
 

Russ P.

Russ P. a écrit :
(snip)

Python has nothing like "data member" (nor "member function" etc). It
has class attributes and instance attributes, period. *Python is not
C++*. So please stop using C++ terms which have nothing to do with
Python's object model.

The main reason to alias an attribute is to avoid the overhead of
attribute lookup.

I totally disagree. The dot character is less obvious than the 'self.'
sequence, so your proposition is bad at least wrt/ readability (it's
IMHO bad for other reasons too but I won't continue beating that poor
dead horse...)

Man, you are one dense dude! Can I give you a bit of personal advice?
I suggest you quit advertising your denseness in public.

Letting "self" (or whatever the first argument was) be implied in
".cat" does absolutely *NOTHING* to change the internal workings of
the Python interpreter. It's a very simple idea that you insist on
making complicated. As I said, I could write a pre-processor myself to
implement it in less than a day.
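
(A naive sketch of such a pre-processor, for illustration only - the
regex would need real tokenization to be robust against strings,
comments and edge cases:)

import re

# rewrite a leading-dot attribute reference like '.cat' to 'self.cat';
# the lookbehind skips 'obj.attr', the lookahead skips literals like 1.5
DOT_REF = re.compile(r'(?<![\w.\)\]])\.(?=[A-Za-z_])')

def preprocess(line):
    return DOT_REF.sub('self.', line)

print preprocess("    .cat = cat")    # ->     self.cat = cat
print preprocess("    local = .yyy")  # ->     local = self.yyy
print preprocess("    x = 1.5")       # unchanged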

As for "dot" being less obvious than "self.", no kidding? Hey, "self."
is less obvious than "extraspecialme.", so why don't you start using
the latter? Has it occurred to you that the difference between 1.000
and 1000 is just a dot? Can you see the difference, Mr. Magoo?

Your posts here are typical. I'm trying to make a suggestion to reduce
the clutter in Python code, and you throw tomatoes mindlessly.

You seem to think that being a "regular" on this newsgroup somehow
gives you special status. I sure wish I had one tenth the time to
spend here that you have. But even if I did, I have far more important
work to do than to "hang out" on comp.lang.python all day every day.
Man, what a waste of a life. Well, I guess it keeps you off the
streets at least.
 

Russ P.

*sigh* Clearly I don't have better things to do right now than waste
my time.
(snip)

Philip Eby suggested in the thread I linked to:
(snip)
So you honestly see no similarity between your suggestion and the
latter?

Or do you seriously think that placing an errant comma in the argument
list is somehow substantively different from placing a period before
the function name?

Yes, in terms of Python syntax, it's completely different.

Forget the empty first argument. As I explained in other posts on this
thread, it is not even needed for my proposal. It was just a
distraction from the main idea.
 

Bruno Desthuilliers

Russ P. a écrit :
(snip)


Man, you are one dense dude! Can I give you a bit of personal advice?
I suggest you quit advertising your denseness in public.
Thanks.

Letting "self" (or whatever the first argument was) be implied in
".cat" does absolutely *NOTHING* to change the internal workings of
the Python interpreter.

You probably have a way better knowledge of Python's compiler and
interpreter than I do to assert such a thing. But if it's so easy,
please provide a working patch.
It's a very simple idea that you insist on
making complicated. As I said, I could write a pre-processor myself to
implement it in less than a day.

Preprocessors are not a solution. Sorry.
As for "dot" being less obvious than "self.", no kidding?

Nope. I'm deadly serious.

(snip no-op argument and name-calling).
Your posts here are typical. I'm trying to make a suggestion to reduce
the clutter in Python code,

s/the clutter/what Russ P. decided to consider as clutter/
and you throw tomatoes mindlessly.

Oh, you can't stand people disagreeing with you, is that it?
You seem to think that being a "regular" on this newsgroup somehow
gives you special status.

Why so? Because I answered your proposition and didn't agree with your
arguments? C'mon, be serious: you have the right to post your
proposition here, and I have the right to post my reaction to it,
period. Grow up, boy.
I sure wish I had one tenth the time to
spend here that you have. But even if I did, I have far more important
work to do than to "hang out" on comp.lang.python all day every day.
Man, what a waste of a life. Well, I guess it keeps you off the
streets at least.

Boy, I don't know who you think you're talking to, but you're obviously
out of luck here. I'm 41, married, our son is now a teenager, I have a
happy social life, quite a lot of work, and no time to waste in the
streets. And FWIW, name-calling won't buy you much here.
 

Russ P.

Actually I don't. I hadn't realised that when a person believes that
somebody has made an especially clever, witty, insightful or fun remark,
that's actually a put-down of all the other people whose remarks weren't
quite as clever, witty, insightful or fun.

But now that I've had this pointed out to me, why, I see insults
everywhere! Tonight, my wife said to me that she liked my new shirt, so I
replied "What's the matter, you think my trousers are ugly?"

That would all be true if the comment that was called "QOTW" was
indeed clever or, for that matter, true. It was neither.

The idea that Python does not try to discourage bad programming
practice is just plain wrong. Ask yourself why Python doesn't allow
assignment within a conditional test ("if x = 0"), for example. Or,
why it doesn't allow "i++" or "++i"? I'll leave it as an exercise for
the reader to give more examples.

Also, the whole idea of using indentation to define the logical
structure of the code is really a way to ensure that the indentation
structure is consistent with the logical structure. Now, is that a way
to "encourage good practice," or is it a way to "discourage bad
practice"? The notion that the two concepts are "very different" (as
the "QOTW" claimed) is just plain nonsense.
 

Carl Banks

Cutting to the crux of the discussion...

Explicit tests aren't simple unless you know what type x is. If x could
be of any type, you can't write a simple test. Does x have a length? Is
it a number? Maybe it's a fixed-length circular length, and the length is
non-zero even when it's empty? Who knows? How many cases do you need to
consider?

Use case, please. I'm asking for code, not arguments. Please give me
a piece of code where you can write "if x" that works but a simple
explicit test won't.

(Note: I'm not asking you to prove that "if len(x)!=0" might fail for
some contrived, poorly-realized class you could write. I already know
you can do that.)


Carl Banks
 

Bruno Desthuilliers

Derek Martin a écrit :
The idea that Python behaves this way is new to me. For example, the
tutorials make no mention of it:

http://docs.python.org/tut/node11.html#SECTION0011300000000000000000

The Python reference manual has very little to say about classes,
indeed. If it's discussed there, it's buried somewhere I could not
easily find it.

Yep, most of the docs didn't really follow Python's evolution, alas. But
it's still documented - I learned all this from the docs.

You'll find more details starting here:

http://www.python.org/doc/newstyle/

and a couple more things in the language spec part of the docs:

http://docs.python.org/ref/descriptors.html
http://docs.python.org/ref/descriptor-invocation.html

Fair enough, but I submit that this distinction is abstruse,

The distinction between class interface (the method call) and class
implementation (the function called by the method) ?
and
poorly documented,

This is certainly true. Patches to the doc are welcome.
and also generally not something the average
application developer should want to or have to care about...

I don't know what an "average application developer" is, but as an
application developer myself, I feel I have to care about the
implementation of my programs, just like I feel I have to care about
knowing enough about the languages I use to use them properly.
it's of
interest primarily to computer scientists and language enthusiasts.
The language should prefer to hide such details from the people using
it.

There I beg to disagree. Transparently exposing most of its object
model is a design choice, and is in great part responsible for Python's
expressive power and flexibility. And all this is really part of the
language - I mean, it's a public API, not an implementation detail.
FWIW, I'm certainly not what you'd call a "computer scientist" (I left
school at 16 and have absolutely no formal education in CS).

Anyway: "the language" (IOW: the language's designer) made a different
choice, and I'm very grateful he did.
Seems not so certain to me... We disagree, even after your careful
explanation.

You're of course (and hopefully) entitled to disagree !-)
See below.


But these two constructs are conceptually DIFFERENT,

Why so ?
whether or not
their implementation is the same or similar. The first says that
some_method is defined within the name space of some_object.

The first says that you're sending the message "some_method" to
some_object. Or, to express it in Python terminology, that you're
looking up the name "some_method" on some_object, and trying to call the
object returned by the attribute lookup mechanism, whatever that object
is (function, method, class or any other callable).
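
(In code, that's just an attribute lookup followed by a call on whatever
comes back - a trivial sketch with invented names:)

class Greeter(object):            # stand-in for some_object's class
    def some_method(self):
        return "hello"

some_object = Greeter()
method = getattr(some_object, "some_method")   # look up the name
result = method()                              # call what came back
print result                                   # hello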

Now saying that it implies that "some_method is defined within the name
space of some_object" depends on the definitions of 'defined', 'within'
and 'namespace' (more on this below).
The
second says that some_object is a parameter of some_function...

Yes. It also says that some_function knows enough about some_object to
accept it as a parameter (or at least that the developer who passed
some_object to some_function thought / expected it would be the case).

You know, the dotted notation ("obj.attrib") is, well, just a notation.
It's by no means necessary to OO. You could have a perfectly valid object
system where the base notation is "some_message(some_object)" instead of
"some_object.some_message" - and FWIW, in Common Lisp - which BTW
has one of the richest object systems around - the syntax for a method
call is the same as the syntax for a function call, IOW
"(func_or_method_name object arg1 arg2 argN)".

Namespace != parameter!!!!!!!!!

Function parameters are part of the namespace of the function body.

Please don't get me wrong : I'm not saying your point is moot, just
suggesting another possible way to look at the whole thing.
To many people previously familiar with OO programming in other
languages (not just Java or C++), but not intimately familiar with
Python's implementation details,

It's actually not an implementation detail - it's part of the language spec.
the first also implies that
some_method is inherently part of some_object,

There again, I disagree. To me, it implies that some_object understands
the 'some_method' message. Which is not the same thing. Ok, here's a
possible implementation:

# foo.py

def foo(obj):
    return obj.__class__.__name__


# bar.py
from foo import foo

class Meta(type):
    def __new__(meta, name, bases, attribs):
        cls = type.__new__(meta, name, bases, attribs)
        old_getattr = getattr(cls, '__getattr__', None)

        def _getattr(self, attrname):
            if attrname == 'some_method':
                return lambda self=self: foo(self)
            elif callable(old_getattr):
                return old_getattr(self, attrname)
            else:
                raise AttributeError("blah blah")

        cls.__getattr__ = _getattr
        return cls

# baaz.py
import bar

class Quux(object):
    __metaclass__ = bar.Meta


class Baaz(object):
    def __init__(self):
        self._nix = Quux()
    def __getattr__(self, name):
        return getattr(self._nix, name)

# main.py
import baaz
some_object = baaz.Baaz()
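# Presumably the demo continued with the call itself. Tracing the above:
# Baaz.__getattr__ delegates to the Quux instance, whose metaclass-injected
# __getattr__ returns the lambda, which calls foo() on the Quux instance:
print some_object.some_method()   # prints: Quux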



Is 'some_method' "inherently part of" some_object here ? There isn't
even an object named 'some_method' anywhere in the above code...

(and no, don't tell me, I know: it's a very convoluted way to do a
simple thing - but that's not that far from things you could find in
real-life library code for not-so-simple things).
in which case
explicitly providing a parameter to pass in the object naturally seems
kind of crazy. The method can and should have implicit knowledge of
what object it has been made a part.

The method does. Not the function.

Here's a possible (and incomplete) Python implementation of the method type:

class Method(object):
    def __init__(self, func, instance, cls):
        self.im_func = func
        self.im_self = instance
        self.im_class = cls
    def __call__(self, *args, **kw):
        if self.im_self:
            args = (self.im_self, ) + args
            return self.im_func(*args, **kw)
        elif isinstance(args[0], self.im_class):
            return self.im_func(*args, **kw)
        else:
            raise TypeError("blah blah")
Part of the point of using
objects is that they do have special knowledge of themselves...

s/do/seem to/

they
(generally) manipulate data that's part of the object. Conceptually,
the idea that an object's methods can be defined outside of the scope
of the object,
s/object/class/

and need to be told what object they are part
of/operating on is somewhat nonsensical...

That's still how other OOPLs work, you know. But they hide the whole
damn thing and close the box, while Python exposes it all. And I can
tell you from experience that it's a sound idea - this gives you full
control over your objects' behaviour.

wrt/ functions being defined outside classes then used as part of the
implementation of a class, I fail to see where the problem is - but I
surely see how it can help avoid quite a lot of boilerplate when
used wisely.
I can see now the distinction, but please pardon my prior ignorance,
since the documentation says it IS the case, as I pointed out earlier.

Yep. Part of the problem is that OO terminology doesn't have any clear,
unambiguous definition - so terms like 'method' can be used with
somewhat different meanings. Most of Python's docs use the term 'method'
for functions defined within class statements - and FWIW, that's usually
what I do too.
Furthermore, as you described, defining the function within the scope
of a class binds a name to the function and then makes it a method of
the class. Once that happens, *the function has become a method*.

The function doesn't "become" a method - its __get__ method returns a
method object, which itself wraps the instance and the function (cf. the
above snippet). What gets stored in the class __dict__ is really the
function:

>>> class Foo(object):
...     def bar(self):
...         print "bar(%s)" % self
...
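(A quick interactive check of that claim - type names as in Python 2.x:)

>>> type(Foo.__dict__['bar'])    # the raw entry in the class __dict__
<type 'function'>
>>> type(Foo.bar)                # attribute lookup wraps it via __get__
<type 'instancemethod'>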

Whether you bind the name within or outside of the class statement
doesn't change anything.
To be perfectly honest, the idea that an object method can be defined
outside the scope of an object

I assume you meant "outside the class statement's body" ?
(i.e. where the code has no reason to
have any knowledge of the object)

Any code "using" an object needs to have at least some knowledge of
this object, you know. Or do you mean that one should not pass messages
to any other object than self ? This seems like a pretty severe
restriction to me - in fact, I fail to see how one could write any code
that way !-)

seems kind of gross to me... another
Python wart.

Nope. A *great* strength.
One which could occasionally be useful I suppose,

More than "occasionally". Lots of frameworks use that (usually in
conjunction with metaclasses) to inject attributes (usually functions)
into your objects. Have a look at how most Python ORMs work.
but a
wart nonetheless.

Your opinion. But then you probably won't like Python. May I suggest Ruby
instead - it has a much more canonical object model ?-)

Err, no, wait - while dynamically adding attributes / methods to objects
/ classes is possible but not that common in Python (outside frameworks
and ORMs at least), it's close to a national sport in Ruby. Nope, you
won't like Ruby either...
This seems inherently not object-oriented at all,

Because things happen outside a class statement ? Remember, it's
*object* oriented, not class oriented. Classes are not part of the base
definition of OO, and just aren't necessary to OO (have a look at Self,
Io, or JavaScript).

As far as I'm concerned, "object oriented" is defined by

1/ an object has an identity, a state and a behaviour
2/ objects communicate by sending messages to each others

And that's all for the OO theory - everything else is (more or less)
language-specific. As you can see, there's no mention of "class" here,
and not even of "method". All you have is identity, state, behaviour and
messages - IOW, high level concepts that can be (and indeed are)
implemented in many different ways.
for reasons I've already stated. It also strikes me as a feature
designed to encourage bad programming practices.

For which definition of "bad" ?

Your views on what OO is are IMHO very restricted - I'd say, restricted
to what the C++/Java/UML guys defined as "being OO".

Anyway: you'd be surprised by the self (no pun) discipline of most
Python programmers. Python lets you do all kinds of crazy things, but
almost no one seems to go overboard.

FWIW, if you find the idea of a "method" defined outside the class
statement shocking, what about rebinding the class of an object at
runtime ? You may not know it, but the class of an object is just
another attribute, and nothing prevents you from rebinding it to any
other object whenever you want !-)
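
(A minimal illustration, with toy classes invented for the demo:)

class A(object):
    def who(self):
        return "I'm an A"

class B(object):
    def who(self):
        return "I'm a B"

obj = A()
print obj.who()       # I'm an A
obj.__class__ = B     # the class is just another (rebindable) attribute
print obj.who()       # I'm a B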
Even discounting that, if Python had a keyword which referenced the
object of which a given piece of code was a part, e.g. self, then a
function written to be an object method could use this keyword *even
if it is defined outside of the scope of a class*. The self keyword,
once the function was bound to an object, would automatically refer to
the correct object. If the function were called outside of the
context of an object, then referencing self would result in an
exception.

This could probably be implemented, but it would introduce additional
complexity. As I already said somewhere in this thread, as far as I'm
concerned, as long as it doesn't break any existing code and doesn't
impose restrictions on what is actually possible, I wouldn't care that
much - but I think it would be mostly a waste of time (IMHO etc).
You'll probably argue that this takes away your ability to define a
function and subsequently use it both as a stand-alone function and
also as a method.

I could. FWIW, I've almost never had a need for such a construction yet,
and I don't remember having seen such a thing anywhere.

But anyway, to avoid breaking code, the modification would still have to
take into account functions using an explicit self (or cls) in the
function's signature. I'm afraid this would end up making a mess of
something that was originally simple.
I'm OK with that -- while it might occasionally
be useful, I think if you feel the need to do this, it probably means
your program design is wrong/bad. More than likely what you really
needed was to define a class that had the function as a method, and
another class (or several) that inherits from the first.

Not designing things the CanonicalUMLJavaMainstreamWay(tm) doesn't mean
the design is wrong. Also, there are some problems that just can't be
solved that way - or are overly (and uselessly) tedious to solve that way.

Talking about design, you may not have noticed yet, but quite a lot of
the OO design patterns are mostly workarounds for the lack of flexibility
in Java and C++ (hint: how would you implement the decorator pattern in
Python ?). And while we're at it, the GoF (IMHO one of the best books on
OO design) insists heavily on composition/delegation often being a way
better design than inheritance (which FWIW is what Python does, with
method calls being delegated to functions).
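
(A minimal sketch of an answer to that hint, with invented names: in
Python the decorator pattern often boils down to plain delegation via
__getattr__, no interface boilerplate required:)

class LoggingDecorator(object):
    # wraps any object, adds behaviour, delegates everything else
    def __init__(self, wrapped):
        self._wrapped = wrapped
    def __getattr__(self, name):
        print "accessing %r" % name
        return getattr(self._wrapped, name)

items = LoggingDecorator([1, 2, 3])
print items.count(2)   # prints "accessing 'count'", then 1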
Thus methods are not really methods at all,

Please show me where you get access to the object "within itself" in any
other OO language. Methods (for the usual acceptation of the term) are
*not* "within" the instances. And they access the instance through a
reference that gets injected into the code one way or another.
Most languages make this implicit, Python makes it explicit. So what ?
which would seem to
suggest that Python's OO model is inherently broken (albeit by design,
and perhaps occasionally to good effect).

Here again, you are being overly dogmatic IMHO. Being explicit, or just
being different from mainstream, is not the same as being "broken".
It does indeed -- it does more than imply. It states outright that
the function is defined within the namespace of that object,
s/object/class/

and as
such that it is inherently part of that object.
s/object/class/

So why should it need
to be explicitly told about the object of which it is already a part?

Because it's the simplest thing to do ?-)

More seriously, methods are usually (in Python - always in most OOPLs)
part of a class, not of its instances - IOW, the same code is shared by
all instances of the same class. And the language implementation needs to
make the instance accessible to the method code one way or another.

From this POV, Python doesn't behave differently - except that it
chose to expose the fact and make it part of the API.
It further does indeed imply, to hordes of programmers experienced
with OO programming in other languages, that as a member, property,
attribute, or what ever you care to call it, of the object, it should
have special knowledge about the object of which it is a part.

class Foo(object):
    some_dict = dict()

    def __init__(self, some_int, some_list, some_string):
        self.int = some_int
        self.list = some_list
        self.string = some_string

foo = Foo(42, range(3), "yadda")

Where do you see that 42, range(3) and "yadda" have any knowledge of foo?

Translate this to any other OOPLs and tell me if the answer is different.

No, that's not the reason. I don't especially like Java, nor do I use
it.

Sorry, I usually use 'Java' as a generic branding for the whole static
struct-based class-oriented mindset - UML / C++ / Java / C# etc - by
opposition to dynamic OOPLs.

Anyway, in this particular case, it was not even what I meant, so please
accept my apologies and s/Java/canonical/ in the above remark.
The reason is to make the object model behave more intuitively.

I understand that having to specify the target object can seem
disturbing, at least at first. Now once you know why, I personally find
it more "intuitive" to not have different constructs for functions,
methods, and functions-to-be-used-as-methods. I write functions, period.
Clearly a lot of people find that it is less simple TO USE.

I would say it requires a bit more typing. Does that make it less simple
to use ? I'm not sure. Am I biased here ? After many years of Python
programming, I don't even notice typing 'self' in the argument list any
more, so very probably: yes.
The point
of computers is to make hard things easier... if there is a task that
is annoying, or tedious, or repetitive, it should be done by code, not
humans.

Which BTW is why I really enjoy having the possibility to modify a class
at runtime - believe me, it can save quite a lot of boilerplate...
This is something that Python should do automatically for its
users.

This point is debatable, indeed. Still, the only serious reason I see
here is to make Python look more like most mainstream OOPLs, and while
it may be a good idea - I'm not making any judgement on this - I can
happily live with the current state of things as far as I'm concerned.
Anyway, the decision doesn't belong to me.
 
B

Bruno Desthuilliers

Nikolaus Rath a écrit :
(snip)



That's true. But out of curiosity: why is changing the interpreter such
a bad thing? (If we suppose for now that the change itself is a good
idea.)

Because it would very seriously break a *lot* of code ?
 

Anders J. Munch

Steven said:
Explicit tests aren't simple unless you know what type x is.

If you don't even know a duck-type for x, you have no business invoking any
methods on that object.

If you do know a duck-type for x, then you also know which explicit test to perform.
Explicit tests are not necessarily simple for custom classes. Testing for
emptiness could be arbitrarily complex. That's why we have __nonzero__,
so you don't have to fill your code with complex expressions like (say)

if len(x.method()[x.attribute]) > -1

Instead you write it once, in the __nonzero__ method, and never need to
think about it again.

Okay, so you have this interesting object property that you often need to test
for, so you wrap the code for the test up in a method, because that way you only
need to write the complex formula once. I'm with you so far. But then you
decide to name the method "__nonzero__", instead of some nice descriptive name?
What's up with that?

This is the kind of code I would write:
class C:
    def attribute_is_nonnegative(self):
        return len(self.method()[self.attribute]) > -1
    ...

c = get_a_C()
if c.attribute_is_nonnegative():
    ...

Now suppose you were reading these last few lines and got to wondering if
get_a_C might ever return None.

The answer is obviously no. get_a_C must always return a C object or something
compatible. If not, it's a bug and an AttributeError will ensue. The code
tells you that. By giving the method a name the intent of the test is perfectly
documented.

In comparison, I gather you would write something like this:
class C:
    def __nonzero__(self):
        return len(self.method()[self.attribute]) > -1
    ...

c = get_a_C()
if c:
    ...

Again, the question is, can get_a_C return None? Well that's hard to say
really. It could be that "if c" is intended to test for None. Or it could be
intended to call C.__nonzero__. Or it could be cleverly intended to test
not-None and C.__nonzero__ at the same time. It may be impossible to discern
the writer's true intent.

Even if we find out that C.__nonzero__ is called, what was it that __nonzero__
did again? Did it test for the queue being non-empty? Did it test for the
queue being not-full? Did it test whether the consumer thread is running?
Did it test whether there are any threads blocked on the queue? Better dig up
the class C documentation and find out, because there is no single obvious
interpretation of what it means for an object to evaluate to true.

"if x" is simple to type, but not so simple to read. "if x.namedPredicate()" is
harder to type, but easier to read. I prefer the latter because code is read
more often than it is written.

regards,
Anders
 

Steven D'Aprano

Use case, please. I'm asking for code, not arguments. Please give me a
piece of code where you can write "if x" that works but a simple
explicit test won't.

I gave you a piece of code, actual code from one of my own projects. If
you wouldn't accept that evidence then, why would you accept it now?

It isn't that explicit tests will fail, it is that explicit tests are
more work for no benefit. You keep talking about "simple explicit tests",
but it seems to me that you're missing something absolutely fundamental:
"if x" is simpler than "if x!=0" and significantly simpler than "if len
(x)!=0". Even if you discount the evidence of the character lengths (4
versus 7 versus 12) just use the dis module to see what those expressions
are compiled into. Or use timeit to see the execution speed.
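
(A rough way to check both claims - output elided, and absolute numbers
will vary by machine and version:)

import dis, timeit

# compare what the three expressions compile into
dis.dis(compile("if x: pass", "<demo>", "exec"))
dis.dis(compile("if x != 0: pass", "<demo>", "exec"))
dis.dis(compile("if len(x) != 0: pass", "<demo>", "exec"))

# and how fast they run
print timeit.Timer("if x: pass", "x = [1]").timeit()
print timeit.Timer("if x != 0: pass", "x = [1]").timeit()
print timeit.Timer("if len(x) != 0: pass", "x = [1]").timeit()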

So I'm not really sure what your point is. Yes, for vast amounts of code,
there's no *need* to write "if x". If x is always a number, you can
replace it with "if x != 0" and it will still work. Big deal. I never
said differently. But if I don't care what type x is, why should I write
code that cares what type x is?

All you're doing is writing an expression that does more work than it
needs to. Although the amount of work is trivial for built-ins, it's
still more than necessary. But provided x is always the right sort of
duck, your code will work. It will be longer, more verbose, slower, fail
in unexpected ways if x is an unexpected type, and it goes against the
spirit of duck-typing, but it will work.
 

Steven D'Aprano

If you don't even know a duck-type for x, you have no business invoking
any methods on that object.

Have you tried to make "if x" fail?

Pull open an interactive interpreter session and try. You might learn
something.
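
(One such session, output abbreviated - the class is a deliberately
empty stand-in:)

>>> class Silent(object):
...     pass
...
>>> x = Silent()
>>> len(x) != 0          # the "simple explicit test" blows up
Traceback (most recent call last):
  ...
TypeError: object of type 'Silent' has no len()
>>> bool(x)              # but "if x" works: objects are true by default
True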

If you do know a duck-type for x, then you also know which explicit test
to perform.
Explicit tests are not necessarily simple for custom classes. Testing
for emptiness could be arbitrarily complex. That's why we have
__nonzero__, so you don't have to fill your code with complex
expressions like (say)

if len(x.method()[x.attribute]) > -1

Instead you write it once, in the __nonzero__ method, and never need to
think about it again.

Okay, so you have this interesting object property that you often need
to test for, so you wrap the code for the test up in a method, because
that way you only need to write the complex formula once. I'm with you
so far. But then you decide to name the method "__nonzero__", instead
of some nice descriptive name?
What's up with that?

Dude. Dude. Just... learn some Python before you embarrass yourself
further.

http://www.python.org/doc/ref/customization.html
 

Steven D'Aprano

Dude. Dude. Just... learn some Python before you embarrass yourself
further.


I'm sorry Anders, that was a needlessly harsh thing for me to say. I
apologize for the unpleasant tone.

Still, __nonzero__ is a fundamental part of Python's behaviour. You
should learn about it.
 
