Why return None?


Antoon Pardon

On 2004-08-27, Alex Martelli said:
No: "but special cases aren't special enough to break the rules". No
rule was broken by introducing += and friends.



You think one way, GvR thinks another, and in Python GvR wins. Go
design your own language where what you think matters.

Why the fuss over the chosen decorator syntax if GvR
wins anyhow? Why don't you go tell all those people
arguing decorator syntax that they should design their own
language where what they think matters?

If you think I shouldn't voice an opinion here because GvR
wins anyhow and my opinion won't matter, fine. Just say so
from the beginning. Don't start with pretending that you
have good arguments that support the status quo because
all that matters is that GvR prefers it this way.
All good arguments in support are just a coincidence in
that case.
I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1

You're repeating (necessarily) the indexing operation, which may be
unboundedly costly for a user-coded type.

That repetition is just Python's inability to optimise.
Here, no operation is needlessly getting repeated.

Yes there is: the operation to find lst in the local dictionary.
Although it won't be unboundedly costly.
If you don't see much difference between forcing people to code in a way
that repeats potentially-costly operations, and forcing a style that
doesn't imply such repetitions, I wonder how your language will look.

I'm sure that if I ever find the time to do so, you won't like it.
Still, I'm much happier thinking of you busy designing your own
wonderful language, than wasting your time and ours here, busy
criticizing what you cannot change.

If you don't want to waste time, just state from the beginning
that this is how GvR wanted it and people won't be able to
change it.

You shouldn't start by arguing why the language as it is is as
it should be, because that will just prolong the discussion, as
people will give counter-arguments for what they think would
be better. If you know that, should people not be persuaded
by your arguments, you will resort to GvR's authority and declare
the arguments a waste of time, you are better off putting
GvR's authority, which can't be questioned, on the table as soon
as possible.
 

Antoon Pardon

On 2004-08-27, Alex Martelli said:
Antoon Pardon said:
And what about

a += b vs a.extend(b)

I can go on repeating "in the general case [these constructs] get very
different effects" just as long as you can keep proposing, as if they
might be equivalent, constructs that just aren't so in the general case.

Do I really need to point out that a.extend(b) doesn't work for tuples
and strings, while a+=b works as polymorphically as feasible on all
these types?

That extend doesn't work for strings and tuples is irrelevant:
for those types that have an extend, a.extend(b) is equivalent
to a+=b.

In other words there is no reason to have an extend member for
lists.

Furthermore, a+=b doesn't work as polymorphically as feasible, because
the result of

a=c
a+=b

is different depending on the type of c.
It should be pretty obvious, I think. So, if you want to
get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
is clearly the way to go -- if you want para-polymorphic behavior in
those cases, a+=b. Isn't it obvious, too?

No it isn't obvious. No design is so stupid that you can't find
an example for it's use. That you have found a specific use
here doesn't say anything.
>>> c=a=range(3)
>>> b=range(2)
>>> a+=b
>>> c
[0, 1, 2, 0, 1]

versus:

>>> c=a=range(3)
>>> b=range(2)
>>> a=a+b
>>> c
[0, 1, 2]

I wouldn't say you get different effects in *general*. You get the
same effect if you use numbers or tuples or any other immutable
object.

a+=b is defined to be: identical to a=a+b for immutable objects being
bound to name 'a'; but not necessarily so for mutable objects -- mutable
types get a chance to define __iadd__ and gain efficiency through
in-place mutation for a+=b, while the semantics of a=a+b strictly forbid
in-place mutation. *IN GENERAL*, the effects of a+=b and a=a+b may
differ, though in specific cases ('a' being immutable, or of a mutable
type which strangely chooses to define __add__ but not __iadd__) they
may be identical.
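
(For illustration, a minimal sketch of that distinction; the class names
Tally and Bag are invented here. A type that defines only __add__ behaves
like an immutable under +=, while one that also defines __iadd__ mutates
in place:)

class Tally:
    # no __iadd__: a += b falls back to a = a + b, creating a new object
    def __init__(self, values):
        self.values = list(values)
    def __add__(self, other):
        return Tally(self.values + other.values)

class Bag:
    # __iadd__ mutates in place and returns self
    def __init__(self, values):
        self.values = list(values)
    def __add__(self, other):
        return Bag(self.values + other.values)
    def __iadd__(self, other):
        self.values.extend(other.values)
        return self

t1 = t2 = Tally([1, 2])
t1 += Tally([3])
print(t2.values)    # [1, 2] -- t1 was rebound to a new object

b1 = b2 = Bag([1, 2])
b1 += Bag([3])
print(b2.values)    # [1, 2, 3] -- both names still refer to one mutated object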

which makes them para-polymorphic infeasable.
Like for a+b vs b+a: in general they may differ, but
they won't differ if the types involved just happen to have commutative
addition, or if a and b are equal or identical objects, i.e., in various
special cases.

"You get different effects *in general*" does not rule out that there
may be special cases (immutable types for one issue,

If those specific cases can be half of the total cases I wouldn't
call the non-specific cases *in general*.
commutative-addition types for another, etc, etc) in which the effects
do not differ. Indeed, if it was "always" true that you got different
effects, it would be superfluous to add that "in general" qualifier.
Therefore, I find your assertion that you "wouldn't say you get
different effects in *general*" based on finding special cases in which
the effects do not differ to be absurd and unsupportable.



In a hypothetical language without any + operator, but with both unary
and binary - operators, the one "obvious" way to add two numbers a and b
might indeed be to code: a - (-b). So what? In a language WITH a
normal binary + operator, 'a - (-b)' is nothing like 'an obvious way'.

The point is that there is a difference between what is obvious in
general and what is obvious within a certain tradition. If Python
would limit itself to only one obvious way for those things that
are obvious in general, that would be one thing.

But here you are defending one thing that is only obvious through
tradition, by pointing out that something that hasn't had the time
to become a tradition isn't obvious.

Personally I don't find the use of "+" as a concat operator obvious.
There are types for which both addition and concatenation can be
a useful operator. Using the same symbol for both will make
it just that much harder to implement such types and as a result
there is no obvious interface for such types.

I'm sure you're a better language designer than GvR, since you're
qualified to critique, not just a specific design decision, but one of
the pillars on which he based many of the design decisions that together
made Python.

Therefore, I earnestly urge you to stop wasting your time critiquing an
inferiorly-designed language and go off and design your own, which will
no doubt be immensely superior. Good bye; don't slam the door on the
way out, please.


This is exactly Perl's philosophy, of course.

No it isn't. Perl offers you choice in a number of situations
where a number of the alternatives don't offer you anything useful,
except a way to do things differently and eliminate a few characters.
If a language should not eliminate possibilities because its designer
does not like those possibilities, indeed if it's BAD for a language
designer to omit from his language the possibilities he dislikes, what
else should a language designer do then, except include every
possibility that somebody somewhere MIGHT like?

So if you think it is good for a language designer to omit what
he dislikes. Do you think it is equally good for a language
designer to add just because he likes it. And if you think so,
do you think the earlier versions of perl, where we can think
the language was still mainly driven by what Larry Wall liked,
was a good language.

I can understand that a designer has to make choices, but
if the designer can allow a choice and has no other arguments
to limit that choice than that he doesn't like one alternative
then that is IMO a bad design decision.
What I have heard about the decorators is that one of the
arguments in favor of decorators is that you have to
give the name of the function only once, where traditionally
you have to repeat the function name and this can introduce
errors.

But the same argument goes for allowing method chaining.
Without method chaining you have to repeat the name of
the object which can introduce errors.
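
(For illustration, a hypothetical fluent wrapper -- the class Chain is
invented here -- whose mutators return self instead of None, so the
object's name is written only once:)

class Chain:
    # hypothetical wrapper: each mutator returns self to allow chaining
    def __init__(self, items):
        self.items = list(items)
    def sort(self):
        self.items.sort()
        return self
    def reverse(self):
        self.items.reverse()
        return self

result = Chain([3, 1, 2]).sort().reverse().items
print(result)    # [3, 2, 1]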

I've heard that argument in favour of augmented assignment operators
such as += -- and there it makes sense, since the item you're operating
on has unbounded complexity... mydict[foo].bar[23].zepp += 1 may indeed
be better than repeating that horrid LHS (although "Demeter's Law"
suggests that such multi-dotted usage is a bad idea in itself, one
doesn't always structure code with proper assignment of responsibilities
to objects and so forth...).

For a plain name, particularly one which is just a local variable and
therefore you can choose to be as simple as you wish, the argument makes
no sense to me. If I need to call several operations on an object I'm
quite likely to give that object a 'temporary alias' in a local name
anyway, of course:
target = mydict[foo].bar[23].zepp
target.pop(xu1)
target.sort()
target.pop(xu3)
target.reverse()
target.pop(xu7)

I find this a questionable practice. What if you need to make the list
empty at some point? The most obvious way to do so after a number of
such statements would be:

target = []

But of course that won't work.
Ridiculous. Keep around a+b, which for all we know here might be a
million-items list!, by having a name bound to it, without ANY current
need for that object, because some FUTURE version of your program may
have different specs?!
If specs change, refactoring the program written in the sensible way,
the way that doesn't keep memory occupied to no good purpose, won't be
any harder than refactoring the program that wastes megabytes by always
keeping all intermediate results around "just in case".

One could argue that this is again just a deficiency of Python's
implementation, which can't optimise the code in such a way that
unused variables will have their memory released.
When more than one person cooperates in writing a program, the group
will work much better if there is no "code ownership" -- the lack of
individualized, quirky style variations helps a lot. It's not impossible
to 'cope with differences' in coding style within a team, but it's just
one more roadblock erected to no good purpose. A language can help the
team reach reasonably uniform coding style (by trying to avoid offering
gratuitous variation which serves no real purpose), or it can hinder the
team in that same goal (by showering gratuitous variation on them).

If a language goes so far as to make a particular coding impossible
while that would have been the preferred coding style for most of
the project members then such a limitation can hinder the decision
to agree upon a certain style instead of helping.

I also think this attitude is appalling. Python is for consenting
adults I hear. But that doesn't seem to apply here, as python
seems to want to enforce a certain coding style instead of
letting consenting adults work it out among themselves.

Great, so, I repeat: go away and design your language, one that WILL
impress you with its design. Here, you're just wasting your precious
time and energy, as well of course as ours.

That you waste yours, is entirely your choice, nobody forces your hand
to reply to me.
Practicality beats purity: needing to polymorphically concatenate two
sequences of any kind, without caring if one gets modified or not, is a
reasonably frequent need and is quite well satisfied by += for example.

It isn't. Either you know what types the variables are and then
using a different operator depending on the type is no big deal,
or you don't know what type the variables are and then not caring
if one gets modified or not, is a disaster in waiting.
 

Alex Martelli

Antoon Pardon said:
Why the fuss over the chosen decorator syntax if GvR
wins anyhow? Why don't you go tell all those people
arguing decorator syntax that they should design their own
language where what they think matters?

Because in this case, specifically, the decision is explicitly
considered "not definitive yet". Alpha versions get released
_specifically_ to get community feedback, so a controversial innovation
gets a chance to be changed if the community can so persuade Guido.

I do not know if you're really unable to perceive this obvious
difference, or are just trying to falsely convince others that you're as
thick as that; in this like in many other cases in this thread, if
you're pretending, then you're doing a good job of it, because you're
quite close to convincing me that your perception _is_ truly as impeded
as it would appear to be.
If you think I shouldn't voice an opinion here because GvR
wins anyhow and my opinion won't matter, fine. Just say so
from the beginning. Don't start with pretending that you
have good arguments that support the status quo because
all that matters is that GvR prefers it this way.
All good arguments in support are just a coincidence in
that case.

I do think, and I have indeed "stated so from the beginning" (many years
ago), that it's generally a waste of time and energy for people to come
charging here criticizing Python's general design and demanding changes
that won't happen anyway. There are forums essentially devoted to
debates and flamewars independently of their uselessness, and this
newsgroup is not one of them.

People with normal levels of perceptiveness can see the difference
between such useless rants, on one side, and, on the other, several
potentially useful kinds of discourse, that I, speaking personally, do
indeed welcome. Trying to understand the design rationale for some
aspect of the language is just fine, for example -- and that's because
trying to understand any complicated artefact X is often well served by
efforts to build a mental model of how X came to be as it is, quite
apart from any interest in _changing_ X. You may not like the arguments
I present, but I'm not just "pretending" that they're good, as you
accuse me of doing: many people like them, as you can confirm for
yourself by studying the google groups archives of my posts and of the
responses to them over the years, checking out the reviews of my books,
and so on. If you just don't like reading my prose, hey, fine, many
others don't particularly care for it either (including Guido,
mostly;-); I'll be content with being helpful to, and appreciated by,
that substantial contingent of people who do like my writing.

And so, inevitably, each and every time I return to c.l.py, I find some
people who might be engaging in either kind of post -- the useful
"trying to understand" kind, or the useless "criticizing what you cannot
change" one -- and others who are clearly just flaming. And inevitably
I end up repeating once again all the (IMHO) good arguments which (IMHO)
show most criticisms to be badly conceived and most design decisions in
Python to be consistent, useful, helpful and well-founded. Why?
Because this is a _public_ forum, with many more readers than writers
for most any thread. If these were private exchanges, I'd happily set
my mail server to bounce any mail from people I know won't say anything
useful or insightful, and good riddance. But since it's a public forum,
there are likely to be readers out there who ARE honestly striving to
understand, and if they see unanswered criticisms they may not have
enough Python knowledge to see by themselves the obvious answers to
those criticisms -- so, far too often, I provide those answers, as a
service to those readers out there. I would much rather spend this time
and energy doing something that's more fun and more useful, but it's
psychologically difficult for me to see some situation that can
obviously use some help on my part, and do nothing at all about it.

Maybe one day I'll be psychologically able to use killfiles more
consistently, whenever I notice some poster that I can reliably classify
as a useless flamer, and let readers of that poster's tripe watch out
for themselves. But still I find it less painful to just drop out of
c.l.py altogether when I once again realize I just can't afford the time
to show why every flawed analysis in the world IS flawed, why every
false assertion in the world IS false, and so on -- and further realize
that there will never be any shortage of people eager to post flawed
analysis, false assertions, and so on, to any public forum.

I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1

You're repeating (necessarily) the indexing operation, which may be
unboundedly costly for a user-coded type.

That repetition is just Python's inability to optimise.

There being, in general, no necessary correlation whatsoever between the
computations performed by __getitem__ and those performed by
__setitem__, the repetition of the indexing operation is (in general)
indeed inevitable here. Python does choose not to hoist constant
subexpressions even in other cases, but here, unless one changed
semantics very deeply and backwards-incompatibly, there's nothing to
hoist. ((note carefully that I'm not claiming v[t] += 1 is inherently
different...))
Yes there is: the operation to find lst in the local dictionary.
Although it won't be unboundedly costly.

If you're thinking of a virtual machine based on a stack (which happens
to be the case in the current Python), you can indeed imagine two
repeated elementary operations, in current bytecode LOAD_FAST for name
'lst' and POP_TOP to ignore the result of each call -- they're extremely
fast but do indeed get repeated. But that's an implementation detail
based on using a stack-based virtual machine, and therefore irrelevant
in terms of judging the _language_ (as opposed to its implementations).
Using a register-based virtual machine, name 'lst' could obviously be
left in a register after the first look-up -- no repetition at all is
_inherently_ made necessary by this aspect of language design.
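
(The stack-machine detail is easy to inspect with the standard library's
dis module; the exact opcode names vary across CPython versions, but the
repeated load of the name and the POP_TOP discarding each call's result
are plainly visible:)

import dis

def f(lst, b):
    lst.append(b)
    lst.append(b)

dis.dis(f)    # shows 'lst' being loaded, and a result popped, once per call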

If you don't want to waste time, just state from the beginning
that this is how GvR wanted it and people won't be able to
change it.

You shouldn't start by arguing why the language as it is is as
it should be, because that will just prolong the discussion, as
people will give counter-arguments for what they think would
be better. If you know that, should people not be persuaded
by your arguments, you will resort to GvR's authority and declare
the arguments a waste of time, you are better off putting
GvR's authority, which can't be questioned, on the table as soon
as possible.

On the other hand, _reasonable_ readers (and there are some, as shown by
the various feedback on my work that I have referred to in previous
parts of this post) can benefit by a presentation of the _excellent_
reasons underlying Python's design, and such readers would be badly
served if the flawed arguments and false assertions presented to justify
some criticisms of Python were left unanswered.


Alex
 

Antoon Pardon

On 2004-08-27, Alex Martelli said:
I do think, and I have indeed "stated so from the beginning" (many years
ago), that it's generally a waste of time and energy for people to come
charging here criticizing Python's general design and demanding changes
that won't happen anyway. There are forums essentially devoted to
debates and flamewars independently of their uselessness, and this
newsgroup is not one of them.

I don't demand changes. I have my criticisms of the language and
think that some arguments used to defend the language are not
well founded, and when I see one of those I sometimes respond
to it. That is all. I realise no language is perfect
and I don't have the time to design the one true perfect language
myself. In general I'm happy to program in Python with the
warts I think it has. I'll just see how it evolves and based
on that evolution and the appearance of other languages will
decide what language I will use in the future.

I hope that someday a ternary operator will arrive, but my choice
of language will hardly depend on that and I won't ask for it
except if ever that particular PEP becomes active again.
But if someone argues there is no need for a ternary operator
I'll probably respond.
People with normal levels of perceptiveness can see the difference
between such useless rants, on one side, and, on the other, several
potentially useful kinds of discourse, that I, speaking personally, do
indeed welcome. Trying to understand the design rationale for some
aspect of the language is just fine, for example -- and that's because
trying to understand any complicated artefact X is often well served by
efforts to build a mental model of how X came to be as it is, quite
apart from any interest in _changing_ X. You may not like the arguments
I present, but I'm not just "pretending" that they're good, as you
accuse me of doing: many people like them, as you can confirm for
yourself by studying the google groups archives of my posts and of the
responses to them over the years, checking out the reviews of my books,
and so on.

The number of people that like your arguments is irrelevant to me.
If I don't think it is a good argument chances are I will respond
to it.
If you just don't like reading my prose, hey, fine, many
others don't particularly care for it either (including Guido,
mostly;-); I'll be content with being helpful to, and appreciated by,
that substantial contingent of people who do like my writing.

And so, inevitably, each and every time I return to c.l.py, I find some
people who might be engaging in either kind of post -- the useful
"trying to understand" kind, or the useless "criticizing what you cannot
change" one -- and others who are clearly just flaming.

The problem IMO is that often enough, when a useful trying-to-understand
article arrives, the answers are not limited to explaining what is
going on, but often include some advocacy of why the choice made in
Python was the correct one.

This invites people who are less happy with that particular choice
to argue why that choice isn't as good as the first responder may
have led them to believe. Even if they don't particularly want the
language to change.

And inevitably
I end up repeating once again all the (IMHO) good arguments which (IMHO)
show most criticisms to be badly conceived and most design decisions in
Python to be consistent, useful, helpful and well-founded. Why?
Because this is a _public_ forum, with many more readers than writers
for most any thread. If these were private exchanges, I'd happily set
my mail server to bounce any mail from people I know won't say anything
useful or insightful, and good riddance. But since it's a public forum,
there are likely to be readers out there who ARE honestly striving to
understand, and if they see unanswered criticisms they may not have
enough Python knowledge to see by themselves the obvious answers to
those criticisms -- so, far too often, I provide those answers, as a
service to those readers out there.

Well, the same works the other way around. There are those people who
think that some of the choices that Python made are not as consistent,
useful, helpful and well-founded as some would like us to believe,
and that those things may be known too.
I don't see that much difference in the frustration of having
to write:

t = f(x)
v[t] = v[t] + 1

You're repeating (necessarily) the indexing operation, which may be
unboundedly costly for a user-coded type.

That repetition is just Python's inability to optimise.

There being, in general, no necessary correlation whatsoever between the
computations performed by __getitem__ and those performed by
__setitem__,

Maybe that is the problem here. I think one could argue that a C++
approach here would have been better, where v[t] would result in
an lvalue, from which a value could be extracted or which could be
set to a value, depending on which side of an assignment it was
found on. And no, I'm not asking that Python should be changed
this way.
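
(Roughly, that C++-flavoured idea in Python terms might look like the
following invented LValue proxy, which binds container and key once.
It factors out recomputing the key, though not the container's own
lookup work, since Python cannot hand out references into arbitrary
mappings:)

class LValue:
    # hypothetical 'reference' into a container: bind once, then read or write
    def __init__(self, container, key):
        self.container = container
        self.key = key
    def get(self):
        return self.container[self.key]
    def set(self, value):
        self.container[self.key] = value

v = {"x": 1}
ref = LValue(v, "x")
ref.set(ref.get() + 1)
print(v["x"])    # 2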
 

Roy Smith

Antoon Pardon said:
In this case I think the practicality of method chaining beats
the purity of not allowing side-effects in print statements and
of having only one obvious way to do things.

Especially since the whole decorator thing is essentially about method
chaining.
 

Alex Martelli

Antoon Pardon said:
That extend doesn't work for strings and tuples is irrelevant:
for those types that have an extend, a.extend(b) is equivalent
to a+=b.

It's perfectly relevant, because not all types have extend and += works
para-polymorphically anyway.
In other words there is no reason to have an extend member for
lists.

If lists were being designed from scratch today, there would be a design
decision involved: give them a nicely named normal method 'extend' that
is a synonym for __iadd__, so that the callable bound method can be
nicely extracted as 'mylist.extend' and passed around / stored somewhere
/ etc etc, or demand that people wanting to pass the callable bound
method around use 'mylist.__iadd__' which is somewhat goofied and less
readable. I'm glad I do not have to make that decision myself, but if I
did I would probably tend to err on the side of minimalism -- no
synonyms.

However, lists had extend before += existed, so clearly they kept it for
backwards compatibility. Similarly, dicts keep their method has_key
even though it later became just a synonym for __contains__, etc etc.

If the point you're trying to make here is that Python chooses to be
constrained by backwards compatibility, keeping older approaches around
as new ones get introduced, I do not believe I ever heard anybody
arguing otherwise. You may know that at some unspecified time in the
future a Python 3.0 version will be designed, unconstrained by strict
needs of backwards compatibility and specifically oriented to removing
aspects that have become redundant. Guido has stated so repeatedly,
although he steadfastly refuses to name a precise date. At that time
every aspect of redundancy will be carefully scrutinized, extend versus
__iadd__ and has_key versus __contains__ being just two examples; any
such redundancy that still remains in Python 3.0 will then have done so
by deliberate, specific decision, rather than due to the requirements of
backwards compatibility.

A "greenfield design", an entirely new language designed from scratch,
has no backwards compatibility constraints -- there do not exist million
of lines of production code that must be preserved. One can also decide
to develop a language without special regards to backwards
compatibility, routinely breaking any amount of working code, but that
would be more appropriate to a language meant for such purposes as
research and experimentation, rather than for writing applications in.

Furthermore, a+=b doesn't work as polymorphically as feasible, because
the result of

a=c
a+=b

is different depending on the type of c.

Indeed it's _para_-polymorphic, exactly as I said, not fully
polymorphic. In many cases, as you code, you know you have no other
references bound to the object in question, and in that case the
polymorphism applies.
No it isn't obvious. No design is so stupid that you can't find
an example for it's use. That you have found a specific use
here doesn't say anything.

I can silently suffer MOST spelling mistakes, but please, PLEASE do not
write "it's" where you mean "its", or viceversa: it's the most horrible
thing you can do to the poor, old, long-suffering English language, and
it makes me physically ill. Particularly in a paragraph as content-free
as this one of yours that I've just quoted, where you're really saying
nothing at all, you could AT LEAST make an attempt to respect the rules
of English, if not of logic and common sense.

On to the substance: your assertion is absurd. You say it isn't obvious
that a.extend(b) will raise an exception if a is bound to a str or
tuple, yet it patently IS obvious, given that str and tuple do not have
a method named 'extend'. Whether that's stupid or clever is a
completely different issue, and one which doesn't make your "No it isn't
obvious" assertion any closer to sanity one way or another.

which makes them para-polymorphic infeasable.

I don't know what "infeasable" means -- it's a word I cannot find in the
dictionary -- and presumably, if it means something like "unfeasible",
you do not know what the construct 'para-polymorphic' means (it means:
polymorphic under given environmental constraints -- I constructed it
from a common general use of the prefix 'para').

If those specific cases can be half of the total cases I wouldn't
call the non-specific cases *in general*.

There is no sensible way to specify how to count "cases" of types that
have or don't have commutative addition -- anybody can code their own
types and have their addition operation behave either way. Therefore,
it makes no sense to speak of "half the total cases".

Still, the English expression "in general" is ambiguous, as it may be
used to mean either "in the general case" (that's how it's normally used
in mathematical discourse in English, for example) or "in most cases"
(which is how you appear to think it should exclusively be used).

The point is that there is a difference between what is obvious in
general and what is obvious within a certain tradition. If Python

Absolutely true: the fact that a cross like + stands for addition is
only obvious for people coming from cultures in which that symbol has
been used to signify addition for centuries, for example. There is
nothing intrinsic in the graphics of the glyph '+' that makes it
'obviously' mean 'addition'.
would limit itself to only one obvious way for those things that
are obvious in general, that would be one thing.

I cannot think of _any_ aspect of a programming language that might
pertain to 'things that are obvious in general' as opposed to culturally
determined traits -- going left to right, using ASCII rather than other
alphabets, using '+' to indicate addition, and so on, and so forth.
Please give examples of these 'things that are obvious in general' where
you think Python might 'limit oneself to only one obvious way'.
But here you are defending one thing that is only obvious through
tradition, by pointing out that something that hasn't had the time
to become a tradition isn't obvious.

When there is one operator to do one job, _in the context of that set of
operators_, it IS obviously right to use that operator, rather than using
two operators which, combined, give the same effect. I claim that, no
matter what symbols you use to represent the operators, "TO a, ADD b" is
'more obvious' than "FROM a, SUBTRACT the NEGATIVE of b", because the
former requires one operator, binary ADD, the latter requires two,
binary SUBTRACT and unary NEGATIVE. I do not claim that this is
necessarily so in any culture or tradition whatsoever: I do claim it is
true for cultures sufficiently influenced by Occam's Razor, "Entities
are not to be multiplied beyond necessity", and that the culture to
which Python addresses itself is in fact so influenced. ((If you feel
unable to relate to a culture influenced by Occam's Razor, then it is
quite possible that Python is in fact not suitable for you)).

Personally I don't find the use of "+" as a concat operator obvious.
There are types for which both addition and concatenation can be
a useful operator. Using the same symbol for both will make
it just that much harder to implement such types and as a result
there is no obvious interface for such types.

True, if I were designing a language from scratch I would surely
consider the possibility of using different operators for addition and
concatenation, and, similarly, for multiplication and repetition --
there are clearly difficult trade-offs here. On one side, in the future
I may want to have a type that has both addition and concatenation
(presumably a numeric array); on the other, if concatenation is a
frequent need in the typical use cases of the language it's hard to
think of a neater way to express it than '+', in this culture (where, for
example, PL/I's concatenation operator '||' has been appropriated by C
to mean something completely different, 'logical or' -- now, using '||'
for concatenation would be very confusing to a target audience that is
more familiar with C than with PL/I or SQL...). Any design choice in
the presence of such opposite constraints can hardly be 'obvious' (and
in designing a language from scratch there is an enormously high number
of such choices to be made -- not independently from each other,
either).

But note that the fact that choosing to use the same operator was not an
_obvious_ choice at the time the language was designed has nothing to do
with the application of the Zen of Python point to 'how to concatenate
two strings'. Python _is_ designed in such a way that the task "how do
I concatenate the strings denoted by names a and b" has one obvious
answer: a+b. This is because of how Python is designed (with + between
sequences meaning concatenation) and already-mentioned cultural aspects
(using a single operator that does job X is the obvious way in a culture
influenced by Occam's Razor to do job X). All alternatives require
multiple operations ('JOIN the LIST FORMED BY ITEMS a and b' -- you have
to form an intermediate list, or tuple, and then join it, for example)
and therefore are not obvious under these conditions. This is even
sometimes unfortunate, since
for piece in makepieces(): bigstring += piece
is such a performance disaster (less so in Python 2.4, happily!), yet
people keep committing it because it IS an "attractive nuisance" -- an
OBVIOUS solution that is not the RIGHT solution. That it's obvious to
most beginners is proven by the fact that so many beginners continue to
do it, even though ''.join(makepieces()) is shorter and faster. I once
hoped that sum(makepieces()) could field this issue, but Guido decided
that having an alternative to ''.join would be bad and had me take the
code out of 'sum' to handle string arguments. Note that I do not
_whine_ about it, even though it meant giving up both one of my pet
ideas _and_ some work I had already done, rather I admit it's his
call... and I use his language rather than making my own because over
the years I've learned that _overall_ his decisions make a better
language than mine would, even though I may hotly differ with him
regarding a few specific decisions out of the huge numbers needed to
build and grow a programming language.
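
(The contrast, with makepieces standing in here as a hypothetical
producer of string fragments:)

def makepieces():
    # hypothetical stand-in for the example above
    return ["spam"] * 1000

big = ""
for piece in makepieces():
    big += piece           # the attractive nuisance: historically quadratic

assert big == "".join(makepieces())    # the linear-time spelling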

If I didn't think that, I wouldn't use Python, of course: besides the
possibility of making my own languages, there are many truly excellent
very high level languages to choose among -- Lisp, Smalltalk, Haskell,
ML of many stripes, Erlang, Ruby. I think I could be pretty happy with
any of these... just not quite as happy as I am with Python, therefore
it is with Python that I stick!

No it isn't. Perl offers you choice in a number of situations
where a number of the alternatives don't offer you anything useful,
except a way to do things differently and eliminate a few characters.

And for some people eliminating some characters is very important and
makes those alternatives preferable and useful to them, according to
their criteria.

So if you think it is good for a language designer to omit what
he dislikes. Do you think it is equally good for a language
designer to add just because he likes it. And if you think so,
do you think the earlier versions of perl, where we can think
the language was still mainly driven by what Larry Wall liked,
was a good language.

Do you know how to use the question mark punctuation character? It's
hard to say whether you're asking questions or making assertions, when
your word order suggests one thing and your punctuation says otherwise.

"You know a design is finished, not when there is nothing left to add,
but when there is nothing left to take away" (Antoine de Saint Exupery,
widely quoted and differently translated from French). There is no
necessary symmetry between adding features and avoiding them.

But sure, it's a designer's job to add what he likes and thinks
necessary and omit what he dislikes and thinks redundant or worse. I
met Perl when Perl was at release 3.something, and by that time it was
already florid with redundancy -- I believe it was designed that way
from the start, with "&foo if $blah;" and "if($blah) {&foo;}" both
included because some people would like one and others would like the
other, 'unless' as a synonym of 'if not' for similar reasons, etc, etc,
with a design principle based on the enormous redundancy of natural
language (Wall's field of study). ((However, I have no experience with
the very first few releases of Perl)). At the time when I met Perl 3 I
thought it was the best language for my needs under Unix given the
alternatives I believed I had (sh and its descendants, awk -- Rexx was
not available for Unix then, Python I'd never heard of, Lisp would have
cost me money, etc, etc), which is why I used it for years (all the way
to Perl 4 and the dawn of Perl 5...) -- but, no, I never particularly
liked its florid redundancy, its lack of good data structures (at the
time, I do understand the current Perl is a bit better there!), and the
need for stropping just about every identifier. Why do you ask? I do
not see the connection between my opinion of Perl and anything else we
were discussing.
I can understand that a designer has to make choices, but
if the designer can allow a choice and has no other arguments
to limit that choice than that he doesn't like one alternative
then that is IMO a bad design decision.

Ah, you're making a common but deep mistake here: the ability to DO
something great, and the ability to explain WHY one has acted in one way
or another in the process of doing that something, are not connected.

Consider a musician composing a song: the musician's ability to choose a
sequence of notes that when played will sound just wonderful is one
thing, his ability to explain WHY he's put a Re there instead of a Mi is
quite another issue. Would you say "if a musician could have used a
note and has no other arguments to omit that note than that he doesn't
like it, then that is a bad music composition decision"? I think it's
absurd to infer, from somebody's inability to explain a decision to your
satisfaction, _or at all_, that the decision is bad.

"Those who can, do, those who can't, explain" may come closer (except
that there _are_ a few musicians, language designers, architects, and
other creative types, who happen to be good at both doing and
explaining, but they're a minority, I believe).

I've never made any claim about Guido's skill as an explainer or
debater, please note. I do implicitly claim he's great at language
design, by freely choosing to use the language he's designed when there
are so many others I could just as freely choose among. (_Your_ use of
Python, on the other hand, is obviously totally contradictory with your
opinion, which you just expressed, that it's a horribly badly designed
language, since its designer is not good at arguing about each and
every one of the uncountable decisions he's made -- to disallow
possibility a, possibility b, possibility c, and so on, and so forth).

target = mydict[foo].bar[23].zepp
target.pop(xu1)
target.sort()
target.pop(xu3)
target.reverse()
target.pop(xu7)

I find this a questionable practice. What if you need to make the list
empty at some point? The most obvious way to do so after a number of
such statements would be:

target = []

But of course that won't work.

That would be 'obvious' only to someone so totally ignorant of Python's
most fundamental aspects that I _cringe_ to think of that someone using
Python. By just asserting it would be obvious you justify serious
doubts about your competence in Python use.

Assigning to a bare name NEVER mutates the object to which that name
previously referred, if any. NEVER.

Therefore, thinking of assigning to a bare name as a way of mutating an
object is not obvious -- on the contrary, it's absurd, in Python.

One obvious way is:

target[:] = []

"assigning to the CONTENTS of the object" does mutate it, and this works
just fine, of course. Unfortunately there is another way, also obvious:

del target[:]

"deleting the CONTENTS of the object". This will also work just fine.
Alas, it's only _preferable_ that the obvious way be just one, and we
cannot always reach the results we would prefer.
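
(A short demonstration of the difference, using a plain dict as the
container:)

nested = {"k": [1, 2, 3]}
target = nested["k"]

target = []            # rebinds the bare name; the list is untouched
print(nested["k"])     # [1, 2, 3]

target = nested["k"]
target[:] = []         # mutates the object's contents (del target[:] works too)
print(nested["k"])     # []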

So, your assertion that this is a questionable practice proves
untenable. But then, as this thread shows, _most_ of your assertions
are untenable, so you're clearly comfortable with the fact. I guess it
goes well with freely choosing to use a language which you consider so
badly designed!

If a language goes so far as to make a particular coding impossible
while that would have been the preferred coding style for most of
the project members then such a limitation can hinder the decision
to agree upon a certain style instead of helping.

And in this case the team should definitely choose another language,
just like you should do instead of wasting your time using Python, and
yours AND ours whining against it.

I also think this attitude is appalling. Python is for consenting
adults I hear. But that doesn't seem to apply here, as python
seems to want to enforce a certain coding style instead of
letting consenting adults work it out among themselves.

Python most definitely does not multiply entities beyond necessity in
order to allow multiple equivalent coding styles -- it's that Occam
Razor thing again, see. If a team wants enormous freedom of design,
short of designing their own language from scratch, they can choose
among Lisp, Scheme, Dylan -- all good languages with enormously powerful
MACRO systems which let you go wild in ways languages without macros
just can't match. Of course, it's likely that nobody besides the
original team can maintain their code later -- that's the flip side of
that freedom... it can go all the way to designing your own language,
and who else but you will know it so they can advise, consult, maintain,
and so on, once you choose to avail yourself of that freedom?

Python values uniformity -- values the ability of somebody "from outside
the team" read the code, advise and consult about it, and maintain it
later, higher than it values the possibility of giving the team *yet
another way* to design their own language... why would you NEED another
Lisp? There IS one, go and use THAT (or if you can't stand parentheses,
use Dylan -- not far from Lisp with different surface syntax after all).

I also appreciate this uniformity highly -- it lets me advise and
consult all manners of teams using Python, it makes my books and courses
and presentations more useful to them, it lets me turn for advice and
consultancy to the general community regarding my own projects and
teams, all without difficulty. What could possibly be "appalling" in
not wanting to be yet another Lisp, yet another Perl, and so on?! Why
shouldn't there be on this Earth ONE language which makes software
maintenance easier, ONE language which cares more about the ease of
reading others' code than about the ease of writing that code?! Now
THAT absolutism, this absurd attitude of yours that wants to wipe out
from the face of the Earth the ONLY language so focused on uniformity,
egoless and ownerless code, community, maintenance, etc, to turn it into
yet another needless would-be clone of Lisp, Perl, etc... *THAT* is
truly appalling indeed!

That you waste yours, is entirely your choice, nobody forces your hand
to reply to me.

Absolutely my choice, of course. But I have a reasonable motivation,
which I have already explained: there may be other readers who would
be ill-served by leaving your untenable assertions, etc etc,
unchallenged, when those assertions &c are so easy to tear into small
bloody pieces and deserve nothing better.

YOUR motivation for using a language you consider badly designed, one
whose underlying culture you find APPALLING (!your choice of word!), and
then spending your time spewing venom against it, is, on the other hand,
totally mysterious.

It isn't. Either you know what types the variables are and then
using a different operator depending on the type is no big deal,
or you don't know what type the variables are and then not caring
if one gets modified or not, is a disaster in waiting.

Your assertion is, once again, totally false and untenable.

def frooble(target, how_many, generator, *args, **kwds):
    for i in xrange(how_many):
        target += generator(i, *args, **kwds)
    return target

Where is the "disaster in waiting" here? The specifications of
'frooble' are analogous to those of '+=': if you pass it a first
argument that is mutable it will extend it, otherwise it obviously
won't.
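
(A hypothetical call, for instance -- note frooble is Python 2-era code,
since it uses xrange:)

parts = [0]
result = frooble(parts, 3, lambda i: [i])
# result is [0, 0, 1, 2], and result is parts: the list was extended in place

start = (0,)
result = frooble(start, 3, lambda i: (i,))
# result is (0, 0, 1, 2), but start is still (0,): += merely rebound
# frooble's local name 'target' to new tuple objects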


Alex
 

John J. Lee

Maybe one day I'll be psychologically able to use killfiles more
consistently, whenever I notice some poster that I can reliably classify
as a useless flamer, and let readers of that poster's tripe watch out
for themselves. But still I find it less painful to just drop out of
c.l.py altogether when I once again realize I just can't afford the time
to show why every flawed analysis in the world IS flawed, why every
false assertion in the world IS false, and so on -- and further realize
that there will never be any shortage of people eager to post flawed
analysis, false assertions, and so on, to any public forum.
[...]

It can be amusing, in a sadistic sort of way, to watch you attempt to
nail to the floor every protruding flabby piece of argument, no matter
how peripheral or repetitious. It's not *always* as edifying as other
ways you could spend your time, though...

But I do see the temptation :-/


John
 

Antoon Pardon

On 2004-08-27, Alex Martelli said:
I can silently suffer MOST spelling mistakes, but please, PLEASE do not
write "it's" where you mean "its", or viceversa: it's the most horrible
thing you can do to the poor, old, long-suffering English language, and
it makes me physically ill. Particularly in a paragraph as content-free
as this one of yours that I've just quoted, where you're really saying
nothing at all, you could AT LEAST make an attempt to respect the rules
of English, if not of logic and common sense.

On to the substance: your assertion is absurd. You say it isn't obvious
that a.extend(b) will raise an exception if a is bound to a str or
tuple, yet it patently IS obvious, given that str and tuple do not have
a method named 'extend'.

That is a non sequitur: the fact that something is a given doesn't
make that something obvious. I dare say that if you have two pieces
of otherwise equal code, one that uses += and the other that uses
extend, it will not be obvious to the reader that you want the first
piece to work only with lists and the other with strings and tuples
too. He can probably figure it out, but IMO it is not the clearest
way to make that distinction.

Whether that's stupid or clever is a
completely different issue, and one which doesn't make your "No it isn't
obvious" assertion any closer to sanity one way or another.

IMO, obvious means that it is the first thing that comes to mind
when someone reads the code. IMO it is not obvious in that sense.
 
