variable declaration

  • Thread starter Alexander Zatvornitskiy

Alex Martelli

Michael Tobis said:
Well, I hate to try to tell you your job, but it doesn't seem to me to
be all that great of a marketing strategy to actively chase people
away... Hey, he might have been a Nutshell customer.

I'd far rather sell one fewer copy of the Nutshell, than sell one more
to somebody who is not _happy_ to use Python.

It did me, and it did many others. Perhaps you are unrepresentative.

Maybe; I did have good knowledge of a variety of languages, for example.
However, I have seen many people come upon ideas that were new to them
and meet them with interest and curiosity, even if initially doubtful
about the novelty, rather than with fear and loathing. I think that
such an attitude is a better predictor of how happy a person will be
with Python (or other technologies he's initially unfamiliar with) than
"previously accumulated knowledge".

It's one thing to say "no can do, sorry", it's another to say "you
don't need this anyway and if you think you do you aren't worthy".

To say one is sorry about something that, in fact, makes one deliriously
happy, would be the worst sort of hypocrisy, even though it's a socially
common ``white lie''. As for "worthy", that's a completely different
viewpoint, and I would appreciate it if you didn't put words into my
mouth.

Believing that somebody might be unhappy using Python (or any other
given technology), at least at this point in their personal development,
due to their ingrained mindset and strongly held opinions against one of
the cornerstones of the language, has nothing to do with "worth".

In fact, it was your book I spent the most time thumbing through
looking for the "use strict" equivalent that I was absolutely certain
must exist. Hell, even Fortran eventually gave in to "IMPLICIT NONE".

....which has nothing to do with the case, since in Fortran, and from day
one, all names had statically determinable types anyway.
It's practically the only thing I've ever expected to find in Python
that hasn't vastly exceeded my expectations, and I'm sure Alexander is
not the only person to be put off by it.

By Hugol's Law, surely not. So what? If he sticks to his misguided
expectations, he won't be happy with Python. If he outgrows them, he
probably will. But the best way forwards is to get used to unit-testing
and thereby realize declarations are deadweight, THEN use a language
where they just aren't there.
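
A minimal sketch of the kind of trivial unit test that would catch the
``epselon'' typo from the example that started this thread (the function
name `summate' and its expected result of 45 are made up for illustration):

import unittest

def summate():
    S = 0
    epsilon = 0
    while epsilon < 10:
        S = S + epsilon
        epsilon = epsilon + 1   # mistype this as `epselon' and the loop never ends
    return S

class SummateTest(unittest.TestCase):
    def test_sum_of_0_to_9(self):
        self.assertEqual(summate(), 45)

if __name__ == '__main__':
    unittest.main()

With the typo in place the test hangs instead of passing, so the bug
surfaces on the very first run -- no declaration needed.
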
In fact, I'd recommend a
paragraph early in the Nutshell book saying "there are no declarations,

What about a paragraph asking the reader to *READ* the book before
offering advice about it? On p.32, "Python has no declarations"; on
p.39, under Variables, right at the start of the paragraph, "In Python,
there are no declarations. The existence of a variable depends on a
statement that <i>binds</i> the variable, or, in other words, that sets
a name to hold a reference to some object".

Not only are these words pretty early, right towards the start of the
chapter giving an overview on the language, but I think the second
snippet has quite a prominent position. If you're looking for a way to
declare variables, and don't think of looking at the very start of the
section titled "Variables" -- or if the words you find there,
reinforcing the previous assertion to the same effect, still keep you
"thumbing through" the book in search of what I just said isn't there, I
disclaim responsibility.

A book, particularly a quick reference, needs to be considered as a
_cooperative_ effort between writer and reader. I can fairly be tasked
with expressing every important thing about the language, and I think I
do -- not all of the "sea lawyer"-level quibbles, but all of the aspects
most readers truly need. But I cannot fairly be asked to *belabor*
every aspect that might faze some reader or other, to ward against the
reader not being careful and not realizing that, in a work where
conciseness is of the essence, every single word is there for a purpose.

If you need repetition and belaboring, get a book that's intended as
slow-paced tutorial. Demanding tutorial-like repetitiousness of a
*quick reference* is absurd and irresponsible.
no use strict, no implicit none, sorry, forget it", and an index

This is the second time you ask or suggest that I take an apologetic
attitude (or at least mouth platitudes that sound apologetic without
_meaning_ to be apologetic in the least), and I repeat: forget it.
listing under "declarations" pointing to a detailed exegesis of their
nonexistence. It would have saved me some time.

....and would have cost time to careful readers, ones who know that "no
declarations" means (surprise!) *NO* declarations, and are sensible
enough to NOT expect "detailed exegesis" in a *quick reference* work.

No way -- and I'm not apologetic in the least about it, please note.

Having a tightly limited space budget, I carefully allocate it to
materials that I think will do the most good to most readers in the main
target audience: Python programmers. "I'm not trying to teach Python
here", as I say right at the start of that chapter (it's chapter 4, and
I believe it's just the sample chapter O'Reilly chose to place on their
site, so readers of this thread who don't own the book should be able to
have a look and follow this debate more closely) -- it's certainly
_possible_ to learn Python from the book, but only by reading very
closely and carefully, because the book eschews the repetition and
belaboring that a tutorial work would abound in (unless it be very, very
fast paced for a tutorial, which is also a possible stance; I believe
it's the stance taken by the excellent "Dive Into Python").

To you, it's the lack of declarations (because you can't or won't take
assertions about "no declarations" at face value); to another, it's
significant indentation, issues of typing, limits on recursion, argument
passing, the lack of a switch/case statement, -2**2... I mention each
and every one of these issues, of course, but it would be totally
inappropriate to offer for each the "detailed exegesis" you require.

It's true that in some sense an assignment is all the declaration you
need. I think Carl Banks's point (what we think of as assignment, as a
carryover from other languages, is really rebinding, and in many cases
can be avoided) is also helpful.

It's known as an "assignment statement" in Python, too, and its
semantics may be ones of binding or rebinding. "Assignment statements
are the most common way to bind variables and other references", and the
whole following paragraph about rebinding, Nutshell p. 39.
But that doesn't make the "epselon" bug go away, and wanting to have a
way to catch it quickly isn't, to my mind, obviously a criminal act.

Of course not, and test-driven design is the right way to catch quickly
that bug, and many others (including but not limited to ones caused by
other typos).
Also, based on what DogWalker demonstrates, it's really not that alien
to Python and should be feasible.

I assume you're referring to the abuse of __slots__ that's so
ridiculously popular? As I explain on p. 86, __slots__ is meant
strictly to let you save memory. It does NOT work well to catch typos
in real-life code (as opposed to toy-level examples). Michele Simionato
has a reasonable recipe on ActiveState's Cookbook (that recipe's also
going to be in the Cookbook's 2nd edition, due out in a couple months)
to show how to use __setattr__ instead (much more sensible) if you're so
tremendously "typo-phobic".

But, like the "leading __" idea, weird metaclasses and decorators that
let you omit the explicit 'self', and quite a few others, these nifty
tricks ARE in fact fully alien to Python "as she's spoken" -- they're
quoted, repeated, and varied upon, with ridiculous frequency, totally
out of proportion to their minuscule importance in the normal and
idiomatic practice of the language, because more and more people come to
Python, from a wide variety of other languages, keen to not change their
habits and "keep coding X in Python" for any value of X you might care
to name.

Pythonistas like showing off Python's power, in particular by mimicking
the different ways in which different languages work, and we may well
think that soothing the worries of people coming to Python with
all the wrong expectations will thereby make Python more converts -- I
know I've been guilty of this particular error in the past. But I do
think it's an error when it gets overdone the way it's being overdone
these days; that to be really happy with Python, one should use Python
to do *PYTHON* programming, NOT to halfway-mimic Java, Visual Basic,
Fortran, PHP, Eiffel, Scheme, Dylan, Javascript, Ada, and Haskell, all
rolled into one.

As Steve Holden just posted, this may well mean that Python is not in
fact suitable for EVERYbody -- only people who are willing to give a try
to Python AS SUCH, rather than feeling a deep-seated need to mimic other
languages they're used to (perhaps due to unwillingness to use important
and excellent methodologies and tools, from unit-tests onwards). If so,
then, so be it: yeah, even if it means I sell fewer copies of my books,
let those fewer copies be sold to people who'll APPRECIATE AND ENJOY
both the books and the language, rather than pine for ``variable
declarations'', thumb endlessly looking for detailed exegeses of what
ISN'T there, and so on. I think the total amount of happiness in the
world will be enhanced thereby, and, in the end, that's what matters.

This may well be true in implementation, but cognitively it is a
declaration that modifies the reference and not the referent. I see
that it is a big deal to ask for more of these, but I don't see why.

Because the "cognition" is simply and totally wrong and
counterproductive: if you have the wrong mental model of what "splat
foo" means, you won't be productive in coding and using various new
possibilities for ``foo'' within that syntax.

It's not a matter of implementation, but of SEMANTICS -- what the syntax
form actually DOES, quite independent of HOW it does it, is to call a
HOF (higher-order function) once the following def statement is done
executing, to ``filter''
the function object to be bound to the name. No more, no less, and
NOTHING to do with ``declarations'', any more than statements such as
def and class are declarations (thinking of them as such is a serious
conceptual error and gravely hampers programming productivity).

I think your total and utter misconception of decorator syntax as being
a "declaration" is the best possible argument against that syntax; I
also think the syntax was probably worth having anyway, despite that.
After all, somebody sufficiently crazed from a withdrawal crisis from
declarations, as you appear to be, may go around calling "declarations"
anything whatsoever, be it splats, def, class, import, assignments, and
so on -- we can't rip all statements out from Python to make it into a
safe padded cell for declaration-junkies where they won't hurt
themselves and others.

I hope one day to forward this exchange to Guido as part of a dossier
against the latest (and neat!) syntax tweak he's considering for Python
3.0 -- having something like:

def f(x: <expression>):
    <body>

be a syntactic shortcut for

def f(x):
    x = <something>(x, <expression>)
    <body>

where the `<something>' might be the ``adapt'' builtin (per PEP 246) or
some variation thereon. The argument boils down to: many people will
mis-conceptualize this nifty shortcut as a DECLARATION and thereby get
hopelessly confused and utterly misuse it, completely failing to
understand or accept that it's just a syntax shortcut to promote an
important idiom, just like the splat-syntax for decorators.
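
A toy sketch of that expansion (`adapt' here is a one-line stand-in for
PEP 246's proposed builtin, which was never added to the language):

def adapt(obj, protocol):
    # stand-in for PEP 246's adapt(): just call the protocol as a converter
    return protocol(obj)

# what the proposed `def f(x: float)' would be sugar for:
def f(x):
    x = adapt(x, float)
    return x * 2

print(f("1.5"))   # prints 3.0 -- an ordinary runtime call, not a declaration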

He'll probably go ahead anyway, of course, just as he did for decorators
despite the huge flamewars -- if he wasn't such a stubborn guy, Python
would be unlikely to be so unique;-).

I hope this works out, but it's hard for me to see how pypy will avoid
lots of hashing through dictionaries. I'm willing to help it by
declaring an immutable reference. Here, don't look this up; it always
points to that.

If such crutches are necessary, I guess that will emerge. The best
compilers for other dynamic languages, such as the Stalin compiler for
Scheme, can do without such crutches -- I'm sure it's hard for you to
see how, but if you study modern advanced compiler theory (which IS
hard, of course) then you might perhaps stand a chance.
I'm guessing that this will also be considered a bad idea, and maybe
someday I'll understand why. I'm looking for insight, not controversy.

I'm not sure there's much insight to be had: you want some forms of
redundancy, which you can get in other languages; we'd rather avoid such
localized redundancy, as the "end-to-end" checks that testing gives us
make it so much deadweight -- and we NEED testing anyway, even with the
little redundancy aspects, because no such redundancy will catch many
errors, including typos such as a + where a - was meant, a < where a <=
was meant, and so on.

Introducing such redundancy as "optional" soon makes it practically
mandatory for most people in most situations, for many reasons -- for example,
because people who manage programming efforts often like restricting
their employees under the misapprehension that this somehow makes things
better -- the history of all languages which introduced ``optional''
redundancy makes this abundantly clear. So, in practice, defending
redundancy as ``it's just optional'' is nothing but a figleaf: if it
gets its ugly foot in the door, it WILL be around most people's necks
most of the time forever after.

Python is one of the few widespread languages where one is blissfully
free of practically-mandated redundancy, and can thus practice the
healthy programming principle of "Once, and only once" -- not having to
repeat things twice like some magical formula in a ritual. This lack of
redundancy is a good part of Python's power, in the judgment of many of
us who deliberately chose Python rather than other languages which allow
(and indeed practically mandate) the redundancy.

I, and many others, chose Python over other languages, because (duh!) we
saw Python's differences wrt the others as its _strengths_ -- and ever
since we have had to fight off the well-meaning attempts of ceaseless
waves of newcomers, who are utterly keen to spoil Python by making it
more similar to those other languages we rejected in its favor, i.e.,
sap Python's strengths. Many old-timers already believe that changes to
Python in recent years have not been an overall good thing -- I
disagree, as I believe the changes so far have mostly strengthened
Python's support for its own design principles and goals, but I
understand their viewpoint.

If you're looking for insight on end-to-end checks as being preferable
to localized ones, the best paper I know is one on networking theory,
<http://mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf> -- for
test-driven design, Beck's book, the already-referenced Robert Martin
blog entry <http://www.artima.com/weblogs/viewpost.jsp?thread=4639>, and
innumerable other entries about "unit testing", "agile programming",
"test driven design", and suchlike, which you can google for.

I am trying to talk about having expressive power in constraining
references as well as the referents. Python studiously avoids this, but
decorators change that.

No it doesn't! A decorator puts absolutely NO constraint on any
reference whatsoever -- you're perfectly free to rebind every reference
at will, as usual.

A *descriptor*, or any of several other OO mechanisms, may put whatever
constraints you like on ``compound'' references -- attributes and items.
Most simply, you can define __setitem__, etc, in a class, and then
assignments to an indexing on instances of that class will go through
such special methods which may do, *AT RUNTIME*, whatever checks you
like; and similarly for __setattr__ -- and you can use metaclasses to
put such constraints on classes just like classes put them on instances.

A descriptor lets you ``constrain'' one specific attribute of instances
by having all bindings of that attribute go through the descriptor's
__set__ method (if it defines one, of course -- that's currently known
as a ``data descriptor'' though the terminology is shifting). Again,
that's entirely a runtime issue -- no ``declaration'' whatsoever.
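
For instance, a minimal data-descriptor sketch (all names made up for
illustration), in which every binding of the attribute goes through __set__:

class Checked(object):
    # a data descriptor: each binding of the managed attribute goes
    # through __set__ *at runtime* -- there is no declaration anywhere
    def __init__(self, name, check):
        self.name, self.check = name, check
    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]
    def __set__(self, obj, value):
        if not self.check(value):
            raise ValueError("rejected value for %s: %r" % (self.name, value))
        obj.__dict__[self.name] = value

class Point(object):
    x = Checked('x', lambda v: isinstance(v, int))

p = Point()
p.x = 3        # passes the runtime check
p.x = "three"  # raises ValueError, at assignment time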

I am not deep enough into the mojo as yet to
have more than a glimmer of an idea about the distinction you are
making. It's not the one I'm trying to make.

The distinction that seems relevant to me: a *declaration* is about
something you tell the _compiler_, something which has intrinsically
static effects, _preliminary_ to runtime; a *statement* has effects at
runtime _if and when it executes_.

In Python, "global" is a declaration -- a wart -- because it has exactly
these ``preliminary'' effects, independent of runtime and execution.
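
A small sketch of that ``preliminary'' effect: the global statement marks
the name for the whole function body at compile time, even when the line
it sits on never executes:

x = 0

def f(flag):
    if flag:
        global x   # not executed for f(False), yet still in force
    x = 1          # rebinds the *module-level* x either way

f(False)
print(x)   # prints 1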

But, for example, ``class'' definitely isn't: it builds and binds a
separate class object each time it executes, IF it executes. E.g.:

classes = []
for i in range(3):
    class Foo(object): pass
    classes.append(Foo)

classes[2].zippo = 23
classes[0].zippo = 17
print classes[2].zippo

See? No declarations whatsoever -- just a few perfectly ordinary
statements, which do perfectly ordinary runtime operations, such as
making objects, binding names, calling methods, ...

If the first statement in the loop body was changed to:

Foo = type('Foo', (), {})

we'd have exact semantic equivalence. It's NOT a matter of "mere
implementation": this is how the class statement is DEFINED to work;
it's SEMANTICS.
decorators may not be implemented as declarations, but they cognitively
act as declarations, and that's what I care about here.

Once again, with feeling: implementation doesn't really matter. Their
SEMANTICS are defined in those terms -- perfectly ordinary runtime
executable statements. In *NO* *WAY* *WHATSOEVER* do they "act as
declarations", and if your cognition tells you differently, then it's
your cognition that is faulty in this case -- you're badly misreading
the whole situation.

In particular, decorators don't put any "constraints on references",
which you seem to somehow mysteriously equate to ``declarations''. Of
course, you may use a decorator to install a _descriptor_ object (which
may put constraints on compound-references, as above mentioned), just as
you might use an assignment for the same kind of "installing". But that
doesn't make
@foo(23)
def bar( ... ):
    ...

any more of ``cognitively a declaration'' than the exact semantic
equivalent:

def bar( ... ):
    ...
bar = foo(23)(bar)

If this occurs outside a class, there's no connection to compound
references, thus certainly no constraint; if inside a class, and foo(23)
is a callable which returns a data-descriptor object when called with
function object bar as its argument, then this may imply constraints on
access to x.bar where x is an instance of said class.

I'm glad you have said something I absolutely agree with. I'm
alarmed at the suggestions here that class and def blocks are
declarative. The fact that they're executable is really a core part of
the beauty of Python.

And the fact that *decorators* are executable is exactly equivalent.

However, I don't see how an 'import strict' would necessarily violate
this, nor an "import immutableref", which is something I would find
useful in trying to wrestle with NumArray, and which a high-performance
Python could (I think) use to advantage.

Now I may be wrong; in fact I'd bet against me and in favor of you and
Frederik if I had to bet. It's just that I don't see why I'm wrong.

What would the effects of such an ``import strict'' be? Stop any
binding or rebinding of barenames except by some arcane incantations?
Then, by inevitably becoming a "theoretically optional, practically
mandatory" part of Python use, anytime a typical pointy-haired boss has
anything to do with it, it would sabotage programmers' productivity, by
forcing them to use just such redundant incantations and thereby violate
"once and only once". Otherwise, I don't see how it would avert the
``epselon'' terror you keep waving at us.

If ``import immutableref'' is meant to make Python into a
single-assignment language (specifically and only for the module into
which it gets imported), it probably would not run as high a risk of
becoming mandatory -- single-assignment languages, and other functional
programming languages based on concepts of data being immutable, have
been around for ages but PHBs are terrified by them anyway (reasonably:
it takes a highly mathematical mind to be effective at functional
programming). I do not understand how this would help you with
numarray, in the least, so perhaps you could help with some examples of
how you would like such a declaration to work. Presumably the language
thus constrained would not have for loops (which do need to rebind the
control variable over and over), and while loops would also be pretty
iffy, so recursion abilities would have to be strengthened considerably,
for starters. More generally, I have my doubts that Python can be
usefully constrained to single-assignment semantics without needing some
compensating additions elsewhere. But maybe single-assignment is not
what you mean by your extremely elliptic mention, so I'll wait for your
examples of how that would help you with numarray in particular.


Alex
 

Arthur

This may well be true in implementation, but cognitively it is a
declaration that modifies the reference and not the referent. I see
that it is a big deal to ask for more of these, but I don't see why.

Thank you for bringing it up and respecting the cognitive factor. My
*experience* of the decorator, disassembly of internals quite aside,
is that it breaks old rules - or, if preferred, breaks new ground - by
impacting code one wouldn't expect it to know about.

It frightens me a bit when the road to Guru seems to move in the
direction of most completely transcending the normal user experience,
rather than in best comprehending it.

Art
 

Thomas Bartkus

How common is it for a local variable to be bound in
more than one place within a function?

How common? It shouldn't happen at all and that was the point.
The original poster's code demonstrates how it can occur inadvertently as a
result of a simple typographical error.

You won't hear me claim that Python is without mitigating virtues. Clearly,
there is much about Python that encourages good design which will in turn
reduce the incidence of such errors. Nevertheless, one has to admit to this
blemish. One also wonders if it is really necessary to endure what looks to
me like an omission. Is there a reason why the interpreter
couldn't/shouldn't require formal declarations?

I, too, wish there were a switch like VB's "Option Explicit" that would
require you to declare "epsilon = 0" and thereafter have the interpreter
refuse assignment to an undeclared "epselon". Sane VB programmers (and yes,
there are a few!) leave it on by default and consider it an abomination that
the switch is optional. The original poster's example was a good one. I had
to take a good long stare before I saw it even though the code is short,
sweet, and otherwise correct.

*Is* there a reason why the interpreter couldn't/shouldn't require formal
variable declaration?
It seems to me that lack of same may also be creating hellish barriers to
writing truly effective IDEs for Python.

Thomas Bartkus
 

Michael Tobis

Given the behavior, the documentation is gratifyingly correct.

Given that the syntax is legal, though, the behavior is not what one
would intuitively expect, and is therefore unPythonic by (rather
dramatically) violating the principle of least surprise.

It's also, to me, understandable why it's difficult for the language
design to avoid this behavior.

This little discovery of mine sheds some considerable light on the
awkwardness of what you guys will deign to call "declarations". This
being the case, I can understand the resistance to "declarations" in
Python.

I had thought, until the current conversation and this experiment, that
the global statement, er, declaration was just another executable,
especially given all the stress on Python's being purely executable.

I still see "global" and "@" as expressions of the same fundamental
problem, even though decorators are not implemented as declarations.
They both take effect in a non-intuitive sequence and they both affect
the reference rather than the referent.

This points out the problem that Python has in qualifying references
rather than referents.

Since BDFL is contemplating some optional typing, does this also imply
qualifying the references?

Maybe you wizard types can agree that there is a useful abstraction
that I'm talking about here, whether you wish to call it "declarations"
or not, and try to factor out some sort of consistent strategy for
dealing with it, perhaps in P3K. (I will try to be in a position to
help someday, but I have a long way to go.)

Language features that modify references rather than referents appear
to be problematic. Python clearly chafes at these. Yet there are at
least a few compelling reasons to want them.
 

Steve Holden

Thomas said:
How common? It shouldn't happen at all and that was the point.

This seems a little excessive to me. Sample use case:

for something in lst:
    if type(something) != type(()):
        something = tuple(something)

regards
Steve
 

Michael Tobis

How common is it for a local variable to be bound in
more than one place within a function?

It's more natural for a beginner to read or write

mystr = ""
for snippet in snippets:
    if ilike(snippet):
        mystr = mystr + snippet

than

mylist = []
for snippet in snippets:
    if ilike(snippet):
        mylist.append(snippet)
mystr = "".join(mylist)

for instance.

While the latter is superior in some ways, I frequently find my fingers
tossing off the former approach.

Of course in this case it's not hard to come up with

mystr = "".join([snippet in snippets if ilike(snippet)])

but it's also not too hard to imagine cases where the list
comprehension would be too complex or would require too much
refactoring.

I don't know that it's ever necessary to rebind, but it is, in fact,
common, and perhaps too easy. In numeric Python, avoiding rebinding
turns out to be a nontrivial skill.
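
A small sketch of what that skill amounts to (using numpy syntax as a
present-day stand-in for the era's Numeric/numarray):

import numpy as np

a = np.zeros(1000000)
b = np.ones(1000000)

a = a + b   # rebinds `a' to a freshly allocated result array
a += b      # updates the existing array in place: no rebinding and no
            # temporary, which matters at numeric-Python array sizes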

mt
 

Thomas Bartkus

Steve Holden said:
This seems a little excessive to me. Sample use case:

for something in lst:
    if type(something) != type(()):
        something = tuple(something)

Hhhmmh!
I presume you are going through the list and want to guarantee that every
item you encounter is a tuple! So if it ain't - you just re-declare
"something" to be a tuple. What was formerly a single string, integer,
whathaveyou is now a tuple *containing* a single string, integer,
whathaveyou.

Do you do it that way because you can? Or because you must?
And
If the former - is it a good idea?
OR did I just miss your code's intent completely?

My first inclination would be to create a new variable (type = tuple) and
accept (or typecast) each "something" into it as required. The notion that
you just morph "something" still seems rather abhorrent. It hadn't occurred
to me that iterating through a list like that means the iterator "something"
might need to constantly morph into a different type according to a list's
possibly eclectic contents.

It might explain why the interpreter is incapable of enforcing a type. It
would forbid iterating through lists containing a mix of different types.
EXCEPT - I must note that other languages manage to pull off exactly such a
trick with a variant type. When you need to pull off a feat such as this,
you declare a variant type where the rules are relaxed *for that situation
only* and there is no need to toss the baby out with the bathwater.

Thomas Bartkus
 

Jeremy Bowers

*Is* there a reason why the interpreter couldn't/shouldn't require formal
variable declaration?

You mean, other than the reasons already discussed at length in this
thread, not to mention many many others?

Your not *liking* the reasons doesn't make them any less the reasons. They
may not even be good reasons, nevertheless, there the reasons are.

If you're literally asking the question you are asking, re-read this
thread more carefully. If you're *really* asking "Give me a reason *I
like*", I suggest re-reading Alex's discussion on why maybe Python isn't
for everybody.

All I know is that I have created large programs and typos like you seem
mortally terrified of occur on average about once every *ten modules* or
so, and are generally caught even before I write the unit tests. Breaking
the language to avoid what *by construction* is demonstrated not to be a real
problem is... well, I believe Alex covered that, too.

Blah blah blah, "what if... what if... what if..." We should concentrate
on *real* problems, ones that exist in real code, not ones that mostly
exist in wild-eyed prose that consists of predictions of pain and death
that conspicuously fail to occur, no matter how many times they are
repeated or we are exhorted to heed them or face our doom.

(The previous paragraph also describes my root problem with Java's strong
typing philosophy; death, doom, and destruction conspicuously fail to
occur in Python programs, so why the hell should I listen to the
doomsayers after I've already proved them false by extensive personal
experience? No amount of prose is going to convince me otherwise, nor
quite a lot of the rest of us.)
 

Alex Martelli

Michael Tobis said:
I don't know that it's ever necessary to rebind, but it is, in fact,
common, and perhaps too easy. In numeric Python, avoiding rebinding
turns out to be a nontrivial skill.

Well, a for-statement is BASED on rebinding, for example. Maybe you
don't mean to address rebinding per se, but rebinding in ``separate
statements''? The ``separation'' needs to be defined carefully to make
while-loops work, too. Normally, a while-statement's header clause
would be something like:
while <expression>:
where the expression depends on the values bound to some local
variables. For the first evaluation, the variables need to be bound by
earlier statements; for the expression to eventually become false, it's
likely (unless we're talking about mutable objects) that the variables
need to be re-bound in the loop body.

For example, consider:

def f(a, b):
    x = 0
    while x < 100000:
        print x,
        x = a*x + b
    print

would you require the programmer to find out a closed-form expression
for this recurrence relation, in order to avoid having to rebind the
name 'x'? OK, in this particular case it may be OK, if you are willing
to put competence in college-level algebra as a prerequisite for using
Python. But what if the rebinding in the body of the loop was to a more
complicated expression on x? What if it was something like x=g(x)? I'm
afraid we'd end up with weird constructs such as:

def ff(g):
    x = [0]
    while x[-1] < 100000:
        print x[-1],
        x.append(g(x[-1]))
    print

if we had to avoid rebinding completely. Or, you could force the
iteration to be changed into a recursion... but then you'd better be
prepared to remove the current 'recursion limit' AND offer tail-call
optimization possibilities, at the very least.
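
For instance (a sketch of that rewrite, assuming the same recurrence as
`f' above), the iteration recast as single-binding recursion runs straight
into the default recursion limit of about a thousand frames, since Python
performs no tail-call optimization:

def f_rec(x, a, b):
    # single-assignment style: each x is a fresh binding, never rebound
    if x >= 100000:
        return
    print(x)
    f_rec(a*x + b, a, b)   # with a=1, b=1 this needs 100000 nested calls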

All in all, I fail to see what gains would be expected by making Python
into a single-assignment or single-binding language, even on a module by
module basis, to repay this kind of awkwardness.


Alex
 

Nick Vargish

It's kind of like having a guy who juggles chainsaws wearing body armor
arguing with a guy who juggles rubber chickens wearing a T-shirt about who's
in more danger." --Roy Smith, c.l.py, 2004.05.23

If it's Nethack, the guy in the T-shirt is in more danger. A _lot_
more danger.

Nick
 

Michael Tobis

All in all, I fail to see what gains would be expected by making Python
into a single-assignment or single-binding language, even on a module by
module basis, to repay this kind of awkwardness.

Just to be clear, if anyone was suggesting that, it wasn't me.

It would be helpful on occasion in a numarray development context to
have *specific* references be bound only once, and I wouldn't be
surprised if the compiler could use that information to good
advantage too.

However, this subthread is about whether rebinding is completely
avoidable. Others including you have come up with better reasons than I
did that it's not.

If rebinding is normal, I think the 'epselon' bug can't be dismissed as
completely avoidable. This is something that I gather you disagree with
on the presumption that everyone who writes Python is sufficiently
talented that they can use their skills to avoid getting too far into
this trap.

Since I'm very much a believer in Python as a beginner's language, that
doesn't satisfy me. "Declarations are impractical" would satisfy me,
but so far I'm not completely convinced of that.

mt
 

Thomas Bartkus

Since I'm very much a believer in Python as a beginner's language, that
doesn't satisfy me. "Declarations are impractical" would satisfy me,
but so far I'm not completely convinced of that.

As has been pointed out, it's not a big deal for a programmer who's been
there, done that. But the original poster's example is a beginner's trap for
certain.

*If* Python were a "beginner's language", then it would be missing one of
its training wheels.
Thomas Bartkus
 

Steve Holden

Thomas said:
Hhhmmh!
I presume you are going through the list and want to guarantee that every
item you encounter is a tuple! So if it ain't - you just re-declare
"something" to be a tuple. What was formerly a single string, integer,
whathaveyou is now a tuple *containing* a single string, integer,
whathaveyou.

Do you do it that way because you can? Or because you must?
And
If the former - is it a good idea?
OR did I just miss your codes intent completely?
I suspect you missed the intent completely.
My first inclination would be to create a new variable (type = tuple) and
accept (or typecast) each "something" into it as required. The notion that

OK, but if you do that then surely the loop looks like

for something in lst:
    somethingElse = something
    if type(somethingElse) != type(()):
        somethingElse = ...
you just morph "something" still seems rather abhorrent. It hadn't occurred
to me that iterating through a list like that means the iterator "something"
might need to constantly morph into a different type according to a list's
possibly eclectic contents.
Now I suspect I'm missing *your* point.
It might explain why the interpreter is incapable of enforcing a type. It
would forbid iterating through lists containing a mix of different types.
EXCEPT - I must note that other languages manage to pull off exactly such a
trick with a variant type. When you need to pull off a feat such as this,
you declare a variant type where the rules are relaxed *for that situation
only* and there is no need to toss the baby out with the bathwater.
Well I have to say that the longer I program (and I've been at it nearly
forty years now) the more I am convinced that type declarations don't
actually help. I can see their value in terms of code optimization, but
there is no way that I see them as an error-detection mechanism. "You
have tried to assign a string to an integer variable" just isn't a
mistake I make a lot.

regards
Steve
 

Eric Pederson

As has been pointed out, it's not a big deal for a programmer who's been
there, done that. But the original poster's example is a beginner's trap for
certain.

*If* Python were a "beginner's language", then it would be missing one of
its training wheels.


If you put training wheels on your bicycle, it's not going to be any good for moderately serious cycling. The OP was clearly not new to programming, and it was a hypothetical problem.

We're all adults here (even my 12-year-old!) - and we have only beginners in my house. This purported wart has never bothered me -- Python is so friendly to develop in. If this sort of code error bites my 12-year-old, I'm sure he will be able to find it and feel good about fixing it. It's not the kind of code error that has you shutting down your computer at 4AM, perplexed and frustrated - those feelings are usually attributable to subtle, complex, dastardly language features (unexpected behaviors). Just my opinion, of course.

Among the great and enlightening posts in this thread, I liked this:

QOTW?
"""We should concentrate on *real* problems, ones that exist in real code, not ones that mostly
exist in wild-eyed prose that consists of predictions of pain and death
that conspicuously fail to occur, no matter how many times they are
repeated or we are exhorted to heed them or face our doom. """

http://groups-beta.google.com/group...e&mode=thread&noheader=1#doc_178fef06830cc779


[Go PyPy!]



Eric Pederson
http://www.songzilla.blogspot.com

:::::::::::::::::::::::::::::::::::
domainNot="@something.com"
domainIs=domainNot.replace("s","z")
ePrefix="".join([chr(ord(x)+1) for x in "do"])
mailMeAt=ePrefix+domainIs
:::::::::::::::::::::::::::::::::::
 

Paddy McCarthy

Alexander said:
Hello All!

I'm a novice in Python, and I find one very bad thing (from my point of view)
in the language. There is no keyword or syntax to declare a variable, like
'var' in Pascal, or special syntax in C. It can cause very ugly errors, like
this:

epsilon=0
S=0
while epsilon<10:
    S=S+epsilon
    epselon=epsilon+1
print S

It will print zero, and it is not easy to find such a bug!

Even Visual Basic has an 'Option Explicit' keyword! Maybe Python also has such
a feature and I just don't know about it?

Alexander, (e-mail address removed)
Advocates always say Type Checking, but so often it seems like Type
Constriction. - To hell with it!
I don't believe I would be more productive by cluttering Python with the
Type schemes and variable declarations found in languages like Pascal,
C, Basic, C++ and Java.
People have said that there may be a more intelligent way, maybe type
inferencing? But no, please, nothing like the above, it would just get
in the way.

-- Paddy.
 

Alexander Zatvornitskiy

Hi Paddy!

PM> Advocates always say Type Checking, but so often it seems like Type
PM> Constriction. - To hell with it!
PM> I don't believe I would be more productive by cluttering Python with
PM> the Type schemes and variable declarations found in languages like
PM> Pascal, C, Basic, C++ and Java. People have said that there may be a
PM> more intelligent way, maybe type inferencing? But no, please, nothing
PM> like the above, it would just get in the way.
No, I don't even think about compulsory type checking, and so on. I just want
something like this:

var epsilon=0
var S
S=0
while epsilon<10:
    S=S+epsilon
    epselon=epsilon+1  # interpreter should show error here, if it's in "strict mode"
print S

It is easy, and clean-looking.

Alexander, (e-mail address removed)
 

Alexander Zatvornitskiy

Hi, Alex!

31 jan 2005 at 13:46, Alex Martelli wrote:

(sorry for the delay, my mail client didn't highlight your answer for me)

AM> Since the lack of declarations is such a crucial design choice for
AM> Python, then, given that you're convinced it's a very bad thing, I
AM> suggest you give up Python in favor of other languages that give you
AM> what you crave.
Well, I like Python. But, like every language I know, it has some bad sides
which I don't like. One of them in Python is the lack of variable declarations;
another (this problem is common with C/C++) is:
===
0
===
(I understand why it is so, but I don't like it anyway. Such behaviour can also
cause some hard-to-find bugs)

AM> issue for you. Therefore, using Python, for you, would mean you'd be
AM> fighting the language and detesting its most fundamental design
AM> choice: and why should you do that? There are zillions of languages
AM> -- use another one.
Thank you for advice:)
AM> Actually, this while loop never terminates and never prints anything,
Oh, I didn't notice it:)
AM> so that's gonna be pretty hard to ignore;-).
AM> But, assume the code is
AM> slightly changed so that the loop does terminate. In that case...:

AM> It's absolutely trivial to find this bug, if you write even the
AM> tiniest and most trivial kinds of unit tests. If you don't even have
AM> enough unit tests to make it trivial to find this bug, I shudder to
AM> think at the quality of the programs you code.

Thank you for advice again, I already use different tests in my work and I
found them useful. But! I want to use Python for prototyping. I want to write
my algorithms in it, just to see that they do roughly what they must do. Next,
I want to play with them to understand their properties and limitations.
If such a program sometimes fails, or is not very fast, or sometimes shows
wrong results, it's not a big problem. So, I use Python as a tool for
prototyping.

After I debug the algorithm and understand it, I can rewrite it in C++ (if I
need), carefully, paying attention to speed, side effects, memory requirements,
and so on. With full testing, of course.



Hence, from "language for prototyping" I need next features:

1. I want to think about the algorithm (!!!), and the language must help me do
it. It must take care of boring things like memory management, garbage
collection, strict type inference, my typos. It must provide easy-to-use
packages for many of my low-level needs. And so on.

2. goto 1:)



Python is really very good for such demands. Except one: it forces me to type
variable names carefully:) In other words, it diverts my attention from the
algorithm to typing.

AM> Even just focusing on
AM> typos,
AM> think of how many other typos you could have, besides the misspelling
AM> of 'epsilon', that unit tests would catch trivially AND would be
AM> caught in no other way whatsoever -- there might be a <= where you
AM> meant a <, a 1.0 where you meant 10, a - where you meant a +, etc,
AM> etc.
AM> You can't live without unit tests. And once you have unit tests, the
AM> added value of declarations is tiny, and their cost remains.

Fine! Let the interpreter never show us errors like division by zero, syntax
errors, and so on. If a file is not found, the library doesn't need to say so.
Just skip it!!! Because every, even simple, test will find such bugs. Once you
have unit tests, the added value of <anything> is tiny, and their cost remains.

:)

Or, maybe, we should ask the interpreter to find and prevent as many errors as
it can?

And, one more question: do you think code like this:

var S=0
var eps

for eps in xrange(10):
    S=S+ups

is very bad? Please explain your answer:)
AM> Python has no declarations whatsoever. If you prefer Visual Basic, I
AM> strongly suggest you use Visual Basic, rather than pining for Visual
AM> Basic features in Python. If and when your programming practices ever
AM> grow to include extensive unit-testing and other aspects of agile
AM> programing, THEN you will be best advised to have a second look at
AM> Python, and in such a case you will probably find Python's strengths,
AM> including the lack of declarations, quite compelling.

Uh! And you! And you!... And you must never even come close to any languages
with variable declaration! Even to Visual Basic! :)

AM> brain". I find it's true: Python gets out of my way and let me solve
AM> problems much faster, because it fits my brain, rather than changing
AM> the way I think.

I agree with you.

AM> If Python doesn't fit YOUR brain, for example because your brain is
AM> ossified around a craving for the declaration of variables, then,
AM> unless you're specifically studying a new language just for personal
AM> growth purposes, I think you might well be better off with a language
AM> that DOES, at least until and unless your brain changes by other
AM> means.

Thank you for explanation of your opinion.

Alexander, (e-mail address removed)
 

Jorgen Grahn

I disagree: I believe that, if the poster really meant what he wrote, he
may well be happier using other languages and all the declarations he
cherishes, so recommending that course of action to him is quite proper
on my part.

You are wrong, for once.

That poster could have been me a few years back, when I was younger, more
stupid, more arrogant and less experienced. He'll get over it.

Also, what he described /is/ a problem. I still get bitten by it now and
then. It's just that it has even larger /benefits/ which aren't obvious at
first.

In BASIC, /bin/sh and perl without 'use strict', the lack of declarations is
only a negative thing without benefits. If you know those languages, it's
easy to jump to the conclusion that this applies to Python, too.

/Jorgen
 

Alexander Zatvornitskiy

Hello, Peter!

On 31 January 2005 at 09:09, Peter Otten wrote in his message to All:
PO> pychecker may help you find misspelled variable names. You have to
PO> move the code into a function, though:

PO> $ cat epsilon.py
....skipped...
PO> $ pychecker epsilon.py
PO> epsilon.py:6: Local variable (epselon) not used

Well, I can change it a little to pass this check. Just add a "print epselon"
line.

I think as soon as I make such an error, I will write a special checker. It
will take code like this:

def loop():
    #var S,epsilon
    epsilon=0
    S=0
    while epsilon<10:
        S=S+epsilon
        epselon=epsilon+1
    print S

Such a checker will say "error: epselon is not declared!" if I use something
not declared. If everything is OK, it will call pychecker. Simple and tasty,
isn't it?
Of course, it may be difficult to handle fields of classes:
MyClass.epsElon=MyClass.epsilon+1
but it is solvable, I think. What do you think, is it a good idea?
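
A rough sketch of the checker Alexander describes (my illustration, using
the modern ast module rather than 2005's compiler package; the `#var'
comment format is his, everything else is made up):

import ast, re

def check(source):
    # collect the names listed in `#var ...' comments
    declared = set()
    for match in re.finditer(r'#var\s+([\w, ]+)', source):
        declared.update(name.strip() for name in match.group(1).split(','))
    # flag any name that is assigned but never declared
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            if node.id not in declared:
                print("error: %s is not declared!" % node.id)

check('''
def loop():
    #var S,epsilon
    epsilon = 0
    S = 0
    while epsilon < 10:
        S = S + epsilon
        epselon = epsilon + 1
    print(S)
''')
# prints: error: epselon is not declared!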


Alexander, (e-mail address removed)
 

Alex Martelli

Alexander Zatvornitskiy said:
Hi, Alex!

31 jan 2005 at 13:46, Alex Martelli wrote:

(sorry for the delay, my mail client didn't highlight your answer for me)

AM> Since the lack of declarations is such a crucial design choice for
AM> Python, then, given that you're convinced it's a very bad thing, I
AM> suggest you give up Python in favor of other languages that give you
AM> what you crave.
Well, I like Python. But, like every language I know, it has some bad sides
which I don't like. One of them in Python is the lack of variable declarations;
another (this problem is common with C/C++) is:
===
0
===
(I understand why it is so, but I don't like it anyway. Such behaviour
can also cause some hard-to-find bugs)

You're conflating a fundamental, crucial language design choice, with a
rather accidental detail that's already acknowledged to be suboptimal
and is already being fixed (taking years to get fixed, of course,
because Python is always very careful to keep backwards compatibility).

Run Python with -Qnew to get the division behavior you probably want, or
-Qwarn to just get a warning for each use of integer division so those
hard to find bugs become trivially easy to find. Or import from the
future, etc, etc.
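
For example (an era-appropriate sketch):

# per-module fix, available since Python 2.2:
from __future__ import division

print(1/2)    # 0.5 -- true division, the future default
print(1//2)   # 0   -- floor division, now spelled explicitly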

The fact that in Python there are ONLY statements, NO declarations, is a
completely different LEVEL of issue -- a totally deliberate design
choice taken in full awareness of all of its implications. I do not see
how you could be happy using Python if you think it went wrong in such
absolutely crucial design choices.

AM> issue for you. Therefore, using Python, for you, would mean you'd be
AM> fighting the language and detesting its most fundamental design
AM> choice: and why should you do that? There are zillions of languages
AM> -- use another one.
Thank you for advice:)

You're welcome.
AM> Actually, this while loop never terminates and never prints anything,
Oh, I didn't notice it:)

Hit control-C (or control-Break or whatever other key combination
interrupts a program on your machine) when the program is just hanging
there forever doing nothing, and Python will offer a traceback showing
exactly where the program was stuck.

In any case, your assertion that "it will print zero" is false. You
either made it without any checking, or chose to deliberately lie (in a
rather stupid way, because it's such an easy lie to recognize as such).

Fine! Let the interpreter never show us errors like division by zero, syntax
errors, and so on. If a file is not found, the library doesn't need to say so.
Just skip it!!! Because every, even simple, test will find such bugs. Once you
have unit tests, the added value of <anything> is tiny, and their cost remains.

Another false assertion, and a particularly ill-considered one in ALL
respects. Presence and absence of files, for example, is an
environmental issue, notoriously hard to verify precisely with unit
tests. Therefore, asserting that "every, even simple, test will find"
bugs connected with program behavior when a file is missing shows either
that you're totally ignorant about unit tests (and yet so arrogant to
not let your ignorance stop you from making false unqualified
assertions), or shamelessly lying.

Moreover, there IS no substantial cost connected with having the library
raise an exception as the way to point out that a file is missing, for
example. It's a vastly superior approach to the old idea of "returning
error codes" and forcing the programmer to check for those at every
step. If the alternative you propose is not to offer ANY indication of
whether a file is missing or present, then the cost of THAT alternative
would most obviously be grievous -- essentially making it impossible to
write correct programs, or forcing huge redundancy if the check for file
presence must always be performed before attempting I/O.
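
To make the contrast concrete (a sketch, not from the original posts):

# exception style: no pre-check, no error code to test at every step;
# the missing file surfaces exactly where it matters
try:
    data = open('no-such-file.txt').read()
except IOError as e:
    print("the library reports the missing file: %s" % e)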

In brief: you're not just wrong, you're so totally, incredibly, utterly
and irredeemably wrong that it's not even funny.

And, one more question: do you think code like this:

var S=0
var eps

for eps in xrange(10):
    S=S+ups

is very bad? Please explain your answer:)

Yes, the many wasted pixels in those idiotic 'var ' prefixes are a total
and utter waste of programmer time. Mandated redundancy, the very
opposite of the spirit of Python.
Uh! And you! And you!... And you must never even come close to any languages
with variable declaration! Even to Visual Basic! :)

Wrong again. I've made a good living for years as a C++ guru, I still
cover the role of C++ MVP for the Brainbench company, I'm (obviously)
totally fluent in C (otherwise I could hardly contribute to the
development of Python's C-coded infrastructure, now could I?), and as it
happens I have a decent command (a bit rusty for lack of recent use) of
dozens of other languages, including several Basic dialects and Visual
Basic in particular.

It should take you about 20 seconds with Google to find this out about
me, you know? OK, 30 seconds if you're on a slow dialup modem line.

So, I guess you just *LIKE* being utterly and monumentally wrong, since
it would be so easy to avoid at least some of the bloopers you instead
prefer to keep making.

I *CHOOSE* Python, exactly because I have vast programming experience in
such a huge variety of languages, across all kinds of application areas,
methodologies, and sizes and levels of programming teams. It's not
``perfect'', of course, being a human artifact, but it does implement
its main design ideas consistently and brilliantly, and gets the
inevitable compromises just about right.


Alex
 
