"no variable or argument declarations are necessary."

  • Thread starter James A. Donald

James A. Donald

I am contemplating getting into Python, which is used by engineers I
admire - google and Bram Cohen, but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?
 

D H

James said:
I am contemplating getting into Python, which is used by engineers I
admire - google and Bram Cohen, but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?

It's a fundamental part of python, as well as many other scripting
languages. If you're not comfortable with it, you might try a language
that forces you to declare every variable first, like Java or C++.
Otherwise, in python, I'd recommend using variable names that you can
easily spell. Also do plenty of testing of your code. It's never been
an issue for me, although it would be nicer if python were
case-insensitive, but that is never going to happen.
 

Will McGugan

James said:
I am contemplating getting into Python, which is used by engineers I
admire - google and Bram Cohen, but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?

A variable has to be assigned to before it is used, otherwise a
NameError exception is thrown:

>>> print x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'x' is not defined
>>> x = 2
>>> print x
2

Typos in variable names are easily discovered unless the typo happens to
exist in the current context.

Will McGugan
 

Jean-François Doyon

Wow, it never even occurred to me that someone would have a problem with this!

But, this might help:

http://www.logilab.org/projects/pylint

In more detail:
> Surely that means that if I misspell a variable name, my program will
> mysteriously fail to work with no error message.

No, the error message will be pretty clear actually :) You are
attempting to use a variable that doesn't exist! This would be the same
type of message you would get from a compiled language, just at a
different point in time (runtime vs. compile time).
> If you don't declare variables, you can inadvertently re-use a
> variable used in an enclosing context when you don't intend to,

Possible, though good design should always keep any such situation at
bay. Python is OO, hence scoping should rarely be a problem ... globals
are mostly evil, so the context at any given time should be the method;
you'd need a fairly big and complex method to start losing track of
what you called what ... Also, a good naming convention should keep this
at bay.

Also, because things are interpreted, you don't (normally) need to put
extensive forethought into things as you do with compiled languages. You
can run things quickly and easily on demand; a misnamed variable will be
clearly indicated and easily solved in a matter of minutes.

Using a smart IDE might also help prevent such problems before they occur?

Hope you enjoy python :)

J.F.
 

dwelch91

The easiest way to avoid this problem (besides watching for NameError
exceptions) is to use an editor that has automatic name completion.
Eric3 is a good example. So, even though in theory it could be an
issue, I rarely run into this in practice.

-Don
 

James A. Donald

Jean-François Doyon:
No, the error message will be pretty clear actually :)

Now why, I wonder, does this loop never end :)
egold = 0
while egold < 10:
    ego1d = egold+1
 

Michael

James said:
Now why, I wonder, does this loop never end :)
egold = 0
while egold < 10:
    ego1d = egold+1

I know (hope! :) that's a tongue-in-cheek question; however, the answer
as to why that's not a problem has more to do with development habits
than language enforcement. (Yes, with bad habits that can and will
happen.)

Much python development is test-driven. Either formally using testing
frameworks (I'm partial to unittest, but others like other ones), or
informally using a combination of iterative development and the
interactive shell. Or a mix of the two.

With a formal test framework you would have noticed the bug above
almost instantly - because your test would never finish (which would
presumably count as a failure for the test that exercises that code).

Whilst that might seem odd, what you're actually doing with type
declarations is saying "if names other than these are used, a bug
exists" and "certain operations on these names are valid". (as well
as a bunch of stuff that may or may not relate to memory allocation
etc)

With test driven development you are specifically testing that the
functionality you want to exist *does* exist. TDD also provides a few
tricks that can help you get around writer's block, and also catch bugs
like the one above easily and, more importantly, early.

Bruce Eckel (author of a fair few interesting C++ & Java books :) has a
couple of interesting essays on this topic which I think also take this
idea a lot further than is probably suitable for here:

* Strong Typing vs. Strong Testing:
http://www.mindview.net/WebLog/log-0025
* How to Argue about Typing
http://www.mindview.net/WebLog/log-0052

For what it's worth, if you've not come across test driven development
before then I'd highly recommend Kent Beck's "Test Driven Development: By
Example". You'll either love it or hate it. IMO, it's invaluable though!
I suppose, though, the difference between static-type-based checking and
test driven development is that static types only really help you find
bugs (in terms of aiding development), whereas TDD actually helps you
write your code. (Hopefully with fewer bugs!)

Best Regards,


Michael.
 

George Sakkis

Michael said:
James said:
Now why, I wonder, does this loop never end :)
egold = 0
while egold < 10:
    ego1d = egold+1

I know (hope! :) that's a tongue-in-cheek question; however, the answer
as to why that's not a problem has more to do with development habits
than language enforcement. (Yes, with bad habits that can and will
happen.)

[snipped description of test-driven development culture]

As an aside, more to the point of the specific erroneous example is the lack of the standard python
idiom for iteration:

for egold in xrange(10):
    pass

Learning and using standard idioms is an essential part of learning a language; python is no
exception to this.
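The reason the idiom helps here is that a for loop hands iteration over to the iterator itself, so there is no manually maintained counter to mistype. A minimal contrast (using range(), the Python 3 spelling of xrange()):

```python
# While-loop version: termination depends on the body correctly
# updating 'egold', so a typo there can loop forever.
egold = 0
while egold < 10:
    egold = egold + 1

# For-loop version: the iterator drives termination, so a typo in
# the body can produce wrong values, but not an infinite loop.
total = 0
for egold in range(10):
    total += egold
print(total)  # 45
```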

George
 

Antoon Pardon

Op 2005-10-03 said:
Michael said:
James said:
Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.
No, the error message will be pretty clear actually :)
Now why, I wonder, does this loop never end :)
egold = 0
while egold < 10:
    ego1d = egold+1

I know (hope! :) that's a tongue-in-cheek question; however, the answer
as to why that's not a problem has more to do with development habits
than language enforcement. (Yes, with bad habits that can and will
happen.)

[snipped description of test-driven development culture]

As an aside, more to the point of the specific erroneous example is the lack of the standard python
idiom for iteration:

for egold in xrange(10):
    pass

Learning and using standard idioms is an essential part of learning a language; python is no
exception to this.

Well I'm a bit getting sick of those references to standard idioms.
There are moments those standard idioms don't work, while the
gist of the OP's remark still stands like:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1
 

Duncan Booth

Antoon said:
Well I'm a bit getting sick of those references to standard idioms.
There are moments those standard idioms don't work, while the
gist of the OP's remark still stands like:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1

Oh come on. That is a completely contrived example, and besides you can
still rewrite it easily using the 'standard idiom' at which point it
becomes rather clearer that it is in danger of being an infinite loop even
without assigning to the wrong variable.

for egold in range(10):
    while test():
        pass

I find it very hard to believe that anyone would actually mistype ego1d
while intending to type egold (1 and l aren't exactly close on the
keyboard), and if they typed ego1d thinking that was the name of the loop
variable they would type it in both cases (or use 'ego1d += 1') which would
throw an exception.

The only remaining concern is the case where both ego1d and egold are
existing variables, or more realistically you increment the wrong existing
counter (j instead of i), and your statically typed language isn't going to
catch that either.

I'm trying to think back through code I've written over the past few years,
and I can remember cases where I've ended up with accidental infinite loops
in languages which force me to write loops with explicit incremements, but
I really can't remember that happening in a Python program.

Having just grepped over a pile of Python code, I'm actually surprised to
see how often I use 'while' outside generators, even in cases where a 'for'
loop would be sensible. In particular I have a lot of loops of the form:

while node:
    ... do something with node ...
    node = node.someAttribute

where someAttribute is parentNode or nextSibling or something. These would,
of course be better written as for loops with appropriate iterators. e.g.

for node in node.iterAncestors():
    ... do something with node ...
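A sketch of how that rewrite might look. The Node class and iterAncestors are hypothetical names (borrowed from the post, not from any real library), shown here as a minimal generator:

```python
class Node:
    """Minimal tree node, for illustration only."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parentNode = parent

    def iterAncestors(self):
        # Generator form of the 'while node:' pattern: walk up the
        # parent chain, yielding each node starting with self.
        node = self
        while node is not None:
            yield node
            node = node.parentNode

root = Node("root")
leaf = Node("leaf", Node("middle", root))
print([n.name for n in leaf.iterAncestors()])  # ['leaf', 'middle', 'root']
```

The traversal logic lives in one place, and every call site gets the cleaner for-loop form.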
 

Antoon Pardon

Op 2005-10-03 said:
Oh come on. That is a completely contrived example,

No it is not. You may not have had any use for this
kind of code, but unfamiliarity with certain types
of problems doesn't make something contrived.
and besides you can
still rewrite it easily using the 'standard idiom' at which point it
becomes rather clearer that it is in danger of being an infinite loop even
without assigning to the wrong variable.

for egold in range(10):
    while test():
        pass

And trying to force this into the standard idiom is just silly.
When people write examples they try to get the essential
thing into the example in order to make things clear to
other people. The real code may be a lot more complicated.
That you can rework the example into the standard idiom doesn't mean
the real code someone is working with can be reworked in a like manner.
I find it very hard to believe that anyone would actually mistype ego1d
while intending to type egold (1 and l aren't exactly close on the
keyboard), and if they typed ego1d thinking that was the name of the loop
variable they would type it in both cases (or use 'ego1d += 1') which would
throw an exception.

Names do get misspelled and sometimes that misspelling is hard to spot.
That you find the specific misspelling used as an example contrived
doesn't change that.
The only remaining concern is the case where both ego1d and egold are
existing variables, or more realistically you increment the wrong existing
counter (j instead of i), and your statically typed language isn't going to
catch that either.

A language where variables have to be declared before use would allow
the compiler to report all misspelled (undeclared) variables in one go,
instead of just crashing each time one is encountered.
I'm trying to think back through code I've written over the past few years,
and I can remember cases where I've ended up with accidental infinite loops
in languages which force me to write loops with explicit incremements, but
I really can't remember that happening in a Python program.

Good for you, but you shouldn't limit your view to your experience.
Having just grepped over a pile of Python code, I'm actually surprised to
see how often I use 'while' outside generators, even in cases where a 'for'
loop would be sensible. In particular I have a lot of loops of the form:

while node:
    ... do something with node ...
    node = node.someAttribute

where someAttribute is parentNode or nextSibling or something. These would,
of course be better written as for loops with appropriate iterators. e.g.

for node in node.iterAncestors():
    ... do something with node ...

That "of course" is unfounded. They may be better in your specific
code, but what you showed is insufficient to decide that. The first
code could, for instance, be reversing the sequence in the part that
is labeled ... do something with node ...
 

Duncan Booth

Antoon said:
A language where variables have to be declared before use would allow
the compiler to report all misspelled (undeclared) variables in one go,
instead of just crashing each time one is encountered.

Wrong. It would catch at compile-time those misspellings which do not
happen to coincide with another declared variable. It would give the
programmer a false sense of security since they 'know' all their
misspellings are caught by the compiler. It would not be a substitute for
run-time testing.

Moreover, it adds a burden on the programmer who has to write all those
declarations, and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code. Also there is
increased overhead when maintaining the code as all those declarations have
to be kept in line as the code changes over time.

It's a trade-off: there is a potential advantage, but lots of
disadvantages. I believe that the disadvantages outweigh the possible
benefit. Fortunately there are plenty of languages to choose from out
there, so those who disagree with me are free to use a language which does
insist on declarations.
 

bruno modulix

James said:
I am contemplating getting into Python, which is used by engineers I
admire - google and Bram Cohen, but was horrified

"horrified" ???

Ok, so I'll give you more reasons to be 'horrified':
- no private/protected/public access restriction - it's just a matter of
conventions ('_myvar' -> protected, '__myvar' -> private)
- no constants (here again just a convention: a name in all uppercase
is considered a constant - but nothing will prevent anyone from modifying it)
- possibility to add/delete attributes to an object at runtime
- possibility to modify a class at runtime
- possibility to change the class of an object at runtime
- possibility to rebind a function name at runtime
.....

If you find all this horrifying too, then high-level dynamic languages are
not for you !-)
to read

"no variable or argument declarations are necessary."

No declarative static typing is necessary - which is not the same thing.
In Python, type information belongs to the object, not to the names that
are bound to the object.

Of course you cannot use a variable that is not defined ('defining' a
variable in Python being just a matter of binding a value to a name).
Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

Depends. If you try to use an undefined variable, you'll get a name error:

>>> print var1
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'var1' is not defined

Now if the typo is on the LHS, you'll just create a new name in the
current namespace:

myvra = 42 # should have been 'myvar' and not 'myvra'

But you'll usually discover it pretty soon:

>>> print myvar
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'myvar' is not defined

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to,

yes, but this is quite uncommon.

The 'enclosing context' is composed of the 'global' (which should be
named 'module') namespace and the local namespace. Using globals is bad
style, so it shouldn't be too much of a concern, but anyway, trying to
*assign* to a var living in the global namespace without having
previously declared the name as global will not overwrite the global
variable - it will only create a local name that shadows the global one.
Since Python is very expressive, function code tends to be small, so the
chances of inadvertently reusing a local name are usually pretty low.
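That scoping rule can be seen in a couple of lines (names made up for illustration): assigning inside a function without a global statement binds a new local name and leaves the module-level one untouched.

```python
counter = 0  # module-level ("global") name

def bump():
    # No 'global counter' here, so this assignment creates a new
    # *local* name; the module-level counter is shadowed, not changed.
    counter = 1
    return counter

bump()
print(counter)  # still 0
```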

Now we have the problem of shadowing inherited attributes in OO. But
then the same problem exists in most statically typed OOPLs.
or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

Nope. Trying to 'reference' an undefined name raises a name error.
What can one do to swiftly detect this type of bug?

1/ write small, well-decoupled code
2/ use pychecker or pylint
3/ write unit tests

You'll probably find - as I did - that this combination (dynamic typing
+ [pylint|pychecker] + unit tests) usually leads to fewer bugs than just
relying on declarative static typing.

What you fear can become reality with some (poorly designed, IMHO)
scripting languages like PHP, but it should not be a concern with Python.
Try working with it (and not fighting against it), and you'll see for
yourself if it fits you.
 

bruno modulix

James said:
Now why, I wonder, does this loop never end :)

egold = 0
while egold < 10:
    ego1d = egold+1

A more pythonic style would be:

egold = 0
while egold < 10:
    ego1d += 1

And that one raises a name error !-)
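That behaviour is easy to check: augmented assignment has to read the name before writing it, so the very first pass through the loop raises.

```python
egold = 0
error = None
try:
    while egold < 10:
        ego1d += 1  # typo: += must read 'ego1d' first, which doesn't exist
except NameError as exc:
    error = exc
print(error)  # the typo surfaces immediately as a NameError
```

So of the two ways to mistype the increment, the augmented-assignment form fails fast; only the plain-assignment form loops silently.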
 

bruno modulix

dwelch91 said:
The easiest way to avoid this problem (besides watching for NameError
exceptions) is to use an editor that has automatic name completion.
Eric3 is a good example. So, even though in theory it could be an
issue, I rarely run into this in practice.

I don't use emacs automatic completion, and I still rarely (read:
'never') run into this kind of problem in Python.
 

Antoon Pardon

Op 2005-10-03 said:
Wrong. It would catch at compile-time those misspellings which do not
happen to coincide with another declared variable.

Fine, but it is still better than Python, which will crash each time
one of these is encountered.
It would give the
programmer a false sense of security since they 'know' all their
misspellings are caught by the compiler. It would not be a substitute for
run-time testing.

I don't think anyone with a little bit of experience will be so naive.
Moreover, it adds a burden on the programmer who has to write all those
declarations,

So? He has to write all those lines of code too.

People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.

I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO.
and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code.

Well, maybe we should remove all those comments from code too,
because all they do is add more lines for people to read.
Also there is
increased overhead when maintaining the code as all those declarations have
to be kept in line as the code changes over time.

Which is good. Just as you have to keep the unittests in line as code
changes over time.
 

Duncan Booth

Antoon said:
Well maybe we should remove all those comments from code too,
because all it does is add more lines for people to read.

You'll get no argument from me there. The vast majority of comments I come
across in code are a total waste of time.
 

Christophe

Steven D'Aprano wrote:
Well I'm a bit getting sick of those references to standard idioms.
There are moments those standard idioms don't work, while the
gist of the OP's remark still stands like:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1


for item in [x for x in xrange(10) if test()]:

But it isn't about the idioms. It is about the trade-offs. Python allows
you to do things that you can't do in other languages because you
have much more flexibility than is possible with languages that
require you to declare variables before using them. The cost is, some
tiny subset of possible errors will not be caught by the compiler. But
since the compiler can't catch all errors anyway, you need to test for
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.


Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables is of
little practical benefit.

As a matter of fact, doing that one on an HP48 calculator with
unit-annotated values would have worked perfectly, except for the
distance < 27 check, which would have raised an error.
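The HP48 idea can be sketched with a tiny unit-tagged value type (entirely hypothetical, just to show the principle): the feet-plus-metres addition from the Mars example fails immediately instead of producing a nonsense distance.

```python
class Quantity:
    """Minimal unit-tagged number: refuses to mix units."""
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        if self.unit != other.unit:
            # Mixing units is a bug, so fail loudly at the point of use.
            raise TypeError("cannot add %s to %s" % (other.unit, self.unit))
        return Quantity(self.value + other.value, self.unit)

x = Quantity(12.0, "feet")
y = Quantity(15.0, "metres")
try:
    distance = x + y
except TypeError as exc:
    print(exc)  # the feet/metres mix-up is caught here, not on Mars
```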
 

Steven D'Aprano

Well I'm a bit getting sick of those references to standard idioms.
There are moments those standard idioms don't work, while the
gist of the OP's remark still stands like:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1

for item in [x for x in xrange(10) if test()]:

But it isn't about the idioms. It is about the trade-offs. Python allows
you to do things that you can't do in other languages because you
have much more flexibility than is possible with languages that
require you to declare variables before using them. The cost is, some
tiny subset of possible errors will not be caught by the compiler. But
since the compiler can't catch all errors anyway, you need to test for
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.


Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables is of
little practical benefit.
 

Guest

People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.

I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO.

+1

Some people just don't get the simple fact that declarations are
essentially a kind of unit test you get (almost) for free, and the
compiler is a testing framework for them.
 
