"no variable or argument declarations are necessary."

  • Thread starter James A. Donald

Steve Holden

Brian said:
Which is evaluated at runtime, does not require that the actual global
variable be pre-existing, and does not create the global variable if not
actually assigned. I think that is pretty different than your proposal
semantics.
I believe that "global" is the one Python statement that isn't actually
executable, and simply conditions the code generated during compilation
(to bytecode).

Hard to see why someone would want to use a global declaration unless
they were intending to assign to it, given the semantics of access.[...]

regards
Steve
 

Antoon Pardon

Op 2005-10-05 said:
That is one possibility, but I think that it would be better to use a
keyword at the point of the assignment to indicate assignment to an outer
scope. This fits with the way 'global' works: you declare at (or near) the
assignment that it is going to a global variable, not in some far away part
of the code, so the global nature of the assignment is clearly visible.

As far as I understand, people don't like global very much, so I don't
expect that a second keyword with the same kind of behaviour has
any chance.
The
'global' keyword itself would be much improved if it appeared on the same
line as the assignment rather than as a separate declaration.

e.g. something like:

var1 = 0

def f():
    var2 = 0

    def g():
        outer var2 = 1   # Assign to outer variable
        global var1 = 1  # Assign to global

And what would the following do:

def f():
    var = 0

    def g():
        var = 1

        def h():
            outer var = 2 * var + 1

        h()
        print var

    g()
    print var

f()
 

Ron Adam

Antoon said:
I want to catch all errors of course.

Yes, of course, and so do other programmers. What I mean is to try and
break it down into specific instances and then see what the best
approach is for each one.

When I first started learning Python I looked for these features as well,
but after a while my programming style changed and I don't depend on
types and names to check my data nearly as much now. Instead I write
better organized code and data structures with more explicit value
checks where I need them.

My concern now is having reusable code and modules I can depend on. And
also separating my data and data management operations from the user
interface. Having functions and names that don't care what type the
objects are, makes doing this separation easier.

Another situation where typeless names are useful is routines that
explicitly check the type and then do different things depending on the
type. For example, if you have a list with objects of many different
types stored in it, you can sort the contents into sublists by type.

Looking at it from a different direction, how about adding a keyword to
say, "from this point on, in this local name space, disallow new
names". Then you can do...

def few(x, y):
    a = 'a'
    b = 'b'
    i = j = k = l = None
    no_new_names
    # raise an error after here if a new name is used.
    ...
    for I in range(10):   # <-- error
    ...

This is more suitable to Python's style than declaring types or variables,
I think. Add to this explicit name-object locking to implement
constants and I think you would have most of the features you want.

so...

no_new_names     # limit any new names
lock_name name   # lock a name to its current object


Since names are stored in dictionaries, a dictionary attribute to
disallow/allow new keys, and a way to set individual elements in a
dictionary to read only would be needed. Once you can do that and it
proves useful, then maybe you can propose it as a language feature.

These might also be checked for in the compile stage and would probably
be better as it wouldn't cause any slow down in the code or need a new
dictionary type.
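(As a rough illustration, not an existing feature: a namespace object with a freezable attribute dictionary can approximate the proposed no_new_names behaviour in today's Python. The class name `Namespace` and its methods are invented for this sketch.)

```python
class Namespace(object):
    """Attribute namespace that can be frozen against new names."""

    def __init__(self):
        object.__setattr__(self, '_frozen', False)

    def no_new_names(self):
        object.__setattr__(self, '_frozen', True)

    def __setattr__(self, name, value):
        # after freezing, only already-existing names may be rebound
        if self._frozen and not hasattr(self, name):
            raise NameError('new name %r not allowed' % name)
        object.__setattr__(self, name, value)

ns = Namespace()
ns.x = 1
ns.no_new_names()
ns.x = 2            # rebinding an existing name is still allowed
try:
    ns.y = 3        # a brand-new name now raises
except NameError as e:
    print(e)
```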

An external checker could possibly work as well if a suitable marker is
used such as a bare string.

...
x = y = z = None
"No_New_Names" # checker looks for this
...
X = y/z # and reports this as an error
return x,y

and..

...
Author = "Fred"
"Name_Lock Author" # checker sees this...
...
Author = "John" # then checker catches this
...

So there are a number of ways to possibly add these features.

Finding common use cases where these would make a real difference would
also help.

Cheers,
Ron
 

Duncan Booth

Antoon said:
As far as I understand people don't like global very much so I don't
expect that a second keyword with the same kind of behaviour has
any chance.

That's why the behaviour I suggest is different from the current behaviour
of global. Arguments against global (it is the only non-executable
statement in Python & it is confusing because people don't understand the
declaration goes inside the function instead of at global scope) don't
apply.
And what would the following do:

def f():
    var = 0

    def g():
        var = 1

        def h():
            outer var = 2 * var + 1

        h()
        print var

    g()
    print var

f()
It would follow the principle of least surprise and set the value of var in
g() of course. The variable in f is hidden, and if you didn't mean to hide
it you didn't need to give the two variables the same name.

So the output would be:
3
0

(output verified by using my hack for setting scoped variables:)
-------------------------------
from hack import *

def f():
    var = 0

    def g():
        var = 1

        def h():
            assign(lambda: var, 2 * var + 1)

        h()
        print var

    g()
    print var

f()
-------------------------------
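(For comparison, the same 3 / 0 output can be reproduced without any hack module by using a one-element list as a mutable cell in place of the proposed 'outer' rebinding; the `cell` and `results` names here are invented for the sketch, and `print()` is Python 3 spelling.)

```python
results = []

def f():
    var = 0

    def g():
        cell = [1]                     # plays the role of g()'s 'var'

        def h():
            cell[0] = 2 * cell[0] + 1  # like 'outer var = 2 * var + 1'

        h()
        results.append(cell[0])        # 3

    g()
    results.append(var)                # 0

f()
print(results)  # [3, 0]
```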
 

Brian Quinlan

Paul said:
Different how?

Aren't you looking for some sort of compile-time checking that ensures that
only declared variables are actually used? If so, how does global help?
They're necessary if you enable the option.

OK. Would it work on a per-module basis or globally?
def do_add(x->str, y->str):
    return '%s://%s' % (x, y)

def do_something(node->Node):
    if node.namespace == XML_NAMESPACE:
        return do_add('http://', node.namespace)
    elif node.namespace == ...

Wouldn't an error be generated because XML_NAMESPACE is not declared?

And I notice that you are not doing any checking that "namespace" is a
valid attribute of the node object. Isn't the class of typo errors that
you are looking to catch just as likely to occur for attributes as for
variables?

Cheers,
Brian
 

bruno modulix

Mike Meyer wrote:
(snip)
Antoon, at a guess I'd say that Python is the first time you've
encountered a dynamic language. Being "horrified" at not having
variable declarations,

Mike, "being horrified" by the (perceived as...) lack of variable
declaration was the OP's reaction, not Antoon's.
 

Magnus Lycka

Paul said:
So where are the complex templates and dangerous void pointers in ML?

You're right about that of course. There aren't any templates or
pointers in COBOL either as far as I know, and COBOL has been used
for lots of real world code (which ML hasn't).

I don't know what your point is though.

Sure, Python could have Perl-like declarations, where you just state
that you intend to use a particular name, but don't declare its type.
I don't see any harm in that.

Type declarations or inferred types would, on the other hand, make
Python considerably less dynamic, and would probably bring the need of
additional features such as function overloading etc.
 

Paul Rubin

Brian Quinlan said:
Aren't you looking for some of compile-time checking that ensures that
only declared variables are actually used? If so, how does global help?

You'd have to declare any variable global, or declare it local, or it
could be a function name (defined with def) or a function arg (in the
function scope), or maybe you could also declare things like loop
indices. If it wasn't one of the above, the compiler would flag it.
OK. Would it work on a per-module basis or globally?

Whatever perl does. I think that means per-module where the option is
given as "use strict" inside the module.
Wouldn't an error be generated because XML_NAMESPACE is not declared?

XML_NAMESPACE would be declared in the xml.dom module and the type
info would carry over through the import.
And I notice that you are not doing any checking that "namespace" is a
valid attribute of the node object. Aren't the typos class of error
that you are looking to catch just as likely to occur for attributes
as variables?

The node object is declared to be a Node instance and if the Node
class definition declares a fixed list of slots, then the compiler
would know the slot names and check them. If the Node class doesn't
declare fixed slots, then they're dynamic and are looked up at runtime
in the usual way.
 

Brian Quinlan

Paul said:
You'd have to declare any variable global, or declare it local, or it
could be a function name (defined with def) or a function arg (in the
function scope), or maybe you could also declare things like loop
indices. If it wasn't one of the above, the compiler would flag it.

OK. The Python compiler would check that the name is declared but it
would not check that it is defined before use? So this would be acceptable:

def foo():
    local x
    return x
Whatever perl does. I think that means per-module where the option is
given as "use strict" inside the module.


XML_NAMESPACE would be declared in the xml.dom module and the type
info would carry over through the import.

Problems:
1. your type checking system is optional and xml.dom does not use it
1a. even if xml.dom did declare the type, what if the type were declared
conditionally e.g.

try:
    unicode
except NameError:
    XML_NAMESPACE<str> = "..."
else:
    XML_NAMESPACE<unicode> = u"..."

2. the compiler does not have access to the names in other modules
anyway

The node object is declared to be a Node instance and if the Node
class definition declares a fixed list of slots, then the compiler
would know the slot names and check them.

How would you find the class definition for the Node object at
compile-time? And by "slots" do you mean the existing Python slots
concept or something new?
If the Node class doesn't
declare fixed slots, then they're dynamic and are looked up at runtime
in the usual way.

So only pre-defined slotted attributes would be accessable (if the
object uses slots). So the following would not work:

foo = Foo() # slots defined
foo.my_attribute = 'bar'
print foo.my_attribute

Cheers,
Brian
 

Mike Meyer

Antoon Pardon said:
It is not perfect, that doesn't mean it can't help. How much code
deletes variables?

It's not perfect means it may not help. Depends on the cost of being
wrong - which means we need to see how things would be different if
the code was assuming that a variable existed, and then turned out to
be wrong.

Actually, I'd be interested in knowing how you would improve the
current CPython implementation with knowledge about whether or not a
variable existed. The current implementation just does a dictionary
lookup on the name. The lookup fails if the variable doesn't exist. So
checking on the existence of the variable is a byproduct of finding
the value of the variable. So even if it was perfect, it wouldn't
help.
I thought it was more than in a few. Without some type information
from the coder, I don't see how you can infer type from library
code.

There's nothing special about library code. It can be analyzed just
like any other code.
I think this is too strict. Decorators would IMO never have made it.

From the impressions I get here, a lot of people would have been happy
with that result.
I think that a feature that could be helpful in reducing
errors should be a candidate even if it has no other merits.

Yes, but that doesn't mean it should be accepted. Otherwise, every
language would be Eiffel. You have to weigh the cost of a feature
against the benefit you get from it - and different people come to
different conclusions. Which is why different languages provide
different levels of bondage.
It would be one way to get writable closures in the language.
That is added functionality.

Except just adding declarations doesn't give you that. You have to
change the language so that undeclared variables are looked for up the
scope. And that's the only change you need to get writable variables -
some way to indicate that a variable should be checked for up the
scope. There are more lightweight ways to do that than tagging every
*other* variable. Those have been proposed - and rejected.
Whether the good effect is good enough is certainly open for debate.
But the opponents seem to argue that since it is no absolute guarantee,
it is next to useless. Well I can't agree with that kind of argument
and will argue against it.

You're not reading the opponents' arguments carefully enough. The
argument is that the benefit from type declarations is overstated, and
in reality doesn't outweigh the cost of declarations.
No, I'm not horrified at not having variable declarations. I'm in
general very practical with regard to programming, and use what
features a language offers me. However that doesn't stop me from
thinking: hey, if language X had feature F from language Y,
that could be helpful.

I'm sorry - I thought you were the OP, who said he was horrified by
that lack.
I think we should get rid of thinking about a language as
static or dynamic. It is not the language which should determine
a static or dynamic approach, it is the problem you are trying
to solve. And if the coder thinks that a static approach is
best for his problem, why shouldn't he solve it that way.

Except that languages *are* static or dynamic. They have different
features, and different behaviors. Rather than tilting at the windmill
of making a dynamic language suitable for static approaches, it's
better to simply use the appropriate tool for the job. Especially if
those changes make the tool *less* suitable for a dynamic approach.
That a language allows a static approach too doesn't contradict
that it can work dynamically. Every time a static feature is
suggested, some dynamic folks react as if the dynamic aspect
of Python is in peril.

The problem is that such changes invariably have deep impact that
isn't visible until you examine things carefully knowing you're
dealing with a dynamic language. For instance, just as Python can
delete a variable from a name space at run time, it can add a variable
to some name spaces at run time. So the compiler can't reliably
determine that a variable doesn't exist any more than it can reliably
determine that one does. This means that you can't flag using
undeclared variables in those namespaces at compile time without a
fundamental change in the language.
There seems to be some misunderstanding; I don't remember stating that
missing declarations are intolerable, and I certainly don't think so. I
wouldn't have been programming in Python for over five years now if I
thought so. But that doesn't mean having the possibility to
declare is useless.

Again, I was apparently confusing you and the OP. My apologies.

<mike
 

Diez B. Roggisch

This is naive. Testing doesn't guarantee anything. If this is what you
think about testing, then testing gives you a false impression of
security. Maybe we should drop testing.

Typechecking is done by a reduced lambda calculus (System F, which is
ML-style), whereas testing has the full power of a Turing-complete
language. So _if_ one has to be dropped, it would certainly be
typechecking.

Additionally, testing gives you the added benefit of actually using your
declared APIs - which serves documentation purposes as well as
securing your design decisions, as you might discover bad design while
actually writing testcases.

Besides that, the false warm feeling of security a successful
compilation run has given many developers made them check untested and
actually broken code into the VCS. I've seen that _very_ often! And the
_only_ thing that prevents us from doing so is to enforce tests. But
these are more naturally done in Python (or similar languages), as every
programmer knows "unless the program runs successfully, I can't say
anything about it", than in a statically typed language where the
programmer argues "hey, it compiled, it should work!"


Regards,

Diez
 

Paul Rubin

Brian Quinlan said:
OK. The Python compiler would check that the name is declared but it
would not check that it is defined before use? So this would be
acceptable:

def foo():
    local x
    return x

Come on, you are asking silly questions. Any reasonable C compiler
would flag something like that and Python (with the flag set) should
do the same. If you want to ask substantive questions, that's fine,
but stop wasting our time with silly stuff.
1. your type checking system is optional and xml.dom does not use it

If type checking is implemented then the stdlib should be updated to
add declarations for public symbols. If not, the compiler would flag
the undeclared symbol. You could always declare it to be of type 'object'.
try:
unicode
except NameError:
XML_NAMESPACE<str> = "..."
else:
XML_NAMESPACE<unicode> = u"..."

This wouldn't be allowed.
2. the compiler does not have access to the names in other modules anyway

You're being silly again. The compiler would examine the other module
when it processes the import statement, just like it does now.
How would you find the class definition for the Node object at
compile-time?

By processing the xml.dom module when it's imported.
And by "slots" do you mean the existing Python slots concept or
something new?

Probably something new, if the existing concept is incompatible in some way.
So only pre-defined slotted attributes would be accessable (if the
object uses slots). So the following would not work:

foo = Foo() # slots defined
foo.my_attribute = 'bar'
print foo.my_attribute

Yes, correct, many people already think the existing __slots__
variable is intended for precisely that purpose and try to use it that
way. Note you can get the same effect with a suitable __setattr__
method that's activated after __init__ returns, so all we're
discussing is a way to make the equivalent more convenient.
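(A short illustration of the existing __slots__ behaviour Paul refers to; the class `Foo` and its slot name are invented for the sketch, and `print()` is Python 3 spelling.)

```python
class Foo(object):
    __slots__ = ('declared',)   # fixed set of instance attribute names

foo = Foo()
foo.declared = 'ok'             # fine: listed in __slots__
try:
    foo.my_attribute = 'bar'    # not listed -> AttributeError at runtime
except AttributeError as e:
    print(e)
```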
 

Bengt Richter

That is one possibility, but I think that it would be better to use a
keyword at the point of the assigment to indicate assignment to an outer
scope. This fits with the way 'global' works: you declare at (or near) the
assignment that it is going to a global variable, not in some far away part
of the code, so the global nature of the assignment is clearly visible. The
'global' keyword itself would be much improved if it appeared on the same
line as the assignment rather than as a separate declaration.

e.g. something like:

var1 = 0

def f():
    var2 = 0

    def g():
        outer var2 = 1   # Assign to outer variable
        global var1 = 1  # Assign to global

IMO you don't really need all that cruft most of the time. E.g., what if ':='
meant 'assign to variable wherever it is (and it must exist), searching according
to normal variable resolution order (fresh coinage, vro for short ;-), starting with
local, then lexically enclosing and so forth out to module global (but not to builtins).'

If you wanted to assign/rebind past a local var shadowing an enclosing variable var, you'd have
to use e.g. vro(1).var = expr instead of var := expr. Sort of analogous to
type(self).mro()[1].method(self, ...). Hm, vro(1).__dict__['var'] = expr could conceivably
force binding at the vro(1) scope specifically, and not search outwards. But for that there
would be optimization issues I think, since allowing an arbitrary binding would force a real
dict creation on the fly to hold the new name slot.

BTW, if/when we can push a new namespace on top of the vro stack with a 'with namespace: ...' or such,
vro(0) would still be at the top, and vro(1) will be the local before the with, and := can still
be sugar for find-and-rebind.

Using := and not finding something to rebind would be a NameError. Ditto for
vro(n).nonexistent_name_at_level_n_or_outwards. vro(-1) could refer global module scope
and vro(-2) go inwards towards local scope at vro(0). So vro(-1).gvar=expr would
give you the effect of globals()['gvar']=expr with a pre-existence check.
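(The find-and-rebind search Bengt describes can be modelled as a toy with a chain of dicts; this only sketches the proposed semantics, and the names `rebind` and `chain` are invented for the illustration.)

```python
def rebind(chain, name, value):
    """Model of ':=': rebind an existing binding, searching per the vro."""
    for scope in chain:          # local first, then enclosing, then global
        if name in scope:
            scope[name] = value
            return
    raise NameError(name)        # nothing to rebind anywhere in the vro

glob = {'var1': 0}
enclosing = {'var2': 0}
local = {}
chain = [local, enclosing, glob]   # vro(0), vro(1), ... outwards

rebind(chain, 'var2', 1)   # like 'var2 := 1' from the innermost scope
print(enclosing['var2'])   # 1
rebind(chain, 'var1', 1)   # falls through to module scope
print(glob['var1'])        # 1
```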

The pre-existence requirement would effectively be a kind of declaration requirement for
the var := expr usage, and an initialization to a particular type could enhance inference.
Especially if you could have a decorator for statements in general, not just def's, and
you could then have a sticky-types decoration that would say certain bindings may be inferred
to stick to their initial binding's object's type.

Rambling uncontrollably ;-)
My .02USD ;-)

Regards,
Bengt Richter
 

Antoon Pardon

Op 2005-10-06 said:
IMO you don't really need all that cruft most of the time. E.g., what if ':='
meant 'assign to variable wherever it is (and it must exist), searching according
to normal variable resolution order (fresh coinage, vro for short ;-), starting with
local, then lexically enclosing and so forth out to module global (but not to builtins).'

Just some ideas about this

1) Would it be useful to make ':=' an expression instead of a
statement?

I think the most important reason that assignment is a statement
and not an expression would apply less here, because '==' is less easy
to turn into ':=' by mistake than into '='.

Even if people thought that kind of bug was still too easy to make:

2) What if we reversed the operation. Instead of var := expression,
we write expression =: var.

IMO this would make it almost impossible to write an assignment
by mistake in a conditional when you meant to test for equality.
 

Antoon Pardon

Op 2005-10-05 said:
Which is evaluated at runtime, does not require that the actual global
variable be pre-existing, and does not create the global variable if not
actually assigned. I think that is pretty different than your proposal
semantics.


You're making this feature "optional", which contradicts the subject of this
thread, i.e. declarations being necessary. But, continuing with your
declaration thought experiment, how are you planning on actually adding
optional useful type declarations to Python? E.g. could you please
rewrite this (trivial) snippet using your proposed syntax/semantics?

from xml.dom import *

def do_add(x, y):
    return '%s://%s' % (x, y)

def do_something(node):
    if node.namespace == XML_NAMESPACE:
        return do_add('http://', node.namespace)
    elif node.namespace == ...
        ...

IMO your variables are already mostly declared. The x and y in
do_add are a kind of declaration of the parameters x and
y.
 

Bengt Richter

Yes, of course, and so do other programmers. What I mean is to try and
break it down into specific instances and then see what the best
approach is for each one is.

When I first started learning Python I looked for these features as well,
but after a while my programming style changed and I don't depend on
types and names to check my data nearly as much now. Instead I write
better organized code and data structures with more explicit value
checks where I need them.

My concern now is having reusable code and modules I can depend on. And
also separating my data and data management operations from the user
interface. Having functions and names that don't care what type the
objects are, makes doing this separation easier.

Another situation where typeless names are useful is routines that
explicitly check the type and then do different things depending on the
type. For example, if you have a list with objects of many different
types stored in it, you can sort the contents into sublists by type.

Looking at it from a different direction, how about adding a keyword to
say, "from this point on, in this local name space, disallow new
names". Then you can do...

def few(x, y):
    a = 'a'
    b = 'b'
    i = j = k = l = None
    no_new_names
    # raise an error after here if a new name is used.
    ...
    for I in range(10):   # <-- error
    ...

This is more suitable to Python's style than declaring types or variables,
I think. Add to this explicit name-object locking to implement
constants and I think you would have most of the features you want.
You can do that now with a decorator, if you are willing to assign something
to no_new_names (so it won't give you a name error if it doesn't exist). E.g.,
 >>> def nnn(f):
 ...     names = f.func_code.co_names
 ...     assert 'no_new_names' not in names or names[-1]=='no_new_names', 'Bad name:%r'%names[-1]
 ...     return f
 ...
 >>> @nnn
 ... def few(x,y):
 ...     a = 'a'
 ...     b = 'b'
 ...     i = j = k = l = None
 ...     no_new_names=None
 ...     for i in range(10): print i,
 ...
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 ...
 >>> @nnn
 ... def few(x,y):
 ...     a = 'a'
 ...     b = 'b'
 ...     i = j = k = l = None
 ...     no_new_names=None
 ...     return a,b,i,j,k,l
 ...
 >>> few(1,2)
 ('a', 'b', None, None, None, None)

No guarantees, since this depends on the unguaranteed order of f.func_code.co_names ;-)
so...

no_new_names     # limit any new names
lock_name name   # lock a name to its current object
That last one you could probably do with a decorator that imports dis and
checks the disassembly (or does the equivalent check of the byte code) of f
for STORE_FASTs directed to particular names after the lock_name name declaration,
which you would have to spell as a legal dummy statement like
lock_name = 'name'

or perhaps better, indicating a locked assignment e.g. to x by

x = lock_name = expr # lock_name is dummy target to notice in disassembly, to lock x from there on
Since names are stored in dictionaries, a dictionary attribute to
disallow/allow new keys, and a way to set individual elements in a
dictionary to read only would be needed. Once you can do that and it
proves useful, then maybe you can propose it as a language feature.
I would want to explore how to compose functionality with existing elements
before introducing either new elements or new syntax. E.g., the dictionaries
used for instance attribute names and values already exist, and you can already
build all kinds of restrictions on the use of attribute names via properties
and descriptors of other kinds and via __getattribute__ etc.
These might also be checked for in the compile stage and would probably
be better as it wouldn't cause any slow down in the code or need a new
dictionary type.
Although note that the nnn decorator above does its checking at run time,
when the decorator is executed just after the _def_ is anonymously _executed_
to create the function nnn gets handed to check or modify before what it
returns is bound to the def function name. ;-)
An external checker could possibly work as well if a suitable marker is
used such as a bare string.

...
x = y = z = None
"No_New_Names" # checker looks for this
...
X = y/z # and reports this as an error
return x,y

and..

...
Author = "Fred"
"Name_Lock Author" # checker sees this...
...
Author = "John" # then checker catches this
...

So there are a number of ways to possibly add these features.
Yup ;-)
Finding common use cases where these would make a real difference would
also help.
Yup ;-)

Regards,
Bengt Richter
 

Antoon Pardon

Op 2005-10-05 said:
It's not perfect means it may not help. Depends on the cost of being
wrong - which means we need to see how things would be different if
the code was assuming that a variable existed, and then turned out to
be wrong.

Actually, I'd be interested in knowing how you would improve the
current CPython implementation with knowledge about whether or not a
variable existed. The current implementation just does a dictionary
lookup on the name. The lookup fails if the variable doesn't exist. So
checking on the existence of the variable is a byproduct of finding
the value of the variable. So even if it was perfect, it wouldn't
help.

Yes it would. A function with a declare statement could work
like the __slots__ attribute in a class. AFAIU each variable
would then internally be associated with a number and the
dictionary would be replaced by a list. Finding the value
of a variable would just be indexing into this table.
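(In fact CPython already uses exactly this scheme for function locals, whose names are known at compile time: they get numbered slots accessed by LOAD_FAST, while globals go through a dictionary lookup via LOAD_GLOBAL. A quick check with the dis module, in Python 3 spelling, shows the difference.)

```python
import dis

g = 1

def f():
    x = 2          # local: fixed slot, accessed by index (LOAD_FAST)
    return x + g   # global: dictionary lookup (LOAD_GLOBAL)

ops = [ins.opname for ins in dis.get_instructions(f)]
print('LOAD_FAST' in ops, 'LOAD_GLOBAL' in ops)
```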
There's nothing special about library code. It can be anaylyzed just
like any other code.

Not necessarily; library code may not come with source, so there
is little to be analyzed then.
Except just adding declerations doesn't give you that. You have to
change the language so that undeclared variables are looked for up the
scope.

They already are. The only exception is when the variable is
(re)bound. This can give you 'surprising' results like the
following:

a = []
b = []

def f():
    a[:] = range(10)
    b = range(10)

f()
print a
print b

which will give the following result:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[]

You're not reading the opponents arguments carefully enough. The
argument is that the benefit from type declerations is overstated, and
in reality doesn't outweigh the cost of declerations.

That may be their intent, but often enough I see arguments that boil
down to the fact that declarations won't solve a particular problem
completely, as if that settles it.
Except that languages *are* static or dynamic.

Not in the way a lot of people seem to think here. Adding declarations
doesn't need to take away any dynamism from the language.
They have different
features, and different behaviors. Rather than tilting at the windmill
of making a dynamic language suitable for static approaches, it's
better to simply use the appropriate tool for the job. Especially if
those changes make the tool *less* suitable for a dynamic approach.

They don't. That seems to be the big fear behind a lot of the resistance,
but IMO it is unfounded.

If I write a module that only makes sense with floating point numbers,
declaring those variables as floats and allowing the compiler to
generate code optimised for floating point numbers will in no
way restrict anyone else from using the full dynamic features of
the language.
The problem is that such changes invariably have deep impact that
isn't visible until you examine things carefully knowing you're
dealing with a dynamic language. For instance, just as Python can
delete a variable from a name space at run time, it can add a variable
to some name spaces at run time. So the compiler can't reliably
determine that a variable doesn't exist any more than it can reliably
determine that one does. This means that you can't flag using
undeclared variables in those namespaces at compile time without a
fundamental change in the language.

Yes it can. AFAIK Python doesn't allow a variable to be created
in a function scope from somewhere else. Sure, it may be possible
through some hack that works in the current C implementation,
but there is a difference between what the language allows and
what is possible in a specific implementation.

Python also has the __slots__ attribute, which prohibits attributes
from being added to or deleted from instances.

Python can work with C extensions. What would have been the difference
if Python had had a number of static features that would have
allowed this kind of code to be written in Python itself? Look at Pyrex;
I don't see where it lacks in dynamism with respect to Python.
 

Antoon Pardon

Op 2005-10-05 said:
Typechecking is done by a reduced lambda calculus (System F, which is
ML-style), whereas testing has the full power of a Turing-complete
language. So _if_ one has to be dropped, it would certainly be
typechecking.

Sure. But allow me this silly analogy.

Going out on a full test drive will also reveal your tires are flat.
So if one of the two has to be dropped, a full test drive or a tire
check, it would certainly be the tire check. But IMO the tire check
is still useful.
Additionally, testing gives you the added benefit of actually using your
declared APIs - which serves documentation purposes as well as
securing your design decisions, as you might discover bad design while
actually writing testcases.

Hey, I'm all for testing. I never suggested testing should be dropped
for declarations.
Besides that, the false warm feeling of security a successful
compilation run has given many developers made them check untested and
actually broken code into the VCS. I've seen that _very_ often! And the
_only_ thing that prevents us from doing so is to enforce tests.

I wonder how experienced these programmers are. I know I had this
feeling when I started at the university, but before I left I
already wrote my programs in rather small pieces that were tested
before moving on.
But
these are more naturally done in Python (or similar languages), as every
programmer knows "unless the program runs successfully, I can't say
anything about it", than in a statically typed language where the
programmer argues "hey, it compiled, it should work!"

Again I do have to wonder about how experienced these programmers are.
 

Duncan Booth

Antoon said:
IMO your variables are already mostly declared. The x and y in
do_add are a kind of declaration of the parameters x and
y.

I think you missed his point, though I'm not surprised since unless you
are familiar with the internals of the xml package it isn't obvious just
how complex this situation is.

The value XML_NAMESPACE was imported from xml.dom, but the xml package is
kind of weird. XML_NAMESPACE is defined both in xml.dom and in the
_xmlplus.dom package. The _xmlplus package is conditionally imported by the
xml package, and completely replaces it, but only if _xmlplus is present
and at least version 0.8.4 (older versions are ignored).

This is precisely the kind of flexibility which gives Python a lot of its
power, but it means that you cannot tell without running the code which
package actually provides xml.dom.

Of course, I would expect that if you enforced strict variable declarations
you would also disallow 'from x import *', but you still cannot tell until
runtime whether a particular module will supply a particular variable, nor
what type it is.
 

Brian Quinlan

Paul said:
Come on, you are asking silly questions. Any reasonable C compiler
would flag something like that and Python (with the flag set) should
do the same. If you want to ask substantive questions, that's fine,
but stop wasting our time with silly stuff.

I'm not trying to be silly. I am trying to get a handle on the semantics
that you are proposing. So we now have two requirements for the new
declaration syntax (please let me know if I'm wrong):

o the variable must be declared
o the variable must be assigned

I would assume that you would make it so that assignment and declaration
happen as part of the same statement?
If type checking is implemented then the stdlib should be updated to
add declarations for public symbols. If not, the compiler would flag
the undeclared symbol. You could always declare it to be of type 'object'.

Fair enough.
This wouldn't be allowed.

OK, that sucks.
You're being silly again. The compiler would examine the other module
when it processes the import statement, just like it does now.

Right now, the compiler DOES NOT examine the contents of other
modules. All it does is generate an IMPORT_NAME instruction, which is
evaluated at runtime. So are you proposing that the compiler now
scan other modules during compilation?
By processing the xml.dom module when it's imported.

Import happens at runtime (see above). But you seem to want compile-time
type checking.

Cheers,
Brian
 
