"no variable or argument declarations are necessary."

  • Thread starter James A. Donald
  • Start date

Paul Rubin

Duncan Booth said:
The value XML_NAMESPACE was imported from xml.dom, but the xml package is
kind of weird. XML_NAMESPACE is defined both in xml.dom and in the
_xmlplus.dom package. The _xmlplus package is conditionally imported by the
xml package, and completely replaces it, but only if _xmlplus is present
and at least version 0.8.4 (older versions are ignored).

This is precisely the kind of flexibility which gives Python a lot of its
power, but it means that you cannot tell without running the code which
package actually provides xml.dom.
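You can at least ask the imported module itself where it came from at run time. A small diagnostic sketch (nothing here is specific to _xmlplus; it just inspects whichever implementation won):

```python
# Diagnostic sketch: since the xml package may be replaced at import time,
# inspect the imported module to see which file actually provided it.
import xml.dom

print(xml.dom.__file__)        # path of the implementation that was imported
print(xml.dom.XML_NAMESPACE)   # the constant under discussion
```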

This sounds like the socket module, which is a total mess. Library
code should not be written like that.
 

Paul Rubin

Brian Quinlan said:
I'm not trying to be silly. I am trying to get a handle on the
semantics that you are proposing. So we now have two requirements for
the new declaration syntax (please let me know if I'm wrong):

o the variable must be declared
o the variable must be assigned

These would both be errors that the compiler could and should check
for, if declaration checking is enabled. However, they would not be
syntax errors.
I would assume that you would make it so that assignment and
declaration happen as part of the same statement?

Sure, why not.
Right now, the compiler DOES NOT examine the contents of the other
modules. All it does is generate an IMPORT_NAME instruction which is
evaluated at runtime.

In that case the other module gets compiled when the IMPORT_NAME
instruction is executed. That says that compile time and runtime are
really the same thing in the current system.
So are you proposing that the compiler now scan other modules during
compilation?

Yeah, maybe some optimization is possible.
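The point that importing compiles and executes the other module at run time can be made concrete with a small sketch (the module name "generated" is made up for the example):

```python
# Sketch: build and register a module from source at run time, mirroring
# what happens when an IMPORT_NAME finds a not-yet-compiled module.
import sys, types

source = "X = 1 + 1\n"
mod = types.ModuleType("generated")            # hypothetical module name
exec(compile(source, "<generated>", "exec"), mod.__dict__)
sys.modules["generated"] = mod                 # make it importable

import generated                               # resolved entirely at run time
print(generated.X)  # 2
```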
 

Diez B. Roggisch

Sure. But allow me this silly analogy.
Going out on a full test drive will also reveal your tires are flat.
So if one of the two has to be dropped, a full test drive or a tire check,
it would certainly be the tire check. But IMO the tire check
is still useful.

But you could write it as a test - including not only a look (which
resembles the limited capabilities of typechecking), but testing the air
pressure, looking at the tyre type and seeing that it won't match the rainy
conditions...
Hey, I'm all for testing. I never suggested testing should be dropped
for declarations.

The testing is IMHO more valuable than typechecking. The latter
actually _limits_ me. See e.g. the Java IO API for a very bloated version of
what comes very naturally with Python. Duck-typing at its best. The
only thing I see typechecking being good for is optimization. But that is
not the problem with JAVA/.NET anyway. And it could possibly be done with
psyco.
I wonder how experienced these programmers are? I know I had this
feeling when I started at the university, but before I left I
already wrote my programs in rather small pieces that were tested
before moving on.

Again I do have to wonder about how experienced these programmers are.

Well - surely they aren't. But that is beyond your control - you can't
just stomp into a company, declare your own superiority and force
your way on others. I was astonished to hear that even MS only recently
adopted test-driven development for their upcoming Windows Vista. And
they are commonly seen as a sort of whiz-kid-hiring, hi-class company,
certified CMM Level 6 and so on....

The discussion is somewhat moot - typechecking is not nonsense. But
as a matter of fact, _no_ program runs without testing. And developing a
good testing culture is crucial. Whereas OTOH a lot of large and
successful projects exist (namely the Python ones, amongst others..)
that show that testing alone, without typechecking, seems to be good enough.

Diez
 

Antoon Pardon

Op 2005-10-06 said:
But you could write it as a test - including not only a look (which
resembles the limited capabilities of typechecking), but testing the air
pressure, looking at the tyre type and seeing that it won't match the rainy
conditions...


The testing is IMHO more valuable than typechecking. The latter
actually _limits_ me. See e.g. the Java IO API for a very bloated version of
what comes very naturally with Python. Duck-typing at its best.

But typechecking doesn't have to be Java-like.

I can't help but feel that a lot of people have specific typechecking
systems in mind and then conclude that the limits of such a system
are inherent in typechecking itself.

IMO a good type system doesn't need to limit python in any way.
 

Diez B. Roggisch

I can't help but feel that a lot of people have specific typechecking
systems in mind and then conclude that the limits of such a system
are inherent in typechecking itself.

I've been writing a type-checker for my diploma thesis, for a functional
programming language. And it _is_ limited. The very subject of my work
was to explore extended type-checking methods (so-called
multi-level specifications), which can be shown to be NP-hard problems.
Which naturally limits the domains they can be used in.
IMO a good type system doesn't need to limit python in any way.

It has to, certainly. Take the list implementation alone - while
type systems like ML's allow for generics (with much less typing overhead
than JAVA), the list is always homogeneous. Which Python's aren't - and
that's a great thing(tm), even though usually the contents of a list
share some common behaviour. And that exactly is the key point here: in
a statically typed world, that common behaviour must have been extracted
and made explicit. Which is the cause of that notorious Java IO API.
And, to extend the argument to ML-style type-checking, there you need a
disjoint union of the possible types - _beforehand_ - and the code
dealing with it has to be aware of it.

In Python OTOH, I just pass objects I like into the list - if they
behave, fine.

Diez
 

Antoon Pardon

Op 2005-10-06 said:
I've been writing a type-checker for my diploma thesis, for a functional
programming language. And it _is_ limited. The very subject of my work
was to explore extended type-checking methods (so-called
multi-level specifications), which can be shown to be NP-hard problems.
Which naturally limits the domains they can be used in.


It has to, certainly. Take the list implementation alone - while
type systems like ML's allow for generics (with much less typing overhead
than JAVA), the list is always homogeneous. Which Python's aren't - and
that's a great thing(tm),

Suppose we have a typesystem which has the type ANY, which would mean
such an object could be of any type. You could then have homogeneous lists
in the sense that all elements should be of the same declared type, and
at the same time mix all kinds of types in a particular list, just
as Python does.

So how would this limit Python?
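For what it's worth, an ANY type along these lines can be sketched with Python's typing module (a later addition to the language, used here purely as an illustration of the idea):

```python
# Sketch: a list declared "homogeneous at type ANY" still holds mixed values,
# which is exactly the behaviour of an ordinary Python list.
from typing import Any, List

xs: List[Any] = [1, "two", 3.0]   # one declared element type, three runtime types
for item in xs:
    print(type(item).__name__, item)
```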
even though usually the contents of a list
share some common behaviour. And that exactly is the key point here: in
a statically typed world, that common behaviour must have been extracted
and made explicit.

Would my suggestion be classified as a statically typed world?
Which is the cause of that notorious Java IO API.
And, to extend the argument to ML-style type-checking, there you need a
disjoint union of the possible types - _beforehand_ - and the code
dealing with it has to be aware of it.

In Python OTOH, I just pass objects I like into the list - if they
behave, fine.

But now we are no longer talking about how typechecking would limit
the language but about convenience for the user.
 

Diez B. Roggisch

Suppose we have a typesystem which has the type ANY, which would mean
such an object could be of any type. You could then have homogeneous lists
in the sense that all elements should be of the same declared type, and
at the same time mix all kinds of types in a particular list, just
as Python does.

Then you have the JAVA Object or C void*. Which cause all kinds of runtime
troubles.... because they essentially circumvent the typechecking!
So how would this limit Python?

The limitation is that in static languages I must _know_ what type to
cast such an ANY to before calling anything on it. Otherwise it's useless.
Would my suggestion be classified as a statically typed world?

See above.
But now we are no longer talking about how typechecking would limit
the language but about convenience for the user.

That's dialectics. Limits in the language limit the user and make things
inconvenient.

Diez
 

Pierre Barbier de Reuille

Mike Meyer a écrit :
If it happens at runtime, then you can do it without declarations:
they're gone by then. Come to think of it, most functional languages -
which are the languages that make the heaviest use of closures - don't
require variable declarations.

Well, can you give a single example of such a language? Because all the
functional languages I know but one do need variable declarations: Lisp,
Scheme, OCaml, Haskell all need variable declarations! Erlang does not ...
[...]


Only in a few cases. Type inferencing is a well-understood
technology, and will produce code as efficient as a statically typed
language in most cases.

Type inferencing only works for statically typed languages AFAIK! In a
dynamically typed language, typing a variable is simply impossible, as
any function may return a value of any type!
I have to agree with that. For whether or not a feature should be
included, there should either be a solid reason dealing with the
functionality of the language - meaning you should have a set of use
cases showing what a feature enables in the language that couldn't be
done at all, or could only be done clumsily, without the feature.

Wrong argument ... with that kind of reasoning, you would just stick with
a plain Turing machine ... every single computation can be done with it!
Except declarations don't add functionality to the language. They
affect the programming process. And we have conflicting claims about
whether that's a good effect or not, all apparently based on nothing
more solid than personal experience. Which means the arguments are just
personal preferences.

Well, so why not *allow* variable declarations? Languages like Perl
do that successfully ... you don't like it: you don't do it! you like it:
you do it! A simple option at the beginning of the file tells the compiler
whether variable declaration is mandatory or not!
Until someone does the research to provide hard evidence one way or
another, that's all we've got to work with. Which means that languages
should exist both with and without those features, and if one side's
experiences generalize to the population at large, the alternative
languages will die out. Which hasn't happened yet.




Um - that's just personal preference (though I may have misparsed your
sentence). What one person can't live without, another may not be able
to live with. All that means is that they aren't likely to be happy
with the same programming language. Which is fine - just as no
programming language can do everything, no programming language can
please everyone.

Antoon, at a guess I'd say that Python is the first dynamic language
you've encountered. Being "horrified" at the absence of
variable declarations - standard in such languages
dating back to the 1950s - is one such indication.

Dynamic languages and variable declarations are unrelated issues! You
can have a statically-typed language without variable declarations (e.g.
BASIC) and a dynamically-typed language with them (e.g. Lisp)! Please, when
you say something about languages, at least give the name of 1 language
supporting what you're saying!
Dynamic languages tend to express a much wider range of programming
paradigms than languages that are designed to be statically
compiled. Some of these paradigms do away with - or relegate to the
level of "ugly performance hack" - features that someone only
experienced with something like Pascal would consider
essential. Assignment statements are a good example of that.

Well, could you be more specific once more? I can't see that many paradigms
only available in dynamically typed languages ... besides duck-typing
(which is basically a synonym for dynamically-typed).
Given these kinds of differences, prior experience is *not* a valid
reason for thinking that some difference must be wrong. Until you have
experience with the language in question, you can't really decide that
some feature being missing is intolerable. You're in the same position
as the guy who told me that a language without a goto would be
unusable based on his experience with old BASIC, FORTRAN IV and
assembler.

After more than two years of Python programming, I still feel the need
for variable declarations. It would remove tons of bugs for little work,
and would also clarify the scope of any single variable.
Pick one of the many languages that don't require declarations. Try
writing code in them, and see how much of a problem it really is in
practice, rather than trying to predict that without any
information. Be warned that there are *lots* of variations in how
undeclared variables are treated when referenced. Python raises
exceptions. Rexx gives them their print name as a value. Other
languages do other things.
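Python's behaviour here is easy to demonstrate: referencing a name that is never assigned raises NameError at the point of use.

```python
# A name that is never bound anywhere raises NameError when referenced.
def f():
    return undeclared_name   # no assignment, no declaration, anywhere

try:
    f()
except NameError as e:
    print("Python raises:", e)
```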

<mike

Well, IMO, the worst case is silently giving a default value, like PHP (or
apparently Rexx) does ... this can hide bugs for months if a single
test-case is missing!

Well, in the end, I would really like an *option* at the beginning of a
module file requiring variable declarations for the module. It would
satisfy both the ones who want that and the ones who don't ...

Pierre
 

Mike Meyer

Pierre Barbier de Reuille said:
Mike Meyer a écrit :
Well, can you give a single example of such a language? Because all the
functional languages I know but one do need variable declarations: Lisp,
Scheme, OCaml, Haskell all need variable declarations! Erlang does not ...

Scheme and Lisp don't need variable declarations. Last time I looked,
Scheme didn't even *allow* variable declarations.
Type inferencing only works for statically typed languages AFAIK! In a
dynamically typed language, typing a variable is simply impossible, as
any function may return a value of any type!

I think we're using different definitions of statically typed
here. A language that is statically typed doesn't *need* type
inferencing - the types are all declared! Type inferencing determines
the types by inferring them from an examination of the program. So, for
instance, it can determine that this function:

def foo():
    return 1

Won't ever return anything but an integer.
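A toy illustration of the kind of fact an inferencer derives, using the ast module (this is a sketch, of course, not a real type checker):

```python
# Toy sketch: parse foo and observe that every return statement yields an
# int literal -- the sort of conclusion a type inferencer would reach.
import ast

source = "def foo():\n    return 1\n"
tree = ast.parse(source)
returns = [n for n in ast.walk(tree) if isinstance(n, ast.Return)]
assert all(isinstance(r.value, ast.Constant) and isinstance(r.value.value, int)
           for r in returns)
print("every return in foo is an int literal")
```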
Wrong argument ... with that kind of reasoning, you would just stick with
a plain Turing machine ... every single computation can be done with it!

"Computation" is not the same thing as "Functionality". If you
think otherwise, show me how to declare an object with a Turing
machine.

And there's also the issue of "clumsily". Turing machines are clumsy
to program in.

Well, so why not *allow* variable declarations? Languages like Perl
do that successfully ... you don't like it: you don't do it! you like it:
you do it! A simple option at the beginning of the file tells the compiler
whether variable declaration is mandatory or not!

Perl is a red herring. Unless it's changed radically since I last
looked, undeclared variables in Perl have dynamic scope, not lexical
scope. While dynamically scoped variables are a powerful feature, and
there have been proposals to add them to Python, having them be the
default is just *wrong*. If I were writing in Perl, I'd want
everything declared just to avoid that. Of course, if Python behaved
that way, I'd do what I did with Perl, and change languages.
Dynamic languages and variable declarations are unrelated issues! You
can have a statically-typed language without variable declarations (e.g.
BASIC) and a dynamically-typed language with them (e.g. Lisp)! Please, when
you say something about languages, at least give the name of 1 language
supporting what you're saying!

Declerations and typing are *also* non-related issues. See Perl. Also
see the subject line.
Well, could you be more specific once more? I can't see that many paradigms
only available in dynamically typed languages ... besides duck-typing
(which is basically a synonym for dynamically-typed).

I said "dynamic languages", *not* "dynamically typed languages". They
aren't the same thing. Dynamic languages let you create new functions,
variables and attributes at run time. Python lets you delete them as
well. This means that simple declarations can't tell you whether or
not a variable will exist at runtime, because it may have been added
at run time.
After more than two years of Python programming, I still feel the need
for variable declarations. It would remove tons of bugs for little work,
and would also clarify the scope of any single variable.

Maybe you're still writing code for a language with declarations? I
never felt that need. Then again, I came to Python from a language
that didn't require declarations: Scheme.
Well, IMO, the worst case is silently giving a default value, like PHP (or
apparently Rexx) does ... this can hide bugs for months if a single
test-case is missing!

Well, in the end, I would really like an *option* at the beginning of a
module file requiring variable declaration for the module. It would
satisfy both the ones who want and the ones who don't want that ...

Nope. It would just change the argument from "Python should have ..."
to "You should always use ..." or "Module foo should use ...".

<mike
 

Ron Adam

Bengt said:
You can do that now with a decorator, if you are willing to assign something
to no_new_names (so it won't give you a name error if it doesn't exist). E.g.,

Works for me.

__lock_names__ = True

It's not too different than __name__ == '__main__'...

>>> def nnn(f):
...     names = f.func_code.co_names
...     assert 'no_new_names' not in names or names[-1]=='no_new_names', 'Bad name:%r'%names[-1]
...     return f
...
>>> @nnn
... def few(x,y):
...     a = 'a'
...     b = 'b'
...     i = j = k = l = None
...     no_new_names=None
...     for i in range(10): print i,
...
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 3, in nnn
AssertionError: Bad name:'range'

Hmm... To make it work this way, the globals and arguments need to have
local references.

@nnn
def few(x,y):
    global range
    range = range
    x,y = x,y
    a = 'a'
    b = 'b'
    i = j = k = l = None
    L = 1
    __no_new_names__ = True
    L += 1
    for i in range(x,y):
        print i
... def few(x,y):
...     a = 'a'
...     b = 'b'
...     i = j = k = l = None
...     no_new_names=None
...     return a,b,i,j,k,l
...
('a', 'b', None, None, None, None)

No guarantees, since this depends on the unguaranteed order of f.func_code.co_names ;-)

I had the thought that collecting the names from the 'STORE_FAST' lines
of dis.dis(f) would work nicely, but... dis.dis() doesn't return a
string like I expected; it prints the output as it goes. This seems
like it would be easy to fix, and it would make the dis module more
useful. I'd like to be able to do...

D = dis.dis(f)

An alternate option would be to output the disassembly as a list of
tuples. That would make analyzing the output really easy. ;-)

Something like...

good_names = []
nnnames = False
for line in dis.dislist(f):
    if line[2] == 'STORE_FAST':
        if not nnnames:
            if line[-1] == '(__no_new_names__)':
                nnnames = True
                continue
            good_names.append(line[-1])
        else:
            assert line[-1] in good_names, 'Bad name:%r' % line[-1]



So, I wonder what kind of errors can be found by analyzing the disassembly?

That last one you could probably do with a decorator that imports dis and
checks the disassembly (or does the equivalent check of the byte code) of f
for STORE_FASTs directed to particular names after the lock_name name declaration,
which you would have to spell as a legal dummy statement like
lock_name = 'name'

or perhaps better, indicating a locked assignment e.g. to x by

x = lock_name = expr # lock_name is dummy target to notice in disassembly, to lock x from there on

Using dis.dis it becomes two sequential 'STORE_FAST' operations. So add
(x) to the don't change list, and catch it on the next 'STORE_FAST' for
(x). ;-)

28 12 LOAD_GLOBAL 2 (True)
15 DUP_TOP
16 STORE_FAST 0 (x)
19 STORE_FAST 8 (__lock_name__)

I would want to explore how to compose functionality with existing elements
before introducing either new elements or new syntax. E.g., the dictionaries
used for instance attribute names and values already exist, and you can already
build all kinds of restrictions on the use of attribute names via properties
and descriptors of other kinds and via __getattribute__ etc.
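That idea can indeed be sketched with nothing but __setattr__ (the class name and the locking policy below are invented for illustration):

```python
# Sketch: forbid binding new attribute names once the instance is "locked".
# An assumed policy, composed from existing elements -- no new syntax needed.
class Locked(object):
    def __init__(self):
        self.x = 0
        self.y = 0
        self._locked = True                # every name after this is frozen

    def __setattr__(self, name, value):
        # reject names that don't already exist once locking is on
        if getattr(self, '_locked', False) and not hasattr(self, name):
            raise AttributeError("no new names allowed: %r" % name)
        object.__setattr__(self, name, value)

obj = Locked()
obj.x = 5                                  # fine: rebinding an existing name
try:
    obj.z = 1                              # new name: rejected
except AttributeError as e:
    print(e)
```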

That was more or less what I had in mind, but I think keeping things as
passive as possible is what is needed. One thought is to use this type
of thing along with __debug__.

if __debug__: __nnn__ = True


Wouldn't a debug block or suite be better than an if __debug__:? Just a
thought. Even if the -O option is given, the if __debug__: check is
still there. Which means you still need to comment it out if it's in an
inner loop.

debug: __nnn__ = True # Is not included if __debug__ is false.

or...

MAX = 256
MIN = 0
debug:
    __lock__ = MIN, MAX          # helps checker app
    __no_new_names__ = True      # find bugs.

for MAX in range(1000):   # If __debug__, checker app catches this.
    if m<MIN or m>MAX:
        print m

Although note that the nnn decorator above does its checking at run time,
when the decorator is executed just after the _def_ is anonymously _executed_
to create the function nnn gets handed to check or modify before what it
returns is bound to the def function name. ;-)

Yes. ;-)

Is there a way to conditionally decorate? For example if __debug__ is
True, but not if it's False? I think I've asked this question before. (?)

Cheers,
Ron
 

Fredrik Lundh

Ron said:
Is there a way to conditionally decorate? For example if __debug__ is
True, but not if it's False? I think I've asked this question before. (?)

the decorator is a callable, so you can simply do, say

from somewhere import debugdecorator

if not __debug__:
    debugdecorator = lambda x: x

or

def debugdecorator(func):
    if __debug__:
        ...
    else:
        return func

etc.
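To make the pattern concrete, here's a self-contained sketch (the timing decorator itself is made up for the example; under -O the decoration becomes a no-op):

```python
# Sketch: a made-up timing decorator that is disabled when __debug__ is False.
import functools, time

def debugdecorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print("%s took %.6fs" % (func.__name__, time.perf_counter() - start))
        return result
    return wrapper

if not __debug__:
    debugdecorator = lambda x: x       # no-op under python -O

@debugdecorator
def add(a, b):
    return a + b

print(add(2, 3))  # 5
```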

</F>
 

Paul Rubin

Mike Meyer said:
I think we're using different definitions of statically typed
here. A language that is statically typed doesn't *need* type
inferencing - the types are all declared! Type inferencing determines
the types by inferring them from an examination of the program.

I thought static typing simply means the compiler knows the types of
all the expressions (whether through declarations or inference) so it
can do type checking at compile time:
So, for instance, it can determine that this function:

def foo():
    return 1

Won't ever return anything but an integer.

Static typing in this case would mean that re.match('a.*b$', foo())
would get a compile time error, not a runtime error, since re.match
expects two string arguments. This can happen through type inference
w/o declarations.
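In CPython as it stands, that mistake only surfaces when the call is actually executed (a quick sketch):

```python
# Without compile-time type checking, passing a non-string to re.match
# fails only at run time, when the call executes.
import re

try:
    re.match('a.*b$', 1)       # an int where a string is expected
except TypeError as e:
    print("caught at run time:", e)
```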

Note apropos the private variable discussion that CPython can't
guarantee that foo() always returns an integer. Something might
change foo.func_code.co_code or something like that.
Maybe you're still writing code for a language with declarations? I
never felt that need. Then again, I came to Python from a language
that didn't require declarations: Scheme.

I've done a fair amount of Lisp programming and have found the lack of
compile-time type checking to cause about the same nuisance as in
Python. I also notice that the successors to the old-time Lisp/Scheme
communities seem to now be using languages like Haskell.
Nope. It would just change the argument from "Python should have ..."
to "You should always use ..." or "Module foo should use ...".

Perl has a feature like that right now, and it doesn't lead to many such
arguments.
 

Mike Meyer

Paul Rubin said:
I thought static typing simply means the compiler knows the types of
all the expressions (whether through declarations or inference) so it
can do type checking at compile time:


Static typing in this case would mean that re.match('a.*b$', foo())
would get a compile time error, not a runtime error, since re.match
expects two string arguments. This can happen through type inference
w/o declarations.

Except for two problems:

One you noted:
Note apropos the private variable discussion that CPython can't
guarantee that foo() always returns an integer. Something might
change foo.func_code.co_code or something like that.

Two is that dynamic binding means that foo may not refer to the above
function when you get there at run time.
I've done a fair amount of Lisp programming and have found the lack of
compile-time type checking to cause about the same nuisance as in
Python.

So have I - basically none at all.
Perl has a feature like that right now, and it doesn't lead to many such
arguments.

As noted elsewhere, Perl isn't a good comparison. You don't simply say
"This variable exists", you say "this variable is local to this
function". Undeclared variables are dynamically bound, which means you
can get lots of non-obvious, nasty bugs that won't be caught by unit
testing. Making all your variables lexically bound (unless you really
need a dynamically bound variable) is a good idea. But that's already
true in Python.

<mike
 

Ron Adam

Fredrik said:
Ron Adam wrote:




the decorator is a callable, so you can simply do, say

from somewhere import debugdecorator

if not __debug__:
    debugdecorator = lambda x: x

Ah... thanks.

I suppose after (if) lambda is removed it would need to be:

def nulldecorator(f):
    return f

if not __debug__:
    debugdecorator = nulldecorator

or

def debugdecorator(func):
    if __debug__:
        ...
    else:
        return func

etc.

This one came to mind right after I posted. :)
 

Steve Holden

Ron said:
Fredrik Lundh wrote:




Ah... thanks.

I suppose after (if) lambda is removed it would need to be:

def nulldecorator(f):
    return f

if not __debug__:
    debugdecorator = nulldecorator
It would be easier to write

if not __debug__:
    def debugdecorator(f):
        return f

regards
Steve
 

Barbier de Reuille

Scheme and Lisp don't need variable declarations. Last time I looked,
Scheme didn't even *allow* variable declarations.

When you want a local variable in Lisp you do:

(let ((a 3)) (+ a 1))

For global variable you may do:

(defparameter *a* 4)

or:

(defvar *a* 4)

However, either way, variable assignment is done via :

(setf *a* 5)
(setf a 10)

This is what I call variable declaration, as you have different ways
to declare global variables and to assign them ... So the
two operations are well defined and different. And here there is a
difference between static languages and declarative ones ... Lisp is a
dynamic language that needs variable declarations.
I think we're using different definitions of statically typed
here. A language that is statically typed doesn't *need* type
inferencing - the types are all declared! Type inferencing determines
the types by inferring them from an examination of the program. So, for
instance, it can determine that this function:

Well, indeed ... statically typed means only one thing: each *variable*
has a *static* type, i.e. a type determined at compile time. Once again,
OCaml and Haskell *are* statically typed, but as they have type inference
you don't *need* to explicitly type your functions / variables. However
you *may* if you want ...
def foo():
    return 1

Won't ever return anything but an integer.


"Computation" is not the same thing as "Functionality". If you
think otherwise, show me how to declare an object with a Turing
machine.

Well, that was "bad spirit" from me ;) My argument here wasn't serious
in any way ...
And there's also the issue of "clumsily". Turing machines are clumsy
to program in.



Perl is a red herring. Unless it's changed radically since I last
looked, undeclared variables in Perl have dynamic scope, not lexical
scope. While dynamically scoped variables are a powerful feature, and
there have been proposals to add them to Python, having them be the
default is just *wrong*. If I were writing in Perl, I'd want
everything declared just to avoid that. Of course, if Python behaved
that way, I'd do what I did with Perl, and change languages.

I never said to adopt the whole Perl variable semantics. I just pointed
out what I think is a good idea in Perl, and that helps (IMHO) clarify what
I intended ...
Declerations and typing are *also* non-related issues. See Perl. Also
see the subject line.

That was just my point ...
I said "dynamic languages", *not* "dynamically typed languages". They
aren't the same thing. Dynamic languages let you create new functions,
variables and attributes at run time. Python lets you delete them as
well. This means that simple declarations can't tell you whether or
not a variable will exist at runtime, because it may have been added
at run time.

Ok, I misunderstood ... however, can you still point out some *useful*
paradigms available in dynamic languages that you cannot use in static
ones? As there are so many, it shouldn't be hard for you to show us
some!
 

Mike Meyer

Barbier de Reuille said:
When you want a local variable in Lisp you do:

(let ((a 3)) (+ a 1))

Except that's not a declaration, that's a binding. That's identical to
the Python fragment:

a = 3
return a + 1

except for the creation of the new scope. Not a variable declaration
in sight.
For global variable you may do:

(defparameter *a* 4)

or:

(defvar *a* 4)

That's not Scheme. When I was writing LISP, those weren't
required. Which is what I said: variable declarations aren't required,
and aren't allowed in Scheme.
However, either way, variable assignment is done via :

(setf *a* 5)
(setf a 10)

This is what I call variable declaration, as you have different ways
to declare global variables and to assign them ... So the
two operations are well defined and different.

Python uses "global foo" to declare global variables.
And here there is a difference between static languages and
declarative ones ... Lisp is a dynamic language that needs variable
declarations.

LISP doesn't need variable declarations. I certainly never wrote any
when I was writing it.
I never said to adopt the whole Perl variable semantics. I just pointed
out what I think is a good idea in Perl, and that helps (IMHO) clarify what
I intended ...

And I pointed out that it's a good idea in Perl because it does
something that doesn't need doing in Python.
Ok, I misunderstood ... however, can you still point out some *useful*
paradigms available in dynamic languages that you cannot use in static
ones? As there are so many, it shouldn't be hard for you to show us
some!

I find the ability to add attributes to an object or class at run time
useful.
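A minimal sketch of that ability (the class and attribute names are invented):

```python
# Sketch: attach a new attribute to an instance and a new method to a
# class at run time -- no declaration anywhere.
class Thing(object):
    pass

t = Thing()
t.color = "red"                                            # new instance attribute
Thing.describe = lambda self: "a %s thing" % self.color    # new method on the class
print(t.describe())  # a red thing
```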

<mike
 

Bengt Richter

Just some ideas about this

1) Would it be useful to make ':=' an expression instead of a
statement?
Some people would think so, but some would think that would be tempting the weak ;-)
I think the most important reason that assignment is a statement
and not an expression would apply less here, because '==' is less easy
to turn into ':=' by mistake than into '='.

Even if people thought that kind of bug was still too easy ...

2) What if we reversed the operation. Instead of var := expression,
we write expression =: var.

IMO this would make it almost impossible to write an assignment
by mistake in a conditional when you meant to test for equality.
It's an idea. You could also have both, and use it to differentiate
pre- and post-operation augassign variants. E.g.,

alist[i+:=2] # add and assign first, index value is value after adding

alist[i=:+2] # index value is value before adding and assigning

Some people might think that useful too ;-)

Hm, I wonder if any of these variations would combine usefully with the new
short-circuiting expr_true if cond_expr else expr_false ...

Sorry I'll miss the flames, I'll be off line a while ;-)

Regards,
Bengt Richter
 

Antoon Pardon

Op 2005-10-06 said:
Then you have the JAVA Object or C void*. Which cause all kinds of runtime
troubles.... because they essentially circumvent the typechecking!

Why do you call this a JAVA Object or C void*? Why don't you call
it a PYTHON object? It is this kind of reaction that IMO shows most
opponents can't think outside the type systems they have already
seen, and project the problems with those type systems onto what
would happen with Python should it acquire a type system.
The limitation is that in static languages I must _know_ what type to
cast such an ANY to before calling anything on it. Otherwise it's useless.


See above.

Your answer tells more about you than about my suggestion.
 

Steve Holden

Antoon said:
Why do you call this a JAVA Object or C void*? Why don't you call
it a PYTHON object? It is this kind of reaction that IMO shows most
opponents can't think outside the type systems they have already
seen, and project the problems with those type systems onto what
would happen with Python should it acquire a type system.
[sigh]. No, it's just you being you. Diez' intention seemed fairly clear
to me: he is pointing out that strongly-typed systems invariably fall
back on generic declarations when they want to allow objects of any type
(which, it seems to me, is what you were proposing as well).

In other words, you want Python to be strongly-typed, but sometimes you
want to allow a reference to be to any object whatsoever. In which case
you can't possibly do any sensible type-checking on it, so this new
Python+ or whatever you want to call it will suffer from the same
shortcomings that C++ and java do, which is to say type checking can't
possibly do anything useful when the acceptable type of a reference is
specified as ANY.
Your answer tells more about you than about my suggestion.
Damn, I've been keeping away from this thread lest my exasperation lead
me to inappropriate behaviour.

Is there any statement that you *won't* argue about?

leaving-the-(hopefully)-last-word-to-you-ly y'rs - steve
 
