"no variable or argument declarations are necessary."

  • Thread starter James A. Donald

Paul Rubin

Steve Holden said:
In other words, you want Python to be strongly-typed, but sometimes
you want to allow a reference to point to any object whatsoever. In which
case you can't possibly do any sensible type-checking on it, so this
new Python+ or whatever you want to call it will suffer from the same
shortcomings that C++ and Java do, which is to say type checking can't
possibly do anything useful when the acceptable type of a reference is
specified as ANY.

Let's see if I understand what you're saying:

C and Java: you get useful type checking except when you declare
a reference as type ANY. This is a shortcoming compared to:

Python: where you get no useful type checking at all.

That is not very convincing logic.
 

Steve Holden

Paul said:
Let's see if I understand what you're saying:

C and Java: you get useful type checking except when you declare
a reference as type ANY. This is a shortcoming compared to:

Python: where you get no useful type checking at all.

That is not very convincing logic.

As we say in Yorkshire, "There's none as thick as them that wants to
be". Let's try to get this in context.

Antoon:
Suppose we have a typesystem which has the type ANY, which would mean
such an object could be any type. You could then have homogeneous lists
in the sense that all elements should be of the same declared type, and
at the same time mix all kinds of types in a particular list, just
as Python does.
Diez:
Then you have Java Object or C void*, which cause all kinds of runtime
trouble... because they essentially circumvent the typechecking!
Antoon:
Why do you call this a JAVA Object or C void*? Why don't you call
it a PYTHON object? It is this kind of reaction that IMO shows most
opponents can't think outside the type systems they have already
seen, and project the problems with those type systems onto what
would happen with Python should it acquire a type system.
Me:
Diez' intention seemed fairly clear
to me: he is pointing out that strongly-typed systems invariably fall
back on generic declarations when they want to allow objects of any type
(which, it seems to me, is what you were proposing as well).
You:
C and Java: you get useful type checking except when you declare
a reference as type ANY. This is a shortcoming compared to:

Python: where you get no useful type checking at all.
The points that have repeatedly been made are:

1. That even the strict typings required by languages like Java and C++
actually end up getting in the way when the pragmatic requirements of
real-world problems have to be taken into account.

2. That the benefits of declarations are overstated by many of their
proponents.

3. That Python as it is today allows the dynamic creation of names,
which are therefore inherently not available for declaration.

On existing evidence it's extremely unlikely that this post will end the
thread, but I certainly wish *something* would. Unfortunately I seem to
have become part of the problem in that respect :)

regards
Steve
 

Diez B. Roggisch

Why do you call this a JAVA Object or C void*? Why don't you call
it a PYTHON object? It is this kind of reaction that IMO shows most
opponents can't think outside the type systems they have already
seen, and project the problems with those type systems onto what
would happen with Python should it acquire a type system.

Well, because maybe I wanted to give you an example of languages
that are statically typed and have such an any construct - which, by the
way, is not a piece of ingenious imagination of yours, but has been
thought of before, e.g. in CORBA (and called any there, too)? It makes no
sense putting Python into that context - as it is _not_ statically
typed. Which you should know, after discussing this very subject way too
long.
Your answer tells more about you than about my suggestion.

Your answer tells us something too: Just because you don't know anything
about typechecking does not mean that you are in the position to make
assumptions on "how things could work if the people who know stuff
wouldn't be so stupid". That's like saying "cars can't fly because the
stupid engineers lack my sense of imagination."

Instead of just blathering about the possibility of some super-duper
typechecker and countering criticism - or being told about problems in
that domain - by making bold statements that this surely could work:
provide us with an implementation.

Or maybe - just maybe - you could sit back and think about the fact that
lots of people who are way cleverer than you and me have been working on
this subject, and so far haven't found a way. Which doesn't necessarily
mean that there is no way - but certainly it's hard, theory-laden work
and won't emerge in a NG discussion by some snide remarks of either you
or anybody else.


Diez
 

Diez B. Roggisch

Paul said:
Let's see if I understand what you're saying:

C and Java: you get useful type checking except when you declare
a reference as type ANY. This is a shortcoming compared to:

Python: where you get no useful type checking at all.

That is not very convincing logic.

No, he said that this typechecking wouldn't make sense in the case of
ANY being used. And the plethora of ClassCastExceptions and segfaults
proves the point :)

Diez
 

Ben Sizer

Paul said:
Let's see if I understand what you're saying:

C and Java: you get useful type checking except when you declare
a reference as type ANY. This is a shortcoming compared to:

Python: where you get no useful type checking at all.

That is not very convincing logic.

It's started to get very misleading - Python gives you plenty of
type-checking, as we all know, just not at compile-time. Also comparing
Python to C/Java as you have done is not very appropriate unless you
want Python to have the same sort of compile times as C and Java do.

I think you're doing a small disservice by responding to Steve without
acknowledging the context of the thread, where Diez was explaining that
the system used in ML would not work in Python, then Antoon made a
suggestion that would fix that particular problem but make others
worse.

I'm not convinced that the Java route - where you type out lengthy type
declarations to get some compile-time typechecking which you usually
end up having to bypass later anyway - is at all beneficial, at least
not in the context of Python. I can't ever remember a time when I
thought "type checking really saved me from a bug there" when using
C/C++/Java, but I _can_ remember many times where I've had to consider
which cast or conversion to use, or had to write another overloaded
function to accommodate a similar-but-different type, or debug a
complex template error message, or add a superfluous base class or interface,
all just to get the kind of genericity that Python gives for free. And
it's no good saying that variable declarations will be optional,
because as soon as these statically-typed variables enter the standard
library, every Python programmer will have to take these considerations
on board when writing their code, whether we want to use them or not.
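
As a minimal sketch of the genericity Ben describes (the function and
names here are invented for illustration, not taken from the thread):
one untyped Python function covers several element types with no
overloads, casts, or interface declarations.

def double_all(items):
    # one definition serves every element type that supports +
    return [x + x for x in items]

print(double_all([1, 2, 3]))     # [2, 4, 6]
print(double_all(["a", "b"]))    # ['aa', 'bb']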
 

Roy Smith

Ben Sizer said:
It's started to get very misleading - Python gives you plenty of
type-checking, as we all know, just not at compile-time.

There's more to it than just that. Python's type checking is not just not
done at compile time, it's done as late in run time as possible. One might
call it just-in-time type checking.

It's not hard to imagine a Python-like language which included (perhaps
optional) variable declarations. A declaration would essentially be an
assertion which was checked after each assignment to that name. So, you
could write:

int i = 5
i = 5.6

and the second statement would throw TypeError. This would give you
C++/Java style type safety, but it still wouldn't be compile time.
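
A hedged sketch of how such an assignment-checked "declaration" could
be emulated in today's Python with a descriptor; the name
TypedAttribute is invented here and is no one's actual proposal.

class TypedAttribute(object):
    """Re-check the declared type on every assignment, as described above."""
    def __init__(self, name, expected_type):
        self.name = name
        self.expected_type = expected_type
    def __get__(self, obj, objtype=None):
        return obj.__dict__[self.name]
    def __set__(self, obj, value):
        if not isinstance(value, self.expected_type):
            raise TypeError("%s must be %s, got %s" % (
                self.name, self.expected_type.__name__,
                type(value).__name__))
        obj.__dict__[self.name] = value

class Holder(object):
    i = TypedAttribute("i", int)

h = Holder()
h.i = 5      # passes the check
h.i = 5.6    # raises TypeError, like the hypothetical 'int i = 5' above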

Perhaps a better way to describe it is that the checking isn't an is-a
assertion, but an acts-like assertion (sort of like Java's interfaces). To
take an example, in the function:

def first3(y):
    if len(y) < 3:
        return y
    return y[0:3]

all I really need from the argument is that I can call len() on it and it
can be sliced. An easy way to describe this would be to say that y must be
a sequence, but that's not strictly accurate, since I can easily declare my
own class which meets those requirements without being a subclass of
sequence (even ignoring for the moment that 'sequence', while talked about
in the documentation, doesn't actually exist as something you can subclass).
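
To illustrate that point with a hedged example (the class is invented
here): this object subclasses nothing sequence-like, yet first3()
accepts it, because all first3() actually needs is len() and slicing.

class Countdown(object):
    def __len__(self):
        return 10
    def __getitem__(self, index):
        # supports both plain indexing and the y[0:3] slice first3 uses
        if isinstance(index, slice):
            return [10 - i for i in range(*index.indices(10))]
        return 10 - index

# with first3 as defined above:
print(first3(Countdown()))   # [10, 9, 8]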
 

Antoon Pardon

On 2005-10-07, Steve Holden wrote:
Antoon said:
Why do you call this a JAVA Object or C void*? Why don't you call
it a PYTHON object? It is this kind of reaction that IMO shows most
opponents can't think outside the type systems they have already
seen, and project the problems with those type systems onto what
would happen with Python should it acquire a type system.
[sigh]. No, it's just you being you. Diez' intention seemed fairly clear
to me: he is pointing out that strongly-typed systems invariably fall
back on generic declarations when they want to allow objects of any type
(which, it seems to me, is what you were proposing as well).

It is not about falling back on generic declarations, it is about
how such objects will be treated. Diez seems to think that a
strongly-typed language can only deal with generic declarations
by using something that allows circumventing the type system.
In other words, you want Python to be strongly-typed, but sometimes you
want to allow a reference to point to any object whatsoever. In which case
you can't possibly do any sensible type-checking on it, so this new
Python+ or whatever you want to call it will suffer from the same
shortcomings that C++ and Java do, which is to say type checking can't
possibly do anything useful when the acceptable type of a reference is
specified as ANY.

And you are wrong. The problem with the C void* construct (I'm not that
familiar with Java) is that all type information is lost. When you
use such a parameter in a function you have no idea what you are
working with.

But that doesn't need to be so if you have a typesystem with an ANY type.
Such a type declaration would mean that objects of any type could be
used here. However, that doesn't imply that the type information
of the actual objects used has to be lost. That type information
may still be available and useful for further type checking.

That you and Diez can only think about C, C++ or Java constructs
when I mention an ANY type is your limitation. It doesn't
need to be the limitation of a specific type system.
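
One hedged reading of what Antoon describes, sketched in Python itself,
where every name is in effect declared ANY: the value's own type
survives the ANY-typed slot and can feed a later check. The helper
name narrow is invented for illustration.

def narrow(value, expected_type):
    # recover a specific type from an ANY-typed value, checking as we go
    if not isinstance(value, expected_type):
        raise TypeError("expected %s, got %s" % (
            expected_type.__name__, type(value).__name__))
    return value

anything = 42               # held in an ANY-style slot...
n = narrow(anything, int)   # ...but its int-ness was never lost
print(n + 1)                # 43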
 

Christophe

Roy Smith wrote:
There's more to it than just that. Python's type checking is not just not
done at compile time, it's done as late in run time as possible. One might
call it just-in-time type checking.

It's more of a "nearly too late" type checking, I would say. Not that I
complain, but it would be great if there were also some automatic type
checking to catch a few errors as soon as possible.
 

Antoon Pardon

On 2005-10-07, Diez B. Roggisch wrote:
Well, because maybe I wanted to give you an example of languages
that are statically typed and have such an any construct

But since I have no such type system in mind, such an example is useless.
- which, by the
way, is not a piece of ingenious imagination of yours, but has been
thought of before, e.g. in CORBA (and called any there, too)? It makes no
sense putting Python into that context - as it is _not_ statically
typed. Which you should know, after discussing this very subject way too
long.

The fact that something else uses the same name for something
doesn't mean it has to be implemented the same way.
Your answer tells us something too: Just because you don't know anything
about typechecking does not mean that you are in the position to make
assumptions on "how things could work if the people who know stuff
wouldn't be so stupid". That's like saying "cars can't fly because the
stupid engineers lack my sense of imagination."

Then argue against my ideas, and not your own makings of them.

If I just use 'ANY' and you fill that in with a C void*-like
implementation and argue against that, then you are arguing
against your own ghosts, not against what I have in mind.

It may very well turn out that my idea is useless, but I will
only accept that when someone comes up with arguments against
my actual idea, and not with arguments against their projection
of it.
Instead of just blathering about the possibility of some super-duper
typechecker and countering criticism - or being told about problems in
that domain - by making bold statements that this surely could work:
provide us with an implementation.

You have not countered my idea with criticism. You have decorated my
idea with how you think it would be implemented (C void*) and argued
against that. I don't need to give an implementation to notice that
you jumped to a particular implementation and basically just countered
that implementation, not the idea in general.
Or maybe - just maybe - you could sit back and think about the fact that
lots of people who are way cleverer than you and me have been working on
this subject, and so far haven't found a way. Which doesn't necessarily
mean that there is no way - but certainly it's hard, theory-laden work
and won't emerge in a NG discussion by some snide remarks of either you
or anybody else.

As far as I'm concerned that was just meant as a matter-of-fact remark,
with no snide intentions.
 

Fredrik Lundh

Christophe said:
It's more of a "nearly too late" type checking, I would say. Not that I
complain, but it would be great if there were also some automatic type
checking to catch a few errors as soon as possible.

use assert at the soonest possible point. implementing "type gates" is
trivial, if you think you need them.

</F>
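
A minimal sketch of the kind of "type gate" Fredrik presumably means;
the function and its checks are invented for illustration. The asserts
fail at the call boundary instead of deep inside later code.

def scale(values, factor):
    # the gate: complain at the earliest possible point
    assert isinstance(values, list), "values must be a list"
    assert isinstance(factor, (int, float)), "factor must be a number"
    return [v * factor for v in values]

print(scale([1, 2, 3], 2.0))   # [2.0, 4.0, 6.0]
scale("123", 2.0)              # AssertionError raised here, at the gate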
 

mg

Hello,

In a recursive function like the following:


def foo(j):
    j += 1
    while j < n:
        j = foo(j)
    return j


I found that the recursion is limited (1000 iterations). I then have
two questions:
- why has this mechanism been implemented?
- is it possible to increase or remove the limit (and how)?

Regards,
Mathieu
 

Steven D'Aprano

Roy Smith said:
There's more to it than just that. Python's type checking is not just not
done at compile time, it's done as late in run time as possible. One might
call it just-in-time type checking.

Well there you go then. Instead of pulling our hair out that Python has no
type checking ("that's a bug in the language design, woe woe woe!!!") we
can just say that Python does JIT type checking, which not only is a
feature, but also satisfies the Pointy Haired Bosses who demand buzzwords
they can't understand.
 

Christophe

Fredrik Lundh wrote:
use assert at the soonest possible point. implementing "type gates" is
trivial, if you think you need them.

Still, it would be great if there were also some automatic type checking
in place. Using assert is hardly automatic and non-intrusive.

I mean, why not? Why does the compiler let me do that when it knows
perfectly well that the code is incorrect:

def f():
    return "a" + 5

Of course the system can't be perfect, but it doesn't need to be. It
doesn't need to constrain us in any way, but if it can detect some errors
early, then it is worth it.
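
For illustration, a small sketch of the behaviour Christophe is
complaining about: the def statement compiles without complaint, and
the mismatch only surfaces when the function is finally called.

def f():
    return "a" + 5   # accepted at compile time without a murmur

f()   # only at this call does the TypeError actually surface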
 

Juho Schultz

mg said:
Hello,

In a recursive function like the following:


def foo(j):
    j += 1
    while j < n:
        j = foo(j)
    return j


I found that the recursion is limited (1000 iterations). I then have
two questions:
- why has this mechanism been implemented?
- is it possible to increase or remove the limit (and how)?

Regards,
Mathieu

Try the following for answers to both questions:

import sys
print sys.setrecursionlimit.__doc__

I guess 1000 is the default value.
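
For completeness, a short sketch of reading and raising the limit;
sys.getrecursionlimit is the companion call, and raising the cap too
far risks a genuine crash (C stack overflow) instead of a clean
exception.

import sys

print(sys.getrecursionlimit())   # 1000 by default
sys.setrecursionlimit(5000)      # raise the ceiling, with the caveat above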
 

Brandon K

Is there no way to implement your idea as a classical loop? Usually the
syntax is cleaner, and there is no limit (except the limit of the range
function in certain cases). For example, what would be wrong with:

def foo(j):
    while j < n:
        j += 1
    return j

I don't know much about the internals of Python, but to me it seems like
if you're going to be doing this on the order of thousands of iterations,
there might be some overhead to using recursion (i.e. function calls)
that a loop wouldn't have (but that's just a guess).
Hello,

In a recursive function like the following:


def foo(j):
    j += 1
    while j < n:
        j = foo(j)
    return j


I found that the recursion is limited (1000 iterations). I then have
two questions:
- why has this mechanism been implemented?
- is it possible to increase or remove the limit (and how)?

Regards,
Mathieu


 

Brandon K

def foo(j):
    while j < n:
        j += 1
    return j

of course I mean:

def foo(j):
    while j < n:
        j += 1
    return j

sorry


 

Paul Rubin

Fredrik Lundh said:
use assert at the soonest possible point. implementing "type gates" is
trivial, if you think you need them.

What this is about (to me at least) is the edit-debug cycle. Let's
say I write some Python code, using assert to validate datatypes.
Maybe I've made 4 errors. I then write a test function and run it.
Boom, the first assert fails. I fix the first error, run again.
Boom, the next assert fails. Fix the next error, run again, boom,
fix, etc. Four edit-debug cycles.

With static typing, I run the compiler, get 4 error messages, fix all
4, and can get on with the next phase of testing with three fewer edit
cycles. That's a definite benefit of languages like Java. It's not
imaginary. Unit tests on Python code don't make it go away. I have
less Java experience than Python experience by now, but I still find
that Java programs take me fewer iterations to get working than Python
programs. The trouble is that Java has a thousand deficiencies that
outweigh that particular benefit, so overall I like Python a lot
better anyway.

Now some of the Python-is-perfect crowd seems to suffer from a "Blub
paradox" (http://c2.com/cgi/wiki?BlubParadox). They see annoying,
static typed languages like C and Java, and they see pleasant,
dynamically typed languages like Python, and conclude that static
types = annoying, when in fact they can be orthogonal. So, can there
be a language that's both statically typed, and pleasant? I haven't
used one yet, but lately I've been reading about Haskell and want to
give it a try. I keep finding statements like:

To me, Haskell is what Python should have evolved to. As a long-time
Python programmer, I have been very, very pleased with Haskell and am
currently working on porting my code to it (and write new code in
Haskell at every opportunity).
(http://supybot.com/Members/jemfinch/haskell-sucks/document_view)

or:

Using Haskell to develop OpenAFP.hs led to programs that eat constant
2MB memory, scale linearly, and are generally 2OOM faster than my Perl
library.

Oh, and the code size is 1/10.
(http://www.perl.com/pub/a/2005/03/03/pugs_interview.html -
Autrijus also raves about how great the book "Types and
Programming Languages" supposedly is--I'm trying to borrow
a copy. Yeah, this is a Perl comparison, but I think of
Perl as being roughly equivalent to Python except a lot uglier).

or:

Haskell is the least-broken programming language available today. C,
C++, Perl, Python, Java, and all the other languages you've heard of
are all much more broken, so debating their merits is pointless. :)
Unfortunately Real Life involves dealing with brokenness.
(http://www106.pair.com/rhp/books.html)

or:

In conducting the independent design review at Intermetrics, there
was a significant sense of disbelief. We quote from [CHJ93]: "It
is significant that Mr. Domanski, Mr. Banowetz and Dr. Brosgol
were all surprised and suspicious when we told them that Haskell
prototype P1 (see appendix B) is a complete tested executable
program. We provided them with a copy of P1 without explaining
that it was a program, and based on preconceptions from their past
experience, they had studied P1 under the assumption that it was a
mixture of requirements specification and top level design. They
were convinced it was incomplete because it did not address issues
such as data structure design and execution order."
(http://haskell.org/papers/NSWC/jfp.ps - this was from a bake-off
for a military application where the Haskell solution had 85 lines
of code to Ada's 767, C++'s 1105, and Relational Lisp's 274).

Obviously I'm in the usual rose-colored-glasses phase of finding out
about something new and interesting, but I can't help thinking these
guys are onto something. Quite a few of the Haskell Cafe mailing list
members seem to have come to Haskell from Python. (Haskell tutorial:
http://www.isi.edu/~hdaume/htut/ - I've read most of this and it looks
pretty cool--definitely a steeper learning curve than Python but the
result looks a lot more powerful).
 

Diez B. Roggisch

Antoon said:
Then argue against my ideas, and not your own makings of them.

If I just use 'ANY' and you fill that in with a C void*-like
implementation and argue against that, then you are arguing
against your own ghosts, not against what I have in mind.

Well, you didn't tell us what you had in mind. You just said "let's
introduce something like any". I showed you existing implementations of
such a concept that have problems. You say "that's not what _I_ have in
mind, so your criticism doesn't apply." Guess what: I can't read your
mind. And you did not tell me how your idea differs from
existing concepts.
You have not countered my idea with criticism. You have decorated my
idea with how you think it would be implemented (C void*) and argued
against that. I don't need to give an implementation to notice that
you jumped to a particular implementation and basically just countered
that implementation, not the idea in general.

Again - where is your idea laid out in (more) detail, so that one can
discuss it? That was all that I was asking - which of course you
carefully avoided...
As far as I'm concerned that was just meant as a matter-of-fact remark,
with no snide intentions.

Where exactly are the facts? All I see is some vague "there should be
something better, by introducing ANY". But no details of how typechecking
would then work. I showed you that existing type systems can't properly
cope with ANY so far and allow for many errors. Just saying "but mine
won't" is a little bit thin, don't you think?

Diez
 

Diez B. Roggisch

It is not about falling back on generic declarations, it is about
how such objects will be treated. Diez seems to think that a
strongly-typed language can only deal with generic declarations
by using something that allows circumventing the type system.

No, I don't - now it's you who makes assumptions about what I think. ML
and other FP languages show that genericity can be done without
circumvention. Templates and generics in C++ partially do so.
And you are wrong. The problem with the C void* construct (I'm not that
familiar with Java) is that all type information is lost. When you
use such a parameter in a function you have no idea what you are
working with.

You don't know Java - I do. And nobody said that it loses that
type information. It doesn't. Still, errors occur - namely
ClassCastExceptions.

That indicates that going back and forth via ANY doesn't necessarily
lose the type information itself - what is lost is the capability of
today's type systems to track that information across such a transition.
This won't work:

Object foo = new A();
B bar = (B) foo;   // compiles, but throws ClassCastException at run time

And please, pretty please, don't argue with the simplicity of that
example - think of a bazillion statements between these two, possibly
done with run-time-instantiated classes that weren't known at
compile time.
But that doesn't need to be so if you have a typesystem with an ANY type.
Such a type declaration would mean that objects of any type could be
used here. However, that doesn't imply that the type information
of the actual objects used has to be lost. That type information
may still be available and useful for further type checking.

Java has that.
That you and Diez can only think about C, C++ or Java constructs
when I mention an ANY type is your limitation. It doesn't
need to be the limitation of a specific type system.

Again: where are the specifics of this system? In your head? Tell us
the gory details, please.

Diez
 

Roy Smith

Paul Rubin said:
What this is about (to me at least) is the edit-debug cycle. Let's
say I write some Python code, using assert to validate datatypes.
Maybe I've made 4 errors. I then write a test function and run it.
Boom, the first assert fails. I fix the first error, run again.
Boom, the next assert fails. Fix the next error, run again, boom,
fix, etc. Four edit-debug cycles.

With static typing, I run the compiler, get 4 error messages, fix all
4, and can get on with the next phase of testing with three fewer edit
cycles.

That's certainly the plan, but my experience is that it's not the whole
story, for a few reasons.

1) I can often run 4 Python edit-debug cycles in the time it takes me to
run a single C++ cycle, especially if there's a whole pile of build system
gunk layered on top of the raw compile step.

2) When I get a bunch of compile errors, I know that many of them are just
cascaded from a single problem. Thus, I tend to fix the first one and only
take a quick look at all the others. If it's obvious what the problem is,
I'll fix it, but as often as not, I'll just recompile and see what pops out
the next time.

3) Many times, I'll spend more time making the compiler happy than the
protection it affords me is worth. C++ is such a complex language, it's
really hard to write a compiler which follows every detail of the spec, and
the details are what kills you. We had a case the other day where a
const_cast of a reference returned by a function worked just fine on
Solaris, but failed on HPUX. We ended up with three guys digging through
reference manuals trying to figure out how const_cast and references are
supposed to interact. We ended up deciding what we were doing was legal,
but we still had to devise a work-around so it compiled on all platforms.
It's actually a little more complex than that, because we don't even write
raw const_casts; we use a CONST_CAST macro to work around older compilers
that don't support modern casting, so we burned a little more time
double-checking that our macro expansion wasn't at fault. We could have
done a lot of Python edit-debug cycles in the time it took to sort that one
out.
 
