Am I the only one who would love these extensions? - Python 3.0 proposals (long)


Ron Adam

Ron Adam wrote:
...

> Is it possible to make the argument optional for while?

Yes, it is technically possible -- you'd need to play around with the
Dreaded Grammar File, but as Python grammar tasks go this one seems
simple.

> That may allow for an even faster time?

No: there is no reason why "while 1:", "while True:" (once True is a
keyword), and "while:" (were the syntax extended to accept it) could
or should compile down to code that is at all different.

> while:
>     <instructions>
>
> It would be the equivalent of making the default argument for while
> equal to 1 or True. Could it optimize to a single CPU instruction when
> that format is used? No checks or look ups at all?

"while 1:" is _already_ compiled to a single (bytecode, of course)
instruction with "no checks or look ups at all". Easy to check:
  1           0 SETUP_LOOP              19 (to 22)
              3 JUMP_FORWARD             4 (to 10)
              6 JUMP_IF_FALSE           11 (to 20)
              9 POP_TOP
        >>   10 LOAD_NAME                0 (foop)
             13 CALL_FUNCTION            0
             16 POP_TOP
             17 JUMP_ABSOLUTE           10
        >>   20 POP_TOP
             21 POP_BLOCK
        >>   22 LOAD_CONST               0 (None)
             25 RETURN_VALUE

the four bytecodes from 10 to 17 are the loop: name 'foop'
is loaded, it's called with no argument, its result is
discarded, and an unconditional jump back to the first of
these steps is taken (this loop, of course, will only get
out when function foop [or the lookup for its name] causes
an exception). (The 2 opcodes at 6 and 9 never execute,
and the opcode at 3 could be eliminated if those two were,
but that's the typical kind of job for a peephole optimizer,
an easy but low-returns project).
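For anyone who wants to reproduce listings like these, the standard
compile() builtin and the dis module are all it takes -- a minimal
sketch (the name foop is from the example above):

    import dis

    # Compile the statement as a module and disassemble the code object.
    code = compile("while 1: foop()", "<example>", "exec")
    dis.dis(code)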

Note the subtle difference when we use True rather than 1:
  1           0 SETUP_LOOP              19 (to 22)
        >>    3 LOAD_NAME                0 (True)
              6 JUMP_IF_FALSE           11 (to 20)
              9 POP_TOP
             10 LOAD_NAME                1 (foop)
             13 CALL_FUNCTION            0
             16 POP_TOP
             17 JUMP_ABSOLUTE            3
        >>   20 POP_TOP
             21 POP_BLOCK
        >>   22 LOAD_CONST               0 (None)
             25 RETURN_VALUE

_Now_, the loop runs all the way through the bytecodes
from 3 to 20 included (the opcodes at 0 and 21 surround
it just like they surrounded the unconditional loop we
just examined). Before we can get to the "real job" of
bytecodes 10 to 17, each time around the loop, we need
to load the value of name True (implying a lookup), do
a conditional jump on it, otherwise discard its value.

If True was a keyword, the compiler could recognize it and
generate just the same code as it does for "while 1:" --
or as it could do for "while:", were that extension of
the syntax accepted into the language.


As to the chances that a patch, implementing "while:" as
equivalent to "while 1:", might be accepted, I wouldn't
be particularly optimistic. Still, one never knows!


Alex


Thanks for the detailed explanation. I would only suggest 'while:' if
there was a performance advantage. Since there is none, the only
plus is that it saves two keystrokes. That isn't reason enough to change
something that isn't broken.

Thanks for showing me how to use compile() and dis().
  1           0 SETUP_LOOP              12 (to 15)
              3 JUMP_FORWARD             4 (to 10)
              6 JUMP_IF_FALSE            4 (to 13)
              9 POP_TOP
        >>   10 JUMP_ABSOLUTE           10
        >>   13 POP_TOP
             14 POP_BLOCK
        >>   15 LOAD_CONST               0 (None)
             18 RETURN_VALUE

Yes, this is as short as it gets. :)

In the case of using 'while 1:' it seems to me you end up losing
what you gain, because you have to test inside the loop, which is more
expensive than testing in the 'while' statement itself.
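The source that was disassembled is not preserved in the archive;
judging from the line numbers and names in the two listings below, it
was presumably something like this reconstruction:

    # First listing: the test is evaluated at the top of each iteration.
    a = 1
    while a:
        a = 0

    # Second listing: unconditional loop, with the test moved inside it.
    a = 1
    while 1:
        a = 0
        if a == 0:
            break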
  1           0 LOAD_CONST               0 (1)
              3 STORE_NAME               0 (a)

  2           6 SETUP_LOOP              18 (to 27)
        >>    9 LOAD_NAME                0 (a)
             12 JUMP_IF_FALSE           10 (to 25)
             15 POP_TOP

  3          16 LOAD_CONST               1 (0)
             19 STORE_NAME               0 (a)
             22 JUMP_ABSOLUTE            9
        >>   25 POP_TOP
             26 POP_BLOCK
        >>   27 LOAD_CONST               0 (None)
             30 RETURN_VALUE

  1           0 LOAD_CONST               0 (1)
              3 STORE_NAME               0 (a)

  2           6 SETUP_LOOP              36 (to 45)
              9 JUMP_FORWARD             4 (to 16)
             12 JUMP_IF_FALSE           28 (to 43)
             15 POP_TOP

  3     >>   16 LOAD_CONST               1 (0)
             19 STORE_NAME               0 (a)

  4          22 LOAD_NAME                0 (a)
             25 LOAD_CONST               1 (0)
             28 COMPARE_OP               2 (==)
             31 JUMP_IF_FALSE            5 (to 39)
             34 POP_TOP

  5          35 BREAK_LOOP
             36 JUMP_ABSOLUTE            9
This requires twice as many instructions, 12 vs 6. So it's probably
better to avoid 'while 1:' or 'while True:' if possible. But it is
nice to know 'while 1:' is 20% faster than 'while True:' in those
situations where you need to use it.

Ron Adam
 

Dave Benjamin

> This would also have the advantage of more quickly catching certain
> common programming errors....

This would break backward compatibility, especially with some of the more
popular variable names like "file" and "list" (most of us know not to do
that these days, but they're such tasty words, I'm sure there's enough code
out there to make the restriction painful).

I seem to remember Guido being opposed to command-line arguments that change
the language, but it seems like this could be a good opportunity for an
argument that locks down built-ins.
 

Michele Simionato

John Roth said:
> There's a movement
> to make just about everything in the built-in scope immutable and
> not rebindable at any lower scope for performance reasons.

I am a bit worried by this movement. While I have found no reason to tweak
the builtins in production code, I have found specific situations where
the ability to tinker with the builtins is extremely convenient.
Typically this happens in the debugging phase. For instance I have used
tricks such as modifying the `object` class to induce tracing capabilities
on all my classes, without touching my source code.

So, I would be happy with the following:

1. by default builtins cannot be changed, resulting in a performance
benefit;

2. nevertheless, it is possible to use a command line switch to revert
to the current behavior.

If the project is to kill option 2, then I would be unhappy :-(


Michele
 

Andrew Dalke

Just to highlight a new suggestion I made that I hadn't seen before:
use an ellipsis ("...") at the end of a list assignment to mean "ignore
the rest of the RHS." This would allow a style of enum definition like

    class Keywords:
        AND, ASSERT, BREAK, CLASS, CONTINUE, DEF, \
        DEL, ELIF, YIELD, ... = itertools.count()

    class consts:
        A, B, C, ... = itertools.count()
        D, E, F, G, H, ... = itertools.count(10)

This could also be used in the standard library: calendar.py has

    year, month, day, hour, minute, second = tuple[:6]

which would become

    year, month, day, hour, minute, second, ... = tuple

and of course for

    (MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY) = range(7)

in that same file.
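For comparison, the effect of "ignore the rest of the RHS" can be
approximated today with itertools.islice -- a sketch of the intended
semantics, not part of the proposal itself:

    import itertools

    # "A, B, C, ... = itertools.count()" would mean: take the first three
    # values and discard the (here unbounded) remainder.
    A, B, C = itertools.islice(itertools.count(), 3)
    print(A, B, C)   # -> 0 1 2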

HOWEVER, those are effectively the only two places in the top-level
Python lib which would use this construct correctly. There are places
where it can be used incorrectly: I think changing

    OK, BOOM, FAIL = range(3)

into

    OK, BOOM, FAIL, ... = itertools.count()

is overkill -- there should be at least 5 elements before this ... is
useful.

and I think changing

    FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16

into

    FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT, ... = x*x for x in itertools.count(1)

is just plain stupid.

Therefore I withdraw this proposal, leaving it for contemplation and
posterity.

Plus, if accepted, should the following then be allowed?

    ..., x, y, z = itertools.count(100)          # get the last three elements
    a, b, c, ..., x, y, z = itertools.count(100) # get the first three and
                                                 # last three elements
?

Andrew
(e-mail address removed)
 

Alexander Schmolck

Andrew Dalke said:
> Just to highlight a new suggestion I made that I hadn't seen before:
> use an ellipsis ("...") at the end of a list assignment to mean "ignore
> the rest of the RHS." This would allow a style of enum definition like
>
>     class Keywords:
>         AND, ASSERT, BREAK, CLASS, CONTINUE, DEF, \
>         DEL, ELIF, YIELD, ... = itertools.count()

IMHO, this is not such a good way to define enums. Why not do

    AND, ASSERT, BREAK, CLASS, etc = "AND, ASSERT, BREAK, CLASS, etc".split(", ")

This is almost as easy to type (or rather paste) and you shouldn't lose
comparison efficiency as long as you compare with 'is':

``if thingy is CLASS: blah``

and you've got the significant advantage that screw-ups are less likely and
you automatically get sensible print values. If you need equality comparisons
often maybe ``map(intern,...`` would help, but I don't know about that.
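A quick check of both claims -- identity comparison and sensible print
values -- with illustrative names:

    # The split binds each name to a distinct string object.
    AND, ASSERT, BREAK, CLASS = "AND, ASSERT, BREAK, CLASS".split(", ")

    thingy = CLASS
    print(thingy is CLASS)   # -> True: same string object, so 'is' suffices
    print(thingy)            # -> CLASS, a readable value for free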

Nonetheless, I think I would really like to see something along those lines,
because I think there are plenty of other good uses (especially, as you
suggest, in combination with iterators). I'd also like to have the apply-*
work in assignments, e.g. ``first, second, *rest = someList``. This would
also make python's list destructuring facilities much more powerful.

'as
 

Alex Martelli

Alexander Schmolck wrote:
...
> IMHO, this is not such a good way to define enums. Why not do
>
>     AND, ASSERT, BREAK, CLASS, etc = "AND, ASSERT, BREAK, CLASS, etc".split(", ")

Unfortunately, this breaks the "once, and only once" principle: you
have to ensure the sequence of words is the same on both sides -- any
error in that respect during maintenance will cause small or big
headaches.
> and you've got the significant advantage that screw-ups are less likely

I disagree with you on this point. Any time two parts of a program
have to be and remain coordinated during maintenance, screw-ups'
likelihood unfortunately increases.

Keeping enum values in their own namespace (a class or instance) is
also generally preferable to injecting them in global namespace,
IMHO. If you want each constant's value to be a string, that's easy:

    class myenum(Enum):
        AND = ASSERT = BREAK = CLASS = 1

with:

    class Enum: __metaclass__ = metaEnum

and:

    class metaEnum(type):
        def __new__(mcl, clasname, clasbases, clasdict):
            for k in clasdict:
                if not k.startswith('__'):   # leave __module__ & co. alone
                    clasdict[k] = k
            return type.__new__(mcl, clasname, clasbases, clasdict)
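Put to work, the net effect is (a quick interactive check, given the
definitions above and Python 2's __metaclass__ syntax):

    >>> class myenum(Enum):
    ...     AND = ASSERT = BREAK = CLASS = 1
    ...
    >>> myenum.ASSERT
    'ASSERT'
    >>> myenum.BREAK
    'BREAK'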

The problem with this is that the attributes' _order_ in the
classbody is lost by the time the metaclass's __new__ runs, as
the class attributes are presented as a dictionary. But if the
values you want are strings equal to the attributes' names, the
order is not important, so this works fine.

But many applications of enums DO care about order, sigh.

> in combination with iterators). I'd also like to have the apply-* work in
> assignments, e.g. ``first, second, *rest = someList``. This would also
> make python's list destructuring facilities much more powerful.

Yes, I agree on this one. Further, if the "*rest" was allowed to
receive _any remaining iterable_ (not just necessarily a list, but
rather often an iterator) you'd be able to use itertools.count()
and other unbounded-iterators on the RHS.
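A sketch of what that could look like if "*rest" received the remaining
iterator itself; the helper name unpack is invented for illustration:

    import itertools

    def unpack(iterable, n):
        # Take the first n items eagerly; hand back the rest lazily.
        it = iter(iterable)
        return tuple(itertools.islice(it, n)) + (it,)

    first, second, rest = unpack(itertools.count(), 2)
    print(first, second, next(rest))   # -> 0 1 2, even with an unbounded RHS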


Alex
 

Alex Martelli

Andrew Dalke wrote:
...
> and I think changing
>
>     FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16
>
> into
>
>     FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT, ... = x*x for x in itertools.count(1)
>
> is just plain stupid.

Not necessarily stupid, but it sure IS semantics-altering (as well
as syntactically incorrect in the proposed 2.4 syntax for genexps --
you'll need parentheses around a genexp) -- a sequence of squares
rather than one of powers of two.
(2**x for x in itertools.count()), however, would be fine:).

> Therefore I withdraw this proposal, leaving it for contemplation and
> posterity.

I think you just grabbed the record for shortest-lived proposal
in Python's history. Applause, applause!-).


Alex
 

Alex Martelli

Michele said:
> I am a bit worried by this movement. While I have found no reason to tweak
> the builtins in production code, I have found specific situations where
> the ability to tinker with the builtins is extremely convenient.
> Typically this happens in the debugging phase. For instance I have used
> tricks such as modifying the `object` class to induce tracing capabilities
> on all my classes, without touching my source code.
>
> So, I would be happy with the following:
>
> 1. by default builtins cannot be changed, resulting in a performance
> benefit;
>
> 2. nevertheless, it is possible to use a command line switch to revert
> to the current behavior.
>
> If the project is to kill option 2, then I would be unhappy :-(

Sounds reasonable to me. If nailing down built-ins is seen as good
_only_ for performance, then perhaps the same -O switch that already
exists could be used. Personally, I think nailing down built-ins may
also be good for clarity in most cases, so I would prefer a more
explicit way to request [2]. However, we would then need a third
file-extension besides .pyc and .pyo -- clearly you would not want
the same bytecode for the "builtins nailed down" and "builtins can
be modified" modes; and that might pose some problems -- so perhaps
sticking to -O would be better overall.

I think that discussing the possibilities here is productive only up
to a point, so you might want to come to python-dev for the purpose,
at least when some clarification of possibilities has been reached.


Alex
 

Alexander Schmolck

Alex Martelli said:
> Alexander Schmolck wrote:
> ...
>
> Unfortunately, this breaks the "once, and only once" principle: you
> have to ensure the sequence of words is the same on both sides -- any
> error in that respect during maintenance will cause small or big
> headaches.

True, but I'd still prefer this one (+ editor macro) in most cases to
(ab)using numbers as symbols (this was my main point; I'm sure I could also
think of various hacks to avoid the duplication and do the stuffing in the
global namespace implicitly/separately -- if it must go there at all costs).
The main advantage the above has over custom enum types is that it's
immediately obvious. Maybe the standard library should grow a canonical enum
module that addresses most needs.

> Keeping enum values in their own namespace (a class or instance) is
> also generally preferable to injecting them in global namespace,
> IMHO.

Yep, this would also be my preference. I'd presumably use something like

    DAYS = enum('MON TUE WED [...]'.split())

maybe even without the split.

> If you want each constant's value to be a string, that's easy:

Doesn't have to be, strings are just the simplest choice for something with a
sensible 'repr' (since you might want some total order for your enums, a class
might be better -- if one goes through the trouble of creating one's own enum
mechanism).
> class myenum(Enum):
>     AND = ASSERT = BREAK = CLASS = 1
>
> with:
>
> class Enum: __metaclass__ = metaEnum
>
> and:
>
> class metaEnum(type):
>     def __new__(mcl, clasname, clasbases, clasdict):
>         for k in clasdict:
>             if not k.startswith('__'):   # leave __module__ & co. alone
>                 clasdict[k] = k
>         return type.__new__(mcl, clasname, clasbases, clasdict)

Yuck! A nice metaclass demo but far too "clever" and misleading for such a
simple task, IMO.
> The problem with this is that the attributes' _order_ in the
> classbody is lost by the time the metaclass's __new__ runs, as
> the class attributes are presented as a dictionary. But if the
> values you want are strings equal to the attributes' names, the
> order is not important, so this works fine.
>
> But many applications of enums DO care about order, sigh.

If that prevents people from using the above, I'd consider it a good thing ;)
> Yes, I agree on this one. Further, if the "*rest" was allowed to
> receive _any remaining iterable_ (not just necessarily a list, but
> rather often an iterator) you'd be able to use itertools.count()
> and other unbounded-iterators on the RHS.

Indeed, as I noted in my previous post, iterators and such extensions would go
very well together.

'as
 

Peter Hansen

Michele said:
> I am a bit worried by this movement. While I have found no reason to tweak
> the builtins in production code, I have found specific situations where
> the ability to tinker with the builtins is extremely convenient.
> Typically this happens in the debugging phase. For instance I have used
> tricks such as modifying the `object` class to induce tracing capabilities
> on all my classes, without touching my source code.

Another use case I've raised in the past, and I believe it is accepted
that it is a valid one, is in automated unit testing or acceptance testing
where it is often extremely beneficial to be able to "mock" a builtin by
replacing it with a custom test method.

For us, the most common use of this is to change open() to return a
special mock file object which doesn't actually cause any disk access.

Removing the ability to do this would cripple our ability to properly
test some code.
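A minimal sketch of that mocking technique -- module and data names
invented here; in Python 2 the builtins module was spelled __builtin__:

    import builtins
    from io import StringIO

    real_open = builtins.open

    def mock_open(name, mode="r", *args, **kwargs):
        # No disk access: hand back an in-memory file object instead.
        return StringIO("canned test data\n")

    builtins.open = mock_open
    try:
        print(open("whatever.txt").read())   # reads the canned data
    finally:
        builtins.open = real_open            # always restore the real builtin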

Alex' suggestion of using -O to switch to a mode where builtins are
static is okay as far as it goes, but it would prevent proper testing
of code that relies on the aforementioned technique when one wishes
to verify that the standard -O improvements still work. I'd vote for
decoupling this behaviour from the rest of -O, as Michele suggests.

-Peter
 

Alex Martelli

Alexander Schmolck wrote:
...
> True, but I'd still prefer this one (+ editor macro) in most cases to
> (ab)using numbers as symbols (this was my main point; I'm sure I could

I'm used to taking bus 20 to go downtown. And perhaps some of my ancestors
fought in the VII Legio (also known by nicknames such as Gemina and Felix,
and not to be confused with the _other_ VII Legio, also known as Claudia
and Pia). If using numbers just as arbitrary tags used to distinguish
items is "abuse", then there's been enough of that over the last few
millennia -- and there's by far enough of it in everyday life today --
that to keep considering it "abuse" is, IMHO, pedantry.

All the more so, when we also want our "tags" to have _some_ of the
properties of numbers (such as order) though not all (I'm not going
to "add" buses number 20 and 17 to get bus number 37, for sure).
> also think of various hacks to avoid the duplication and do the
> stuffing in the global namespace implicitly/separately -- if it must go
> there at all costs). The main advantage the above has over custom enum
> types is that it's immediately obvious. Maybe the standard library should
> grow a canonical enum module that addresses most needs.

"A canonical enum module" would be great, but the probability that a
design could be fashioned in such a way as to make _most_ prospective
users happy is, I fear, very low. Anybody's welcome to try their hand
at a pre-PEP and then a PEP, of course, but I suspect that enough
consensus on what features should get in will be hard to come by.

>> Keeping enum values in their own namespace (a class or instance) is
>> also generally preferable to injecting them in global namespace,
>> IMHO.
>
> Yep, this would also be my preference. I'd presumably use something like
>
>     DAYS = enum('MON TUE WED [...]'.split())
>
> maybe even without the split.

Sure, if you have an 'enum' factory function or type you may choose
to have it do its own splitting. But if DAYS.MON and DAYS.WED are
just strings -- so you cannot _reliably_ test e.g. "x > DAYS.WED" in
the enumeration order, get "the next value after x", and so on -- then
I fear your enum will satisfy even fewer typical, widespread
requirements than most other proposals I've seen floating around
over the years.
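One way to keep those properties is to hand out small integers in
declaration order -- a rough sketch, with the class and method names
invented here:

    class enum:
        def __init__(self, names):
            self._names = list(names)
            for i, name in enumerate(self._names):
                setattr(self, name, i)    # declaration order -> value

        def next_value(self, value):
            # Successor in enumeration order, or None at the end.
            return value + 1 if value + 1 < len(self._names) else None

        def name_of(self, value):
            return self._names[value]

    DAYS = enum('MON TUE WED THU FRI SAT SUN'.split())
    print(DAYS.MON < DAYS.WED)                      # -> True
    print(DAYS.name_of(DAYS.next_value(DAYS.TUE)))  # -> WED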

> Doesn't have to be, strings are just the simplest choice for something
> with a sensible 'repr' (since you might want some total order for your
> enums, a class might be better -- if one goes through the trouble of
> creating one's own enum mechanism).

I don't think a repr of e.g. 'TUE' is actually optimal, anyway --
"DAYS.TUE" might be better (this of course would require the enum
builder to intrinsically know the name of the object it's building --
which again points to a metaclass as likely to be best).

But representation has not proven to be the crucial issue for enums in
C, and this makes me doubt that it would prove essential in Python; the
ability to get "the next value", "a range of values", and so on, appear
to meet more needs of typical applications, if one has to choose. Sure,
having both (via a type representing "value within an enumeration")
might be worth engineering (be that in a roll-your-own, or standardized
PEP, for enum).

I _think_ the rocks on which such a PEP would founder are:
-- some people will want to use enums as just handy collections of
integer values, with the ability, like in C, to set FOO=23 in the
middle of the enum's definition; the ability to perform bitwise
operations on such values (using them as binary masks) will likely
be a part of such requests;
-- other people will want "purer" enums -- not explicitly settable to
any value nor usable as such, but e.g. iterable, compact over
predecessor and successor operations, comparable only within the
compass of a single enum;
-- (many will pop in with specific requests, such as "any enum must
subclass int just like bool does", demands on str and/or repr,
the ability to convert to/from int or between different enums,
and so on, and so forth -- there will be many such "nonnegotiable
demands", each with about 1 to 3 proponents);
-- enough of a constituency will be found for each different vision
of enums to make enough noise and flames to kill the others, but
not enough to impose a single vision.

This may change if and when Python gets some kind of "optional
interface declarations", since then the constraints of having enums
fit within that scheme will reduce the too-vast space of design
possibilities. But, that's years away.

>> class myenum(Enum):
>>     AND = ASSERT = BREAK = CLASS = 1
>>
>> with:
>>
>> class Enum: __metaclass__ = metaEnum
>>
>> and:
>>
>> class metaEnum(type):
>>     def __new__(mcl, clasname, clasbases, clasdict):
>>         for k in clasdict:
>>             if not k.startswith('__'):   # leave __module__ & co. alone
>>                 clasdict[k] = k
>>         return type.__new__(mcl, clasname, clasbases, clasdict)

> Yuck! A nice metaclass demo but far too "clever" and misleading for such a
> simple task, IMO.

I disagree that defining an Enum as a class is "clever" or "misleading"
in any way -- and if each Enum is a class it makes a lot of sense to use
a metaclass to group them (with more functionality than the above, of
course). Of course, values such as 'myenum.ASSERT' should be instances
of myenum (that is not shown above, where I made them strings because
that's what you were doing -- the only point shown here is that there
should be no duplication).

> If that prevents people from using the above, I'd consider it a good
> thing ;)

Oh, it's not hard to substitute that with e.g.

    class myenum(Enum):
        __values__ = 'AND ASSERT BREAK CLASS'.split()

(or, ditto w/o the splitting) so that order is preserved. But then
we have to check explicitly at class creation time that all the values
are valid identifiers, which is a little bit of a bother -- being able
to _use_ identifiers in the first place would let us catch such errors
automatically and as _syntax_ errors. Unfortunately, we can't do it
without counting, or having an alas very hypothetical construct to mean
"and all the rest goes here", as below.

> Indeed, as I noted in my previous post, iterators and such extensions
> would go very well together.

Now _that_ is a PEPworthy idea that might well garner reasonably vast
consensus here. Of course, getting it past python-dev's another issue:).


Alex
 

Terry Reedy

Paul Rubin said:
> I've been thinking for a while that Python could benefit from a fork,
> that's not intended to replace Python for production use for any group
> of users, but rather to be a testbed for improvements and extensions
> that would allow more concrete experiments than can be done through
> pure thought and the PEP process. Most proposals for Python
> improvements could then be implemented and tried out in the
> experimental platform before being folded back into the regular
> implementation.

This is such a good idea that some of the developers, including Guido,
actually do this for some things in a non-distribution sandbox branch
of the CVS tree. However, PyPy should make such experimentation
practically possible for more of us and much easier for almost
everyone.

TJR
 

Thomas Bellman

Alex Martelli said:
> You are heartily welcome to perform the dynamic whole-code analysis
> needed to prove whether True (or any other built-in identifier) may
> or not be re-bound as a side effect of functions called within the
> loop -- including the calls to methods that can be performed by just
> about every operation in the language.

There are other ways. Ways that do change the semantics of the
language somewhat, making it somewhat less dynamic, but my
understanding is that Guido is actually positive to doing so.

It is quite easy to determine whether some code modifies a
specific binding in its own local or global scope. (The use of
'exec' would of course make the analyzer assume that the binding
*is* changed.) You could then add the restriction that a module
may not change the bindings in another module's global or builtin
namespace.
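The per-module half of that check is essentially what the stdlib
symtable module exposes -- a sketch, assuming a file mymodule.py to
inspect:

    import symtable

    src = open("mymodule.py").read()
    table = symtable.symtable(src, "mymodule.py", "exec")

    # Does this module itself rebind the builtin name len anywhere?
    try:
        print(table.lookup("len").is_assigned())
    except KeyError:
        print(False)    # the name is never even mentioned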

> It seems a very worthy
> research project, which might take good advantage of the results
> of the pypy project (there is plenty of such advanced research that
> is scheduled to take place as part of pypy, but there is also surely
> space for a lot of other such projects; few are going to bear fruit,
> after all, so, the more are started, the merrier).

It almost sounds as if you are implying that doing analysis of
the entire program is something that is almost unheard of. I
don't think you are actually meaning that, but for the benefit of
others, it might be worth noting that there have been compilers
that *do* data flow analysis on entire programs available for
several years now. The MIPS C compiler, or rather the linker,
did global register allocation more than a decade ago, and since
then other compilers have done even more ambitious analysis of
entire programs. I am under the impression that some LISP
compilers have done this too.

(But, yes, I do realize that it is more difficult to do this in a
dynamic language like Python than in a statically typed language
like C.)

If you, as I suspect, really mean that to implement this kind of
code analysis in Python is lots of hard work and involves lots of
trial-and-error testing to determine what is worth-while, then I
heartily agree.

> As there is really no good use case for letting user code rebind the
> built-in names None, True, and False, making them keywords has almost
> only pluses (the "almost" being related to backwards compatibility
> issues),

Note that I have no quibbles with this reasoning.
 

Skip Montanaro

Thomas> A much better way to achieve the same goal would be to make the
Thomas> optimizer recognize that True isn't re-bound within the loop.

Easier said than done. Given:

    while True:
        foo()

how do you know that foo() doesn't twiddle Python's builtins?
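Nothing stops it from doing exactly that -- a Python 2.x sketch (in
today's Python assigning to True is a syntax error, which is rather the
point of making it a keyword):

    # Python 2.x only:
    import __builtin__

    def foo():
        __builtin__.True = 0   # rebinds True for every module from now on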

Thomas> Making the optimizer better would improve the performance of
Thomas> much more code than just 'while True' loops.

What optimizer? ;-)

Skip
 

Alex Martelli

Thomas said:
> There are other ways. Ways that do change the semantics of the
> language somewhat, making it somewhat less dynamic, but my
> understanding is that Guido is actually positive to doing so.

Yes, for other built-ins. Not for these -- try assigning to None in Python
2.3 and see the warning. There's just NO use case for supporting None
being assigned to, thus, that was put in immediately; True and False need
to remain syntactically assignable for a while to support code that
_conditionally_ assigns them if they're not already defined, but while it's
therefore unacceptable to have such assignments give _syntax_ errors or
warnings, for the moment, they can nevertheless be blocked (and warned
against) at runtime whenever Guido thinks that's opportune.

> It almost sounds as if you are implying that doing analysis of
> the entire program is something that is almost unheard of.

Oh, no, I do remember when it was a brand-new novelty in a
production compiler (exactly the MIPS one, on DEC workstations
I believe). Absolutely unusable for any program above toy-level,
of course -- and when we got shiny new workstations twice as
fast as the previous ones, it barely made a dent in the size of
programs it could deal with, since apparently the resources it
consumed went up exponentially with program size. We gritted
our teeth and let it run for some humungous time on the smallest
of the applications we sold, thinking that we'd measure the real
effects and see if it was worth restructuring all of our production
environment around such times.

We never found out what benefits whole-program analysis might
have given, because the resulting program was so bug-ridden it
couldn't perform ANY user-level task in our field (mechanical CAD).
Shutting off the whole-program optimizations gave us a perfectly
usable program, of course, just as we had on every other platform
we supported (a dozen at that time); they were _all_ "whole-program
analysis" bugs. That was basically the end of my interest in the issue
(that was supposed to be a _production_ compiler, it had been years
in research and beta stages already, as I recall...!)

The concept of turning Python's super-lightweight, lightning-fast,
"i don't even _peephole_ optimize" compiler into any semblance of
such a monster, in order to let people code "None=23" rather than
making None a keyword (etc), is _truly_ a hoot. (I realize that's
not what you're advocating, thanks to your clarification at the end,
but the incongruity of the image is still worth at least a smile).

> (But, yes, I do realize that it is more difficult to do this in a
> dynamic language like Python than in a statically typed language
> like C.)

It's a nightmare to do it right for C, and I don't think any production
compiler currently claims to do it (MIPS went up in flames -- pity for
their chips, but their whole-program-analyzer is one thing I _don't_
miss:). Languages such as Haskell, with a much cleaner and more rigid
design, maybe.

> If you, as I suspect, really mean that to implement this kind of
> code analysis in Python is lots of hard work and involves lots of
> trial-and-error testing to determine what is worth-while, then I
> heartily agree.

I don't have to spend lots of hard work to determine that _any_
substantial complication of the compiler for the purpose of letting
somebody code "None=23" is _not_ worthwhile. As for the various
possibilities regarding _optimization_ (which have nothing to do with
wanting to allow assignments to None etc), one of pypy's motivations
is exactly to make such experimentation much easier. But I don't
expect such research to turn into production-use technology in the
bat of an eye.

> Note that I have no quibbles with this reasoning.

Oh good. "We'll design the language any which way and incredibly
smart optimizers will take care of the rest" has never been Python's
philosophy, and I think its emphasis of simplicity _including simplicity
of implementation_ has served it very well and it would be foolhardy
to chuck it away.


Alex
 

Andrew Dalke

Alex:
> Not necessarily stupid, but it sure IS semantics-altering (as well
> as syntactically incorrect in the proposed 2.4 syntax for genexps --
> you'll need parentheses around a genexp) -- a sequence of squares
> rather than one of powers of two.
> (2**x for x in itertools.count()), however, would be fine:).

Oops! :)

What I considered 'stupid' was changing something that was easy
to verify by eye (1, 2, 4, 8, 16) into something which used a generator
(or list comprehensions; I was experimenting with the new idea) and
is harder to verify by eye.

The generator form has the advantage of allowing new terms to be
added without additional modifications, but given how unchanging
that code is, it's a false savings.
> I think you just grabbed the record for shortest-lived proposal
> in Python's history. Applause, applause!-).

I can do better than that ;)

I propose a requirement that all Python code include the
author's PSU name before the first executable statement
in the file. It must be of the form '#PSU: ....' as in

#PSU: Parrot Funny-Walker of Wensleydale

This would let ... err, just a mo', someone's at the
 

Douglas Alan

Alex Martelli said:
OTOH, the chance that the spelling of True, False and None will change
is close to that of a snowball in the _upper_ reaches of Hell (the
_lower_ ones are in fact frozen, as any reader of Alighieri well knows,
so the common idiom just doesn't apply...:).

Can someone remind me as to why these are capitalized? They clash
annoyingly with the common coding convention that capitalizes only
classes.

|>oug
 

Andrew Dalke

Douglas Alan:
> Can someone remind me as to why [None, True, False] are
> capitalized? They clash annoyingly with the common coding
> convention that capitalizes only classes.

Here's a conjecture -- those are singletons, and note that classes
are singletons too.

Andrew
(e-mail address removed)
 

Dave Benjamin

Douglas Alan:
> Can someone remind me as to why [None, True, False] are
> capitalized? They clash annoyingly with the common coding
> convention that capitalizes only classes.

Andrew Dalke:
> Here's a conjecture -- those are singletons, and note that classes
> are singletons too.

And so are modules... ;)
 
