Pythonification of the asterisk-based collection packing/unpacking syntax

Chris Angelico

extended collection unpacking, as in 'head,*tail=sequence', is quite a
rare construct indeed, and here I very strongly feel a more explicit
syntax is preferable.

You may be right, but...
... if collection packing/unpacking were
presented as a more general construct from the start,
'head,tail::tuple=sequence' would be hard to miss.

.... it doesn't really justify a _change_. When a language is in its
infancy and the only code written in it is on the designers' own
computers, these sorts of debates can be tipped by relatively small
differences - is it more readable, is it quick enough to type, etc.
But once a language reaches a level of maturity, changes need to
overwhelm the "but it's a change" hurdle - breaking existing code is
majorly dangerous, and keeping two distinct ways of doing something
means you get the worst of both worlds.

We can argue till the cows come home as to which way would be better,
_had Python started with it_. I don't think there's anything like
enough difference to justify the breakage/duplication.

ChrisA
 
Eelco

You may be right, but...


... it doesn't really justify a _change_. When a language is in its
infancy and the only code written in it is on the designers' own
computers, these sorts of debates can be tipped by relatively small
differences - is it more readable, is it quick enough to type, etc.
But once a language reaches a level of maturity, changes need to
overwhelm the "but it's a change" hurdle - breaking existing code is
majorly dangerous, and keeping two distinct ways of doing something
means you get the worst of both worlds.

We can argue till the cows come home as to which way would be better,
_had Python started with it_. I don't think there's anything like
enough difference to justify the breakage/duplication.

That I agree with; I think it is a questionable idea to introduce this
in a new Python 3 version. But I consider it a reasonable change for a
'Python 4', or whatever the next major version change will be called.
Writing a code-conversion tool to convert from *args to args::tuple
would be quite easy indeed.
 
Steven D'Aprano

[...]
I'm afraid it is.

Here's the definition of assignment in Python 3:
http://docs.python.org/py3k/reference/simple_stmts.html#assignment-statements

Que?

You have claimed that "constructing a list and binding it to the
identifier" is not how iterator unpacking is formulated in Python the
language. But that is wrong. That *is* how iterator unpacking is
formulated in Python the language. The reference manual goes into detail
on how assignment is defined in Python. You should read it.

'head, *tail = sequence'

Is how one currently unpacks a head and tail in idiomatic Python

This is semantically equivalent to

'head = sequence[0]'
'tail = list(sequence[1:])'

There's a reason this feature is called *iterable* unpacking: it operates
on any iterable object, not just indexable sequences.

'head, *tail = sequence' is not just semantically equivalent to, but
*actually is* implemented as something very close to:

temp = iter(sequence)
head = next(temp)
tail = list(temp)
del temp

Extended iterable unpacking, as in 'head, *middle, tail = sequence' is a
little more complex, but otherwise the same: it operates using the
iterator protocol, not indexing. I recommend you read the PEP, if you
haven't already done so.

http://www.python.org/dev/peps/pep-3132/
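Spelled out, a rough sketch of the extended case (not CPython's actual implementation, but the same iterator-protocol shape that PEP 3132 describes):

```python
# A sketch of what 'head, *middle, tail = sequence' does under the hood,
# per PEP 3132: everything goes through the iterator protocol, not indexing.
def unpack_middle(sequence):
    it = iter(sequence)
    head = next(it)          # one item for each plain target before the star
    rest = list(it)          # exhaust the iterator into a list
    middle, tail = rest[:-1], rest[-1]  # peel the trailing plain target off the end
    return head, middle, tail

assert unpack_middle([1, 2, 3, 4]) == (1, [2, 3], 4)
```

Note that the middle still ends up as a list, exactly as in the two-target form.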

But these forms are linguistically different, in too many different ways
to mention.

How about mentioning even one way? Because I have no idea what
differences you are referring to, or why you think they are important.


Again, you make a claim about Python which is contradicted by the
documented behaviour of the language. You claim that we DON'T have
something of the form 'tail = list_tail(sequence)', but that is *exactly*
what we DO have: extracting the tail from an iterator.

Obviously there is no built-in function "list_tail" but we get the same
effect by just using list() on an iterator after advancing past the first
item.
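Spelled out, that hypothetical "list_tail" (again, not a real builtin) would be something like:

```python
def list_tail(iterable):
    # Hypothetical helper (no such builtin exists): the effect described
    # above -- advance past the first item, then list() the rest.
    it = iter(iterable)
    next(it)          # discard the head
    return list(it)   # everything else, as a list

assert list_tail(range(5)) == [1, 2, 3, 4]
assert list_tail("abc") == ['b', 'c']
```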

My claim is that the two semantically identical formulations above do
not have isomorphic linguistic form. As far as I can make sense of your
words, you seem to be disputing this claim, but it's a claim as much
worth debating as that the sun rises in the east.

"Isomorphic linguistic form"? Are you referring to the idea from
linguistics that there are analogies between the structure of phonic and
semantic units? E.g. that the structures (phoneme, syllable, word) and
(sememe, onomateme, sentence) are analogous.

I don't see why this is relevant, or which units you are referring to --
the human-language description of what Python does, high-level Python
language features, Python byte-code, or machine code. Not that it
matters, but it is unlikely that any of those are isomorphic in the
linguistic sense, and I am not making any general claim that they are.

If not, I have no idea what you are getting at, except possibly trying to
hide behind obfuscation.


How python accomplishes any of this under the hood is entirely
immaterial. The form is that of a compile-time type constraint,
regardless of whether the BDFL ever thought about it in these terms.

Compile-time type constraints only have meaning in statically-typed
languages. Otherwise, you are applying an analogy that simply doesn't
fit, like claiming that the motion of paper airplanes and birds are
equivalent just because in English we use the word "fly" to describe them
both.

The differences between what Python actually does and compile-time type-
constraints are greater than the similarities.

Similarities:

(1) In both cases, 'tail' ends up as a list.

Differences:

(1) Compiler enforces the constraint that the identifier 'tail' is always
associated with a list and will prevent any attempt to associate 'tail'
with some value that is not a list. Python does nothing like this: the
identifier 'tail' has no type restrictions at all, and can be used for
any object.

(2) Errors are detectable at compile-time with an error that halts
compilation; Python detects errors at runtime with an exception that can
be caught and by-passed without halting execution.

(3) "Constraint" has the conventional meaning of a restriction, not a
result; a type-constraint refers to a variable being prevented from being
set to some other type, not that it merely becomes set to that type. For
example, one doesn't refer to 'x = 1' as a type-constraint merely because
x gets set to an int value; why do you insist that 'head, *tail =
sequence' is a type-constraint merely because tail gets set to a list?


'tail' is (re)declared on the spot as a brand-new identifier (type
constraint included); whether it exists before has no significance
whatsoever, since python allows rebinding of identifiers.

Given the following:

tail = "beautiful red plumage"
head, *tail = [1, 2, 3, 4]
tail = 42

is it your opinion that all three instantiations of the *identifier*
'tail' (one in each line) are *different* identifiers? Otherwise, your
statement about "brand new identifier" is simply wrong.

If you allow that they are the same identifier with three different
values (and three different types), then this completely destroys your
argument that there is a constraint on the identifier 'tail'. First
'tail' is a string, then it is a list, then it is an int. If that is a
type constraint (restriction) on 'tail', then constraint has no meaning.

If you state that they are three different identifiers that just happen
to be identical in every way that can be detected, then you are engaged
in counting angels dancing on the head of pins. Given this idea that the
middle 'tail' identifier is a different identifier from the other two,
then I might accept that there *could* be a type-constraint on middle
'tail', at least in some implementation of Python that hasn't yet been
created. But what a pointless argument to make.

Let me take a step back and reflect on the form of the argument we are
having. I claim the object in front of us is a 'cube'. You deny this
claim, by countering that it is 'just a particular configuration of
atoms'.

No no no, you have utterly misunderstood my argument. I'm not calling it
a configuration of atoms. I'm calling it a pyramid, and pointing out that
no matter how many times you state it is a cube, it actually has FOUR
sides, not six, and NONE of the sides are square, so it can't possibly be
a cube.


'Look at the definition!', you say; 'it's just a list of coordinates, no
mention of cubes whatsoever.'

'Look at the definition of a cube', I counter. 'This particular list of
coordinates happens to fit the definition, whether the BDFL intended it
to or not'.

But you are wrong. It does not fit the definition of a type-constraint,
except in the vacuous sense where you declare that there is some
invisible distinction between the identifier 'tail' in one statement and
the identical identifier 'tail' in another statement.

You are correct, in the sense that I do not disagree that it is a
particular configuration of atoms. But if you had a better understanding
of cubes, you'd realize it meets the definition;

I might not share your deep and expert understanding of cubes, but I
understand what "type-constraint" means, and I know what Python does in
iterator unpacking, and I know that there is no sensible definition of
type-constraint that applies to iterator unpacking in Python.

the fact that people
might not make a habit of pausing at this fact (there is generally
speaking not much of a point in doing so in this case), does not
diminish the validity of this perspective. But again, if this
perspective does not offer you anything useful, feel free not to share
in it.

Quite frankly, I should just let you witter on about type-constraints,
because (again, in my opinion) it destroys your credibility and will
reduce even further the already minuscule chance that your proposal will
be accepted. Since I'm against the idea, I should encourage you to
describe Python's behaviour in terms that will all but guarantee others
will dismiss you as knowing little or nothing about Python.

[...]
Whatever the virtues of your critique of my proposal, you might end up
wasting less time doing so by reading a book or two on compiler theory.

Perhaps you should spend more time considering the reality of how Python
works and less time trying to force the triangular peg of Python concepts
into the square hole of your ideas about theoretical compilers.
 
Steven D'Aprano

[...]
That proves the original point of contention: that the below* is
suboptimal language design,

Well duh.

I was mocking the idea that the context-dependent meaning of * is a
bad thing by pointing out that we accept context-dependent meaning for
round brackets () without any difficulties. Of course it is "suboptimal
language design" -- it couldn't fail to be. Context-dependency is not
necessarily a bad thing.

not because terseness always trumps
verbosity, but because commonly-used constructs (such as parenthesis or
round brackets or whatever you wish to call them)

Parentheses are not a construct. They are symbols (punctuation marks)
which are applied to at least three different constructs: grouping,
function calls, class inheritance lists.

are more deserving of
the limited space in both the ascii table and your reflexive memory,
than uncommonly used ones.

Right. And since sequence packing and unpacking is a common idiom, it
deserves to be given punctuation. That's my opinion.
 
Steven D'Aprano

Explicit and implicit are not well-defined terms,

We can at least agree on that.
but I would say that
at the moment the signal is implicit, in the sense that one cannot see
what is going on by considering the rhs in isolation.

That is a ridiculous argument. If you ignore half the statement, of
course you can't tell what is going on.

Normally in
python, an assignment just binds the rhs to the identifiers on the lhs,
but in case of collection (un)packing, this rule that holds almost all
of the time is broken, and the assignment statement implies a far more
complicated construct, with a far more subtle meaning, and non-constant
time complexity.

Python assignment in general is not constant time.

If the left hand side is a single identifier (a name, a dotted attribute,
a subscription or slice, etc.) then the assignment is arguably constant
time (hand-waving away complications like attribute access, hash table
collisions, descriptors, etc.); but if the left hand side is a list of
targets, e.g.:

a, b, c = sequence

then the assignment is O(N). Python has to walk the sequence (actually,
any iterator) and assign each of N items to N identifiers, hence O(N) not
O(1).

The trivial case of a single value on the left hand side:

x = 1

is just the degenerate case for N=1.
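One rough way to observe the O(N) behaviour, using a hypothetical counting wrapper around an iterator (names invented for illustration):

```python
class CountingIter:
    """Wrap an iterator and count how many times next() is called."""
    def __init__(self, items):
        self._it = iter(items)
        self.calls = 0
    def __iter__(self):
        return self
    def __next__(self):
        self.calls += 1
        return next(self._it)

src = CountingIter([10, 20, 30])
a, b, c = src
# N targets means at least N next() calls (CPython makes one extra call
# to confirm the iterator is exhausted), hence O(N), not O(1).
assert (a, b, c) == (10, 20, 30)
assert src.calls >= 3
```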

That's not a terrible thing, but a little extra explicitness there would
not hurt,

I think it will. It makes the assignment more verbose to no benefit.

and like I argued many times before, it is a nice unification
with the situation where the unpacking can not be implicit, like inside
a function call rather than assignment.

That's one opinion; but there's no need for any "extra explicitness"
since the definition of assignment in Python already explicitly includes
iterable unpacking. How much explicitness do you need?


Now you are just discrediting yourself in terms of having any idea what
you are talking about.

::sequence is redundant and unnecessary whether it is inside a function
call to len or on the right hand side of an assignment. In both cases, it
adds nothing to the code except noise.

Your proposal is surreal. You started off this conversation arguing that
the existing idiom for extended iterator unpacking, e.g.

head, *tail = sequence

is too hard to understand because it uses punctuation, and have ended up
recommending that we replace it with:

head, tail:: = ::sequence

instead (in the generic case where you don't want to use a different type
for tail).

Your original use-case, where you want to change the type of tail from a
list to something else, is simply solved by one extra line of code:

head, *tail = sequence
tail = tuple(tail)


You have written thousands of words to save one line in an idiom which
you claim is very uncommon. Perhaps your understanding of "pythonic" is
not as good as you imagine.
 
Rick Johnson

On Sun, 25 Dec 2011 07:47:20 -0800, Eelco wrote:
Your original use-case, where you want to change the type of tail from a
list to something else, is simply solved by one extra line of code:

head, *tail = sequence
tail = tuple(tail)

I wonder if we could make this proposal a bit more "Pythonic"? Hmm...

head, tuple(tail) = sequence

....YEP!
 
Chris Angelico

Your original use-case, where you want to change the type of tail from a
list to something else, is simply solved by one extra line of code:

head, *tail = sequence
tail = tuple(tail)

That achieves the goal of having tail as a different type, but it does
have the additional cost of constructing and then discarding a
temporary list. I know this is contrived, but suppose you have a huge
set/frozenset using tuples as the keys, and one of your operations is
to shorten all keys by removing their first elements. Current Python
roughly doubles the cost of this operation, since you can't choose
what type the tail is made into.

But if that's what you're trying to do, it's probably best to slice
instead of unpacking. Fortunately, the Zen of Python "one obvious way
to do it" doesn't stop there being other ways that work too.
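For instance, a minimal sketch of the slicing approach for the tuple-keys case (hypothetical data, not from the thread):

```python
# Slicing keeps each result a tuple directly -- no intermediate list is
# built and discarded, unlike 'head, *tail = key' followed by tuple(tail).
old_keys = {(1, 2, 3), (4, 5, 6)}
new_keys = {key[1:] for key in old_keys}  # drop each first element

assert new_keys == {(2, 3), (5, 6)}
```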

ChrisA
 
Steven D'Aprano

That achieves the goal of having tail as a different type, but it does
have the additional cost of constructing and then discarding a temporary
list. I know this is contrived, but suppose you have a huge
set/frozenset using tuples as the keys, and one of your operations is to
shorten all keys by removing their first elements. Current Python
roughly doubles the cost of this operation, since you can't choose what
type the tail is made into.

The First Rule of Program Optimization:
- Don't do it.

The Second Rule of Program Optimization (for experts only):
- Don't do it yet.


Building syntax to optimize imagined problems is rarely a good idea. The
difference between 2 seconds processing your huge set and 4 seconds
processing it is unlikely to be significant unless you have dozens of
such huge sets and less than a minute to process them all.

And your idea of "huge" is probably not that big... it makes me laugh
when people ask how to optimize code "because my actual data has HUNDREDS
of items!". Whoop-de-doo. Come back when you have a hundred million
items, then I'll take your question seriously.

(All references to "you" and "your" are generic, and not aimed at Chris
personally. Stupid English language.)

But if that's what you're trying to do, it's probably best to slice
instead of unpacking.

Assuming the iterable is a sequence.

Fortunately, most iterable constructors accept iterators directly, so for
the cost of an extra line (three instead of two), you can handle data
structures as big as will fit into memory:

# I want to keep both the old and the new set
it = iter(huge_set_of_tuples)
head = next(it) # actually an arbitrary item
tail = set(x[1:] for x in it) # and everything else

If you don't need both the old and the new:

head = huge_set_of_tuples.pop()
tail = set()
while huge_set_of_tuples:
    tail.add(huge_set_of_tuples.pop()[1:])
assert huge_set_of_tuples == set([])

If you rely on language features, who knows how efficient the compiler
will be?

head, tail::tuple = ::sequence

may create a temporary list before building the tuple anyway. And why
not? That's what this *must* do:

head, second, middle::tuple, second_from_last, last = ::iterator

because tuples are immutable and can't be grown or shrunk, so why assume
the language designers special cased the first form above?

Fortunately, the Zen of Python "one obvious way to
do it" doesn't stop there being other ways that work too.

Exactly. It is astonishing how many people think that if there isn't a
built-in language feature, with special syntax, to do something, there's
a problem that needs to be solved.
 
Chris Angelico

The First Rule of Program Optimization:
- Don't do it.

The Second Rule of Program Optimization (for experts only):
- Don't do it yet.


Building syntax to optimize imagined problems is rarely a good idea. The
difference between 2 seconds processing your huge set and 4 seconds
processing it is unlikely to be significant unless you have dozens of
such huge sets and less than a minute to process them all.

And your idea of "huge" is probably not that big... it makes me laugh
when people ask how to optimize code "because my actual data has HUNDREDS
of items!". Whoop-de-doo. Come back when you have a hundred million
items, then I'll take your question seriously.

(All references to "you" and "your" are generic, and not aimed at Chris
personally. Stupid English language.)

And what you're seeing there is the _best possible_ situation I could
think of, the strongest possible justification for new syntax.
Granted, that may say more about me and my imagination than about the
problem, but the challenge is open: Come up with something that
actually needs this.

ChrisA
 
Eelco

[...]
How is 'head, *tail = sequence' or semantically entirely
equivalently, 'head, tail::list = sequence' any different then? Of
course after interpretation/compilation, what it boils down to is
that we are constructing a list and binding it to the identifier
tail, but that is not how it is formulated in python as a language
I'm afraid it is.
Here's the definition of assignment in Python 3:
http://docs.python.org/py3k/reference/simple_stmts.html#assignment-statements

You have claimed that "constructing a list and binding it to the
identifier" is not how iterator unpacking is formulated in Python the
language. But that is wrong. That *is* how iterator unpacking is
formulated in Python the language. The reference manual goes into detail
on how assignment is defined in Python. You should read it.
'head, *tail = sequence'
Is how one currently unpacks a head and tail in idiomatic python
This is semantically equivalent to
'head = sequence[0]'
'tail = list(sequence[1:])'

There's a reason this feature is called *iterable* unpacking: it operates
on any iterable object, not just indexable sequences.

'head, *tail = sequence' is not just semantically equivalent to, but
*actually is* implemented as something very close to:

temp = iter(sequence)
head = next(temp)
tail = list(temp)
del temp

Extended iterable unpacking, as in 'head, *middle, tail = sequence' is a
little more complex, but otherwise the same: it operates using the
iterator protocol, not indexing. I recommend you read the PEP, if you
haven't already done so.

http://www.python.org/dev/peps/pep-3132/
But these forms are linguistically different, in too many different ways
to mention.

How about mentioning even one way? Because I have no idea what
differences you are referring to, or why you think they are important.

The one spans two lines; the other, one. Need I go on?
Again, you make a claim about Python which is contradicted by the
documented behaviour of the language. You claim that we DON'T have
something of the form 'tail = list_tail(sequence)', but that is *exactly*
what we DO have: extracting the tail from an iterator.

Obviously there is no built-in function "list_tail" but we get the same
effect by just using list() on an iterator after advancing past the first
item.


"Isomorphic linguistic form"? Are you referring to the idea from
linguistics that there are analogies between the structure of phonic and
semantic units? E.g. that the structures (phoneme, syllable, word) and
(sememe, onomateme, sentence) are analogous.

I don't see why this is relevant, or which units you are referring to --
the human-language description of what Python does, high-level Python
language features, Python byte-code, or machine code. Not that it
matters, but it is unlikely that any of those are isomorphic in the
linguistic sense, and I am not making any general claim that they are.

If not, I have no idea what you are getting at, except possibly trying to
hide behind obfuscation.

I wasn't too worried about obfuscation, since I think there is little
left to lose on that front. Let me make a last-ditch effort to explain
though:

When Python reads your code, it parses it into a symbolic graph
representation. The two snippets under consideration do not parse into
the same graph, unlike code that differs only in insignificant
whitespace or arbitrary identifier names, for instance. Hence, not
linguistically isomorphic.
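A small illustration of that claim, using the stdlib ast module (the parse trees differ even though the semantics coincide for indexable sequences):

```python
import ast

# The two "semantically equivalent" snippets parse into different trees:
tree_a = ast.dump(ast.parse("head, *tail = sequence"))
tree_b = ast.dump(ast.parse("head = sequence[0]\ntail = list(sequence[1:])"))
assert tree_a != tree_b

# Whereas insignificant whitespace leaves the tree unchanged
# (ast.dump omits position attributes by default):
assert ast.dump(ast.parse("x = 1")) == ast.dump(ast.parse("x  =  1"))
```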

The very function of a programming language is to transform this
representation of the code into semantically identical but different
code (machine code, eventually). You fail to grok the difference
between semantic equivalence and linguistic equivalence. Yes, there
can be a semantic equivalence between iterable unpacking and a syntax
of the form tail_list(sequence). And Python does indeed probably apply
a transformation of this kind under the hood (though we couldn't care
less what it does exactly). AND IT PERFORMS THAT TRANSFORMATION USING
THE TYPE CONSTRAINT GIVEN.

Another example: the C code 'float x = 3'; how would you interpret
that? Is it 'not a type constraint on x', because eventually your C
compiler turns it into machine code that doesn't leave a trace of that
type constraint anyway? No, it's a type constraint, exactly because C
Compile-time type constraints only have meaning in statically-typed
languages. Otherwise, you are applying an analogy that simply doesn't
fit, like claiming that the motion of paper airplanes and birds are
equivalent just because in English we use the word "fly" to describe them
both.

The differences between what Python actually does and compile-time type-
constraints are greater than the similarities.

Similarities:

(1) In both cases, 'tail' ends up as a list.

Differences:

(1) Compiler enforces the constraint that the identifier 'tail' is always
associated with a list and will prevent any attempt to associate 'tail'
with some value that is not a list. Python does nothing like this: the
identifier 'tail' has no type restrictions at all, and can be used for
any object.

After it is rebound, yes. Within the context of the iterable unpacking
statement however, IT IS ALWAYS A LIST. Not only is it always a list,
PYTHON *NEEDS* THIS KNOWLEDGE ABOUT TAIL TO EMIT THE CORRECT CODE
(explicitly or implicitly; that's why I say 'it can be viewed as a type
constraint')

Just because the type constraint only applies within the statement
does not make it any less of a type constraint; just one with
different semantics, from say, C.
(3) "Constraint" has the conventional meaning of a restriction, not a
result; a type-constraint refers to a variable being prevented from being
set to some other type, not that it merely becomes set to that type. For
example, one doesn't refer to 'x = 1' as a type-constraint merely because
x gets set to an int value; why do you insist that 'head, *tail =
sequence' is a type-constraint merely because tail gets set to a list?

Yes, the constraint is unique in this case; that does not make it any
less of a constraint. A formula of the form 'x = 1' is called a
constraint in mathematics. Collections with one element are still
called collections. Do you have a problem with that too?
'tail' is (re)declared on the spot as a brand-new identifier (type
constraint included); whether it exists before has no significance
whatsoever, since python allows rebinding of identifiers.

Given the following:

tail = "beautiful red plumage"
head, *tail = [1, 2, 3, 4]
tail = 42

is it your opinion that all three instantiations of the *identifier*
'tail' (one in each line) are *different* identifiers? Otherwise, your
statement about "brand new identifier" is simply wrong.

If you allow that they are the same identifier with three different
values (and three different types), then this completely destroys your
argument that there is a constraint on the identifier 'tail'. First
'tail' is a string, then it is a list, then it is an int. If that is a
type constraint (restriction) on 'tail', then constraint has no meaning.

If you state that they are three different identifiers that just happen
to be identical in every way that can be detected, then you are engaged
in counting angels dancing on the head of pins. Given this idea that the
middle 'tail' identifier is a different identifier from the other two,
then I might accept that there *could* be a type-constraint on middle
'tail', at least in some implementation of Python that hasn't yet been
created. But what a pointless argument to make.
Let me take a step back and reflect on the form of the argument we are
having. I claim the object in front of us is a 'cube'. You deny this
claim, by countering that it is 'just a particular configuration of
atoms'.

No no no, you have utterly misunderstood my argument. I'm not calling it
a configuration of atoms. I'm calling it a pyramid, and pointing out that
no matter how many times you state it is a cube, it actually has FOUR
sides, not six, and NONE of the sides are square, so it can't possibly be
a cube.

For that to be true, you would have had to explain to me somewhere
what it is you mean by a type constraint, and how this does not fit
the bill. Granted, you did that for the first time in this post; and I
added further detail why I think your particular usage of the term is
too narrow; C does not have a unique claim to the concept. Or
wherever you picked up your use of the term. Like I said, I mean it
in its literal sense (and in the sense that happens to match the first
hit on Google; probably no coincidence).
But you are wrong. It does not fit the definition of a type-constraint,
except in the vacuous sense where you declare that there is some
invisible distinction between the identifier 'tail' in one statement and
the identical identifier 'tail' in another statement.

It's not vacuous; it's essential to the mechanics behind the
transformation of the iterable unpacking statement into its compiled
form. But its meaning does not carry beyond that, no.
I might not share your deep and expert understanding of cubes, but I
understand what "type-constraint" means, and I know what Python does in
iterator unpacking, and I know that there is no sensible definition of
type-constraint that applies to iterator unpacking in Python.

You seem to know what a type constraint is in a language like C, and
you seem to model your complete understanding from that particular
instance of its use, rather than look at the actual definition of the
term, and seeing where it might apply.
 
Eelco

On Mon, 26 Dec 2011 13:51:50 -0800, Eelco wrote:

[...]
That proves the original point of contention: that the below* is
suboptimal language design,

Well duh.

This is where the referee should interrupt you for snipping someone's
citation right before a 'but'.
I was mocking the idea that the context-dependent meaning of * is a
bad thing by pointing out that we accept context-dependent meaning for
round brackets () without any difficulties. Of course it is "suboptimal
language design" -- it couldn't fail to be. Context-dependency is not
necessarily a bad thing.

You know, so you don't end up simply restating my point while trying to
make it seem like you disagree.
Parentheses are not a construct. They are symbols (punctuation marks)
which are applied to at least three different constructs: grouping,
function calls, class inheritance lists.

Parentheses encompass a class of constructs. Happy now?
Right. And since sequence packing and unpacking is a common idiom, it
deserves to be given punctuation. That's my opinion.

It's a valid opinion. But if we are going to be quantitative about
terms such as 'common', you know that there will be at least an order
of magnitude difference between these constructs in commonality, if
not two. That's what makes your example a poor one. If you could
verbosify a construct of the same commonality and arrive at equally
absurd code, you would have a point.
 
Eelco

I wonder if we could make this proposal a bit more "Pythonic"? Hmm...

head, tuple(tail) = sequence

...YEP!

That has been considered; it was my first thought too, but this
requires one to break the symmetry between collection packing and
unpacking.
 
Eelco

And what you're seeing there is the _best possible_ situation I could
think of, the strongest possible justification for new syntax.
Granted, that may say more about me and my imagination than about the
problem, but the challenge is open: Come up with something that
actually needs this.

ChrisA

I personally feel any performance benefits are but a plus; they are
not the motivating factor for this idea. I simply like the added
verbosity and explicitness; that's the bottom line.
 
Lie Ryan

I personally feel any performance benefits are but a plus; they are
not the motivating factor for this idea. I simply like the added
verbosity and explicitness; that's the bottom line.

Any performance benefits are a plus, I agree, as long as it doesn't make
my language look like Perl. Now get off my lawn!
 
Eelco

Any performance benefits are a plus, I agree, as long as it doesn't make
my language look like Perl. Now get off my lawn!

I'm no Perl expert, but the Wikipedia page says a common
criticism is its overuse of otherwise meaningless special characters;
and I would agree; I puked a little in my mouth looking at the code
samples.

I would argue that the use of single special characters to signal a
relatively complex and uncommon construct is exactly what I am trying
to avoid with this proposal.
 
Steven D'Aprano

I would argue that the use of single special characters to signal a
relatively complex and uncommon construct is exactly what I am trying to
avoid with this proposal.

This would be the proposal to change the existing

head, *tail = sequence

to your proposed:

head, tail:: = ::sequence

(when happy with the default list for tail), or

head, tail::tuple = ::sequence

to avoid an explicit call to "tail = tuple(tail)" after the unpacking.

Either way, with or without an explicit type declaration on the left hand
side, you are increasing the number of punctuation characters from one to
four. If your aim is to minimize the number of punctuation characters,
you're doing it wrong.
 
Eelco

This would be the proposal to change the existing

    head, *tail = sequence

to your proposed:

    head, tail:: = ::sequence

(when happy with the default list for tail), or

    head, tail::tuple = ::sequence

to avoid an explicit call to "tail = tuple(tail)" after the unpacking.

Either way, with or without an explicit type declaration on the left hand
side, you are increasing the number of punctuation characters from one to
four. If your aim is to minimize the number of punctuation characters,
you're doing it wrong.

The goal is not to minimize the number of (special) characters to
type. The goal is to minimize the number of special characters which
are hard to interpret at a glance. I would prefer : over ::, but both
are a single special-character construct. Adding those characters once
more on the rhs is, similarly, not an increase in the number of
concepts employed; merely a more explicit form of the same construct.

And besides, I don't much like 'head, tail:: = ::sequence'. I threw
that out there to appease the terseness advocates, but to me it
largely defeats the purpose, because indeed it is hardly any different
from the original. I like the explicit mention of the collection
type to be constructed; that is what really brings it closer to
'for line in file' explicit obviousness, to my mind.
 
Lie Ryan

This would be the proposal to change the existing

head, *tail = sequence

to your proposed:

head, tail:: = ::sequence

(when happy with the default list for tail), or

head, tail::tuple = ::sequence

to avoid an explicit call to "tail = tuple(tail)" after the unpacking.

Either way, with or without an explicit type declaration on the left hand
side, you are increasing the number of punctuation characters from one to
four. If your aim is to minimize the number of punctuation characters,
you're doing it wrong.

Another drawback of it is that it looks misleadingly similar to C++
namespace notation.
 
alex23

But I consider it a reasonable change for a
'Python 4', or whatever the next major version change will be called.

You do realise there were 8 years between 2 & 3? You might be waiting
for quite some time.

Conversely, you could pitch in behind Rick Johnson's Python 4000 fork;
I'm sure it's progressing nicely given how long Rick has been talking it
up.
Writing a code-conversion tool to convert from *args to args::tuple
would be quite easy indeed.

You might want to ask people maintaining libraries in both 2.x & 3.x
via 2to3 just how well that's working out for them. If the impact of
changes was trivially obvious, the programming landscape would look
very different indeed.
 
Rick Johnson

Conversely, you could pitch in behind Rick Johnson's Python 4000 fork,
I sure it's progressing nicely given how long Rick has been talking it
up.

It's NOT a fork Alex. It IS in fact the next logical step in Python's
future evolution.
 
