Death to tuples!


Antoon Pardon

Are you serious about that?

I'm at least serious enough that I do consider the two cases
somewhat equivalent.
The semantics of default arguments are quite clearly defined (although
surprising to some people): the default argument is evaluated once when the
function is defined, and the same value is then reused on each call.
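For example, a minimal demonstration of the rule:

def f(l=[]):
    # the [] is evaluated once, when the def statement runs
    l.append(1)
    return l

print(f())   # [1]
print(f())   # [1, 1] -- both calls mutate the same list object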

The semantics of list constants are also clearly defined: a new list is
created each time the statement is executed. Consider:

res = []
for i in range(10):
    res.append(i*i)

If the same list were reused each time this code was executed, the list would
get very long. Pre-evaluating a constant list and creating a copy each time
wouldn't break the semantics, but simply reusing it would be disastrous.

This is not about how things are defined, but about whether we should
consider it a problem if it were defined differently. And no, I am not
arguing that Python should change this. It would break too much code and
would make Python all the more surprising.

But let's just consider. Your above code could simply be rewritten
as follows.

res = list()
for i in range(10):
    res.append(i*i)

Personally I think that the two following pieces of code should
give the same result.

def Foo(l=[]):          def Foo(l):
    ...                     ...

Foo()                   Foo([])
Foo()                   Foo([])

Just as I think you want your piece of code to give the same
result as how I rewrote it.

I have a problem understanding people who find that the lower pair don't
have to be equivalent while the upper pair must be.
 

Duncan Booth

Antoon said:
But let's just consider. Your above code could simply be rewritten
as follows.

res = list()
for i in range(10):
    res.append(i*i)
I don't understand your point here. Do you want list() to create a new list
and [] to return the same (initially empty) list throughout the run of the
program?

Personally I think that the two following pieces of code should
give the same result.

def Foo(l=[]):          def Foo(l):
    ...                     ...

Foo()                   Foo([])
Foo()                   Foo([])

Just as I think you want your piece of code to give the same
result as how I rewrote it.

I have a problem understanding people who find that the lower pair don't
have to be equivalent while the upper pair must be.

The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

The left has one list created outside the body of the function, the right
one has two lists created outside the body of the function. Why on earth
should these be the same?

Or to put it even more simply, it seems that you are suggesting:

__tmp = []
x = __tmp
y = __tmp

should do the same as:

x = []
y = []
 

Antoon Pardon

Antoon said:
But let's just consider. Your above code could simply be rewritten
as follows.

res = list()
for i in range(10):
    res.append(i*i)
I don't understand your point here. Do you want list() to create a new list
and [] to return the same (initially empty) list throughout the run of the
program?

No, but I think that each occurrence returning the same (initially empty)
list throughout the run of the program would be consistent with how
default arguments are treated.
Personally I think that the two following pieces of code should
give the same result.

def Foo(l=[]):          def Foo(l):
    ...                     ...

Foo()                   Foo([])
Foo()                   Foo([])

Just as I think you want your piece of code to give the same
result as how I rewrote it.

I have a problem understanding people who find that the lower pair don't
have to be equivalent while the upper pair must be.

The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)
The left has one list created outside the body of the function, the right
one has two lists created outside the body of the function. Why on earth
should these be the same?

Why on earth should it be the same list, when a function is called
and is provided with a list as a default argument?

I see no reason why your and my question should be answered differently.
Or to put it even more simply, it seems that you are suggesting:

__tmp = []
x = __tmp
y = __tmp

should do the same as:

x = []
y = []

No, I'm not suggesting it should, I just don't see why it should be
considered a problem if it would do the same, provided this is the
kind of behaviour we already have with list as default arguments.

Why is it a problem when a constant list is mutated in an expression,
but isn't it a problem when a constant list is mutated as a default
argument?
 

Duncan Booth

Antoon said:
The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)

Because the empty list expression '[]' is evaluated when the expression
containing it is executed.
Why on earth should it be the same list, when a function is called
and is provided with a list as a default argument?

Because the empty list expression '[]' is evaluated when the
expression containing it is executed.
I see no reason why your and my question should be answered
differently.

We are agreed on that; the answers should be the same, and indeed they are.
In each case the list is created when the expression (an assignment or a
function definition) is executed. The behaviour, as it currently is, is
entirely self-consistent.

I think perhaps you are confusing the execution of the function body with
the execution of the function definition. They are quite distinct: the
function definition evaluates any default arguments and creates a new
function object binding the code with the default arguments and any scoped
variables the function may have.

If the system tried to delay the evaluation until the function was called
you would get surprising results as variables referenced in the default
argument expressions could have changed their values.
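For concreteness, a small sketch of what definition-time evaluation means:

x = 10
def f(n=x):
    return n

x = 99
print(f())   # 10 -- the default captured the value x had at definition time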
 

Christophe

Antoon Pardon wrote:
Antoon said:
But let's just consider. Your above code could simply be rewritten
as follows.

res = list()
for i in range(10):
    res.append(i*i)

I don't understand your point here. Do you want list() to create a new list
and [] to return the same (initially empty) list throughout the run of the
program?


No, but I think that each occurrence returning the same (initially empty)
list throughout the run of the program would be consistent with how
default arguments are treated.

What about this:

def f(a):
    res = [a]
    return res

How can you return the same list that way? Do you propose to make such a
construct illegal?
 

Antoon Pardon

Antoon said:
The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)

Because the empty list expression '[]' is evaluated when the expression
containing it is executed.

This doesn't follow. That this is how it is now doesn't mean that this
is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be evaluated
when a function is called, where '[]' is contained in the expression
determining the default value.
Why on earth should it be the same list, when a function is called
and is provided with a list as a default argument?

Because the empty list expression '[]' is evaluated when the
expression containing it is executed.

Again you are just stating the specific choice Python has made,
not why they made this choice.
We are agreed on that; the answers should be the same, and indeed they are.
In each case the list is created when the expression (an assignment or a
function definition) is executed. The behaviour, as it currently is, is
entirely self-consistent.
I think perhaps you are confusing the execution of the function body with
the execution of the function definition. They are quite distinct: the
function definition evaluates any default arguments and creates a new
function object binding the code with the default arguments and any scoped
variables the function may have.

I know what happens; I would like to know why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time,
not at definition time, just as other expressions in the function are
not evaluated at definition time.

So when these kinds of expressions are evaluated at definition time,
I don't see what would be so problematic if other expressions were
evaluated at definition time too.
If the system tried to delay the evaluation until the function was called
you would get surprising results as variables referenced in the default
argument expressions could have changed their values.

This would be no more surprising than a variable referenced in a normal
expression having changed value between two evaluations.
 

Antoon Pardon

Antoon Pardon wrote:
Antoon Pardon wrote:

But let's just consider. Your above code could simply be rewritten
as follows.

res = list()
for i in range(10):
    res.append(i*i)


I don't understand your point here. Do you want list() to create a new list
and [] to return the same (initially empty) list throughout the run of the
program?


No, but I think that each occurrence returning the same (initially empty)
list throughout the run of the program would be consistent with how
default arguments are treated.

What about this:

def f(a):
    res = [a]
    return res

How can you return the same list that way? Do you propose to make such a
construct illegal?

I don't propose anything. This is AFAIC just a philosophical
exploration of the cons and pros of certain Python decisions.

To answer your question: the [a] is not a constant list, so
maybe it should be illegal. The way Python works now, each list
is implicitly constructed. So maybe it would have been better
if Python required such a construction to be made explicit.

If people had been required to write:

a = list()
b = list()

instead of being able to write:

a = []
b = []

it would have been clearer that a and b are not the same list.
 

Duncan Booth

Antoon said:
I know what happens; I would like to know why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time,
not at definition time, just as other expressions in the function are
not evaluated at definition time.
Yes, you could argue for that, but I think it would lead to a more complex
and confusing language.

The 'why' is probably at least partly historical. When default arguments
were added to Python there were no bound variables, so the option of
delaying the evaluation simply wasn't there. However, I'm sure that if
default arguments were being added today, and there was a choice between
using closures or evaluating the defaults at function definition time, the
choice would still come down on the side of simplicity and clarity.

(Actually, I think it isn't true that Python today could support evaluating
default arguments inside the function without further changes to how it
works: currently class variables aren't in scope inside methods so you
would need to add support for that at the very least.)

If you want the expressions to use closures then you can do that by putting
expressions inside the function. If you changed default arguments to make
them work in the same way, then you would have to play a lot more games
with factory functions. Most of the tricks you can play of the x=x default
argument variety are just tricks, but sometimes they can be very useful
tricks.
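The classic such trick freezes a loop variable at definition time; a minimal sketch:

adders = [lambda x, i=i: x + i for i in range(3)]
print([f(10) for f in adders])   # [10, 11, 12]

# without the i=i default, every lambda would see the final value of i:
broken = [lambda x: x + i for i in range(3)]
print([f(10) for f in broken])   # [12, 12, 12]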
 

Mike Meyer

Antoon Pardon said:
I know what happens; I would like to know why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time,
not at definition time, just as other expressions in the function are
not evaluated at definition time.

The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?

<mike
 

Fuzzyman

Mike said:
The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?

Having default arguments evaluated at definition time certainly bites a
lot of newbies. It allows useful tricks with nested scopes though.

All the best,

Fuzzyman
http://www.voidspace.org.uk/python/index.shtml
 

Antoon Pardon

The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?

Well there are two possibilities I can think of:

1)
arg_default = ...
def f(arg = arg_default):
    ...

2)
def f(arg = None):
    if arg is None:
        arg = default
 

Mike Meyer

Antoon Pardon said:
Well there are two possibilities I can think of:

1)
arg_default = ...
def f(arg = arg_default):
    ...

Yuch. Mostly because it doesn't work:

arg_default = ...
def f(arg = arg_default):
    ...

arg_default = ...
def g(arg = arg_default):
    ...

That one looks like an accident waiting to happen.
2)
def f(arg = None):
    if arg is None:
        arg = default

Um, that's just rewriting the first one in an uglier fashion, except
you omitted setting the default value before the function.

This may not have been the reason it was done in the first place, but
this loss of functionality would seem to justify the current behavior.

And, just for fun:

def setdefaults(**defaults):
    def maker(func):
        def called(*args, **kwds):
            defaults.update(kwds)
            return func(*args, **defaults)
        return called
    return maker
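A usage sketch (hypothetical function f; note one way this fails: because
called() updates the shared defaults dict in place, an explicit keyword
argument silently becomes the default for every later call):

@setdefaults(l=[])
def f(x, l):
    l.append(x)
    return l

print(f(1))   # [1]
print(f(2))   # [1, 2] -- the same default list, just as with def f(x, l=[])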

<mike
 

Bengt Richter

Antoon said:
The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)

Because the empty list expression '[]' is evaluated when the expression
containing it is executed.

This doesn't follow. That this is how it is now doesn't mean that this
is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be evaluated
when a function is called, where '[]' is contained in the expression
^^^^^^^^^^^^^^^^^^^^^^^[1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
determining the default value.
^^^^^^^^^^^^^^^^^^^^^^^^^^^[2]
Ok, but "[]" (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,
but that's independent of scope. An expression in general may depend on
names that have to be looked up, which requires not only a place to look
for them, but also persistence of the name bindings. so def foo(arg=PI*func(x)): ...
means that at call-time you would have to find 'PI', 'func', and 'x' somewhere.
Where & how?
1) If they should be re-evaluated in the enclosing scope, as default arg expressions
are now, you can just write foo(PI*func(x)) as your call. So you would be asking
for foo() to be an abbreviation of that. Which would give you a fresh list if
foo was defined def foo(arg=[]): ...

Of course, if you wanted just the expression value as now at def time, you could write
def foo(...):...; foo.__default0=PI*func(x) and later call foo(foo.__default0), which is
what foo() effectively does now.

2) Or did you want the def code to look up the bindings at def time and save them
in, say, a tuple __deftup0=(PI, func, x) that captures the def-time bindings in the scope
enclosing the def, so that when foo is called, it can do arg = _deftup0[0]*_deftup0[1](_deftup0[2])
to initialize arg and maybe trigger some side effects at call time.

3) Or did you want to save the names themselves, __default0_names=('PI', 'func', 'x')
and look them up at foo call time, which is tricky as things are now, but could be done?
It's not "provided with a list" -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference semantics?
A sort of indirect value semantics? If you do, and you think that ought to be
default semantics, you don't want Python. OTOH, if you want a specific effect,
why not look for a way to do it either within python, or as a graceful syntactic
enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg as you would like.
Now the ball is in your court to define "as you would like" (exactly and precisely ;-)

Because the empty list expression '[]' is evaluated when the
expression containing it is executed.

Again you are just stating the specific choice Python has made,
not why they made this choice.
Why are you interested in the answer to this question? ;-) Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?
I know what happens; I would like to know why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time,
not at definition time, just as other expressions in the function are
not evaluated at definition time.
Maybe it was just easier, and worked very well, and no one showed a need
for doing it differently that couldn't easily be handled. If you want
an expression evaluated at call time, why don't you write it at the top
of the function body instead of lobbying for a change to the default arg
semantics? The answer could be a scoping problem, I suppose. Is there
something you'd like that couldn't be handled with (an efficient sugary version of)

sentinel = object()
def foo(arg=(sentinel, lambda: expr)):
    if type(arg) is tuple and len(arg) == 2 and arg[0] is sentinel:
        arg = arg[1]()
    ...

or would the expression evaluation maybe not suit once beyond expr being just []?
I'm trying to move off "why" onto "what" ;-)
So when these kinds of expressions are evaluated at definition time,
I don't see what would be so problematic if other expressions were
evaluated at definition time too.


This would be no more surprising than a variable referenced in a normal
expression having changed value between two evaluations.
Sure, you could have it work that way, but would it really be useful?

Is this a matter of thinking up some sugar for

def foo(arg=None):
    if arg is None: arg = []

or what are we pursuing?
Hm, I was just going to say it might be nice to have a builtin standard sentinel,
or a convention for using something as such. I don't really like manufacturing
sentinel=object() when I need something other than None. So it just occurred to me
maybe

def foo(arg=NotImplemented):
    if arg is NotImplemented: arg = []

maybe SENTINEL could be defined similarly as a builtin constant.

Regards,
Bengt Richter
 

bonono

Bengt said:
Because the empty list expression '[]' is evaluated when the
expression containing it is executed.

Again you are just stating the specific choice Python has made,
not why they made this choice.
Why are you interested in the answer to this question? ;-) Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?

My WAG:

Because it is usually presented as "this is the best way" rather than
"this is the Python way". For the former, I think people would be
curious why it is best (or better than other considered alternatives),
as a learning exercise maybe.
 

Antoon Pardon

Yuch. Mostly because it doesn't work:

arg_default = ...
def f(arg = arg_default):
    ...

arg_default = ...
def g(arg = arg_default):
    ...

That one looks like an accident waiting to happen.

It's not because accidents can happen that it doesn't work.
IMO accidents can happen here because Python
doesn't allow a name to be marked as a constant or unrebindable.
This may not have been the reason it was done in the first place, but
this loss of functionality would seem to justify the current behavior.

And, just for fun:

def setdefaults(**defaults):
    def maker(func):
        def called(*args, **kwds):
            defaults.update(kwds)
            return func(*args, **defaults)
        return called
    return maker

So it seems that with a decorator there would be no loss of
functionality.
 

Antoon Pardon

Antoon Pardon wrote:

The left one is equivalent to:

__anon = []
def Foo(l):
    ...

Foo(__anon)
Foo(__anon)

So, why shouldn't:

res = []
for i in range(10):
    res.append(i*i)

be equivalent to:

__anon = list()
...

res = __anon
for i in range(10):
    res.append(i*i)

Because the empty list expression '[]' is evaluated when the expression
containing it is executed.

This doesn't follow. That this is how it is now doesn't mean that this
is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be evaluated
when a function is called, where '[]' is contained in the expression
^^^^^^^^^^^^^^^^^^^^^^^[1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
determining the default value.
^^^^^^^^^^^^^^^^^^^^^^^^^^^[2]
Ok, but "[]" (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,

Yes, one of the questions I have is why Python people would consider
it a problem if it wasn't.

Personally I expect the following pieces of code

a = <const expression>
b = <same expression>

to be equivalent with

a = <const expression>
b = a

But that isn't the case when the const expression is a list.
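Concretely, each occurrence of the literal builds a distinct object:

a = [1, 2]
b = [1, 2]
print(a is b)   # False -- two separate lists
a.append(3)
print(b)        # [1, 2] -- b is unaffected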

A person looking at:

a = [1 , 2]

sees something resembling

a = (1 , 2)

Yet the two are treated very differently. As far as I understand, the
first is translated into some kind of list((1,2)) statement while
the second is built at compile time and just bound.

This seems to go against the pythonic spirit of explicit is
better than implicit.

It also seems to go against the way default arguments are treated.
but that's independent of scope. An expression in general may depend on
names that have to be looked up, which requires not only a place to look
for them, but also persistence of the name bindings. so def foo(arg=PI*func(x)): ...
means that at call-time you would have to find 'PI', 'func', and 'x' somewhere.
Where & how?
1) If they should be re-evaluated in the enclosing scope, as default arg expressions
are now, you can just write foo(PI*func(x)) as your call.

I may be a bit pedantic. (Read that as: I probably am.)

But you can't necessarily write foo(PI*func(x)) as your call, because PI
and func may not be within scope where the call is made.
So you would be asking
for foo() to be an abbreviation of that. Which would give you a fresh list if
foo was defined def foo(arg=[]): ...

This was my first thought.
Of course, if you wanted just the expression value as now at def time, you could write
def foo(...):...; foo.__default0=PI*fun(x) and later call foo(foo.__default0), which is
what foo() effectively does now.

2) Or did you want the def code to look up the bindings at def time and save them
in, say, a tuple __deftup0=(PI, func, x) that captures the def-time bindings in the scope
enclosing the def, so that when foo is called, it can do arg = _deftup0[0]*_deftup0[1](_deftup0[2])
to initialize arg and maybe trigger some side effects at call time.

This is tricky, I think it would depend on how foo(arg=[]) would be
translated.

2a) _deftup0 = ([],), with a subsequent arg = _deftup0[0]
or
2b) _deftup0=(list, ()), with subsequently arg = _deftup0[0](_deftup0[1])


My feeling is that this proposal would create a lot of confusion.

Something like def f(arg = s) might give very different results
depending on s being a list or a tuple.
3) Or did you want to save the names themselves, __default0_names=('PI', 'func', 'x')
and look them up at foo call time, which is tricky as things are now, but could be done?

No, this would make for some kind of dynamic scoping; I don't think it
would mesh with the static scoping Python has now.

It's not "provided with a list" -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference semantics?
A sort of indirect value semantics? If you do, and you think that ought to be
default semantics, you don't want Python. OTOH, if you want a specific effect,
why not look for a way to do it either within python, or as a graceful syntactic
enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg as you would like.
Now the ball is in your court to define "as you would like" (exactly and precisely ;-)

I didn't start my question because I wanted something to change in
Python. It was just something I wondered about. Now I wouldn't
mind Python being enhanced on this point, so should the Python
people decide to work on this, I'll give you my proposal, using your
syntax.

def foo(arg{expr}):
    ...

should be translated into something like:

class _def: pass

def foo(arg = _def):
    if arg is _def:
        arg = expr
    ...

I think this is equivalent to your first proposal and probably
not worth the trouble, since it is not that difficult to get
the behaviour one wants.

I think such a proposal would be most advantageous for the newbies,
because the two possibilities for default values would make them
think about what the differences are between the two, so they
are less likely to be confused by the def f(l=[]) case.
Because the empty list expression '[]' is evaluated when the
expression containing it is executed.

Again you are just stating the specific choice Python has made,
not why they made this choice.
Why are you interested in the answer to this question? ;-)

Because my impression is that a number of decisions were made
that are inconsistent with each other. I'm just trying to
understand how that came about.
Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?

If there is discomfort, it has more to do with the fact that revising
my mental model to fit Python in one aspect doesn't translate into
understanding other aspects of Python well enough.
Maybe it was just easier, and worked very well, and no one showed a need
for doing it differently that couldn't easily be handled. If you want
an expression evaluated at call time, why don't you write it at the top
of the function body instead of lobbying for a change to the default arg
semantics?

I'm not lobbying for a change. You are probably right that this is
the "Practical beats purity" rule working again. But IMO the Python
people are making use of that rule too much, making the total language
less practical as a whole.

Purity is often practical, because it makes it easier to infer knowledge
from things you already know. If you break purity for the practical,
you may make one specific aspect easier to understand, but make it
less practical to understand the language as a whole.

Personally I'm someone for whom purity is practical in most cases.
If a language is pure/consistent it makes the language easier to
learn and understand, because your knowledge of one part of the
language will carry over to other parts.

Isn't it practical that strings, tuples and lists all treat '[]'
similarly for accessing an individual element of the sequence? That means
I just have to learn what v[x] means for tuples and I know what
it means for lists, strings and a lot of other things.

Having a count method for lists but not for tuples breaks that
consistency and means that I have to look up for each sequence type
whether or not it has that method. Not that practical, IMO.
[ ... ]

or what are we pursuing?

What I'm pursuing, I think, is that people would think about what
impractical effects can arise when you drop purity for practicality.

My impression is that when purity is balanced against practicality,
this balancing is only done on a local scale, without considering
what practicality is lost over the whole language by pursuing
practicality in a local aspect.
 

Mike Meyer

Antoon Pardon said:
It's not because accidents can happen that it doesn't work.
IMO accidents can happen here because Python
doesn't allow a name to be marked as a constant or unrebindable.

Lots of "accidents" could be fixed if Python marked names in various
ways: with a type, or as only being visible to certain other types, or
whatever. A change that requires such a construct in order to be
usable probably needs rethinking.

Even if that weren't a problem, this would still require introducting
a new variable into the global namespace for every such
argument. Unlike other similar constructs, you *can't* clean them up,
because the whole point is that they be around later.

The decorator was an indication of a possible solution. I know it
fails in some cases, and it probably fails in others as well.

<mike
 

Mike Meyer

Antoon Pardon said:
Antoon Pardon wrote:
I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be evaluated
when a function is called, where '[]' is contained in the expression
^^^^^^^^^^^^^^^^^^^^^^^[1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
determining the default value.
^^^^^^^^^^^^^^^^^^^^^^^^^^^[2]
Ok, but "[]" (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,
Yes, one of the questions I have is why Python people would consider
it a problem if it wasn't.

That would make [] behave differently from [compute_a_value()]. This
doesn't seem like a good idea.
Personally I expect the following pieces of code
a = <const expression>
b = <same expression>
to be equivalent with
a = <const expression>
b = a
But that isn't the case when the const expression is a list.

It isn't the case when the const expression is a tuple, either:

x = (1, 2)

or an integer.

Every value (in the sense of a syntactic element that's a value, and
not a keyword, variable, or other construct) occurring in a program
should represent a different object. If the compiler can prove that a
value can't be changed, it's allowed to use a single instance for all
occurrences of that value. Is there *any* language that behaves
differently from this?
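A quick sketch of that sharing in CPython:

def f():
    return (1, 2)   # a constant; the compiler may reuse one object

def g():
    return [1, 2]   # not a constant; a new list is built per call

print(f() is f())   # True in CPython
print(g() is g())   # False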
A person looking at:
a = [1 , 2]
sees something resembling
a = (1 , 2)
Yet the two are treated very differently. As far as I understand, the
first is translated into some kind of list((1,2)) statement while
the second is built at compile time and just bound.

No, that translation doesn't happen. [1, 2] builds a list of
values. (1, 2) builds and binds a constant, which is only possible
because it, unlike [1, 2], *is* a constant. list(1, 2) calls the
function "list" on a pair of values:
>>> import dis
>>> def f():
...     a = [1, 2]
...     b = list(1, 2)
...     c = (1, 2)
...
>>> dis.dis(f)
  2           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BUILD_LIST               2
              9 STORE_FAST               0 (a)

  3          12 LOAD_GLOBAL              1 (list)
             15 LOAD_CONST               1 (1)
             18 LOAD_CONST               2 (2)
             21 CALL_FUNCTION            2
             24 STORE_FAST               2 (b)

  4          27 LOAD_CONST               3 ((1, 2))
             30 STORE_FAST               1 (c)
             33 LOAD_CONST               0 (None)
             36 RETURN_VALUE
This seems to go against the pythonic spirit of explicit is
better than implicit.

Even if "[arg]" were just syntactic sugar for "list(arg)", why would
that be "implicit" in some way?
It also seems to go against the way default arguments are treated.

Only if you don't understand how default arguments are treated.

<mike
 

Bengt Richter

Personally I expect the following pieces of code

a = <const expression>
b = <same expression>

to be equivalent with

a = <const expression>
b = a

But that isn't the case when the const expression is a list.
ISTM the line above is a symptom of a bug in your mental Python source interpreter.
It's a contradiction. A list can't be a "const expression".
We probably can't make real progress until that is debugged ;-)
Note: assert "const expression is a list" should raise a mental exception ;-)
A person looking at:

a = [1 , 2]
English: let a refer to a mutable container object initialized to contain
an ordered sequence of specified elements 1 and 2.
sees something resembling

a = (1 , 2)
English: let a refer to an immutable container object initialized to contain
an ordered sequence of specified elements 1 and 2.
Yet the two are treated very differently. As far as I understand, the
first is translated into some kind of list((1,2)) statement while
They are of course different in that two different kinds of objects
(mutable vs immutable containers) are generated. This can allow an optimization
in the one case, but not generally in the other.
the second is built at compile time and just bound.
a = (1, 2) is built at compile time, but a = (x, y) would not be, since x and y
can't generally be known at compile time. This is a matter of optimization, not
semantics. a = (1, 2) _could_ be built with the same code as a = (x, y), picking up
the 1 and 2 constants as arguments to a dynamic construction of the tuple, done in the
identical way as the construction would be done with x and y. But that is a red herring
in this discussion, if we are talking about the language rather than the implementation.
This seems to go against the pythonic spirit of explicit is
better than implicit.
Unless you accept that '[' is explicitly different from '(' ;-)
It also seems to go against the way default arguments are treated.
I suspect another bug ;-)
I may be a bit pedantic. (Read that as: I probably am.)

But you can't necessarily write foo(PI*func(x)) as your call, because PI
and func may not be within scope where the call is made.
Yes, I was trying to make you notice this ;-)
So you would be asking
for foo() to be an abbreviation of that. Which would give you a fresh list if
foo was defined def foo(arg=[]): ...

This was my first thought.
[...]
It's not "provided with a list" -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference semantics?
A sort of indirect value semantics? If you do, and you think that ought to be
default semantics, you don't want Python. OTOH, if you want a specific effect,
why not look for a way to do it either within python, or as a graceful syntactic
enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg as you would like.
Now the ball is in your court to define "as you would like" (exactly and precisely ;-)

I didn't start my question because I wanted something to change in
Python. It was just something I wondered about. Now I wouldn't
I wonder if this "something" will still exist once you get
assert "const expression is a list" to raise a mental exception ;-)
mind Python being enhanced on this point, so should the Python
people decide to work on this, I'll give you my proposal, using your
syntax.

def foo(arg{expr}):
    ...

should be translated into something like:

class _def: pass

def foo(arg = _def):
    if arg is _def:
        arg = expr
    ...

I think this is equivalent to your first proposal and probably
not worth the trouble, since it is not that difficult to get
the behaviour one wants.
Again, I'm not "proposing" anything except to help lay out evidence.
The above is just a spelling of the typical idiom for mutable default
value initialization.
I think such a proposal would be most advantageous for the newbies,
because the two possibilities for default values would make them
think about what the differences are between the two, so they
are less likely to be confused by the def f(l=[]) case.
So are you saying it's not worth the trouble or that it would be
worth the trouble to help newbies?
[...]
Because my impression is that a number of decisions were made
that are inconsistent with each other. I'm just trying to
understand how that came about.
An inconsistency in our impression of the world
is not an inconsistency in the world ;-)
If there is discomfort, it has more to do with the fact that revising
my mental model to fit Python in one aspect doesn't translate into
understanding other aspects of Python well enough.
An example?

I'm not lobbying for a change. You are probably right that this is
the "Practical beats purity" rule working again. But IMO the Python
I didn't say this was a case of "Practical beats purity" -- I said "maybe
it was just easier" -- which doesn't necessarily mean impure to me, nor
that the more difficult choice would have been better ;-)
In fact I think the way default args work now works fine.
If I were to make a list of things to change, that would not be at the top.
people are making use of that rule too much, making the total language
less practical as a whole.
IMO this is hand waving unless you can point to specifics, and a kind of
unseemly propaganda/innuendo if you can't.
Purity is often practical, because it makes it easier to infer knowledge
from things you already know. If you break purity for the practical,
you may make one specific aspect easier to understand, but make it
less practical to understand the language as a whole.
That is a good point, but to have the real moral standing to talk about purity
one has to be able to demonstrate it, which is really hard.
Personally I'm someone for whom purity is practical in most cases.
If a language is pure/consistent it makes the language easier to
learn and understand, because your knowledge of one part of the
language will carry over to other parts.
I agree, so long as the "knowledge of one part" is not a misconception.
Isn't it practical that strings, tuples and lists all treat '[]'
similarly for accessing an individual element of the sequence? That means
I just have to learn what v[x] means for tuples and I know what
it means for lists, strings and a lot of other things.
But not all, since your example v[x] requires that v not be a dict.
Having a count method for lists but not for tuples breaks that
consistency and means that I have to look up for each sequence type
whether or not it has that method. Not that practical, IMO.
But I think you are having the wrong expectation of v[x] syntax.
It will generate code that looks for __getitem__ and having __getitem__
means that iteration syntax may access it if __iter__ does not preempt,
but if you expect this to guarantee the presence of other methods
such as count, then you are misreading v[x].
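A minimal sketch of how little v[x] guarantees: this class supports
subscripting and, via the old fallback protocol, iteration, yet has no
other sequence methods:

class Squares:
    def __getitem__(self, i):
        if i >= 3:
            raise IndexError(i)
        return i * i

print(Squares()[2])       # 4
print(list(Squares()))    # [0, 1, 4] -- iteration falls back to __getitem__
# Squares().count(4)      # AttributeError: no count method comes with it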

OTOH, you can have some expectations of list(v), which if it succeeds
will give you the list methods. As mentioned in another post, I think
if iter were a type, iter(v) could return an iterator object that could
have all the methods one might think appropriate for all sequences, and
could thus be a way of unifying sequence usage. iter could also allow
some handy methods that return further specialized iterators (a la itertools)
rather than consuming itself to return a specific result like a count.
[ ... ]

or what are we pursuing?

What I'm pursuing, I think, is that people would think about what
impractical effects can arise when you drop purity for practicality.
I think this is nicely said and important. I wish it were possible
to arrive at a statement like this without wading though massive irrelevancies ;-)
My impression is that when purity is balanced against practicality,
this balancing is only done on a local scale, without considering
what practicality is lost over the whole language by pursuing
practicality in a local aspect.
It is hard to demonstrate personal impressions to others, since
we are not Vulcans capable of mind-melds, so good concrete examples
are critical, along with prose that focuses attention on the aspects
to be demonstrated. Good luck, considering that what may seem
like a valid impression to you may seem like a mis-reading to others ;-)

BTW, I am participating in this thread more out of interest in
the difficulties of human communication than in the topic per se,
so I am probably OT ;-)

Regards,
Bengt Richter
 

Antoon Pardon

On 2005-12-02, Bengt Richter said:
ISTM the line above is a symptom of a bug in your mental Python source interpreter.
It's a contradiction. A list can't be a "const expression".
We probably can't make real progress until that is debugged ;-)
Note: assert "const expression is a list" should raise a mental exception ;-)

Why should "const expression is a list" raise a mental exception with
me? I think it should have raised a mental exception with the designer.
If there is a problem with const list expressions, maybe the language
shouldn't have something that looks so much like one?

This seems to go against the pythonic spirit of explicit is
better than implicit.
Unless you accept that '[' is explicitly different from '(' ;-)
It also seems to go against the way default arguments are treated.
I suspect another bug ;-)

The question is: where is the bug? You can start from the idea that
the language is how it was defined, and thus by definition correct,
and so any problem is a user problem.

You can also notice that a specific construct is a stumbling block
for a lot of new people and wonder if that doesn't say something
about the design.
An example?

Well, there is the documentation about function calls, which states
something like: the first positional argument provided will go
to the first parameter, ... and default values will be used
for parameters not filled by arguments. Then you stumble on
the built-in function range with the signature:

range([start,] stop[, step])

Why, if you only provide one argument, does it go to the second
parameter?
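That is, the lone argument fills the second parameter:

print(list(range(3)))      # [0, 1, 2] -- the single argument becomes stop
print(list(range(1, 3)))   # [1, 2]    -- only now does start get a value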

Why are a number of constructs for specifying/creating a value/object
limited to subscriptions? Why is it impossible to do the following:

a = ...
f(...)
a = 3:8
tree.keys('a':'b')

Why is how you can work with defaults in slices not similar to
how you work with defaults in calls? You can do:

lst[:7]

So why can't you call range as follows:

range(,7)


lst[::] is a perfectly acceptable slice, so why doesn't 'slice()' work?
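A small sketch of the asymmetry:

lst = [0, 1, 2]
print(lst[::])                       # [0, 1, 2]
print(lst[slice(None, None, None)])  # [0, 1, 2] -- the spelled-out form
# slice()                            # TypeError: slice expected at least 1 argument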

Positional arguments must come before keyword arguments, but when
you somehow need to do the following:

foo(arg0, *args, kwd = value)

You suddenly find out the above is illegal and it should be written

foo(arg0, kwd = value, *args)

IMO this is hand waving unless you can point to specifics, and a kind of
unseemly propaganda/innuendo if you can't.

IMO the use of negative indexing is the prime example in this case.
Sure, it is practical that if you want the last element of a list
you can just use -1 as a subscript. However, in a lot of cases -1
is just as out of bounds as an index greater than the list length.

At one time I was given lower and upper limits for a slice of a list;
both could range from 0 to len - 1. But I really needed three slices:
I needed lst[low:up], lst[low-1:up-1] and lst[low+1:up+1].
Getting lst[low+1:up+1] wasn't a problem; the way Python treated
slices gave me just what I wanted, even if low and up were too big.
But when low or up was zero, lst[low-1:up-1] gave trouble.

If I want lst[low:up] in reverse, then the following works in general:

lst[up-1 : low-1 : -1]

Except of course when low or up is zero.
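A concrete sketch of that boundary case:

lst = [0, 1, 2, 3, 4]
low, up = 1, 3
print(lst[up-1:low-1:-1])   # [2, 1] -- fine

low = 0
print(lst[up-1:low-1:-1])   # [] -- low-1 is -1, which now means
                            # "the last element", not "before index 0"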


Of course I can make a subclass of list that works as I want, but IMO
that is backwards. People should use a subclass for special cases,
like indexes that wrap around, not use a subclass to remove the special
casing that was put in the base class.

Of course this example fits between the examples above, and some of
those probably will fit here too.
I think this is nicely said and important. I wish it were possible
to arrive at a statement like this without wading though massive irrelevancies ;-)

Well, I hope you didn't have to wade so much this time.
BTW, I am participating in this thread more out of interest in
the difficulties of human communication that in the topic per se,
so I am probably OT ;-)

Well I hope you are having a good time anyway.
 
