Optional parameter object re-used when instantiating multiple objects

R

Rick Giuly

Hello All,

Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.

class Blob:
    def __init__(self, points=[]):
        self._points = points


b = Blob()
c = Blob()

b._points.append(1)
c._points.append(2)

print b._points

# this will show that b._points is the same object as c._points
 
A

Arnaud Delobelle

Rick Giuly said:
Hello All,
Hello,

Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.

class Blob:
    def __init__(self, points=[]):
        self._points = points


b = Blob()
c = Blob()

b._points.append(1)
c._points.append(2)

print b._points

# this will show that b._points is the same object as c._points

This is probably the MFAQ (Most FAQ)!

Have a look in http://www.python.org/doc/faq/ (I can't point at the
question as my internet pipes to the US are very rusty this morning)

HTH
 
S

Steven D'Aprano

Hello All,

Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.

That's not natural *at all*. You're initialising the argument "points"
with the same list every time. If you wanted it to have a different list
each time, you should have said so. Don't blame the language for doing
exactly what you told it to do.

class Blob:
    def __init__(self, points=[]):
        self._points = points

Let's analyze this. You create a method __init__. That function is
created *once*. As part of the process of creating the function, the
argument "points" is given the default value of an empty list. The
creation of that empty list happens *once*, when the method is created.
In the body of the function, you set the _points attribute to points.
Naturally it is the same list object.
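You can see that stored default by poking at the function object itself; with your class exactly as written (the attribute is spelled func_defaults in the Python 2 of this thread, __defaults__ in later versions):

class Blob:
    def __init__(self, points=[]):
        self._points = points

print Blob.__init__.func_defaults    # prints ([],): the single stored default list
b = Blob()
b._points.append(1)
print Blob.__init__.func_defaults    # prints ([1],): the same list, now mutated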

Since the method is only created once, it is only natural that the
default value is also only created once. If you want something to be
created each time the function is called, you have to put it inside the
body of the function:

class Blob:
    def __init__(self, points=None):
        if points is None:
            points = []
        self._points = points

Now you will have _points set to a unique empty list each time.
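A quick re-run of your original checks should now show two distinct lists (Blob here is the points=None version just above):

b = Blob()
c = Blob()
b._points.append(1)
c._points.append(2)
print b._points                 # [1]
print c._points                 # [2]
print b._points is c._points    # False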



The original, shared-list behaviour is no different from doing this:

alist = []
b1 = Blob(alist)
b2 = Blob(alist)

Would you be surprised that b1 and b2 share the same list? If yes, then
you need to think about how Python really works, rather than how you
imagine it works.
 
A

Aaron Brady

Hello All,

Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.

class Blob:
    def __init__(self, points=[]):
        self._points = points

b = Blob()
c = Blob()

b._points.append(1)
c._points.append(2)

print b._points

# this will show that b._points is the same object as c._points

Hi Rick,

I don't think Dennis or Steven read your post very well. You said
'Why does Python do X?' and 'It seems natural to you to do not-X'.
Dennis and Steven both said, 'Python does X'.

Steven did get around to suggesting an answer though. He said:
If you want something to be
created each time the function is called, you have to put it inside the
body of the function:

Taking this to be true, the answer to your question is, 'Because the
object isn't created inside the body of the function,' or, 'Because
the argument list is outside the body of the function'.

From your post, it's hard to tell whether this 'duh'-type observation
would point out the salient feature of the construct, or whether
you're after something deeper.

If you're asking, 'Why isn't the argument list considered to be inside
the body?', then the answer is, it's pretty much arbitrary.
Regardless of which one the author of Python chose, the other's
workaround would be equally short, and neither one is obviously
indicated by the syntax.

And until someone sends you a link to Python's author's blog that
gives the answer, 'To make creating static variables really easy',
don't let them tell you so.
 
S

Steven D'Aprano

I don't think Dennis or Steven read your post very well.

It's possible.
You said 'Why
does Python do X?', and 'It seems natural to you to do not X'. Dennis
and Steven both said, 'Python does X'.

I also disputed that it is natural to do not-X (runtime creation of
default arguments), and explained why such an interpretation doesn't
match with the way Python operates. I admit I didn't answer the "why"
part.

Steven did get around to suggesting an answer though. He said:

If you want to be pedantic, then my "answer" (which you seem to approve
of) doesn't correspond to either of the original poster's questions. If
you're going to be pedantic, then be pedantic all the way, and criticize
me for answering a question that wasn't asked :p

Taking this to be true, the answer to your question is, 'Because the
object isn't created inside the body of the function,' or, 'Because the
argument list is outside the body of the function'.

Actually, the correct answer to "Why?" would be "Because that's the
decision Guido van Rossum made back in the early 1990s when he first
started designing Python". That of course leads to the obvious question
"Why did he make that decision?", and the answer to that is:

* it leads to far more efficient performance when calling functions;

E.g. if the default value is expensive to calculate, it is better to
calculate it once, when the function is created, than every single time
the function is called.
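Something like this (the names are invented, purely for illustration) pays that cost exactly once:

def build_table():
    # pretend this is an expensive computation
    return dict((i, i * i) for i in range(1000))

def lookup(key, table=build_table()):
    return table[key]

# build_table() ran once, when the def statement was executed;
# every call to lookup() reuses the same dict.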

Additionally, the effbot once mentioned in a similar thread that there
are real performance benefits in the Python VM from binding the default
value once only. I don't know the exact details of that, but I trust
Fredrik knows what he's talking about.


* it has less scope for surprise when calling functions.

E.g. I would argue that most people would be surprised, and dismayed, if
this code fails:

x = 1
def foo(a, b=x):
    return a+b

del x
print foo(2)

From your post, it's hard to tell whether this 'duh'-type observation
would point out the salient feature of the construct, or whether you're
after something deeper.

If you're asking, 'Why isn't the argument list considered to be inside
the body?', then the answer is, it's pretty much arbitrary.

No, it is not an arbitrary choice. I've given practical reasons why the
Python choice is better. If you want default arguments to be created from
scratch when the function is called, you can get it with little
inconvenience, but the opposite isn't true. It is very difficult to get
static default arguments given a hypothetical Python where default
arguments are created from scratch. There's no simple, easy idiom that
will work. The best I can come up with is a convention:

# Please don't change this, or strange things will happen.
_private = ResultOfExpensiveCalculation()

def foo(a, b=_private):
    return a+b

The above is still vulnerable to code accidentally over-writing _private
with some other value, or deleting it, but at least we avoid the
expensive calculation every time.

Or possibly:

def foo(a, b=foo._private):
    return a+b

foo._private = ResultOfExpensiveCalculation()

which has obvious disadvantages with regard to shadowing, renaming,
anonymous functions, and so forth.


Regardless
of which one the author of Python chose, the other's workaround would be
equally short,

Not true. One has an obvious workaround, the other only has a *partial*
workaround.
and neither one is obviously indicated by the syntax.

I would disagree, but not enough to argue.
 
A

Arnaud Delobelle

Steven D'Aprano said:
That's not natural *at all*. You're initialising the argument "points"
with the same list every time. If you wanted it to have a different list
each time, you should have said so. Don't blame the language for doing
exactly what you told it to do.

Come on. The fact that this question comes up so often (twice in 24h)
is proof that this is a surprising behaviour. I do think it is the
correct one but it is very natural to assume that when you write

def foo(bar=[]):
    bar.append(6)
    ...

you are describing what happens when you _call_ foo, i.e.:

1. if bar is not provided, make it equal to []
2. Append 6 to bar
3. ...
 
G

George Sakkis

That's not natural *at all*. You're initialising the argument "points"
with the same list every time. If you wanted it to have a different list
each time, you should have said so. Don't blame the language for doing
exactly what you told it to do.

Come on.  The fact that this question comes up so often (twice in 24h)
is proof that this is a surprising behaviour.  I do think it is the
correct one but it is very natural to assume that when you write

    def foo(bar=[]):
         bar.append(6)
         ...

you are describing what happens when you _call_ foo, i.e.:

    1. if bar is not provided, make it equal to []
    2. Append 6 to bar
    3. ...

+1. Understanding and accepting the current behavior (mainly because
of the extra performance penalty that evaluating the default expressions
on every call would incur) is one thing, claiming that it is somehow
natural is plain silly, as dozens of threads keep showing time and
time again. For better or for worse the current semantics will
probably stay forever but I wish Python grows at least a syntax to
make the expected semantics easier to express, something like:

def foo(bar=`[]`):
    bar.append(6)

where `expr` would mean "evaluate the expression in the function
body". Apart from the obvious usage for mutable objects, an added
benefit would be to have default arguments that depend on previous
arguments:

def foo(x, y=`x*x`, z=`x+y`):
    return x+y+z

as opposed to the more verbose and less obvious current hack:

def foo(x, y=None, z=None):
    if y is None: y = x*x
    if z is None: z = x+y
    return x+y+z
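which, for what it's worth, behaves like:

print foo(2)        # 12  (y defaults to x*x == 4, z to x+y == 6)
print foo(2, 10)    # 24  (y given as 10, z defaults to x+y == 12)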

George
 
S

Steven D'Aprano

Come on. The fact that this question comes up so often (twice in 24h)
is proof that this is a surprising behaviour.

Of course it's surprising. People make an assumption based on other
languages. But there is nothing "natural" about that assumption. It may
be common, but that doesn't mean it's the only way to think about it.

If you check the archives of this newsgroup, you will see that some time
not very long ago I made the suggestion that perhaps Python should raise
a warning when it creates a function with an obviously mutable default
argument. In practice, that would mean checking for three special cases:
[], {} and set(). So I'm sympathetic towards the surprise people feel,
but I don't agree that it is natural. "Natural" is a thought-stopper.
It's meant to imply that any other alternative is unnatural, crazy,
stupid, perverted, or whatever other alternative to natural you prefer,
therefore stop all disagreement.


I do think it is the
correct one but it is very natural to assume that when you write

def foo(bar=[]):
    bar.append(6)
    ...

you are describing what happens when you _call_ foo, i.e.:

1. if bar is not provided, make it equal to []
2. Append 6 to bar
3. ...


Which is *exactly* what happens -- except of course once you append six
to the list [], it now looks like [6].

Why assume that "make it equal to []" implies a different list every
time, rather than that it is a specific list that happens to start off as
[]? Why isn't it equally "natural" to assume that it's the same list each
time, and it starts off as [] but need not stay that way?
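Put concretely (adding a return so the effect is visible):

def foo(bar=[]):
    bar.append(6)
    return bar

print foo()    # [6]
print foo()    # [6, 6]   (the same list, still growing)

The list object is created once and simply keeps whatever state it has accumulated.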
 
S

Steve Holden

George said:
Steven D'Aprano said:
On Sat, 15 Nov 2008 01:40:04 -0800, Rick Giuly wrote:
Hello All,
Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.
That's not natural *at all*. You're initialising the argument "points"
with the same list every time. If you wanted it to have a different list
each time, you should have said so. Don't blame the language for doing
exactly what you told it to do.
Come on. The fact that this question comes up so often (twice in 24h)
is proof that this is a surprising behaviour. I do think it is the
correct one but it is very natural to assume that when you write

def foo(bar=[]):
    bar.append(6)
    ...

you are describing what happens when you _call_ foo, i.e.:

1. if bar is not provided, make it equal to []
2. Append 6 to bar
3. ...

+1. Understanding and accepting the current behavior (mainly because
of the extra performance penalty that evaluating the default expressions
on every call would incur) is one thing, claiming that it is somehow
natural is plain silly, as dozens of threads keep showing time and
time again. For better or for worse the current semantics will
probably stay forever but I wish Python grows at least a syntax to
make the expected semantics easier to express, something like:

def foo(bar=`[]`):
    bar.append(6)

where `expr` would mean "evaluate the expression in the function
body". Apart from the obvious usage for mutable objects, an added
benefit would be to have default arguments that depend on previous
arguments:
Would you also retain the context surrounding the function declaration
so it's obvious how it will be evaluated, or would you limit the default
values to expressions with no bound variables?
def foo(x, y=`x*x`, z=`x+y`):
    return x+y+z

as opposed to the more verbose and less obvious current hack:

def foo(x, y=None, z=None):
    if y is None: y = x*x
    if z is None: z = x+y
    return x+y+z
"Less obvious" is entirely in the mind of the reader. However I can see
far more justification for the behavior Python currently exhibits than
the semantic time-bomb you are proposing.

regards
Steve
 
S

Steve Holden

Dennis said:
Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.
This is a FAQ... default arguments are evaluated only ONCE, during
the "compilation" of the function.
class Blob:
    def __init__(self, points=[]):
        self._points = points
The preferred/recommended form is to use (very explicit, one test,
one "assignment")

def __init__(self, points=None):
    if points:
        self._points = points
    else:
        self._points = []

or (shorter; one test, potentially two "assignments")

def __init__(self, points=None):
    if not points: points = []
    self._points = points
I hesitate to beat the thoroughly obvious to death with a stick, but
this is a *very* bad way to make the test. If you are using None as a
sentinel to indicate that no argument was provided to the call then the
correct test is

if points is None:
    points = []

The code shown fails to distinguish between passing an empty list and
not passing an argument at all.
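To spell that out with a made-up caller:

shared = []
blob = Blob(shared)    # the caller wants blob to use this particular list

# With "if not points:", the empty 'shared' list is quietly replaced by a
# fresh one, so later appends to 'shared' never show up in blob._points.
# With "if points is None:", blob._points is 'shared', as the caller intended.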

regards
Steve
 
G

George Sakkis

+1. Understanding and accepting the current behavior (mainly because
of the extra performance penalty that evaluating the default expressions
on every call would incur) is one thing, claiming that it is somehow
natural is plain silly, as dozens of threads keep showing time and
time again. For better or for worse the current semantics will
probably stay forever but I wish Python grows at least a syntax to
make the expected semantics easier to express, something like:
def foo(bar=`[]`):
    bar.append(6)
where `expr` would mean "evaluate the expression in the function
body". Apart from the obvious usage for mutable objects, an added
benefit would be to have default arguments that depend on previous
arguments:

Would you also retain the context surrounding the function declaration
so it's obvious how it will be evaluated, or would you limit the default
values to expressions with no bound variables?

No, all expressions would be allowed, and the semantics would be
identical to evaluating them in the function body; no context would
be necessary.
"Less obvious" is entirely in the mind of the reader.

Without documentation or peeking into the function body, a None
default conveys little or no information, so I don't think it's just
in the mind of the reader. Do you find the following less obvious than
the current workaround?

from datetime import date, timedelta

def make_reservation(customer,
                     checkin=`date.today()`,
                     checkout=`checkin+timedelta(days=3)`):
    ...

However I can see
far more justification for the behavior Python currently exhibits than
the semantic time-bomb you are proposing.

I didn't propose replacing the current behavior (that would cause way
too much breakage), only adding a new syntax which is now invalid, so
one would have to specify it explicitly.

George
 
C

Chris Rebert

+1. Understanding and accepting the current behavior (mainly because
of the extra performance penalty that evaluating the default expressions
on every call would incur) is one thing, claiming that it is somehow
natural is plain silly, as dozens of threads keep showing time and
time again. For better or for worse the current semantics will
probably stay forever but I wish Python grows at least a syntax to
make the expected semantics easier to express, something like:
def foo(bar=`[]`):
    bar.append(6)
where `expr` would mean "evaluate the expression in the function
body". Apart from the obvious usage for mutable objects, an added
benefit would be to have default arguments that depend on previous
arguments:

Would you also retain the context surrounding the function declaration
so it's obvious how it will be evaluated, or would you limit the default
values to expressions with no bound variables?

No, all expressions would be allowed, and the semantics would be
identical to evaluating them in the function body; no context would
be necessary.
"Less obvious" is entirely in the mind of the reader.

Without documentation or peeking into the function body, a None
default conveys little or no information, so I don't think it's just
in the mind of the reader. Do you find the following less obvious than
the current workaround?

from datetime import date, timedelta

def make_reservation(customer,
                     checkin=`date.today()`,
                     checkout=`checkin+timedelta(days=3)`):
    ...

However I can see
far more justification for the behavior Python currently exhibits than
the semantic time-bomb you are proposing.

I didn't propose replacing the current behavior (that would cause way
too much breakage), only adding a new syntax which is now invalid, so
one would have to specify it explicitly.

Minor FYI, but Guido has proscribed backticks ever being used in
Python again. See http://www.python.org/dev/peps/pep-3099/

Cheers,
Chris
 
A

Aaron Brady

Of course it's surprising. People make an assumption based on other
languages. But there is nothing "natural" about that assumption. It may
be common, but that doesn't mean it's the only way to think about it.

My point is that neither one is more natural than the other. I think
that more than one person has shirked the burden of proof in making
claims about naturality, as well as overstepped the bounds of what a
conclusion about naturality entails. In other words, "Prove it. So
what?"

def f( a= [] ):
    a.append( 0 )
    return a

a= f()
b= f()
c= f( [] )

a == b != c, because '[]' is not the default value, because '[] is not []'.
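Concretely, that leaves a and b bound to the same two-element list and c to a fresh one:

print a, b, c    # [0, 0] [0, 0] [0]  (a and b are one object, c is another)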

If I run 'a= []' ten times in a loop, '[]' is executed ten times. If
I call 'f' ten times, '[]' is only executed once. You do have a case
that the function is defined once, but executed ten times.
Why assume that "make it equal to []" implies a different list every
time, rather than that it is a specific list that happens to start off as
[]? Why isn't it equally "natural" to assume that it's the same list each
time, and it starts off as [] but need not stay that way?

In other words, what does 'natural' mean? Either provide an analytic
definition (strict necessary and sufficient conditions), or some
paradigm cases. 'Natural' in the sense that killing is natural? Are
there any such senses? Is natural always best? Is natural always
obvious?

Oddly, http://dictionary.reference.com/browse/natural has 38
definitions in the longest entry.
 
A

Aaron Brady

George said:
On Sat, 15 Nov 2008 01:40:04 -0800, Rick Giuly wrote:
Hello All,
Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.
That's not natural *at all*. You're initialising the argument "points"
with the same list every time. If you wanted it to have a different list
each time, you should have said so. Don't blame the language for doing
exactly what you told it to do.
Come on.  The fact that this question comes up so often (twice in 24h)
is proof that this is a surprising behaviour.  I do think it is the
correct one but it is very natural to assume that when you write
    def foo(bar=[]):
         bar.append(6)
         ...
you are describing what happens when you _call_ foo, i.e.:
    1. if bar is not provided, make it equal to []
    2. Append 6 to bar
    3. ...
+1. Understanding and accepting the current behavior (mainly because
of the extra performance penalty that evaluating the default expressions
on every call would incur) is one thing, claiming that it is somehow
natural is plain silly, as dozens of threads keep showing time and
time again. For better or for worse the current semantics will
probably stay forever but I wish Python grows at least a syntax to
make the expected semantics easier to express, something like:
def foo(bar=`[]`):
    bar.append(6)
where `expr` would mean "evaluate the expression in the function
body". Apart from the obvious usage for mutable objects, an added
benefit would be to have default arguments that depend on previous
arguments:

Would you also retain the context surrounding the function declaration
so it's obvious how it will be evaluated, or would you limit the default
values to expressions with no bound variables?
def foo(x, y=`x*x`, z=`x+y`):
    return x+y+z
as opposed to the more verbose and less obvious current hack:
def foo(x, y=None, z=None):
    if y is None: y = x*x
    if z is None: z = x+y
    return x+y+z

"Less obvious" is entirely in the mind of the reader. However I can see
far more justification for the behavior Python currently exhibits than
the semantic time-bomb you are proposing.

It is too bad you are not better at sharing what you see. I would
like to see 'far more' of it, if it's there.
 
A

Aaron Brady

On Sat, 15 Nov 2008 21:29:22 -0800, Aaron Brady wrote: ....
If you want to be pedantic, then my "answer" (which you seem to approve
of) doesn't correspond to either of the original poster's questions. If
you're going to be pedantic, then be pedantic all the way, and criticize
me for answering a question that wasn't asked :p

Not pedantic. He was questioning the reasons and motivation.

....
Actually, the correct answer to "Why?" would be "Because that's the
decision Guido van Rossum made back in the early 1990s when he first
started designing Python". That of course leads to the obvious question
"Why did he make that decision?", and the answer to that is:

* it leads to far more efficient performance when calling functions;

E.g. if the default value is expensive to calculate, it is better to
calculate it once, when the function is created, than every single time
the function is called.

It is best to calculate it just as many times as you need to, and no
more.
Additionally, the effbot once mentioned in a similar thread that there
are real performance benefits in the Python VM from binding the default
value once only. I don't know the exact details of that, but I trust
Fredrik knows what he's talking about.

* it has less scope for surprise when calling functions.

Obviously, either behavior would be surprising to different sets of
people, with probably little overlap. Is one set larger, or does one
have certain characteristics, like containing more deep thinkers?
E.g. I would argue that most people would be surprised, and dismayed, if
this code fails:

x = 1
def foo(a, b=x):
   return a+b

del x
print foo(2)

Point taken.
No, it is not an arbitrary choice.

I didn't mean arbitrary as in out-of-the-blue. I meant arbitrary as
in dart-board decision from hand-picked possibilities, that is, that
the original decision maker thought, 'close call' between two, and
just picked one.

I have a different perspective than he did at the time, and so does he
now. It's not clear that, if he came to think the cases were closer
than he originally judged, he would say so; though knowing humans at
large, I'd guess that if the case were stronger, he would. I would.
Of course, not just any human can invent Python.
I've given practical reasons why the
Python choice is better. If you want default arguments to be created from
scratch when the function is called, you can get it with little
inconvenience, but the opposite isn't true. It is very difficult to get
static default arguments given a hypothetical Python where default
arguments are created from scratch. There's no simple, easy idiom that
will work. The best I can come up with is a convention:

I'm not so sure.

## Default evaluated at definition time. (Current.)

# Static arg.
def f( a= [] ):
    ...

# Non-static arg.
def f( a= None ):
    if a is None: a= []

## Default evaluated at call-time. (Alternative.)

# Static arg.
@static( a= [] )
def f( a ):
    ...

# Non-static arg.
def f( a= [] ):
    ...

They're about the same difficulty. This comparison makes it look like
efficiency is the strongest argument-- just an 'if' vs. an entire
extra function call.
 
S

Steve Holden

Aaron said:
George said:
On Sat, 15 Nov 2008 01:40:04 -0800, Rick Giuly wrote:
Hello All,
Why is python designed so that b and c (according to code below)
actually share the same list object? It seems more natural to me that
each object would be created with a new list object in the points
variable.
That's not natural *at all*. You're initialising the argument "points"
with the same list every time. If you wanted it to have a different list
each time, you should have said so. Don't blame the language for doing
exactly what you told it to do.
Come on. The fact that this question comes up so often (twice in 24h)
is proof that this is a surprising behaviour. I do think it is the
correct one but it is very natural to assume that when you write
def foo(bar=[]):
    bar.append(6)
    ...
you are describing what happens when you _call_ foo, i.e.:
1. if bar is not provided, make it equal to []
2. Append 6 to bar
3. ...
+1. Understanding and accepting the current behavior (mainly because
of the extra performance penalty that evaluating the default expressions
on every call would incur) is one thing, claiming that it is somehow
natural is plain silly, as dozens of threads keep showing time and
time again. For better or for worse the current semantics will
probably stay forever but I wish Python grows at least a syntax to
make the expected semantics easier to express, something like:
def foo(bar=`[]`):
    bar.append(6)
where `expr` would mean "evaluate the expression in the function
body". Apart from the obvious usage for mutable objects, an added
benefit would be to have default arguments that depend on previous
arguments:
Would you also retain the context surrounding the function declaration
so it's obvious how it will be evaluated, or would you limit the default
values to expressions with no bound variables?
def foo(x, y=`x*x`, z=`x+y`):
    return x+y+z
as opposed to the more verbose and less obvious current hack:
def foo(x, y=None, z=None):
    if y is None: y = x*x
    if z is None: z = x+y
    return x+y+z
"Less obvious" is entirely in the mind of the reader. However I can see
far more justification for the behavior Python currently exhibits than
the semantic time-bomb you are proposing.

It is too bad you are not better at sharing what you see. I would
like to see 'far more' of it, if it's there.

Well, briefly: it seems far simpler to me to use

def f(x=None):
    if x is None:
        x = <some value interpreted in the context of the current call>
    ...

than it does to use

def f(x=<some value to be interpreted in the context of the current call>):
    ...

particularly since at present the language contains no syntax for the
latter. But maybe I am focusing too much on the difficulty of compiling
whatever bizarre syntax is eventually adopted.

Understand, my point of view is biased by forty years of programming
experience, to the extent that I disagreed with Guido's proposal to make
integer division return a floating-point value. So I wouldn't
necessarily claim alignment with the majority.

Consider, though, the case where one argument value has to refer to
another. I would say the function body is the proper place to be doing
that computation. You appear to feel the def statement is the
appropriate place. If multiple statements are needed to perform the
argument initialization, how would you then propose the problem should
be solved?

regards
Steve
 
A

alex23

If multiple statements are needed to perform the
argument initialization, how would you then propose the problem should
be solved?

Why, with another function of course!

def f(x, y=`f_arg_computation(x)`): ...

Or my personal favourite:

def f(x, **`f_arg_computation(x)`): ...

Seriously, though, I agree with Steve; the function body -is- the
place for computation to occur.
 
A

Aaron Brady

On Nov 16, 12:52 am, Steven D'Aprano <st...@REMOVE-THIS-
I've given practical reasons why the
Python choice is better. If you want default arguments to be created from
scratch when the function is called, you can get it with little
inconvenience, but the opposite isn't true. It is very difficult to get
static default arguments given a hypothetical Python where default
arguments are created from scratch. There's no simple, easy idiom that
will work. The best I can come up with is a convention:

I'm not so sure.

## Default evaluated at definition time.  (Current.)

# Static arg.
def f( a= [] ):
  ...

# Non-static arg.
def f( a= None ):
  if a is None: a= []

Oops. Forgot one, after the subsequent posts.

# Non-static arg.
@nonstatic( a= list )
def f( a ):
    ...

This can achieve the 'if a is None' effect. 'nonstatic' takes a
callable or a string, '@nonstatic( a= "[]" )'.

I don't see a way to achieve George Sakkis's example:

if y is None: y = x*x
if z is None: z = x+y

Without a change to the language (the other options don't need one).

#emulates 'def foo(x, y=`x*x`, z=`x+y`):'
@nonstatic( y= 'x*x' ) #illegal
@nonstatic( z= 'x+y' ) #illegal
def foo(x, y, z):
    return x+y+z
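For the simple callable case, a rough sketch of such a 'nonstatic' decorator (written against the Python 2 of this thread; the names are mine, not anything that exists) might be:

import functools
import inspect

def nonstatic(**factories):
    # Map argument names to zero-argument callables; whenever the caller
    # leaves one of those arguments out, the callable is invoked at *call*
    # time to produce a fresh default.
    def decorator(func):
        argnames = inspect.getargspec(func)[0]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            supplied = dict(zip(argnames, args))
            supplied.update(kwargs)
            for name, factory in factories.items():
                if name not in supplied:
                    kwargs[name] = factory()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@nonstatic(a=list)
def f(a):
    a.append(0)
    return a

print f() is f()    # False: each call got its own list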
 
