Default method arguments


gregory.petrosyan

Hello everybody!
I have a little problem:

class A:
    def __init__(self, n):
        self.data = n
    def f(self, x = ????):
        print x

All I want is to make self.data the default argument for self.f(). (I
want to use class 'A' as follows:

myA = A(5)
myA.f()

and get '5' printed as a result.)
 

Bill Mill

> All I want is to make self.data the default argument for self.f().

class A:
    def __init__(self, n):
        self.data = n

    def f(self, x=None):
        if not x:
            x = self.data
        print x

>>> myA = A(5)
>>> myA.f()
5

Peace
Bill Mill
 

Nicola Larosa

> All I want is to make self.data the default argument for self.f().

# use new-style classes, if there's no cogent reason to do otherwise
class A(object):
    def __init__(self, n):
        self.data = n
    def f(self, x=None):
        # do NOT use "if not x" !
        if x is None:
            print self.data
        else:
            print x

--
Nicola Larosa

....Linux security has been better than many rivals. However, even
the best systems today are totally inadequate. Saying Linux is
more secure than Windows isn't really addressing the bigger issue
- neither is good enough. -- Alan Cox, September 2005
 

Nicola Larosa

> def f(self, x=None):
>     if not x:

Ha! You fell for it! ;-D
(Hint: what about x being passed with a value of zero? :) )

>         x = self.data
>     print x
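
A quick sketch of the failure mode with that version:

myA = A(5)
myA.f(0)    # prints 5, not 0: "if not x" treats a false-y argument as missing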

--
Nicola Larosa
 

Duncan Booth

Nicola said:
> # use new-style classes, if there's no cogent reason to do otherwise
> class A(object):
>     def __init__(self, n):
>         self.data = n
>     def f(self, x=None):
>         # do NOT use "if not x" !
>         if x is None:
>             print self.data
>         else:
>             print x

Using None might be problematic if None could be a valid argument. The
safest way all round is to use a unique object created just for this
purpose:

_marker = object()

class A(object):

    def __init__(self, n):
        self.data = n

    def f(self, x=_marker):
        if x is _marker:
            x = self.data
        print x
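
A usage sketch, to show that None now passes through like any other value:

a = A(5)
a.f()        # prints 5
a.f(None)    # prints None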
 

Gregory Petrosyan

Thanks a lot, but that's not what I really want.
1) f() may have many arguments, not just one
2) I don't want only to _print_ x. I want to do a lot of work with it,
so if I could simply write

def f(self, x = self.data)    (*)

it would be much better.

By the way, using

class A(object):
    data = 0
    ....
    def f(self, x = data):

solves this problem, but it's not nice at all.

So I think (*) is the best variant, but it doesn't work :(
 

Bill Mill

> Ha! You fell for it! ;-D
> (Hint: what about x being passed with a value of zero? :) )

I wasn't sure if you saw my post before you posted - good call. I just
tossed off an answer without thinking much, and we see the result. It
could have been a good debugging lesson for him if he'd tried to pass
0; I think I'll use that as my excuse.

Peace
Bill Mill
 

Nicola Larosa

> Using None might be problematic if None could be a valid argument.

That's like saying that NULL could be a significant value in SQL. In
Python, "None" *is* the empty, not significant value, and should always be
used as such. Specifically, never exchange "None" for "False".

--
Nicola Larosa
 

bruno at modulix

> All I want is to make self.data the default argument for self.f().

class A(object): # Stop using old-style classes, please
    def __init__(self, n):
        self.data = n
    def f(self, x=None):
        if x is None:
            x = self.data
        print x
 

Dennis Lee Bieber

> All I want is to make self.data the default argument for self.f().

Well, a first cut would be

def f(self, x=None):
    if x is None:
        x = self.data
    print x

but that may not be desirable if None is a valid value => myA.f(None),
so...

class A(object):
    def __init__(self, n):
        self.data = n
    def f(self, *arg):
        if len(arg) == 0:
            x = self.data
        else:
            x = arg[0]
        print x

anA = A("Pffft")

anA.f("Hello, Kitty")   # prints Hello, Kitty
anA.f()                 # prints Pffft
anA.f(None)             # prints None

 

Alex Martelli

Gregory Petrosyan said:
> def f(self, x = self.data)    (*)
> ...
> So I think (*) is the best variant, but it doesn't work :(

It DOES work -- it just does not work the way you _expect_ it to work,
but rather, it works the way it's _defined_ to work.

Specifically: all the evaluation of default arguments' values happens as
a part of the execution of the 'def' statement, and so, in particular,
happens at the TIME 'def' is executing.
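
A minimal sketch of this def-time evaluation (using time.time() purely
as an illustration):

import time

def stamp(t=time.time()):
    # the default for 't' was computed once, when 'def' executed
    return t

a = stamp()
b = stamp()
print a == b    # True: both calls share the same def-time default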

A 'def' statement which is at the top level in a 'class' statement
evaluates as part of the evaluation of 'class'.

So, if, *while the 'class' statement is evaluating*, something known as
'self' exists, and has a 'data' attribute, this will give you the
default value for argument 'x'. If the name 'self' is unknown, or
refers to an object which has no 'data' attribute, then of course
appropriate exceptions get raised (NameError or AttributeError) when
Python is TRYING to execute that 'def' statement.

Here's an example in which this works without exceptions:

class outer(object):
    def __init__(self, data): self.data = data
    def make_inner(self):
        class inner(object):
            def f(self, x=self.data):
                print x
        return inner()

Now, y = outer(23).make_inner() gives you an instance of an inner class,
such that y.f() is the same thing as y.f(23). The 'self.data' in the
'def f', since it's the evaluation of a default value for an argument,
evaluates at the time 'def' evaluates -- and, at that time, 'self'
refers to the instance of class outer that's the only argument to method
make_inner of class outer.

While such "factories" (classes and functions making and returning other
classes and functions) are rarely used by beginners, they are an
extremely important idiom for advanced users of Python. But the point
is that, by having extremely simple and universal rules, it takes no
esoteric knowledge to understand what the above Python code will do --
default argument values evaluate as 'def' executes, therefore there is
absolutely no ambiguity or difficulty to understand when this
'self.data' in particular evaluates.

If Python tried to guess at when to evaluate default argument values,
sometimes during the 'def', sometimes abstrusely storing "something"
(technically a 'thunk') for potential future evaluation, understanding
what's going on in any given situation would become extremely
complicated. There are many languages which attempt to ``helpfully''
"do what the programmer meant in each single case" rather than follow
simple, clear and universal rules about what happens when; as a
consequence, programmers in such "helpful" languages spend substantial
energy fighting their compilers to try and work around the compilers'
attempted "helpfulness".

Which is why I use Python instead. Simplicity is a GREAT virtue!


If it's crucial to you to have some default argument value evaluated at
time X, then, by Python's simple rules, you know that you must arrange
for the 'def' statement itself to execute at time X. In this case, for
example, if being able to have self.data as the default argument value
is the crucial aspect of the program, you must ensure that the 'def'
runs AFTER self.data has the value you desire.

For example:

class A(object):
    def __init__(self, n):
        self.data = n
        def f(self, x = self.data)
            print x
        self.f = f

This way, of course, each instance a of class A will have a SEPARATE
callable attribute a.f which is the function you desire; this is
inevitable, since functions store their default argument values as part
of their per-function data. Since you want a.f and b.f to have
different default values for the argument (respectively a.data and
b.data), therefore a.f and b.f just cannot be the SAME function object
-- this is another way to look at your issue, in terms of what's stored
where rather than of what evaluates when, but of course it leads to
exactly the same conclusion.

In practice, the solutions based on None or sentinels that everybody has
been suggesting to you are undoubtedly preferable. However, you can, if
you wish, get as fancy as you desire -- the next level of complication
beyond the simple factory above is to turn f into a custom descriptor
and play similar tricks in the __get__ method of f (after which, one can
start considering custom metaclasses). Exactly because Python's rules
and building blocks are simple, clean, and sharp, you're empowered to
construct as much complication as you like on top of them.
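
For concreteness, a rough sketch of that descriptor idea (the default is
read from the instance at attribute-access time; the names here are mine,
not a standard API):

class default_from_attr(object):
    # Descriptor: when the wrapped function is fetched from an instance,
    # return a closure whose default argument is read from a named
    # attribute of that instance, at access time.
    def __init__(self, func, attrname):
        self.func = func
        self.attrname = attrname
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self.func
        func = self.func
        def bound(x=getattr(obj, self.attrname)):
            return func(obj, x)
        return bound

class A(object):
    def __init__(self, n):
        self.data = n
    def _f(self, x):
        print x
    f = default_from_attr(_f, 'data')

a = A(5)
a.f()      # prints 5
a.f(42)    # prints 42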

That doesn't mean you SHOULD prefer complication to simplicity, but it
does mean that the decision is up to you.


Alex
 

bruno at modulix

Dennis said:
(snip)
> but that may not be desirable if None is a valid value => myA.f(None),
> so...
>
> class A(object):
>     def __init__(self, n):
>         self.data = n
>     def f(self, *arg):
>         if len(arg) == 0:
>             x = self.data
>         else:
>             x = arg[0]
>         print x

Another solution to this is the use of a 'marker' object and identity test:

_marker = []
class A(object):
    def __init__(self, n):
        self.data = n
    def f(self, x=_marker):
        if x is _marker:
            x = self.data
        print x
 

Duncan Booth

Nicola said:
> That's like saying that NULL could be a significant value in SQL. In
> Python, "None" *is* the empty, not significant value, and should
> always be used as such. Specifically, never exchange "None" for
> "False".

You don't think there is a difference in SQL between a field explicitly
set to NULL and a field initialised with a default value?

What you should be saying here is never use "None" when you actually mean
"use the default non-None value for this parameter".
 

Martin Miller

Alex Martelli wrote, in part:
> If it's crucial to you to have some default argument value evaluated at
> time X, then, by Python's simple rules, you know that you must arrange
> for the 'def' statement itself to execute at time X. ...
>
> For example:
>
> class A(object):
>     def __init__(self, n):
>         self.data = n
>         def f(self, x = self.data)
>             print x
>         self.f = f
>
> This way, of course, each instance a of class A will have a SEPARATE
> callable attribute a.f which is the function you desire ...

FWIW, and ignoring the small typo on the inner def statement (the
missing ':'), the example didn't work as I (and possibly others) might
expect. Namely, it doesn't make function f() a bound method of
instances of class A, so calls to it don't receive an automatic 'self'
argument when called on instances of class A.

This is fairly easy to remedy using the standard new module, thusly:

import new

class A(object):
    def __init__(self, n):
        self.data = n
        def f(self, x = self.data):
            print x
        self.f = new.instancemethod(f, self, A)

This change underscores the fact that each instance of class A gets a
different independent f() method. Despite this nit, I believe I
understand the points Alex makes about the subject (and would agree).
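
The same binding can also be spelled with the types module; a minimal
sketch (in Python 2, types.MethodType is the same instancemethod type
the new module exposes):

import types

class A(object):
    def __init__(self, n):
        self.data = n
        def f(self, x = self.data):
            print x
        # same effect as new.instancemethod(f, self, A)
        self.f = types.MethodType(f, self, A)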

-Martin
 

Benji York

bruno said:
> Another solution to this is the use of a 'marker' object and identity test:
>
> _marker = []
> class A(object):
>     def __init__(self, n):
>         self.data = n
>     def f(self, x=_marker):
>         if x is _marker:
>             x = self.data
>         print x

I'll add my 2 cents to the mix:

default = object()

class A(object):
    def __init__(self, n):
        self.data = n

    def f(self, x=default):
        if x is default:
            x = self.data
        print x
 

Mike Meyer

> All I want is to make self.data the default argument for self.f().

Store your default value in a container, and test for it:

class A:
    _data = [None]
    def __init__(self, n):
        self._data = [n]
    def f(self, x=_data):
        if x is A._data:        # the class-level list is the "not passed" marker
            x = self._data[0]
        print x

There are lots of variations on this theme.

<mike
 

Mike Meyer

Benji York said:
> I'll add my 2 cents to the mix:
>
> default = object()
>
> class A(object):
>     def __init__(self, n):
>         self.data = n
>
>     def f(self, x=default):
>         if x is default:
>             x = self.data
>         print x

There were a lot of solutions like this. I'd like to point out that
you can put the "marker" in the class:

class A(object):
    default = object()
    def __init__(self, n):
        self.data = n

    def f(self, x=default):
        if x is self.default:
            x = self.data
        print x

This way you don't pollute the module namespace with class-specific
names. You pollute the class namespace instead - which seems like an
improvement.

<mike
 

Gregory Petrosyan

I'm not very familiar with Python, so please explain to me why
containers should be used. For example, in one of Paul Graham's essays
there's an example of a 'generator of accumulators' in Python:

def foo(n):
    s = [n]
    def bar(i):
        s[0] += i
        return s[0]
    return bar

1) So, why is just using 's = n' not suitable? (It doesn't work, Python
'doesn't see' s, but why?)
2) Is 'foo.s = n' a correct solution? It seems a little more elegant.
(I tested it, and it worked well.)
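
For reference, a minimal sketch of the failure I see with plain
rebinding (Python 2 has no way to rebind a name in an enclosing
function's scope):

def foo(n):
    s = n
    def bar(i):
        s += i      # the assignment makes 's' local to bar, so it is
        return s    # unbound when read: UnboundLocalError at call time
    return bar

acc = foo(5)
acc(1)              # raises UnboundLocalError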

Sorry for possibly stupid questions.
 
