determining the number of output arguments


Darren Dale

Hello,

def test(data):
    i = ?   # <- this is the line I have trouble with
    if i == 1: return data
    else: return data[:i]

a, b, c, d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?

Thank you,
Darren
 

Dennis Lee Bieber

Hello,

def test(data):
    i = ?   # <- this is the line I have trouble with
    if i == 1: return data
    else: return data[:i]

a, b, c, d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?

This is rather confusing...

What do you expect to receive for

a,b,c,d = test([1,2,3,4,5])

Note, you are only passing ONE argument into the function, and
apparently wanting a tuple in return...

Problem: in the case of mismatched lengths (4 items in
destination, and 5 in the list) you get an exception. Otherwise you
could just use

a,b,c,d = tuple([1,2,3,4])


If you are actually trying to work with an unknown number of
input arguments (rather than a single argument of a list)...

>>> def test(*data):
...     print len(data)
...
>>> test([1,2,3,4])
1
>>> test(1,2,3,4)
4

>>> def test(*data):
...     return data
...
>>> test([1,2,3,4])
([1, 2, 3, 4],)
>>> test(1,2,3,4)
(1, 2, 3, 4)

>>> def test(*data):
...     return list(data)
...
>>> test([1,2,3,4])
[[1, 2, 3, 4]]
>>> test(1,2,3,4)
[1, 2, 3, 4]
 

Alex Martelli

Fernando Perez said:
suspect that by playing very nasty tricks with sys._getframe(), the dis
and the inspect modules, you probably _could_ get to this information, at
least if the caller is NOT a C extension module. But I'm not even 100%
sure this works, and it would most certainly be the kind of black magic I'm
sure you are not asking about. But given the level of expertise here, I'd
better cover my ass ;-)

Yep, that's the cookbook recipe Jp was mentioning -- Sami Hangaslammi's
recipe 284742, to be precise. Yep, it _is_ going into the printed 2nd
edition (which I'm supposed to be working on right now -- deadline
closing in, help, help!-).


Alex
 

Jeff Shannon

Darren said:
Hello,

def test(data):
    i = ?   # <- this is the line I have trouble with
    if i == 1: return data
    else: return data[:i]

a, b, c, d = test([1,2,3,4])

How can I set i based on the number of output arguments defined in
(a,b,c,d)?

Something like this:

def test(*args, **kwargs):
    i = len(args) + len(kwargs)

should work. But note that the usage example you gave will result in i
having a value of 1 -- you're passing in a single argument (which is a
list).

Of course, if you're always going to be passing a sequence into your
function, and you want to get the length of that sequence, then it's
pretty simple:

def test(data):
    i = len(data)
    return data[:i]

Note, however, that this function is effectively a no-op as it stands.
Presumably you're intending to do something to transform data, which may
change its length? Otherwise, it would be simpler to just modify the
list (or a copy of it) in-place in a for loop / list comp, and not worry
about the length at all.

Jeff Shannon
Technician/Programmer
Credit International
 

Fernando Perez

Alex said:
Yep, that's the cookbook recipe Jp was mentioning -- Sami Hangaslammi's
recipe 284742, to be precise. Yep, it _is_ going into the printed 2nd
edition (which I'm supposed to be working on right now -- deadline
closing in, help, help!-).

Well, I feel pretty good now: I didn't see Jp's mention of this, and just
guessed it should be doable with those three tools. I just looked it up, and
it seems it's exactly what I had in mind :) Cute hack, but I tend to agree
with Scott Daniels' comment that this kind of cleverness tends to promote
rather unreadable code. Maybe I just haven't seen a good use for it, but I
think I'd rather stick with more explicit mechanisms than this.

Anyway, is it true that this will only work for non-extension code? If you are
being called from a C extension, dis & friends are toast, no?

Cheers,

f
 

Alex Martelli

Fernando Perez said:
Well, I feel pretty good now: I didn't see Jp's mention of this, and just
guessed it should be doable with those three tools. I just looked it up, and
it seems it's exactly what I had in mind :) Cute hack, but I tend to agree
with Scott Daniels' comment that this kind of cleverness tends to promote
rather unreadable code. Maybe I just haven't seen a good use for it, but I
think I'd rather stick with more explicit mechanisms than this.

Yeah, but "once and only once" is a great principle of programming. Any
time you have to say something _TWICE_ there's something wrong going on.

So,

a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're
having to tell twice that you're getting four items into separate
variables, once by listing exactly four variables on the LHS, and
another time by that ':4' on the RHS. IMHO, that's just as bogus as
struct.unpack's limitation of not having any way to indicate explicitly
'and all the rest of the bytes goes here', and for similar reasons.
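[The struct limitation mentioned here is easy to see in practice; a small sketch of the usual workaround, splitting the tail off by hand with struct.calcsize:]

```python
import struct

# struct.unpack has no "and all the rest" marker: the format string
# must consume the buffer exactly, so the tail is sliced off manually.
FMT = '<HH'                        # two little-endian unsigned shorts
data = struct.pack(FMT, 1, 2) + b'rest of payload'

head_size = struct.calcsize(FMT)   # 4 bytes for '<HH'
a, b = struct.unpack(FMT, data[:head_size])
rest = data[head_size:]            # "all the rest", counted by hand
```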

I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.
Anyway, is it true that this will only work for non-extension code? If
you are being called from a C extension, dis & friends are toast, no?

Yep, whenever your function isn't being called as the only item on the
RHS of a multiple assignment, counting how many items are being
unpacked on the LHS of that nonexistent or inapplicable multiple
assignment is right out. Presumably, any way to count the number of
items in this fashion will need a way to indicate "not applicable",
though it's not obvious whether raising an exception, or returning a
clearly bogus value such as 0, is most useful.

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]
i.e. having to extract the extended slice first, on the RHS, just to
gauge how many times I must repeat foo. However, if I squint in just
the right way, I _can_ try to convince myself that this _isn't_ really
violating "once and only once"... and I do understand how hard it would
be to allow a 'how many items are needed' function to cover this
case:-(.
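[The length-matching rule for extended slices described above is easy to demonstrate; a quick sketch, which still holds in modern Python:]

```python
L = list(range(6))           # [0, 1, 2, 3, 4, 5]

# An extended slice (stride != 1) must receive exactly as many items
# as it selects:
L[::2] = ['a', 'b', 'c']     # three slots: indices 0, 2, 4
# L is now ['a', 1, 'b', 3, 'c', 5]

# ...while a stride-1 slice happily grows or shrinks the list:
M = list(range(6))
M[1:5] = ['x']               # M becomes [0, 'x', 5]
```

Assigning the wrong number of items to an extended slice raises ValueError, which is exactly the "specific number of items is requested" behavior discussed above.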


Alex
 

Josiah Carlson

Fernando Perez said:
Well, I feel pretty good now: I didn't see Jp's mention of this, and just
guessed it should be doable with those three tools. I just looked it up, and
it seems it's exactly what I had in mind :) Cute hack, but I tend to agree
with Scott Daniels' comment that this kind of cleverness tends to promote
rather unreadable code. Maybe I just haven't seen a good use for it, but I
think I'd rather stick with more explicit mechanisms than this.

Yeah, but "once and only once" is a great principle of programming. Any
time you have to say something _TWICE_ there's something wrong going on.

So,

a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're
having to tell twice that you're getting four items into separate
variables, once by listing exactly four variables on the LHS, and
another time by that ':4' on the RHS. IMHO, that's just as bogus as
struct.unpack's limitation of not having any way to indicate explicitly
'and all the rest of the bytes goes here', and for similar reasons.

The slicing on the right is not so much to show the compiler that you
know how to count, it is to show the runtime that you are looking for
a specified slice of lotsa. How would you like the following two cases
to be handled by your desired Python, and how would that make more sense
than what is done now?

a,b,c = [1,2,3,4]
a,b,c = [1,2]

I think it would be better to have a way to say 'and all the rest'.
Lacking that, some automated way to count how many items are being
unpacked on the LHS is probably second-best.

Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient? In the latter case, I'm sure you could come up with such a
mechanism, and when done, maybe you want to offer it up as a recipe in
the cookbook *wink*.

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]


I much prefer...

for i in xrange(x, y, z):
    L[i] = foo

But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).

- Josiah
 

Alex Martelli

Josiah Carlson said:
a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're
...
The slicing on the right is not so much to show the compiler that you
know how to count, it is to show the runtime that you are looking for
a specified slice of lotsa. How would you like the following two cases
to be handled by your desired Python, and how would that make more sense
than what is done now?

a,b,c = [1,2,3,4]
a,b,c = [1,2]

I would like to get exceptions in these cases, which, being exactly what
IS done now, makes exactly as much sense as itself. Note that there is
no indicator in either of these forms that non-rigid unpacking is
desired. Assuming the indicator for 'and all the rest' were a prefix
star, then:

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

should both set a to 1, b to 2, and c respectively to [3, 4] and []. Of
course there would still be failing cases:

a, b, *c = [1]

this should still raise -- 'a, b, *c' needs at least 2 items to unpack.
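[As it happens, Python 3 later adopted essentially these semantics as PEP 3132, extended iterable unpacking, down to the failing case:]

```python
# PEP 3132 (Python 3.0) made the prefix-star target real:
a, b, *c = [1, 2, 3, 4]     # c gets the rest as a list
d, e, *f = [1, 2]           # f gets the empty list

# ...and too few items for the non-starred targets still raises:
try:
    g, h, *i = [1]
except ValueError:
    failed = True
```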

Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient? In the latter case, I'm sure you could come up with such a
mechanism, and when done, maybe you want to offer it up as a recipe in
the cookbook *wink*.

Two recipes in the 2nd ed of the CB can be combined to that effect
(well, nearly; c becomes an _iterator_ over 'all the rest'), one by
Brett Cannon and one by Sami Hangaslammi. Not nearly as neat and clean
as if the language did it, of course.
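[Not the cookbook recipes themselves, but the same "first n items, then an iterator over all the rest" effect can be sketched with itertools.islice; the helper name here is hypothetical:]

```python
from itertools import islice

def unpack_rest(iterable, n):
    # Return the first n items individually, plus the still-unconsumed
    # iterator as the final value -- "a, b, rest"-style unpacking.
    it = iter(iterable)
    head = list(islice(it, n))
    if len(head) < n:
        raise ValueError("need at least %d items to unpack" % n)
    return head + [it]

a, b, rest = unpack_rest([1, 2, 3, 4, 5], 2)
```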

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]

I much prefer...

for i in xrange(x, y, z):
    L[i] = foo


Not the same semantics, in the general case. For example:

L[1:-1:2] = ...

rebinds (len(L)/2)-1 items; your version rebinds no items, since
xrange(1, -1, 2) is empty. To simulate the effect that assigning to an
extended slice has, you have to take a very different tack:

for i in xrange(*slice(x, y, z).indices(len(L))):
    L[i] = foo

and that's still quite a bit less terse and elegant, as is usually the
case for fussy index-based looping whenever it's decently avoidable.
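[The slice.indices method leaned on above normalizes a slice against a given sequence length; a quick sketch of what it computes:]

```python
s = slice(1, -1, 2)

# indices(length) resolves negative/None bounds into a concrete
# (start, stop, step) triple for a sequence of that length:
start, stop, step = s.indices(10)          # (1, 9, 2)

# ...so these are exactly the positions an extended-slice
# assignment L[1:-1:2] = ... would rebind on a 10-item list:
touched = list(range(start, stop, step))   # [1, 3, 5, 7]
```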
But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).

It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's
somewhat more likely that you're not taking advantage of the
opportunities because, given your dislike, you don't even notice them --
as opposed to the opportunities not existing at all, or you noticing
them, evaluating them against the lower-level index-based-looping
alternatives, and selecting the latter. If you think that extended
slices are sort of equivalent to xrange, as above shown, for example,
it's not surprising that you're missing their actual use cases.


Alex
 

Josiah Carlson

Jeremy Bowers said:
Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient?

Who needs a keyword?

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]


I'll post the same thing that was posted by James Knight on python-dev
about this precise syntax...


James said:


Guido hasn't updated his stance, so don't hold your breath.

- Josiah
 

Fernando Perez

Alex said:
But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).

It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's

Indeed, those of us from the Numeric/numarray side of things rely every day on
extended slicing, and consider it an absolute necessity for writing compact,
clean, readable numerical python code.

It's no surprise that this syntax (as far as I know) originated from the needs
of the scientific computing community, a group where python is picking up users
every day.

Cheers,

f
 

Alex Martelli

Fernando Perez said:
Alex said:
But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).

It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's

Indeed, those of us from the Numeric/numarray side of things rely every
day on extended slicing, and consider it an absolute necessity for writing
compact, clean, readable numerical python code.

It's no surprise that this syntax (as far as I know) originated from the
needs of the scientific computing community, a group where python is
picking up users every day.

Definitely no surprise to me -- although I didn't come to Python by way
of scientific programming, I do have a solid background in that field,
and the concept of addressing an array with a stride is very obvious to
me. Indeed, I was disappointed, early on, that lists didn't support
extended slicing, and very happy when we were able to add it to them.


Alex
 

Jeremy Bowers

Is it worth a keyword, or would a sys._getframe()/bytecode hack be
sufficient?

Who needs a keyword?

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

In the latter case I'd expect c to be the empty tuple.

Clear parallels to function syntax:
>>> def f(a, b, *c):
...     print c
...
>>> f(1, 2)
()
No parallel for **, but... *shrug* who cares?
 

Peter Otten

Jeremy said:
the instances I do have is a tree iterator that on every "next()" returns
a depth and the current node, because the code to track the depth based
on the results of running the iterator is better kept in the iterator than
in the many users of that iterator. But I don't like it; I'd rather make
it a property of the iterator itself or something, but there isn't a
code-smell-free way to do that, either, as the iterator is properly a
method of a certain class, and trying to pull it out into its own class
would entail lots of ugly accessing the inner details of another class.

You could yield Indent/Dedent (possibly the same class) instances whenever
the level changes - provided that the length of sequences of nodes with the
same depth does not approach one.

Peter
 

Josiah Carlson

Jeremy Bowers said:
I'm not in favor of it either. I just think that *if* it were going in, it
shouldn't be a "keyword".

I think variable size tuple returns are a code smell. Now that I think of
it, *tuple* returns are a code smell. (Remember, "code smells" are strong
hints of badness, not proof.) Of the easily thousands of functions in
Python I've written, less than 10 have returned a tuple that was expected
to be unpacked.

I agree with you on the one hand (I also think that variable lengthed
tuple returns are smelly), and have generally returned tuples of the
same length whenever possible. However, I can't agree with you on
general tuple returns. Why? For starters, dict.[iter]items(),
struct.unpack() and various client socket libraries in the standard
library that return both status codes and status messages/data on
command completion (smtplib, nntplib, imaplib).

Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much. One of
the instances I do have is a tree iterator that on every "next()" returns
a depth *and* the current node, because the code to track the depth based
on the results of running the iterator is better kept in the iterator than
in the many users of that iterator. But I don't like it; I'd rather make
it a property of the iterator itself or something, but there isn't a
code-smell-free way to do that, either, as the iterator is properly a
method of a certain class, and trying to pull it out into its own class
would entail lots of ugly accessing the inner details of another class.

The real question is whether /every/ return of more than one item
deserves to have its own non-tuple instance, and whether one really
wants the called function to define names for attributes on that
returned instance. Me, I'm leaning towards no. Two, three or even
four-tuple returns, to me, seem reasonable, and in the case of struct,
whatever suits the program/programmer. Anything beyond that should
probably be a class, but I don't think that the Python language should
artificially restrict itself when common sense would keep most people
from:
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z = range(26)

...or at least I would hope.

- Josiah
 

Josiah Carlson

Josiah Carlson said:
a, b, c, d = lotsa[:4]

_should_ properly give the impression of a code smell, if your "once and
only once" sense is finely tuned. What's the business of that ':4' on
the RHS? Showing the compiler that you can count correctly?! You're
...
The slicing on the right is not so much to show the compiler that you
know how to count, it is to show the runtime that you are looking for
a specified slice of lotsa. How would you like the following two cases
to be handled by your desired Python, and how would that make more sense
than what is done now?

a,b,c = [1,2,3,4]
a,b,c = [1,2]

I would like to get exceptions in these cases, which, being exactly what
IS done now, makes exactly as much sense as itself. Note that there is
no indicator in either of these forms that non-rigid unpacking is
desired. Assuming the indicator for 'and all the rest' were a prefix
star, then:

a, b, *c = [1, 2, 3, 4]
a, b, *c = [1, 2]

should both set a to 1, b to 2, and c respectively to [3, 4] and []. Of
course there would still be failing cases:

a, b, *c = [1]

this should still raise -- 'a, b, *c' needs at least 2 items to unpack.

The only limitation right now is Guido. That is, you need to convince
Guido, and likely get the syntax implemented. See James Knight's post
on 11/12/2004 in python-dev (or my recent quoting here) with a quote
from Guido in regards to this syntax.


I have another response to this...what would be shorter, using some
automated 'how many items are on the left' discovery mechanism, or just
putting in the ':4'?

Certainly some symbol could be reused for something like...
a,b,c = L[:%] #first part
a,b,c = L[%:] #last part, automatically negating the value

But is something like the following even desirable?
a,b,c = L[i:%:j] #automatically calculate the ending index
a,b,c = L[%:i:j] #automatically calculate the start index

Regardless, I'm not sure I particularly like any symbol that could be
placed in either set of slices. If it is not some single-character
symbol, then it is actually shorter to just count them.

Two recipes in the 2nd ed of the CB can be combined to that effect
(well, nearly; c becomes an _iterator_ over 'all the rest'), one by
Brett Cannon and one by Sami Hangaslammi. Not nearly as neat and clean
as if the language did it, of course.

What is to stop the recipes from wrapping that final iterator with a
list() call?

Another case where a specific number of items is requested, which is not
(syntactically speaking) a multiple assignment, is assignment to an
_extended_ slice of, e.g., a list (only; assignment to a slice with a
stride of 1 is happy with getting whatever number of items are coming).
I don't particularly LIKE writing:
L[x:y:z] = len(L[x:y:z]) * [foo]

I much prefer...

for i in xrange(x, y, z):
    L[i] = foo


Not the same semantics, in the general case. For example:

L[1:-1:2] = ...


And I would just use...

for i in xrange(1, len(L)-1, 2):
    L[i] = ...

As in anything, if there is more than one way to do something, at least
a bit of translation is required.

rebinds (len(L)/2)-1 items; your version rebinds no items, since
xrange(1, -1, 2) is empty. To simulate the effect that assigning to an
extended slice has, you have to take a very different tack:

for i in xrange(*slice(x, y, z).indices(len(L))):
    L[i] = foo

and that's still quite a bit less terse and elegant, as is usually the
case for fussy index-based looping whenever it's decently avoidable.


You know, terse != elegant. While extended slice assignments are terse,
I would not consider them elegant. Elegant is quicksort, the
Floyd-Warshall algorithm for APSP, Borůvka's MST algorithm, etc. Being
able to say "from here to here with this stride", that's a language
convenience, and its use is on par with using fcn(*args, **kwargs).


Have I used it? Sure, a few times. My most memorable experience is
using a similar functionality in C with MPI. Unbelievably useful for
chopping up data for distribution and reintegration. Was it elegant, I
wouldn't ever make such a claim, it was a library feature, and extended
slice assignments in Python are a language feature. A useful language
feature for a reasonably sized subset of the Python community certainly,
but elegant, not so much.


Using an imperative programming style with Python (using indices to
index a sequence), I thought, was to be encouraged; why else would
xrange and range be offered? Oh, I know, because people use 'fussy
index-based looping' in C/C++, Java, pascal...and don't want to change
the way they develop. Or maybe because not every RHS is a sequence, and
sequence indexing is the more general case which works for basically
everything (with minimal translation).

But then again, I don't much like extended list slicing (I generally
only use the L[:y], L[x:] and L[x:y] versions).

It may be that in your specific line of work there is no opportunity or
usefulness for the stride argument. Statistically, however, it's
somewhat more likely that you're not taking advantage of the
opportunities because, given your dislike, you don't even notice them --
as opposed to the opportunities not existing at all, or you noticing
them, evaluating them against the lower-level index-based-looping
alternatives, and selecting the latter. If you think that extended
slices are sort of equivalent to xrange, as above shown, for example,
it's not surprising that you're missing their actual use cases.

It is the case that I have rarely needed to replace non-contiguous
sections of lists. It has been a few years since I used MPI and the
associated libraries. Recently in those cases that I have found such a
need, I find using xrange to be far more readable. It's an opinion thing,
and we seem to differ in the case of slice assignments (and certainly a
few other things I am aware of). How about we agree to disagree?

I have also found little use of them in the portions of the standard
library that I peruse on occasion, which in my opinion, defines what it
means to be Pythonic (though obviously many extended slice usages are
application-specific).


- Josiah
 

Peter Otten

Jeremy said:
In this case, the depth of the node is multiplied by some indentation
parameter, or some similar operation, and it occurs in three or four places, so
the duplication of the

if token == INDENT:
    depth += 1
elif token == DEDENT:
    depth -= 1
    if depth == 0:
        abort or something

three or four times was starting to smell itself.

I guess I don't understand, so I wrote a simple example:


DEDENT = object()
INDENT = object()

def _walk(items):
    for item in items:
        if isinstance(item, list):
            yield INDENT
            for child in _walk(item):
                yield child
            yield DEDENT
        else:
            yield item

class Tree(object):
    def __init__(self, data):
        self.data = data
    def __iter__(self):
        for item in _walk(self.data):
            yield item

class WalkBase(object):
    def __call__(self, tree):
        dispatch = {
            DEDENT: self.dedent,
            INDENT: self.indent
        }
        default = self.default
        for item in tree:
            dispatch.get(item, default)(item)

class PrintIndent(WalkBase):
    def __init__(self):
        self._indent = ""
    def indent(self, node):
        self._indent += "    "
    def dedent(self, node):
        self._indent = self._indent[:-4]
    def default(self, node):
        print self._indent, node

class PrintXml(WalkBase):
    def indent(self, node):
        print "<node>"
    def dedent(self, node):
        print "</node>"
    def default(self, node):
        print "<leaf>%s</leaf>" % node
    def __call__(self, tree):
        print "<tree>"
        super(PrintXml, self).__call__(tree)
        print "</tree>"

if __name__ == "__main__":
    tree = Tree([
        0,
        [1, 2, 3],
        [4,
         [5, 6, 7],
         [8, [9, 10]]],
        [11],
        12
    ])
    for i, Class in enumerate([PrintIndent, PrintXml]):
        print "%d " % i * 5
        Class()(tree)


I think taking actions on indent/dedent "events" is easier and simpler than
keeping track of a numerical depth value, and at least the PrintXml example
would become more complicated if you wanted to infer the beginning/end of a
level from the change in the depth.
I do check the depth level twice (isinstance(item, list) and
dispatch.get()), but I think the looser coupling is worth it.
If you are looking for the cool version of such a dispatch mechanism,
Phillip J. Eby's article

http://peak.telecommunity.com/DevCenter/VisitorRevisited

(found in the Daily Python URL) might be interesting.

Peter
 

Jeremy Bowers

Guido hasn't updated his stance, so don't hold your breath.

I'm not in favor of it either. I just think that *if* it were going in, it
shouldn't be a "keyword".

I think variable size tuple returns are a code smell. Now that I think of
it, *tuple* returns are a code smell. (Remember, "code smells" are strong
hints of badness, not proof.) Of the easily thousands of functions in
Python I've written, less than 10 have returned a tuple that was expected
to be unpacked.

Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much. One of
the instances I do have is a tree iterator that on every "next()" returns
a depth *and* the current node, because the code to track the depth based
on the results of running the iterator is better kept in the iterator than
in the many users of that iterator. But I don't like it; I'd rather make
it a property of the iterator itself or something, but there isn't a
code-smell-free way to do that, either, as the iterator is properly a
method of a certain class, and trying to pull it out into its own class
would entail lots of ugly accessing the inner details of another class.
 

Jeremy Bowers

You could yield Indent/Dedent (possibly the same class) instances whenever
the level changes - provided that the length of sequences of nodes with the
same depth does not approach one.

In this case, the depth of the node is multiplied by some indentation
parameter, or some similar operation, and it occurs in three or four places, so
the duplication of the

if token == INDENT:
    depth += 1
elif token == DEDENT:
    depth -= 1
    if depth == 0:
        abort or something

three or four times was starting to smell itself.
 

Greg Ewing

Jeremy said:
Generally, returning a tuple is either a sign that your return value
should be wrapped up in a class, or the function is doing too much.

While I suspect you may be largely right, I
find myself wondering why this should be so. We
don't seem to have any trouble with multiple inputs
to a function, so why should multiple outputs be
a bad thing? What is the reason for this asymmetry?

Perhaps it has something to do with positional vs.
keyword arguments. If a function has too many input
arguments to remember what order they go in, we
always have the option of specifying them by
keyword. But we can't do that with the return-a-tuple-
and-unpack technique for output arguments -- it's
strictly positional.

Maybe things would be better if we had "dict unpacking":

a, c, b = {'a': 1, 'b': 2, 'c': 3}

would give a == 1, c == 3, b == 2. Then we could
accept outputs by keyword as well as inputs...
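[The "dict unpacking" syntax proposed here never made it into the language, but operator.itemgetter from the standard library gets close to by-name output unpacking today:]

```python
from operator import itemgetter

result = {'a': 1, 'b': 2, 'c': 3}

# Pick outputs by key, in whatever order the caller wants:
a, c, b = itemgetter('a', 'c', 'b')(result)
```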
 

Tim Hoffman

Jeremy said:
Now that I think of
it, *tuple* returns are a code smell. (Remember, "code smells" are strong
hints of badness, not proof.) Of the easily thousands of functions in
Python I've written, less than 10 have returned a tuple that was expected
to be unpacked.

Hmm, have you ever used PIL?

Lots of returns are tuples (admittedly not variable length)

things like size are an example: (100, 200). Or

w, h = object.size()  # actually it might be a property, but really
                      # I don't see a big difference

IMHO this doesn't warrant being wrapped in a class.

and is much better than

h = object.height()
w = object.width()

Rgds

Tim
 
