determining the number of output arguments

Bengt Richter

> Are you sure it will work with locals?

I think he was alluding to proposed functionality, not how it
currently works. I thought it might be possible to have a locals()-like
proxy that would update _existing_ locals, so that your second example
would work.

>>> def f(d):
...     locals().update(d)
...     print a
...
>>> f({'a': 1})
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 3, in f
NameError: global name 'a' is not defined

Or even:

>>> def f(d):
...     a = 1
...     locals().update(d)  # proposed: locals_proxy().update(d) would rebind
...                         # existing local names matching keys in d
...     print a
...
>>> f({'a': 2})
1

Regards,
Bengt Richter
 
Alex Martelli

Bengt Richter said:
> I guess you mean keyword arguments, so yes,

I hate to call them 'keyword' arguments when they aren't (check with the
keyword module: it will confirm they aren't keywords!-).

> but for other args I would argue that the caller's
> knowledge of formal parameter names really only serves mnemonic purposes.

How does that differ from returning a tuple-with-names?

> I.e., once the calling code is written, e.g. an obfuscated-symbols
> version of the function may be substituted with no change in program
> behavior. So in that sense, there is no coupling: the writer of the
> calling code is not forced to use any of the function's internal names
> in _his_ source code.

If the caller doesn't use argument names, you can change argument names
without breaking the caller.

If the caller doesn't use field names in a returned tuple-with-names,
you can change the field names without breaking the caller.

I don't see any difference.

Same for a tuple-with-names being returned.

> This is in contrast with e.g. a returned dict or
> name-enhanced tuple: he _is_ forced to use the given actual names in _his_
> code, much like keyword arguments again.

Dicts are another issue. As for tuples-with-names, no way:

a, b, c, d, e, f, g, h, i = time.localtime()

this works, of course, and the caller doesn't have to know which field
is named tm_mday and which one tm_hour or whatever else, unless the
caller WANTS to use such names for mnemonic purposes.

The situation is as strictly parallel to passing arguments to ordinary
Python functions as two issues can ever be in programming: foo(a, b, c)
works but so does foo(bar=b, baz=a, fee=c) if that's the calling style
the caller prefers for mnemonic/clarity/style reasons.
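The interchangeability described here can be checked directly; `foo` below is a hypothetical function matching the example signature, not code from the thread:

```python
def foo(bar, baz, fee):
    """Hypothetical function with three ordinary (positional) parameters."""
    return (bar, baz, fee)

# Positional and keyword call styles reach the same parameters,
# so the caller uses the names purely for clarity, if at all.
assert foo(1, 2, 3) == foo(bar=1, baz=2, fee=3)

# With names, argument order no longer matters.
assert foo(1, 2, 3) == foo(fee=3, bar=1, baz=2)
```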


Alex
 
Bengt Richter

> I hate to call them 'keyword' arguments when they aren't (check with the
> keyword module: it will confirm they aren't keywords!-).

Of the language, no, but the error message is suggestive:

>>> def foo(x, y): pass
...
>>> foo(1, z=2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: foo() got an unexpected keyword argument 'z'

OTOH, I have been arguing from an untested assumption *<8^P. I didn't
realize that 'keyword' style named parameter passing could be used
with the ordinary named parameters!

>>> def foo(x, y, z): print x, y, z
...
>>> foo(z=3, x=1, y=2)
1 2 3

That definitely is more than mnemonic. So the formal ordered parameter names
are not just internal information; they are part of the interface (which
you can optionally ignore). Really sorry if I misled anyone on that ;-/
> How does that differ from returning a tuple-with-names?

I guess you are referring to those names' also being for optional use, i.e.,
that you can unpack a tuple-with-names the old fashioned way if you want. Hm...
but that might lead to ambiguities if we are to have automatic name-driven unpacking.
I.e.,

c,a,b = tuple_with_names

might be different from

c,a,b = tuple(tuple_with_names)

and

a,b = tuple_with_names

might be a legal extraction of a and b (whatever the order), but

a,b = tuple(tuple_with_names)[1:]

would need the [1:] to avoid an unpacking error.
That makes me wonder whether

a,b,c,etc = someobj

will need an UNPACK_SEQUENCE that looks for a name-lookup
capability in someobj before it looks for an iter capability
to do sequential unpacking. I think probing __getitem__ would
cause problems, since dict is already capable of both __iter__
and __getitem__, and the priority is already defined:

>>> a, b = {'y': 1, 'x': 2}
>>> a, b
('y', 'x')


And since the underlying tuple would have to support __iter__,
I'm not sure how to generate code for name-driven unpacking
without having a __getnamedvalue__ method separate from __getitem__,
and which would have priority over __iter__. And then what do
you do with all those left-hand-side namelists that you mostly
won't need unless the RHS evaluates to an object with a __getnamedvalue__
method? Maybe we need to have an explicit operator for name-driven unpacking, e.g.,

a,b <- someobj

Then plain old __getitem__ could be used on any object that currently supports it,
and the name list would only be compiled into the code for explicit a,b <- something.

Obviously this could take care of unpacking tuple-with-name objects too.
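Short of new syntax, the effect of `a, b <- someobj` can be approximated today with a small helper; `unpack` is a hypothetical name, and attribute access stands in for the proposed __getnamedvalue__ protocol:

```python
def unpack(obj, names):
    """Hypothetical helper approximating name-driven unpacking:
    fetch each named field from obj, in the caller's chosen order."""
    return tuple(getattr(obj, n) for n in names)

import time

t = time.localtime()
# The caller picks names and order; no positional knowledge needed.
year, hour = unpack(t, ['tm_year', 'tm_hour'])
assert (year, hour) == (t.tm_year, t.tm_hour)
```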
> If the caller doesn't use argument names, you can change argument names
> without breaking the caller.

That makes sense now, but I've been a caller who never used argument names
for the ordered parameters ;-)

> If the caller doesn't use field names in a returned tuple-with-names,
> you can change the field names without breaking the caller.
>
> I don't see any difference.

Nor do I now, except for the above aspect of ambiguity in automated
serial vs. name-driven unpacking.

> Same for a tuple-with-names being returned.

Ok, yes, you can ignore the names. But if you do use names, you live with the
name choices of the function coder (both ways, as I've learned).

> Dicts are another issue. As for tuples-with-names, no way:

But this is ignoring the names. See above re name-driven unpacking ;-)

> a, b, c, d, e, f, g, h, i = time.localtime()
>
> this works, of course, and the caller doesn't have to know which field
> is named tm_mday and which one tm_hour or whatever else, unless the
> caller WANTS to use such names for mnemonic purposes.

Ok, but name-driven unpacking will have to have other-than-plain-assignment
syntax, it now seems to me.

> The situation is as strictly parallel to passing arguments to ordinary
> Python functions as two issues can ever be in programming: foo(a, b, c)
> works but so does foo(bar=b, baz=a, fee=c) if that's the calling style
> the caller prefers for mnemonic/clarity/style reasons.

Can you believe I've gone years without integrating the fact that any
named Python parameter can be passed in name=value form? I've only used
them with ** syntax. Sheesh. Habits can be unnecessarily constraining ;-/

Regards,
Bengt Richter
 
Carlos Ribeiro

> I hate to call them 'keyword' arguments when they aren't (check with the
> keyword module: it will confirm they aren't keywords!-).
>
> How does that differ from returning a tuple-with-names?

There is a small difference. In some instances, it's *necessary* to
know the name of an argument; for example, if you want to provide a
partial argument list (the only other plausible option involves knowing the
default values of the omitted arguments so you can provide all
arguments as positional ones, but that ends up being about the same
as far as coupling is concerned).

On the other hand, when a named tuple is used to return a value, the
caller isn't required to know the name of a field. He can
*always* refer to it positionally. (The difference lies in the fact that
there are no default values in the "return signature", if we may talk
about such a beast.)
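The partial-argument-list point can be illustrated with a small sketch; `connect` is a hypothetical function, not code from the thread:

```python
def connect(host, port=80, timeout=30):
    """Hypothetical function with defaulted parameters."""
    return (host, port, timeout)

# To override only timeout, the caller must know that parameter's name...
assert connect('example.com', timeout=5) == ('example.com', 80, 5)

# ...or else know the default value of every parameter before it,
# which couples the caller to the signature just as tightly:
assert connect('example.com', 80, 5) == ('example.com', 80, 5)
```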
> If the caller doesn't use argument names, you can change argument names
> without breaking the caller.
>
> If the caller doesn't use field names in a returned tuple-with-names,
> you can change the field names without breaking the caller.
>
> I don't see any difference.
>
> Same for a tuple-with-names being returned.
>
> Dicts are another issue. As for tuples-with-names, no way:
>
> a, b, c, d, e, f, g, h, i = time.localtime()
>
> this works, of course, and the caller doesn't have to know which field
> is named tm_mday and which one tm_hour or whatever else, unless the
> caller WANTS to use such names for mnemonic purposes.
>
> The situation is as strictly parallel to passing arguments to ordinary
> Python functions as two issues can ever be in programming: foo(a, b, c)
> works but so does foo(bar=b, baz=a, fee=c) if that's the calling style
> the caller prefers for mnemonic/clarity/style reasons.

As I pointed out above, it's not *strictly* parallel. However, I
concede that knowledge of the positional parameters also introduces
a great deal of coupling, more than I assumed at first.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: (e-mail address removed)
mail: (e-mail address removed)
 
Hung Jung Lu

Steven Bethard said:
> Even at 3 lines, do you really want to rewrite those every time you need
> this functionality?

I have written the one-liner "class Generic: pass" all too many times.
:)

Generic objects can be used to represent hierarchical data in tree
fashion. As I said, generic objects are also good for
pickling/serialization. We are talking about communication between
multiple applications, across time and/or space. Other representations
include dictionaries or XML. But you can tell that when it comes to
compactness and ease of use, generic objects are the way to go. In
fact, most common hierarchical data structures only need: (1) the
generic object, (2) list, (3) basic types (numbers and strings.)

It seems odd that there is no standard generic object in Python.

Actually, if one thinks outside Python, in prototypish languages,
generic objects are more fundamental. They are the building block of
everything else. You don't build your 3-liner generic object from
something else; on the contrary, you build everything else from
your generic object.

The syntax "object(year=2004, month=11, day=18)" certainly is nice. I
wouldn't be surprised if, somewhere, someone has already written a
programming language that uses this syntax for its fundamental
generic object.
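The one-liner under discussion, fleshed out just enough to take keyword arguments, is a sketch of the familiar "bunch" pattern (the name `Generic` follows the thread's usage):

```python
class Generic(object):
    """Minimal generic object: attributes come from keyword arguments."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

d = Generic(year=2004, month=11, day=18)
assert (d.year, d.month, d.day) == (2004, 11, 18)

# Hierarchical data nests naturally, as described above.
cfg = Generic(server=Generic(host='localhost', port=8080))
assert cfg.server.port == 8080
```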
> On the other hand, I usually find that in the few places where I have
> used a record like this, I eventually replace the struct with a real
> class...

This is true for single programs. It's true for function arguments or
outputs. In those places, generic objects are good as the quick and
easy way of using hierarchical data structure without the need of
formally defining a class. Once things deserve to be replaced by real
classes, they are replaced.

This is not true for pickling/serialization purpose. When you have
pickled data, you don't want to have to search for the definition of
the classes, which you or someone else may have written years ago. You
want to be able to unpickle/unserialize your data and use it, without
the class definition at hand. Yes, dictionaries can be used, but then you:

(a) either use mydata['a']['b']['c']['d'] instead of mydata.a.b.c.d,
or

(b) have a class to convert dictionary-based back to object-based
(hence we come back to the problem: where is that code file of the
class that some guy wrote 7 years ago?)

If I have avoided anything more complicated than "class Generic:
pass", it's because this is a one-liner that I can remember how to
type any time. :) Now, if the language provides something standard,
even this trick won't be necessary. From all I can see, "class
Generic: pass" will stay the preferred choice for many people, for
a long time to come. Simplicity counts.

Hung Jung
 
graham__fawcett

Oops, let's try that again:

class namedtuplewrapper(tuple):
    """
    Wraps an existing tuple, providing names.
    """

    _names_ = []

    def __getattr__(self, name):
        try:
            x = self._names_.index(name)
        except ValueError:
            raise AttributeError, 'no such field: %s' % name
        return self[x]


class namedtuple(namedtuplewrapper):
    """
    Sugar for a class that constructs named tuples from
    positional arguments.
    """

    def __new__(cls, *args):
        return tuple.__new__(cls, args)


if __name__ == '__main__':

    # namedtuple example

    class foo(namedtuple):
        _names_ = ['one', 'two', 'three', 'four', 'five']

    f = foo(1, 2, 3, 4, 5)
    assert f.one + f.four == f.five


    # wrapper example

    class loctime(namedtuplewrapper):
        _names_ = [
            'year', 'month', 'day',
            'hour', 'min', 'sec',
            'wday', 'yday', 'isdst'
        ]

    import time
    print time.localtime()
    loc = loctime(time.localtime())
    print loc.year, loc.month, loc.day

    # arbitrary naming...
    loc._names_ = ['a', 'b', 'c']
    print loc.a

-- Graham
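As an aside not available to the thread's participants: Python 2.6 later added `collections.namedtuple` to the standard library, which covers the same ground as the classes above, positional construction and unpacking plus optional access by field name:

```python
from collections import namedtuple

# Standard-library equivalent of the hand-rolled classes above.
Foo = namedtuple('Foo', ['one', 'two', 'three', 'four', 'five'])
f = Foo(1, 2, 3, 4, 5)

assert f.one + f.four == f.five  # access by name
a, b, c, d, e = f                # plain positional unpacking still works
assert (a, e) == (1, 5)
```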
 
Steven Bethard

[snip a bunch of good arguments for including generic objects]

It does sound like there's some support for putting something like this
into the language. My feeling is that the right place to start would be
to put such an object into the collections module. (If necessary, it
could get moved to builtins later.)

Is this something a PEP should be written for?

Steve
 
Jeff Shannon

Steven said:
> [snip a bunch of good arguments for including generic objects]
>
> It does sound like there's some support for putting something like
> this into the language. My feeling is that the right place to start
> would be to put such an object into the collections module. (If
> necessary, it could get moved to builtins later.)
>
> Is this something a PEP should be written for?


It would need a PEP before it could have a chance of being included in
the standard lib. And if it gets rejected, at least you'd then have a
record of some rationale for *not* having a standard Generic/Bunch class.

So, yeah, if you want this to be anything other than an ephemeral Usenet
conversation, a PEP would be the next step, I think. :)

Jeff Shannon
Technician/Programmer
Credit International
 
Alex Martelli

Bengt Richter said:
> ...
> TypeError: foo() got an unexpected keyword argument 'z'

Yeah, it's official Python terminology, just hateful.

> OTOH, I have been arguing from an untested assumption *<8^P. I didn't
> realize that 'keyword' style named parameter passing could be used
> with the ordinary named parameters!

Ah, OK, I hadn't realized that misapprehension on your part.

> I guess you are referring to those names' also being for optional use,
> i.e., that you can unpack a tuple-with-names the old fashioned way if you
> want. Hm...

Yes, I took that for granted.

> but that might lead to ambiguities if we are to have automatic name-driven
> unpacking. I.e.,
>
> c,a,b = tuple_with_names
>
> might be different from
>
> c,a,b = tuple(tuple_with_names)

It had better not be, otherwise the tuples-with-names returned by
standard library modules such as time, os, resource, would behave
differently from these new tuple_with_names. IOW, we don't want
name-driven unpacking from these tuples with names -- at least, I most
assuredly don't. Are these tuple_with_names not iterables?! Then they
must be unpackable this way like ANY other iterable:
a c b

If you're talking about backwards-incompatible changes (Python 3000) it
might be good to separate that thread from one in which I was participating
because of potential interest for Python 2.5. As long as we're not
considering backwards-incompatible changes, it's unthinkable to have
iterables that can't be unpacked, even though the results might not be
what you expect.

> Maybe we need to have an explicit operator for name-driven
> unpacking, e.g.,

Probably some kind of distinguished syntax, though a new
assignment operator seems way overkill for the tiny benefit.



Alex
 
