A critique of Guido's blog post on Python's lambda


Alexander Schmolck

[trimmed groups]

Ken Tilton said:
yes, but do not feel bad, everyone gets confused by the /analogy/ to
spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
period I swore off the analogy because it was so invariably misunderstood.
Even Graham misunderstood it.

Count me in.
Very roughly speaking, that is supposed to be the code, not the output. So you
would start with (just guessing at the Python, it has been years since I did
half a port to Python):


v1 = one
a = determined_by(negate(sin(pi/2)+v1))
b = determined_by(negate(a)*10)
print(a) -> -2.0 ;; this and the next are easy
print(b) -> 20
v1 = two ;; fun part starts here
print(b) -> 40 ;; of course a got updated, too

do you mean 30?

I've translated my interpretation of the above to this actual python code:

from math import sin, pi
v1 = cell(lambda: 1)
a = cell(lambda:-(sin(pi/2)+v1.val), dependsOn=[v1])
b = cell(lambda: -a.val*10, dependsOn=[a],
         onChange=lambda *args: printChangeBlurp(name='b', *args))
print 'v1 is', v1
print 'a is', a # -2.0 ;; this and the next are easy
print 'b is', b # 20
v1.val = 2 # ;; fun part starts here
print 'v1 now is', v1
print 'b now is', b # 30 ;; of course a got updated, too


I get the following printout:

v1 is 1
a is -2.0
b is [cell 'b' changed from <__main__.unbound object at 0xb4e2472c> to 20.0,
it was not bound]20.0
[cell 'b' changed from 20.0 to 30.0, it was bound ] v1 now is 2
b now is 30.0

Does that seem vaguely right?
The other thing we want is (really inventing syntax here):

on_change(a, new, old, old-bound?) print(list(new, old, old-bound?))

Is the above what you want (you can also dynamically assign onChange later
on, as required or have a list of procedures instead)?
Then the print statements Just Happen. I.e., it is not as if we are just hiding
computed variables behind syntax and computations get kicked off when a value
is read. Instead, an underlying engine propagates any assignment throughout
the dependency graph before the assignment returns.

Updating on write rather than recalculating on read does in itself not seem
particularly complicated.
My Cells hack does the above, not with global variables, but with slots (data
members?) of instances in the CL object system. I have thought about doing it
with global variables such as a and b above, but never really seen much of
need, maybe because I like OO and can always think of a class to create of
which the value should be just one attribute.

OK, so in what way does the quick 35 line hack below also completely miss your
point?


# (NB. for lispers: 'is' == EQ; '==' is sort of like EQUAL)

def printChangeBlurp(someCell, oldVal, newVal, bound, name=''):
    print '[cell %r changed from %r to %r, it was %s]' % (
        name, oldVal, newVal, ['not bound', 'bound '][bound]),

_unbound = type('unbound', (), {})()  # just a unique dummy value

def updateDependents(dependents):
    seen = {}
    for dependent in dependents:
        if dependent not in seen:
            seen[dependent] = True
            dependent.recalculate()
            updateDependents(dependent._dependents)

class cell(object):
    def __init__(self, formula, dependsOn=(), onChange=None):
        self.formula = formula
        self.dependencies = dependsOn
        self.onChange = onChange
        self._val = _unbound
        self._dependents = []
        for dependency in self.dependencies:
            if self not in dependency._dependents:
                dependency._dependents.append(self)
    def __str__(self):
        return str(self.val)
    def recalculate(self):
        newVal = self.formula()
        if self.onChange is not None:
            oldVal = self._val
            self.onChange(self, oldVal, newVal, oldVal is not _unbound)
        self._val = newVal
    def getVal(self):
        if self._val is _unbound:
            self.recalculate()
        return self._val
    def setVal(self, value):
        self._val = value
        updateDependents(self._dependents)
    val = property(getVal, setVal)
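(For readers reproducing this today: the hack above is Python 2. A minimal Python 3 rendition of the same idea -- explicit dependencies, update-on-write -- might look like this; it is a sketch of this thread's toy cell, not of Ken's actual Cells library.)

```python
from math import sin, pi

_unbound = object()  # unique sentinel meaning "no value computed yet"

def update_dependents(dependents):
    # depth-first propagation: recompute each dependent, then its dependents
    for dependent in dependents:
        dependent.recalculate()
        update_dependents(dependent._dependents)

class Cell:
    def __init__(self, formula, depends_on=()):
        self.formula = formula
        self._val = _unbound
        self._dependents = []
        for dependency in depends_on:
            dependency._dependents.append(self)

    def recalculate(self):
        self._val = self.formula()

    @property
    def val(self):
        if self._val is _unbound:
            self.recalculate()  # lazy on first read
        return self._val

    @val.setter
    def val(self, value):
        self._val = value
        update_dependents(self._dependents)  # eager propagation on write

v1 = Cell(lambda: 1)
a = Cell(lambda: -(sin(pi / 2) + v1.val), depends_on=[v1])
b = Cell(lambda: -a.val * 10, depends_on=[a])
print(a.val, b.val)   # -2.0 20.0
v1.val = 2            # the fun part: b updates before the assignment returns
print(a.val, b.val)   # -3.0 30.0
```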



'as
 

Bill Atkins

Alexander Schmolck said:
[full post quoted; trimmed -- see above]

Here's how one of the cells examples might look in corrupted Python
(this is definitely not executable):

class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_position = cell_initial_value( 100 ) )
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, -9.8

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -627.2

Make sense? The idea is to declare what a slot's value represents
(with code) and then to stop worrying about keeping different things
synchronized.

Here's another of the examples, also translated into my horrific
rendition of Python (forgive me):

class Menu:
    def __init__(self):
        define_slot( 'enabled',
                     lambda: focused_object( self ).__class__ == TextEntry and
                             focused_object( self ).selection )

Now whenever the enabled slot is accessed, it will be calculated based
on what object has the focus. Again, it frees the programmer from
having to keep these different dependencies updated.
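(In today's Python the same declarative slot can be approximated with a property that recomputes on every access. `focused_object` and `TextEntry` are the hypothetical names from the example above, stubbed here so the sketch runs on its own.)

```python
class TextEntry:
    def __init__(self, selection):
        self.selection = selection

_focus = None  # stand-in for the toolkit's focus tracking

def focused_object(widget):
    # hypothetical helper: returns whatever object currently has focus
    return _focus

class Menu:
    @property
    def enabled(self):
        # recomputed on each access, so it always reflects current focus
        obj = focused_object(self)
        return isinstance(obj, TextEntry) and bool(obj.selection)

menu = Menu()
print(menu.enabled)   # False: nothing has focus yet
_focus = TextEntry(selection="some text")
print(menu.enabled)   # True: a TextEntry with a selection is focused
```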
 

Bill Atkins

Bill Atkins said:
Alexander Schmolck said:
[trimmed groups]

Ken Tilton said:
yes, but do not feel bad, everyone gets confused by the /analogy/ to
spreadsheets into thinking Cells /is/ a spreadsheet. In fact, for a brief
period I swore off the analogy because it was so invariably misunderstood.
Even Graham misunderstood it.

Count me in.
But it is such a great analogy! <sigh>

but what's the big deal about PyCells?
Here is a 22-line barebones implementation of a spreadsheet in Python;
later I create two cells "a" and "b", where "b" depends on "a", and evaluate
all the cells. The output is
a = negate(sin(pi/2)+one) = -2.0

b = negate(a)*10 = 20.0

[earlier exchange quoted in full; trimmed -- see above]

[FallingRock and Menu examples quoted in full; trimmed -- see above]

Oh dear, there were a few typos:

class FallingRock:
    def __init__(self, pos):
        define_slot( 'velocity', lambda: self.accel * self.elapsed )
        define_slot( 'pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                     initial_value = cell_initial_value( 100 ) )
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, 90.2

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -527.2
 

Alex Martelli

I agree and Python supports this. What is interesting is how
counter-intuitive many programmers find this. For example, one of my

Funny: I have taught/mentored a large number of people in Python, people
coming from all different levels along the axis of "previous knowledge
of programming in general", and closures are not among the issues where
I ever noticed large numbers of people having problems.
So I try to use this sort of pattern sparingly because many programmers
don't think of closures as a way of saving state. That might be because
it is not possible to do so in most mainstream languages.

I don't normally frame it in terms of "saving" state, but rather of
"keeping some amount of state around" -- which means more or less the
same thing but may perhaps be easier to digest (just trying to see what
could explain the difference between my experience and yours).
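(A tiny illustration of a closure "keeping some amount of state around" -- written with Python 3's nonlocal, which postdates this thread; in the Python of the time one would mutate a one-element list instead:)

```python
def make_counter():
    count = 0
    def counter():
        # 'count' lives on in the enclosing scope between calls
        nonlocal count
        count += 1
        return count
    return counter

c = make_counter()
print(c(), c(), c())  # 1 2 3

d = make_counter()    # each closure carries its own independent state
print(d())            # 1
```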
There are already some people in the Python community who think that
Python has already gone too far in supporting "complex" language
features and now imposes too steep a learning curve, i.e., you now have
to know a lot to be considered a Python expert. And there is a lot of
resistance to adding features that will raise the bar even higher.

I might conditionally underwrite this, myself, but I guess my emphasis
is different from that of the real "paladins" of this thesis (such as
Mark Shuttleworth, who gave us all an earful about this when he
delivered a Keynote at Europython 2004).

I'm all for removing _redundant_ features, but I don't think of many
things on the paladins' hitlist as such -- closures, itertools, genexps,
etc, all look just fine to me (and I have a good track record of
teaching them...). I _would_ love to push (for 3.0) further
simplifications, e.g., I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )
and the proposed
{1, 2, 3}
is an exact synonym of
set((1, 2, 3))
just to focus on a couple of redundant syntax-sugar ideas (one in
today's Python but slated to remain in 3.0, one proposed for 3.0). It's
not really about there being anything deep or complex about this, but
each and every such redundancy _does_ "raise the bar" without any
commensurate return. Ah well.
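(The equivalences Alex is objecting to, checked directly -- the {1, 2, 3} set-literal form he mentions did indeed land in later Pythons:)

```python
container = range(10)
predicate = lambda x: x % 2 == 0

comp = [x for x in container if predicate(x)]          # listcomp spelling
ctor = list(x for x in container if predicate(x))      # constructor spelling
print(comp == ctor)  # True: same result, two syntaxes

print({1, 2, 3} == set((1, 2, 3)))  # True: literal vs constructor
```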


Alex
 

Alex Martelli

It's good that you agree. I think that the ability to add new
productive developers to a project/team/organization is at least part
of what Alex means by "scaleability". I'm sure that he will correct me
if I am wrong.

I agree with your formulation, just not with your spelling of
"scalability";-).
[1] I'm considering introducing bugs or misdesigns that have to be
fixed
as part of training for the purposes of this discussion. Also the

Actually, doing it _deliberately_ (on "training projects" for new people
just coming onboard) might be a good training technique; what you learn
by finding and fixing bugs nicely complements what you learn by studying
"good" example code. I do not know of this technique being widely used
in real-life training, either by firms or universities, but I'd love to
learn about counterexamples.
time needed to learn to coordinate with the rest of the team.

Pair programming can help a lot with this (in any language, I believe)
if the pairing is carefully chosen and rotated for the purpose.


Alex
 

Alex Martelli

Carl Friedrich Bolz said:
...
I have not looked at Cells at all, but what you are saying here sounds
amazingly like Python's properties to me. You specify a function that
calculates the value of an attribute (Python lingo for something like a

You're right that the above-quoted snipped does sound exactly like
Python's properties, but I suspect that's partly because it's a very
concise summary. A property, as such, recomputes the value each and
every time, whether the computation is necessary or not; in other words,
it performs no automatic caching/memoizing.

A more interesting project might therefore be a custom descriptor, one
that's property-like but also deals with "caching wherever that's
possible". This adds interesting layers of complexity, some of them not
too hard (auto-detecting dependencies by introspection), others really
challenging (reliably determining what attributes have changed since
last recomputation of a property). Intuition tells me that the latter
problem is equivalent to the Halting Problem -- if somewhere I "see" a
call to self.foo.zap(), even if I can reliably determine the leafmost
type of self.foo, I'm still left with the issue of analyzing the code
for method zap to find out if it changes self.foo on this occasion, or
not -- there being no constraint on that code, this may be too hard.

The practical problem of detecting alterations may be softened by
realizing that some false positives are probably OK -- if I know that
self.foo.zap() *MAY* alter self.foo, I might make my life simpler by
assuming that it *HAS* altered it. This will cause some recomputations
of property-like descriptors' values that might theoretically have been
avoided, "ah well", not a killer issue. Perhaps a more constructive
approach would be: start by assuming the pseudoproperty always
recomputes, like a real property would; then move toward avoiding SOME
needless recomputations when you can PROVE they're needless. You'll
never avoid ALL needless recomputations, but if you avoid enough of them
to pay for the needed introspection and analysis, it may still be a win.
As to whether it's enough of a real-world win to warrant the project, I
pass -- in a research setting it would surely be a worthwhile study, in
a production setting there are other optimizations that look like
lower-hanging fruits to me. But, I'm sure the Cells people will be back
with further illustrations of the power of their approach, beyond mere
"properties with _some_ automatic-caching abilities".
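(A minimal sketch of the "property-like descriptor with caching" Alex describes. Since he argues that reliably detecting what changed is the hard, possibly-undecidable part, this sketch sidesteps it with explicit invalidation; all names here are invented for illustration.)

```python
class CachedProperty:
    """Property-like descriptor that computes once and caches the result.

    Automatic dependency tracking is the hard part Alex identifies, so
    this sketch offers manual invalidation instead.
    """
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        cache = obj.__dict__.setdefault('_cp_cache', {})
        if self.name not in cache:
            cache[self.name] = self.func(obj)  # compute only on a cache miss
        return cache[self.name]

def invalidate(obj, name):
    # the caller must say what changed -- no introspection attempted
    obj.__dict__.get('_cp_cache', {}).pop(name, None)

class Circle:
    def __init__(self, r):
        self.r = r
        self.computations = 0

    @CachedProperty
    def area(self):
        self.computations += 1
        return 3.14159 * self.r ** 2

c = Circle(2)
print(c.area, c.area)   # computed once, served from cache the second time
print(c.computations)   # 1
c.r = 3
invalidate(c, 'area')   # explicit: "area's inputs changed"
print(c.area)           # recomputed with the new radius
```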


Alex
 

Alex Martelli

Tomasz Zielonka said:
I was a bit unclear. I didn't mean constants (I agree with you on
magic numbers), but results of computations, for example

Ah, good that we agree on _some_thing;-)
(x * 2) + (y * 3)

Here (x * 2), (y * 3) and (x * 2) + (y * 3) are anonymous numbers ;-)

Would you like if you were forced to write it this way:

a = x * 2
b = y * 3
c = a + b

?

Thanks for your answers to my questions.

I do not think there would be added value in having to name every
intermediate result (as opposed to the starting "constants", about which
we agree); it just "spreads things out". Fortunately, Python imposes no
such constraints on any type -- once you've written out the starting
"constants" (be they functions, numbers, classes, whatever), which may
require naming (either language-enforced, or just by good style),
instances of each type can be treated in perfectly analogous ways (e.g.,
calling callables that operate on them and return other instances) with
no need to name the intermediate results.

The Function type, by design choice, does not support any overloaded
operators, so the analogy of your example above (if x and y were
functions) would be using named higher-order-functions (or other
callables, of course), e.g.:

add_funcs( times_num(x, 2), times_num(y, 3) )

whatever HOF's add and times were doing, e.g.

def add_funcs(*fs):
    def result(*a):
        return sum(f(*a) for f in fs)
    return result

def times_num(f, k):
    def result(*a):
        return k * f(*a)
    return result

or, add polymorphism to taste, if you want to be able to use (e.g.) the
same named HOF to add a mix of functions and constants -- a side issue
that's quite separate from having or not having a name, but rather
connected with how wise it is to overload a single name for many
purposes (PEAK implements generic-functions and multimethods, and it or
something like it is scheduled for addition to Python 3.0; Python 2.*
has no built-in way to add such arbitrary overloads, and multi-dispatch
in particular, so you need to add a framework such as PEAK for that).
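(For concreteness, the HOFs above applied to two sample functions -- Alex's definitions are repeated here, with invented x and y, so the snippet runs on its own:)

```python
def add_funcs(*fs):
    def result(*a):
        return sum(f(*a) for f in fs)
    return result

def times_num(f, k):
    def result(*a):
        return k * f(*a)
    return result

# sample "x" and "y" as functions of one argument
x = lambda t: t        # identity
y = lambda t: t + 1

# the function-level analogue of (x * 2) + (y * 3)
h = add_funcs(times_num(x, 2), times_num(y, 3))
print(h(5))  # 2*5 + 3*6 = 28
```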


Alex
 

Alex Martelli

I V said:
Monads are one of those parts of functional programming I've never really
got my head around, but as I understand them, they're a way of
transforming what looks like a sequence of imperative programming
statements that operate on a global state into a sequence of function
calls that pass the state between them.

Looks like a fair enough summary to me (but, I'm also shaky on monads,
so we might want confirmation from somebody who isn't;-).
So, what would be a statement in an imperative language is an anonymous
function that gets added to the monad, and then, when the monad is run,
these functions get executed. The point being, that you have a lot of
small functions (one for each statement) which are likely not to be used
anywhere else, so defining them as named functions would be a bit of a
pain in the arse.

It seems to me that the difference between, say, a hypothetical:

monad.add( lambda state:
    temp = zipper(state.widget, state.zrup)
    return state.alteredcopy(widget=temp)
)

and the you-can-use-it-right now alternative:

def zipperize_widget(state):
    temp = zipper(state.widget, state.zrup)
    return state.alteredcopy(widget=temp)
monad.add(zipperize_widget)

is trivial to the point of evanescence. Worst case, you name all your
functions Beverly so you don't have to think about the naming; but you
also have a chance to use meaningful names (such as, presumably,
zipperize_widget is supposed to be here) to help the reader.

IOW, monads appear to me to behave just about like any other kind of
HOFs (for a suitably lax interpretation of that "F") regarding the issue
of named vs unnamed functions -- i.e., just about like the difference
between:

def double(f):
    return lambda *a: 2 * f(*a)

and

def double(f):
    def doubled(*a): return 2 * f(*a)
    return doubled

I have no real problem using the second form (with a name), and just
don't see it as important enough to warrant adding to the language (a
language that's designed to be *small*, and *simple*, so each addition
is to be seen as a *cost*) a whole new syntax form, 'lambda'.

((The "but you really want macros" debate is a separate one, which has
been held many times [mostly on comp.lang.python] and I'd rather not
repeat at this time, focusing instead on named vs unnamed...))


Alex
 

Alex Martelli

Frank Buss said:
This is true, but with lambda it is easier to read:

http://www.frank-buss.de/lisp/functional.html
http://www.frank-buss.de/lisp/texture.html

Would be interesting to see how this would look like in Python or some of
the other languages to which this troll thread was posted :)

Sorry, but I just don't see what lambda is buying you here. Taking just
one simple example from the first page you quote, you have:

(defun blank ()
  "a blank picture"
  (lambda (a b c)
    (declare (ignore a b c))
    '()))

which in Python would be:

def blank():
    " a blank picture "
    return lambda a, b, c: []

while a named-function variant might be:

def blank():
    def blank_picture(a, b, c): return []
    return blank_picture

Where's the beef, really? I find the named-function variant somewhat
more readable than the lambda-based variant, but even if your
preferences are the opposite, this is really such a tiny difference that
I can't see why so many bits should get wasted debating it (perhaps
it's one of Parkinson's Laws at work...).


Alex
 

Alex Martelli

1. They don't add anything new to the language semantically i.e. you
can always used a named function to accomplish the same task
as an unnamed one.
2. Giving a function a name acts as documentation (and a named
function is more likely to be explicitly documented than an unnamed
one). This argument is pragmatic rather than theoretical.
3. It adds another construction to the language.

Creating *FUNCTIONS* on the fly is a very significant benefit, nobody on
the thread is disputing this, and nobody ever wanted to take that
feature away from Python -- it's the obsessive focus on the functions
needing to be *unnamed* ones, that's basically all the debate. I wonder
whether all debaters on the "unnamed is a MUST" side fully realize that
a Python's def statement creates a function on the fly, just as much as
a lambda form does. Or maybe the debate is really about the distinction
between statement and expression: Python does choose to draw that
distinction, and while one could certainly argue that a language might
be better without it, the distinction is deep enough that nothing really
interesting (IMHO) is to be gleaned by the debate, except perhaps as
pointers for designers of future languages (and there are enough
programming languages that I personally see designing yet more of them
as one of the least important tasks facing the programming community;-).


Alex
 

Paul Rubin

I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )

Heh, I hate it that it's NOT an exact synonym (the listcomp leaves 'x'
polluting the namespace and clobbers any pre-existing 'x', but the
gencomp makes a new temporary scope).
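(The leak Paul describes was real in Python 2 and was indeed removed in 3.0, where a list comprehension gets its own scope just like a genexp; a quick check in today's Python:)

```python
x = 'outer'
squares = [x * x for x in range(5)]   # comprehension scope in Python 3
print(x)  # outer -- not clobbered, unlike Python 2

gen = list(x * x for x in range(5))
print(squares == gen)  # True: the two spellings now really are synonyms
```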
and the proposed
{1, 2, 3}
is an exact synonym of
set((1, 2, 3))

There's one advantage that I can think of for the existing (and
proposed) list/dict/set literals, which is that they are literals and
can be treated as such by the parser. Remember a while back that we
had a discussion of reading expressions like
{'foo': (1,2,3),
'bar': 'file.txt'}
from configuration files without using (unsafe) eval. Aside from that
I like the idea of using constructor functions instead of special syntax.
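(The safe, non-eval parsing of literal expressions Paul wants did later land in the stdlib as `ast.literal_eval`, added in Python 2.6, after this thread:)

```python
import ast

config_text = "{'foo': (1, 2, 3), 'bar': 'file.txt'}"
config = ast.literal_eval(config_text)  # parses literals only, never runs code
print(config['bar'])  # file.txt

# anything beyond plain literals is rejected rather than executed
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except (ValueError, SyntaxError):
    print("rejected")
```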
 

Frank Buss

Alex said:
Sorry, but I just don't see what lambda is buying you here. Taking just
one simple example from the first page you quote, you have:

(defun blank ()
  "a blank picture"
  (lambda (a b c)
    (declare (ignore a b c))
    '()))

You are right, for this example it is not useful. But I assume you need
something like lambda for closures, e.g. from the page
http://www.frank-buss.de/lisp/texture.html :

(defun black-white (&key function limit)
  (lambda (x y)
    (if (> (funcall function x y) limit)
        1.0
        0.0)))

This function returns a new function, which is parametrized with the
supplied arguments and can be used later as a building block for other
functions, and itself wraps input functions. I don't know Python well
enough; maybe closures are possible with local named function definitions,
too.
 

Carl Friedrich Bolz

Bill Atkins wrote:
[snip]
Oh dear, there were a few typos:

[Bill's corrected FallingRock example quoted in full; trimmed -- see above]

you mean something like this? (and yes, this is executable python):


class FallingRock(object):
    def __init__(self, startpos):
        self.startpos = startpos
        self.elapsed = 0
        self.accel = -9.8

    velocity = property(lambda self: self.accel * self.elapsed)
    pos = property(lambda self: self.startpos + self.accel *
                                (self.elapsed ** 2) / 2)

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, 95.1

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -213.6


Cheers,

Carl Friedrich Bolz
 

Alex Martelli

Paul Rubin said:
I do hate it that
[ x for x in container if predicate(x) ]
is an exact synonym of the more legible
list( x for x in container if predicate(x) )

Heh, I hate it that it's NOT an exact synonym (the listcomp leaves 'x'
polluting the namespace and clobbers any pre-existing 'x', but the
gencomp makes a new temporary scope).

Yeah, that's gonna be fixed in 3.0 (can't be fixed before, as it would
break backwards compatibility) -- then we'll have useless synonyms.

There's one advantage that I can think of for the existing (and
proposed) list/dict/set literals, which is that they are literals and
can be treated as such by the parser. Remember a while back that we
had a discussion of reading expressions like
{'foo': (1,2,3),
'bar': 'file.txt'}
from configuration files without using (unsafe) eval. Aside from that

And as I recall I showed how to make a safe-eval -- that could easily be
built into 3.0, btw (including special treatment for builtin names of
types that are safe to construct). I'd be all in favor of specialcasing
such names in the parser, too, but that's a harder sell.
I like the idea of using constructor functions instead of special syntax.

Problem is how to make _GvR_ like it too;-)


Alex
 

Alex Martelli

Frank Buss said:
You are right, for this example it is not useful. But I assume you need
something like lambda for closures, e.g. from the page

Wrong and unfounded assumption.
http://www.frank-buss.de/lisp/texture.html :

(defun black-white (&key function limit)
  (lambda (x y)
    (if (> (funcall function x y) limit)
        1.0
        0.0)))

This function returns a new function, which is parametrized with the
supplied arguments and can be used later as a building block for other
functions, and itself wraps input functions. I don't know Python well
enough; maybe closures are possible with local named function definitions,
too.

They sure are, I gave many examples already all over the thread. There
are *NO* semantic advantages for named vs unnamed functions in Python.

Not sure what the &key means here, but omitting that

def black_white(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result
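Since `&key` corresponds roughly to Python keyword arguments, the call can pass them by name as well. A small sketch of the closure in use; the `sin`/`cos` threshold function here is invented purely for illustration:

```python
import math

def black_white(function, limit):
    # result closes over function and limit, like the Lisp lambda
    def result(x, y):
        return 1.0 if function(x, y) > limit else 0.0
    return result

# Keyword arguments mirror the Lisp &key call style:
bw = black_white(function=lambda x, y: math.sin(x) * math.cos(y), limit=0.5)
print(bw(1.0, 0.2))  # sin(1.0)*cos(0.2) is about 0.82 > 0.5, so 1.0
print(bw(0.1, 0.2))  # sin(0.1)*cos(0.2) is about 0.10 < 0.5, so 0.0
```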


Alex
 

olsongt

Alex said:
I think "ridiculous" is a better characterization than "curious", even
if you're seriously into understatement.

When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.
 

Patrick May

Sure, but it won't necessarily be as expressive or as convenient.

Using lambda in an expression communicates the fact that it will
be used only in the scope of that expression. Another benefit is that
declaration at the point of use means that all necessary context is
available without having to look elsewhere. Those are two pragmatic
benefits.

That's a very minimal cost relative to the benefits.

You haven't made your case for named functions being preferable.

Regards,

Patrick
 

Frank Buss

Alex said:
Not sure what the &key means here, but omitting that

def black_white(function, limit):
    def result(x, y):
        if function(x, y) > limit: return 1.0
        else: return 0.0
    return result

&key is something like keyword arguments in Python. And it looks like you are
right again (I've tested it in Python) and my assumption was wrong, so the
important thing is to support closures, which Python does, even with local
function definitions.
 

Bill Atkins

When you consider that there was just a big flamewar on comp.lang.lisp
about the lack of standard mechanisms for both threading and sockets in
Common Lisp (with the lispers arguing that it wasn't needed) I find it
"curious" that someone can say Common Lisp scales well.

It's not all that curious. Every Common Lisp implementation supports
sockets, and most support threads. The "flamewar" was about whether
these mechanisms should be (or could be) standardized across all
implementations. It has little to do with CL's ability to scale well.
You simply use the socket and thread API provided by your
implementation; if you need to move to another, you write a thin
compatibility layer. In Python, since there is no standard and only
one implementation that counts, you write code for that implementation
the same way you write for the socket and thread API provided by your
Lisp implementation.

I still dislike the phrase "scales well," but I don't see how
differences in socket and thread API's across implementations can be
interpreted as causing Lisp to "scale badly." Can you elaborate on
what you mean?
 

Alexander Schmolck

Bill Atkins said:
Here's how one of the cells examples might look in corrupted Python
(this is definitely not executable):

class FallingRock:
    def __init__(self, pos):
        define_slot('velocity', lambda: self.accel * self.elapsed)
        define_slot('pos', lambda: self.accel * (self.elapsed ** 2) / 2,
                    initial_position=cell_initial_value(100))
        self.accel = -9.8

rock = FallingRock(100)
print rock.accel, rock.velocity, rock.pos
# -9.8, 0, 100

rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# -9.8, -9.8, -9.8

rock.elapsed = 8
print rock.accel, rock.velocity, rock.pos
# -9.8, -78.4, -627.2

Make sense?

No, not at all.

Why do you pass a ``pos`` parameter to the constructor you never use? Did you
mean to write ``cell_initial_value(pos)``?

Why is elapsed never initialized? Is the dependency computation only meant to
start once elapsed is bound? But where does the value '0' for velocity come
from then? Why would it make sense to have ``pos`` initially be completely
independent of everything else but then suddenly reset to something which is
in accordance with the other parameters?

What happens if I add ``rock.pos = -1; print rock.pos ``? Will I get an error?
Will I get -1? Will I get -627.2?

To make this more concrete, here is how I might implement a falling rock:

class FallingRock(object):
    velocity = property(lambda self: self.accel * self.elapsed)
    pos = property(lambda self: 0.5 * self.accel * self.elapsed**2)
    def __init__(self, elapsed=0):
        self.elapsed = elapsed
        self.accel = -9.8

rock = FallingRock()
print rock.accel, rock.velocity, rock.pos
# => -9.8 -0.0 -0.0
rock.elapsed = 1
print rock.accel, rock.velocity, rock.pos
# => -9.8 -9.8 -4.9
rock.elapsed = 9
print rock.accel, rock.velocity, rock.pos
# => -9.8 -88.2 -396.9

How would you like the behaviour to be different from that (and why)?
The idea is to declare what a slot's value represents
(with code) and then to stop worrying about keeping different things
synchronized.

That's what properties (in python) and accessors (in lisp) are for -- if you
compute the slot-values on-demand (i.e. each time a slot is accessed) then you
don't need to worry about stuff getting out of synch.

So far I haven't understood what cells (in its essence) is meant to offer over
properties/accessors apart from a straightforward efficiency hack (instead of
recomputing the slot-values on each slot-access, you recompute them only when
needed, i.e. when one of the other slots on which a slot-value depends has
changed). So what am I missing?
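To make that reading concrete, here is a minimal, hypothetical cell implementation along the lines of the `cell(...)` sketch earlier in the thread. All names here (`Cell`, `InputCell`, `depends_on`, `on_change`) are invented for illustration, and real Cells discovers dependencies automatically rather than taking an explicit list:

```python
from math import sin, pi

class Cell:
    """Sketch of a spreadsheet-like cell: caches its computed value,
    recomputes only after a dependency changes, and fires an optional
    on_change callback when the cached value is replaced."""
    def __init__(self, formula, depends_on=(), on_change=None):
        self.formula = formula
        self.on_change = on_change
        self._cache = None
        self._valid = False
        self._dependents = []
        for dep in depends_on:
            dep._dependents.append(self)

    @property
    def val(self):
        if not self._valid:          # recompute lazily, only when stale
            self._cache = self.formula()
            self._valid = True
        return self._cache

    def _propagate(self):
        old, was_bound = self._cache, self._valid
        self._valid = False
        new = self.val               # eager repropagation on assignment
        if self.on_change is not None:
            self.on_change(new, old, was_bound)
        for dep in self._dependents:
            dep._propagate()

class InputCell(Cell):
    """A cell holding a plain value that can be assigned from outside."""
    def __init__(self, value):
        super().__init__(lambda: self._value)
        self._value = value

    def set(self, value):
        self._value = value
        self._propagate()

# Usage, mirroring the v1/a/b example from earlier in the thread:
v1 = InputCell(1)
a = Cell(lambda: -(sin(pi / 2) + v1.val), depends_on=[v1])
b = Cell(lambda: -a.val * 10, depends_on=[a],
         on_change=lambda new, old, bound: print('b:', old, '->', new))
print(a.val, b.val)  # -2.0 20.0
v1.set(2)            # propagation recomputes a, then b, firing on_change
print(b.val)         # 30.0
```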
Here's another of the examples, also translated into my horrific
rendition of Python (forgive me):

class Menu:
    def __init__(self):
        define_slot('enabled',
                    lambda: focused_object(self).__class__ == TextEntry and

OK, now you've lost me completely. How would you like this to be different in
behaviour from:

class Menu(object):
    enabled = property(lambda self: isinstance(focused_object(self), TextEntry)
                       and focused_object(self).selection)

???
Now whenever the enabled slot is accessed, it will be calculated based
on what object has the focus. Again, it frees the programmer from
having to keep these different dependencies updated.

Again how's that different from the standard property/accessor solution as
above?
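For what it's worth, the property version can be exercised directly once the GUI pieces are stubbed out. `focused_object`, `TextEntry`, and `selection` below are stand-ins invented for this sketch, not any real framework's API:

```python
class TextEntry(object):
    def __init__(self, selection):
        self.selection = selection

class Button(object):
    pass

_focus = None  # stand-in for the GUI framework's focus tracking
def focused_object(widget):
    return _focus

class Menu(object):
    # recomputed on every access, so nothing needs to be kept in sync
    enabled = property(lambda self: isinstance(focused_object(self), TextEntry)
                       and bool(focused_object(self).selection))

menu = Menu()
_focus = TextEntry(selection='some text')
print(menu.enabled)  # True: a TextEntry with a selection has focus
_focus = Button()
print(menu.enabled)  # False: the property sees the new focus immediately
```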

'as
 
