python3: 'where' keyword

  • Thread starter Andrey Tatarinov

Steve Holden

Paul said:
You asked me to compare the notion of macros with the Zen list. I did
so. I didn't see any serious conflict, and reported that finding.
Now you've changed your mind and you say you didn't really want me to
make that comparison after all.
Well I for one disagreed with many of your estimates of the zen's
applicability to macros, but I just couldn't be arsed saying so.
An amazing amount of the headaches that both newbies and experienced
users have with Python, could be solved by macros. That's why there's
been an active interest in macros for quite a while. It's not clear
what the best way to do design them is, but their existence can have a
profound effect on how best to do these ad-hoc syntax extensions like
"where". Arbitrary limitations that are fairly harmless without
macros become a more serious pain in the neck if we have macros.
This is not a justifiable assertion, IMHO, and if you think that newbies
will have their lives made easier by the addition of ad hoc syntax
extensions then you and I come from a different world (and I suspect the
walls might be considerably harder in mine than in yours).
So, we shouldn't consider these topics separately from each other.
They are likely to end up being deeply related.

I don't really understand why, if macros are so great (and you are
reading the words of one who was using macros back in the days of
Strachey's GPM), somebody doesn't produce a really useful set of
(say) M4 macros to prove how much they'd improve Python.

Now that's something that would be a bit less ignorable than this
apparently interminable thread.

regards
Steve

PS: Your continued use of the NOSPAM.invalid domain is becoming much
more irritating than your opinions on macros in Python. Using a bogus
URL is piling crap on top of more crap.
 

Paul Rubin

Steve Holden said:
Well I for one disagreed with many of your estimates of the zen's
applicability to macros, but I just couldn't be arsed saying so.

Well, I was being somewhat flip with them, as I felt Carl was being
snotty in referring me to the Zen list. The point there is that one
can interpret each of the Zen points in many ways regarding macros. I
don't feel there's a conflict between macros and the Zen list. Macros
in Python are a deep subject and gurus have been discussing them for a
long time. I think that with PyPy it will become easier to experiment
with possible approaches. In other posts I've suggested a moratorium
on new Python syntax until after PyPy is deployed.
This is not a justifiable assertion, IMHO, and if you think that
newbies will have their lives made easier by the addition of ad hoc
syntax extensions then you and I come from a different world (and I
suspect the walls might be considerably harder in mine than in yours).

I'm saying that many proposals for ad hoc extensions could instead be
taken care of with macros. Newbies come to clpy all the time asking
how to do assignment expressions, or conditional expressions, or
call-by-reference. Sometimes new syntax results. Lots of times,
macros could take care of it.
I don't really understand why, if macros are so great (and you are
reading the words of one who was using macros back in the days of
Strachey's GPM), somebody doesn't produce a really useful set of
(say) M4 macros to prove how much they'd improve Python.

You can't really do Python macros with something like M4. How would
M4 even generate multi-line output that's indented to the right depth
for the place where the macro was called? How would you write an
m4 macro like cond(x,y,z) that does the equivalent of (x ? y : z)?
Even if you could, it doesn't begin to address the hassle of running
Python scripts through m4 before you can execute the scripts, especially
in an interactive environment.
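
(For what it's worth, here is a minimal sketch of the pure-Python workaround for the cond() case: an ordinary function cannot short-circuit its arguments, so the caller has to delay the branches with lambdas. The cond name and the thunk arguments are invented for illustration.)

# hypothetical sketch: a cond() written as an ordinary function either
# evaluates both branches, or forces callers to delay them with lambdas
def cond(test, then_thunk, else_thunk):
    if test:
        return then_thunk()
    return else_thunk()

print cond(True, lambda: "yes", lambda: 1/0)   # prints "yes"; 1/0 is never evaluated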
 

Nick Coghlan

Nick said:
Semantics
---------
The code::

<statement> with:
    <suite>

translates to::

def unique_name():
    <suite>
    <statement>
unique_name()

Bleh. Not only was my proposed grammar change wrong, my suggested semantics are
wrong, too.

Raise your hand if you can see the problem with applying the above semantics to
the property descriptor example.

So I think the semantics will need to be more along the lines of "pollute the
namespace but mangle the names so they're unique, and the programmer can *act*
like the names are statement local".

This will be much nicer in terms of run-time performance, but getting the
locals() builtin to behave sensibly may be a challenge.

Cheers,
Nick.
 

Andrey Tatarinov

Nick said:
Bleh. Not only was my proposed grammar change wrong, my suggested
semantics are wrong, too.

Raise your hand if you can see the problem with applying the above
semantics to the property descriptor example.

So I think the semantics will need to be more along the lines of
"pollute the namespace but mangle the names so they're unique, and the
programmer can *act* like the names are statement local".

This will be much nicer in terms of run-time performance, but getting
the locals() builtin to behave sensibly may be a challenge.

AFAIR you yourself said that

var = <statement> where:
    <suite>

translates to:

def unique_name():
    <suite>
    return <statement>
var = unique_name()
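
For illustration, a small hypothetical instance of that translation (the names area, side and _unique_name are invented here):

# proposed form:
#     area = side * side where:
#         side = 5
# would behave like:
def _unique_name():
    side = 5
    return side * side
area = _unique_name()
print area    # 25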

in this case the class namespace temporarily gets a unique_name() function? Is that so bad?

anyway I'd prefer a deeper change to the semantics: add a new statement-only
scope and put the suite's definitions there.
 

Nick Coghlan

Nick said:
Bleh. Not only was my proposed grammar change wrong, my suggested
semantics are wrong, too.

Raise your hand if you can see the problem with applying the above
semantics to the property descriptor example.

Eh, never mind. The following works today, so the semantics I proposed are
actually fine. (This is exactly the semantics proposed for the property example)

Py> class C(object):
....     def _x():
....         def get(self):
....             print "Hi!"
....         def set(self, value):
....             print "Hi again!"
....         def delete(self):
....             print "Bye"
....         return property(get, set, delete)
....     x = _x()
....
Py> C.x
<property object at 0x009E6738>
Py> C().x
Hi!
Py> C().x = 1
Hi again!
Py> del C().x
Bye

Cheers,
Nick.
 

Nick Coghlan

Andrey said:
AFAIR you yourself said that

var = <statement> where:
    <suite>

translates to:

def unique_name():
    <suite>
    return <statement>
var = unique_name()

in this case the class namespace temporarily gets a unique_name() function? Is that so bad?

No, I wasn't thinking clearly and saw problems that weren't there.

However, you're right that the semantic definition should include unbinding the
unique name after the statement finishes. E.g. for assignments:

def unique_name():
    <suite>
    return <expr>
var = unique_name()
del unique_name

anyway I'd prefer a deeper change to the semantics: add a new statement-only
scope and put the suite's definitions there.

A new scope essentially *is* a nested function :)

My main purpose with the nested function equivalent is just to make the intended
semantics clear - whether an implementation actually _does_ things that way is
immaterial.

Cheers,
Nick.
 

Nick Coghlan

Andrey said:
I think using the 'with' keyword can cause some ambiguity. For example, I
would surely try to write


and using 'with' at the end of a block brings more ambiguity:


compare to:


a very different semantics with just one word added/deleted.

Except that for a "with <expr>:" block, attributes of the expression must be
preceded by a dot:

Py> stmt1()
Py> stmt2()
Py> with self:
....     .member = stmt3()

The advantages of the 'with' block are that 'self' is only looked up once, and
you only need to type it once. The leading dot is still required to disambiguate
attribute references from standard name references.
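
(For comparison, a minimal sketch of how that effect is usually obtained today: evaluate the expression once and bind it to a short local name. All the names below are invented for illustration.)

class Config(object):
    pass

class Widget(object):
    def __init__(self):
        self.config = Config()
    def setup(self):
        cfg = self.config       # the attribute chain is evaluated only once
        cfg.colour = 'red'      # ...and only typed once per line thereafter
        cfg.size = 'large'

w = Widget()
w.setup()
print w.config.colour, w.config.size   # red large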

Despite that, I think you are right that the ambiguity is greater than I first
thought. Correct code is reasonably easy to distinguish, but in the presence of
errors it is likely to be unclear what was intended, which would make life more
difficult than it needs to be.

However, I still agree with Alex that the dual life of "where" outside of Python
(as an 'additional definitions' clause, as in mathematics, and as a
'conditional' clause, as in SQL), and the varied background of budding
Pythoneers is a cause for concern.

'in' is worth considering, as it is already used by Python at least once for
declaring use of a namespace (in the 'exec' statement). However, I suspect it
would suffer from ambiguity problems similar to those of 'with' (consider
"<expr> in <expr>" and "<expr> in: <expr>"). There's also the fact that the
statement isn't *really* executed in the inner namespace - any name binding
effects are seen in the outer scope, whereas 'exec x in dict' explicitly
protects the containing namespace from alteration.
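
(To illustrate the behaviour referred to, a minimal sketch:)

d = {}
exec "spam = 42" in d        # bindings land in d, not in the current namespace
print d['spam']              # 42
print 'spam' in globals()    # False - the containing namespace is untouched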

So of the four keywords suggested so far ('where', 'with', 'in', 'using'), I'd
currently vote for 'using' with 'where' a fairly close second. My vote goes to
'using' because it has a fairly clear meaning ('execute the statement using this
extra information'), and doesn't have the conflicting external baggage that
'where' does.

Cheers,
Nick.
 

Andrey Tatarinov

So of the four keywords suggested so far ('where', 'with', 'in',
'using'), I'd currently vote for 'using' with 'where' a fairly close
second. My vote goes to 'using' because it has a fairly clear meaning
('execute the statement using this extra information'), and doesn't have
the conflicting external baggage that 'where' does.

I have to agree with you =)

Though I love "with" for historical reasons and out of an addiction to
functional languages, "using" is not that bad and I do not mind using it. =)
 

Jeff Shannon

Paul said:
Steve Holden said:
[...] and if you think that
newbies will have their lives made easier by the addition of ad hoc
syntax extensions then you and I come from a different world (and I
suspect the walls might be considerably harder in mine than in yours).

I'm saying that many proposals for ad hoc extensions could instead be
taken care of with macros. Newbies come to clpy all the time asking
how to do assignment expressions, or conditional expressions, or
call-by-reference. Sometimes new syntax results. Lots of times,
macros could take care of it.

Personally, given the requests in question, I'm extremely thankful
that I don't have to worry about reading Python code that uses them.
I don't *want* people to be able to make up their own
control-structure syntax, because that means I need to be able to
decipher the code of someone who wants to write Visual Basic as
filtered through Java and Perl... If I want mental gymnastics when
reading code, I'd use Lisp (or Forth). (These are both great
languages, and mental gymnastics would probably do me good, but I
wouldn't want it as part of my day-to-day requirements...)

Jeff Shannon
Technician/Programmer
Credit International
 

Antoon Pardon

On 2005-01-11, Jeff Shannon said:
Paul said:
Steve Holden said:
[...] and if you think that
newbies will have their lives made easier by the addition of ad hoc
syntax extensions then you and I come from a different world (and I
suspect the walls might be considerably harder in mine than in yours).

I'm saying that many proposals for ad hoc extensions could instead be
taken care of with macros. Newbies come to clpy all the time asking
how to do assignment expressions, or conditional expressions, or
call-by-reference. Sometimes new syntax results. Lots of times,
macros could take care of it.

Personally, given the requests in question, I'm extremely thankful
that I don't have to worry about reading Python code that uses them.
I don't *want* people to be able to make up their own
control-structure syntax, because that means I need to be able to
decipher the code of someone who wants to write Visual Basic as
filtered through Java and Perl...

No you don't.

You could just as well claim that you don't want people to write
code in other languages because you then would need to be able
to decipher code written in that language.
If I want mental gymnastics when
reading code, I'd use Lisp (or Forth). (These are both great
languages, and mental gymnastics would probably do me good, but I
wouldn't want it as part of my day-to-day requirements...)

Your day-to-day requirements are a contract between you and your
employer or between you and your clients. That you don't want
mental gymnastics as part of that shouldn't be a concern for
how the language develops.
 

Nick Coghlan

Nick said:
Semantics
---------
The code::

<statement> with:
    <suite>

translates to::

def unique_name():
    <suite>
    <statement>
unique_name()

I've come to the conclusion that these semantics aren't what I would expect from
the construct. Exactly what I would expect can't really be expressed in current
Python due to the way local name bindings work. The main thing to consider is
what one would expect the following to print:

def f():
    a = 1
    b = 2
    print 1, locals()
    print 3, locals() using:
        a = 2
        c = 3
        print 2, locals()
    print 4, locals()

I think the least surprising result would be:

1 {'a': 1, 'b': 2} # Outer scope
2 {'a': 2, 'c': 3} # Inner scope
3 {'a': 2, 'b': 2, 'c': 3} # Bridging scope
4 {'a': 1, 'b': 2} # Outer scope

In that arrangement, the statement with a using clause is executed normally in
the outer scope, but with the ability to see additional names in its local
namespace. If this can be arranged, then name binding in the statement with the
using clause will work as we want it to.
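
For what it's worth, the closest approximation of that 'bridging' view I can see in today's Python is an explicit dict merge; the helper names (_setup, _inner, _bridged, all invented here) unavoidably leak into the outer locals(), which is precisely the pollution the proposed syntax is meant to avoid. A rough sketch, not the proposed semantics:

def f():
    a = 1
    b = 2
    print 1, locals()                  # {'a': 1, 'b': 2}
    def _setup():
        a = 2
        c = 3
        print 2, locals()              # {'a': 2, 'c': 3}
        return locals()
    _inner = _setup()
    _bridged = dict(locals(), **_inner)
    print 3, _bridged                  # a=2, b=2, c=3, plus the helper names
    del _setup, _inner, _bridged
    print 4, locals()                  # {'a': 1, 'b': 2}

f()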

Anyway, I think further investigation of the idea is dependent on a closer look
at the feasibility of actually implementing it. Given that it isn't as
compatible with the existing nested scope structure as I first thought, I
suspect it will be both tricky to implement, and hard to sell to the BDFL
afterwards :(

Cheers,
Nick.
 

Jeff Shannon

Nick said:
def f():
    a = 1
    b = 2
    print 1, locals()
    print 3, locals() using:
        a = 2
        c = 3
        print 2, locals()
    print 4, locals()

I think the least surprising result would be:

1 {'a': 1, 'b': 2} # Outer scope
2 {'a': 2, 'c': 3} # Inner scope
3 {'a': 2, 'b': 2, 'c': 3} # Bridging scope
4 {'a': 1, 'b': 2} # Outer scope

Personally, I think that the fact that the bridging statement is
executed *after* the inner code block guarantees that results will be
surprising. The fact that it effectively introduces *two* new scopes
just makes matters worse.

It also seems to me that one could do this using a nested function def
with about the same results. You wouldn't have a bridging scope with
both sets of names as locals, but your nested function would have
access to the outer namespace via normal nested scopes, so I'm really
not seeing what the gain is...

(Then again, I haven't been following the whole using/where thread,
because I don't have that much free time and the initial postings
failed to convince me that there was any real point...)

Jeff Shannon
Technician/Programmer
Credit International
 

Andrey Tatarinov

Nick said:
I've come to the conclusion that these semantics aren't what I would
expect from the construct. Exactly what I would expect can't really be
expressed in current Python due to the way local name bindings work. The
main thing to consider is what one would expect the following to print:

def f():
    a = 1
    b = 2
    print 1, locals()
    print 3, locals() using:
        a = 2
        c = 3
        print 2, locals()
    print 4, locals()

I think the least surprising result would be:

1 {'a': 1, 'b': 2} # Outer scope
2 {'a': 2, 'c': 3} # Inner scope
3 {'a': 2, 'b': 2, 'c': 3} # Bridging scope
4 {'a': 1, 'b': 2} # Outer scope

as for me, I would expect the following:

1 {'a': 1, 'b': 2}
2 {'a': 2, 'b': 2, 'c': 3}
3 {'a': 2, 'b': 2, 'c': 3}
4 {'a': 1, 'b': 2}

otherwise it would be impossible to do calculations based on variables from
the enclosing scope and "using:" would be useless =). Consider this example of usage:

current_position = 1
current_environment # = ...
current_a_lot_of_other_parameters # = ...
scores = [count_score(move) for move in available_moves] using:
    def count_score(move):
        # walking through current_environment
        return score
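
(For comparison, roughly how the same thing is written today without new syntax: the helper is defined before it is used and reads the surrounding names through the normal nested-scope rules. The names below are invented stand-ins for the ones in the example.)

def best_scores(current_environment, available_moves):
    def count_score(move):
        # reads current_environment from the enclosing scope
        return current_environment.get(move, 0)
    return [count_score(move) for move in available_moves]

print best_scores({'a': 3, 'b': 1}, ['a', 'b', 'c'])   # [3, 1, 0]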
 

Bengt Richter

as for me, I would expect the following:

1 {'a': 1, 'b': 2}
2 {'a': 2, 'b': 2, 'c': 3}
3 {'a': 2, 'b': 2, 'c': 3}
4 {'a': 1, 'b': 2}

locals() doesn't automatically show everything that is visible from its local scope:

>>> def f():
...     a = 1
...     b = 2
...     print 1, locals()
...     def inner():
...         a = 2
...         c = 3
...         print 2, locals()
...     inner()
...     print 4, locals()
...
>>> f()
1 {'a': 1, 'b': 2}
2 {'a': 2, 'c': 3}
4 {'a': 1, 'b': 2, 'inner': <function inner at 0x02EE8BFC>}

-- unless you actually use it (this is a bit weird maybe?):

>>> def f():
...     a = 1
...     b = 2
...     print 1, locals()
...     def inner():
...         a = 2
...         c = 3
...         print 2, locals(), 'and b:', b
...     inner()
...     print 4, locals()
...
>>> f()
1 {'a': 1, 'b': 2}
2 {'a': 2, 'c': 3, 'b': 2} and b: 2
4 {'a': 1, 'b': 2, 'inner': <function inner at 0x02EE8D4C>}

of course a difference using the new syntax is that 'inner' is not bound to a persistent name.
otherwise it would be impossible to do calculations based on variables from
the enclosing scope and "using:" would be useless =). Consider this example of usage:

current_position = 1
current_environment # = ...
current_a_lot_of_other_parameters # = ...
scores = [count_score(move) for move in available_moves] using:
    def count_score(move):
        # walking through current_environment
        return score
No worry, UIAM.

Regards,
Bengt Richter
 

Bengt Richter

I've come to the conclusion that these semantics aren't what I would expect from
the construct. Exactly what I would expect can't really be expressed in current
Python due to the way local name bindings work. The main thing to consider is
what one would expect the following to print:

def f():
    a = 1
    b = 2
    print 1, locals()
    print 3, locals() using:
        a = 2
        c = 3
        print 2, locals()
    print 4, locals()

I think the least surprising result would be:

1 {'a': 1, 'b': 2} # Outer scope
2 {'a': 2, 'c': 3} # Inner scope
3 {'a': 2, 'b': 2, 'c': 3} # Bridging scope
4 {'a': 1, 'b': 2} # Outer scope

In that arrangement, the statement with a using clause is executed normally in
the outer scope, but with the ability to see additional names in its local
namespace. If this can be arranged, then name binding in the statement with the
using clause will work as we want it to.

Anyway, I think further investigation of the idea is dependent on a closer look
at the feasibility of actually implementing it. Given that it isn't as
compatible with the existing nested scope structure as I first thought, I
suspect it will be both tricky to implement, and hard to sell to the BDFL
afterwards :(
In the timbot's let/in format:

def f():
    a = 1
    b = 2
    print 1, locals()
    let:
        a = 2
        c = 3
        print 2, locals()
    in:
        print 3, locals()
    print 4, locals()

I think the effect would be as if

>>> def f():
...     a = 1
...     b = 2
...     print 1, locals()
...     def __unique_temp():
...         a = 2
...         c = 3
...         print 2, locals()
...         def __unique_too():
...             print 3, locals()
...         __unique_too()
...     __unique_temp()
...     del __unique_temp
...     print 4, locals()
...
>>> f()
1 {'a': 1, 'b': 2}
2 {'a': 2, 'c': 3}
3 {}
4 {'a': 1, 'b': 2}

print 3, locals() doesn't show a, b, c in locals() unless you use them
somehow in that scope, e.g.,

>>> def f():
...     a = 1
...     b = 2
...     print 1, locals()
...     def __unique_temp():
...         a = 2
...         c = 3
...         print 2, locals()
...         def __unique_too():
...             print 3, locals(), (a, b, c) # force references for locals()
...         __unique_too()
...     __unique_temp()
...     del __unique_temp
...     print 4, locals()
...
>>> f()
1 {'a': 1, 'b': 2}
2 {'a': 2, 'c': 3, 'b': 2}
3 {'a': 2, 'c': 3, 'b': 2} (2, 2, 3)
4 {'a': 1, 'b': 2}

Of course, locals() does not include globals, even though they're
referenced and visible:

>>> b = 'global b'
>>> def f():
...     print locals(), b
...
>>> f()
{} global b

The trouble with this is that bindings created in __unique_too all get thrown away,
and you wouldn't want that limitation. So I proposed specifying the (re)bindable names
in a parenthesized list with the let, like "let(k, q, w): ..." so that those names
would be (re)bindable in the same scope as the let(...): statement.

As an extension, I also proposed optionally binding __unique_temp to a specified name
and not calling it automatically, instead of the automatic call and del.

That provides new ways to factor updates to local namespaces into local functions
with selective write-through (bind/rebind) to local names. E.g.,

# define case blocks for switch
# better sugar later, this is to demo functionality ;-)
#a
let(x):
    k = 123
in foo:
    x = k
#b
let(x, y):
    q = 456
    from math import pi as r
in bar:
    x = q
    y = r # extra binding created if bar is called
#c
let(x):in baz:x=789 # most compact form, where nothing in the let clause

switch = dict(a=foo, b=bar, c=baz)

Now you can update local bindings with a case switch:
x = 0
case = 'b'
print x # => 0
switch[case]() # executes bar() in this example, which assigns local x=456 and y=pi
print x # => 456


This spare example is easy to dismiss, but think of foo, bar, and baz as arbitrary sequences of statements
in the local namespace, except you can factor them out as a single named group and invoke them
safely by name(), and have them affect only the local names you specify in the group's let(x, y, ...): spec.

It provides a new way of factoring. As well as things no one has thought of yet ;-)
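
For comparison, the closest approximation available today seems to be routing the write-through bindings via an explicit mutable namespace (a plain dict below), since nested functions cannot rebind the caller's locals. A rough sketch, with names invented to mirror the foo/bar/baz example:

from math import pi

ns = {'x': 0, 'y': None}     # explicit stand-in for the writable local names

def foo():
    k = 123
    ns['x'] = k

def bar():
    q = 456
    ns['x'] = q
    ns['y'] = pi             # extra binding created if bar is called

def baz():
    ns['x'] = 789

switch = dict(a=foo, b=bar, c=baz)

case = 'b'
print ns['x']                # => 0
switch[case]()               # runs bar(), which updates the shared namespace
print ns['x']                # => 456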

The other thing to think about is that the let suite could be strictly def-time, which would
provide the opportunity to avoid re-calculating things in functions without abusing default args,
and using the easy closure-variable creation instead. E.g.,

let(foo):
    preset = big_calc()
in:
    def foo(x):
        return x * preset

(This "in:" has no "in xxx:" name, so the effect is immediate execution of the anonymously
defined function, which writes through to foo with the def, as permitted by let(foo):).
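
To spell out the two techniques being contrasted there, as they exist today (big_calc here is just a stand-in for some expensive setup):

def big_calc():
    return 42

# 1) abusing a default argument to freeze the value at def time
def foo(x, _preset=big_calc()):
    return x * _preset

# 2) the explicit closure factory
def _make_foo():
    preset = big_calc()
    def foo(x):
        return x * preset
    return foo

foo2 = _make_foo()
print foo(2), foo2(2)    # 84 84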

Problems? (Besides NIH, which I struggle with regularly, and had to overcome to accept Tim's
starting point in this ;-)

Regards,
Bengt Richter
 

Nick Coghlan

Bengt said:
Problems? (Besides NIH, which I struggle with regularly, and had to overcome to accept Tim's
starting point in this ;-)

The ideas regarding creating blocks whose name bindings affect a different scope
are certainly interesting (and relevant to the 'using' out-of-order execution
syntax as well).

Out-of-order execution appeals to me, but the ability to flag 'hey, this is just
setup for something I'm doing later' might be a reasonable alternative
(particularly with the affected names highlighted on the first line). As Jeff
pointed out, it would be significantly less surprising for those encountering
the construct for the first time. Folding code editors would be able to keep the
setup clause out of the way if you really wanted to hide it.

On the other hand, it might be feasible to construct a virtually identical
out-of-order two suite syntax, similar to the mathematical phrasing "let f =
c/lambda where f is the frequency, c is the speed of light and lambda is the
wavelength". Either way, you've convinced me that two suites (and a new compound
statement), as well as specifying which names can be rebound in the containing
scope, is a better way to go than trying to mess with the definition of Python
statements.

On keywords, while 'let' is nice for assignments, I find it just doesn't parse
properly when I put function or class definitions in the clause. So, I'll swap
it for 'use' in the examples below. The statement could then be read "use these
outer bindable names, and this additional code, in this suite". YMMV, naturally.

Let's consider some of the examples given for 'where' using an in-order let/in
type syntax (the examples only bind one name at a time, but would allow multiple
names):

# Anonymous functions
use res:
    def f(x):
        d = {}
        exec x in d
        return d
in:
    res = [f(i) for i in executable]

# Declaring properties
class C(object):
    use x:
        def get(self):
            print "Demo default"
        def set(self, value):
            print "Demo default set"
    in:
        x = property(get, set)

# Design by contract
use foo:
    def pre():
        pass
    def post():
        pass
in:
    @dbc(pre, post)
    def foo():
        pass

# Singleton classes
use C:
    class _C:
        pass
in:
    C = _C()

# Complex default values
use f:
    def default():
        return "Demo default"
in:
    def f(x=default()):
        pass

They actually read better than I expected. Nicely, the semantics of this form of
the syntax *can* be articulated cleanly with current Python:

use <names>: <use-suite>
in: <in-suite>

as equivalent to:

def __use_stmt():
    <use-suite>
    def _in_clause():
        <in-suite>
        return <names>
    return _in_clause()
__use_stmt_args = {}
<names> = __use_stmt()
del __use_stmt

Those semantics don't allow your switch statement example, though, since it
doesn't use any magic to write to the outer scope - it's just a normal return
and assign.

However, I don't think starting with these semantics would *preclude* adding the
ability to name the second block at a later date, and make the name rebinding
part of executing that block - the standard usage doesn't really care *how* the
names in the outer scope get bound, just so long as they do. Whether I think
that's a good idea or not is an entirely different question :)

Another aspect to consider is whether augmented assignment operations in the
inner-scopes should work normally - if so, it would be possible to alter the
semantics to include passing the existing values as arguments to the inner scopes.

Moving on to considering a two-suite out-of-order syntax, this would have
identical semantics to the above, but a syntax that might look something like:

as <names>: <in-suite>
using: <use-suite>

# Anonymous functions
as res:
    res = [f(i) for i in executable]
using:
    def f(x):
        d = {}
        exec x in d
        return d

# Declaring properties
class C(object):
    as x:
        x = property(get, set)
    using:
        def get(self):
            print "Demo default"
        def set(self, value):
            print "Demo default set"

# Design by contract
as foo:
    @dbc(pre, post)
    def foo():
        pass
using:
    def pre():
        pass
    def post():
        pass

# Singleton classes
as C:
    C = _C()
using:
    class _C:
        pass

# Complex default values
as f:
    def f(x=default()):
        pass
using:
    def default():
        return "Demo default"

Cheers,
Nick.
 

Nick Coghlan

Nick said:
as equivalent to:

def __use_stmt():
    <use-suite>
    def _in_clause():
        <in-suite>
        return <names>
    return _in_clause()
__use_stmt_args = {}
<names> = __use_stmt()
del __use_stmt

The more I think about this return-based approach, the less I like it. It could
probably be made to work, but it just feels like a kludge to work around the
fact that the only mechanisms available for altering the bindings of local names
are assignment and definition statements.

For class namespaces, getattr(), setattr() and delattr() work a treat, and
globals() works fine for module level name binding.

locals() is an unfortunate second class citizen, since writes to it aren't
propagated back to the executing frame. Programmatic interrogation of locals is
fine, but update is impossible.
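
A minimal CPython demonstration of that last point (a sketch):

def f():
    x = 1
    locals()["x"] = 2    # the write lands in a snapshot dict only
    print x              # still prints 1
f()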

What would be interesting is if locals() returned a dictionary whose __setitem__
method invoked PyFrame_LocalsToFast on the relevant frame, instead of a vanilla
dictionary as it does now.

Then locals()["x"] = foo would actually work properly.

Notice that you can get this effect today, by using exec to force invocation of
PyFrame_LocalsToFast:

Py> def f():
....     n = 1
....     def g(outer=locals()):
....         outer["n"] += 1
....     g() # Does not affect n
....     print n
....     exec "g()" # DOES affect n
....     print n
....
Py> f()
1
2

(The call to g() has to be inside the exec statement, since the exec statement
evaluation starts with a call to PyFrame_FastToLocals).

Assuming a writeable locals(), the semantics for the normal case are given by:
============
def __use_stmt(__outer):
    <use-suite>
    <in-suite>
    __inner = locals()
    for name in <names>:
        __outer[name] = __inner[name]

__use_stmt(locals())
del __use_stmt
============
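
For comparison, at module level the same shape already works today, since writes to globals() do propagate. A hypothetical instantiation, mirroring the 'complex default values' example (the names default and f are placeholders):

def __use_stmt(__outer):
    def default():                     # <use-suite>
        return "Demo default"
    def f(x=default()):                # <in-suite>
        return x
    __inner = locals()
    for name in ['f']:                 # <names>
        __outer[name] = __inner[name]

__use_stmt(globals())
del __use_stmt

print f()                              # Demo default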

And for the 'delayed execution' case:
============
def __named_use_stmt(__outer):
    <use-suite>
    def __delayed_block():
        <in-suite>
        __inner = locals()
        for name in <names>:
            __outer[name] = __inner[name]

    return __delayed_block

<in-name> = __named_use_stmt(locals())
del __named_use_stmt
============

Cheers,
Nick.
 

Andrey Tatarinov

Nick said:
# Anonymous functions
use res:
    def f(x):
        d = {}
        exec x in d
        return d
in:
    res = [f(i) for i in executable]

as for me, I find the "use <name>:" construction unobvious and confusing.
There is also a great chance of forgetting some of the variable names.

I think that syntax

<block>
where:
    <block>

is more obvious. (and we already have defined semantics for it)

we have two problems that we are trying to solve:
1) a way to nest scopes
2) a way to reverse execution order for better readability

"using:" solves both at once, but your "use ... in ..." syntax shows that
you want to be able to solve 1) independently, i.e. create a nested scope
without reversing execution order.

So I can suggest one more keyword, "do:", which would create a nested
scope just as "def f(): ... ; f()" does (and it could be just syntactic
sugar for that).

So "use ... in ..." would look the following way:

do:
    res = [f(i) for i in executable]
    # some more equations here
using:
    def f(x):
        d = {}
        exec x in d
        return d

That seems good to me. Of course, if you want to return something from
the nested scope, you must indicate that the variable comes from the parent scope.

// While writing this I realized that it's too complex to be implemented
in Python in this way. Consider it as some kind of brainstorming.
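
(For what it's worth, a rough spelling of the do:/using: sketch above in today's Python, using an immediately invoked helper as the nested scope; "executable" is assumed to be a list of code strings as in the earlier example.)

def _do(executable):
    def f(x):
        d = {}
        exec x in d
        return d
    return [f(i) for i in executable]

res = _do(["a = 1", "b = 2"])
print res[0]['a'], res[1]['b']    # 1 2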
 
