Tail recursion to while iteration in 2 easy steps


Terry Reedy

Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.

Assume that you have already done the work of turning a body recursive
('not tail recursive') form like

def fact(n): return 1 if n <= 1 else n * fact(n-1)

into a tail recursion like

def fact(n, _fac=1):
    '''Return factorial for any count n.

    Users are not expected to override private parameter _fac.
    '''
    if n <= 1:
        return _fac
    else:  # optional
        return fact(n-1, n * _fac)

(This conversion nearly always requires adding an accumulator parameter,
as done here.)
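A quick look at why the conversion is still worth doing: CPython gives
tail calls no special treatment, so each call consumes a stack frame
anyway. A minimal check, with the tail-recursive fact repeated for
completeness:

import sys

def fact(n, _fac=1):
    return _fac if n <= 1 else fact(n-1, n * _fac)

print(fact(10))                 # 3628800
print(sys.getrecursionlimit())  # typically 1000 in CPython
# fact(5000) raises RecursionError: each tail call still pushes a frame.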

Turn this into while iteration with two easy steps.

1. If not already done, make if-else a statement, rather than an
expression, with the recursive call in the if branch. If necessary,
just use 'not condition' to invert the condition.

def fact(n, _fac=1):
    if n > 1:  # not n <= 1
        return fact(n-1, n * _fac)
    else:  # optional
        return _fac

While contrary to what is often published, this order makes logical
sense. The recursive call means 'go to the top of this function with new
bindings for the parameters'. So put it closest to the top. The base
case means 'we are done, time to leave'. So put it at the bottom.

2. This order also makes the following substeps work:
2a. Replace 'if' with 'while'.
2b. Replace the recursive call with multiple assignment, using the
parameters as targets and the arguments as source.
For our example:

def fact(n, _fac=1):
    while n > 1:
        n, _fac = n-1, n * _fac
    else:
        return _fac

The proof of correctness for this conversion might argue that the
recursive form is equivalent to the following pseudo-Python:

def fact(n, _fac=1):
    label top
    if n > 1:
        n, _fac = n-1, n * _fac
        goto top
    else:
        return _fac

and that this is equivalent to the real Python with while.

At this point, you can pull the initialization of the private parameter
into the body, remove the else, and even add a value check.

def fact(n):
    if n < 0 or n != int(n):
        raise ValueError('fact input {} is not a count'.format(n))
    fac = 1
    while n > 1:
        n, fac = n-1, n * fac
    return fac

With care, the multiple-assignment statement can usually be turned into
a sequence of single assignments for greater efficiency.

    fac *= n
    n -= 1

But note that the reverse order would be a bug. So I would not do this,
especially in more complicated situations, without having tests first.
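For concreteness, a sketch of that split form together with the minimal
tests I would want in place first (the asserts are mine):

def fact(n):
    if n < 0 or n != int(n):
        raise ValueError('fact input {} is not a count'.format(n))
    fac = 1
    while n > 1:
        fac *= n    # must come first: uses the old value of n
        n -= 1      # swapping these two lines computes (n-1)! instead
    return fac

assert fact(0) == fact(1) == 1
assert fact(5) == 120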
 

rusi

Terry Reedy said:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.

What happens for mutually tail-recursive code like this?

def isodd(x):  return False if x == 0 else iseven(x-1)
def iseven(x): return True if x == 0 else isodd(x-1)
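One mechanical answer, sketched here for illustration: fuse the two
functions into a single loop, with an explicit state variable recording
which function we are currently 'in'. Each loop iteration plays the
role of one tail call:

def isodd(x):
    state = 'odd'                     # which function we are "in"
    while True:
        if state == 'odd':
            if x == 0:
                return False
            x, state = x - 1, 'even'  # was: return iseven(x-1)
        else:
            if x == 0:
                return True
            x, state = x - 1, 'odd'   # was: return isodd(x-1)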

----------------
Notes:
1. I am not advocating writing odd/even like the above -- just giving a
trivial example.
2. General mutual structural (what you call 'body') recursion is more
general than mutual tail recursion, which in turn is more general than
single tail recursion.
3. IOW, tail recursion optimization is intermediate between the
optimization you are showing and the maximally pessimistic
implementation -- stack, activation records, etc. -- of programming
languages.
4. There is a whole spectrum of such optimizations --
4a. E.g., a single-call structural recursion does not need to push a
return address on the stack; it only needs to store the recursion depth:

if zero, jump to the outside return address; if > 0, jump to the
internal return address

4b. An example like quicksort, in which one call is a tail call, can be
optimized with your transformation, and the other, inner one with 4a
above.

5. Programming language implementations don't skip optimizations like
TR because these analyses are in themselves hard, but because
inter-procedural analysis per se is a headache. For Python-like
languages it's a much bigger headache, because the recursive name must
be dynamically fetched for every call.

6. [Personal pov] TR optimization is not really a big beef for me:
as you can see, it does not figure in the "All things every programmer
should learn from FP" list of mine:
http://blog.languager.org/2012/10/functional-programming-lost-booty.html

7. Pedagogically, understanding general recursion is far more important
than exploiting tail recursion.
 

Alain Ketterlin

Terry Reedy said:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here it is.

Assume that you have already done the work of turning a body recursive
('not tail recursive') form like

def fact(n): return 1 if n <= 1 else n * fact(n-1)

into a tail recursion like
[...]

How do you know that either "<=" or "*" didn't rebind the name "fact" to
something else? I think that's the main reason why python cannot apply
any procedural optimization (even things like inlining are impossible,
or possible only under very conservative assumptions that make them
worthless).

-- Alain.

P.S.: don't get me wrong; your explanation is excellent (and very useful
to python programmers). What I am saying is that it relies on
assumptions that do not apply to python.
 

rusi

rusi writes:
[...]

It takes one step of inlining to make Terry's technique applicable.

Well I did say my example was trivial!
For a less trivial case, consider the delta function of an FSM.

The cleanest way of doing it in C is to make one function with a goto
label for each state and a goto for each transition.

The same thing can be done in a TR-optimized language like Scheme by
making each state into a function and each transition into a TR call.
(Actually, out of curiosity, I tried this with gcc 4.6.3: the compiler
does 16 levels of inlining, plus tail call optimization. The final code
has no call.)

Good to know.
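For illustration, the same state-variable trick gives an iterative
rendering of the state-per-function style in Python. The FSM here
(parity of '1' characters in a string) is a made-up toy example:

def accepts(s):
    delta = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd',  '0'): 'odd',  ('odd',  '1'): 'even'}
    state = 'even'                # start state
    for ch in s:                  # each step was a tail call / goto
        state = delta[state, ch]
    return state == 'even'        # accepting state

print(accepts('1001'))  # True: an even number of 1s
print(accepts('1101'))  # False: an odd number of 1s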
 

Mark Janssen

def fact(n): return 1 if n <= 1 else n * fact(n-1)
into a tail recursion like
[...]

How do you know that either "<=" or "*" didn't rebind the name "fact" to
something else? I think that's the main reason why python cannot apply
any procedural optimization (even things like inlining are impossible,
or possible only under very conservative assumptions that make them
worthless).

It's called "operator precedence".
 

Mark Janssen

Operator precedence is totally irrelevant here; you misunderstand what
"bind" means.

Imagine that you call fact(x) where x is an instance of the following class:

class Strange:
    ...
    def __le__(self, other):
        global fact
        fact = someotherfun  # this is "binding"
        return False

i.e., executing "n<=1" calls __le__, which rebinds the name "fact" to
something else. Then, there is no recursive call at all. At the time
fact(x-1) is executed, "fact" is not the same function any more.

You cannot prevent this in python.
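Alain's sketch, fleshed out into something runnable (someotherfun and
the int subclass are illustrative names, not from his post):

def fact(n, _fac=1):
    if n <= 1:                  # n.__le__(1) may run arbitrary code
        return _fac
    return fact(n-1, n * _fac)

def someotherfun(n, _fac=1):
    return 'hijacked'

class Strange(int):
    def __le__(self, other):
        global fact
        fact = someotherfun     # rebinds the name mid-computation
        return int(self) <= other

print(fact(Strange(5)))         # prints 'hijacked', not 120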


No, but you can't prevent a lot of bad moves in python. What you just
did there is a total bonehead ("strange"?) of an idea.
 

Terry Reedy

As I said in response to randomxxx, even this 0th step (recursion-to-
recursion transformation) requires assumptions or careful reasoning
about the properties of the operations.

Alain Ketterlin said:
How do you know that either "<=" or "*" didn't rebind the name "fact" to
something else? I think that's the main reason why python cannot apply
any procedural optimization (even things like inlining are impossible,
or possible only under very conservative assumptions that make them
worthless).

-- Alain.

P.S.: don't get me wrong; your explanation is excellent (and very useful
to python programmers). What I am saying is that it relies on
assumptions that do not apply to python.

Program transformations (usually intended to be optimizations), whether
automatic or manual, are infamous for not always being correct, because
of hidden assumptions that are not always true.

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze that.
 

Steven D'Aprano

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.

The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by "constant" in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:

py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:


py> dis(compile("x = [1, 2, 3]", '', 'exec'))
  1           0 LOAD_CONST               0 (1)
              3 LOAD_CONST               1 (2)
              6 LOAD_CONST               2 (3)
              9 BUILD_LIST               3
             12 STORE_NAME               0 (x)
             15 LOAD_CONST               3 (None)
             18 RETURN_VALUE


But I don't think this is a necessary language limitation. Both (1, 2, 3)
and [1, 2, 3] are known at compile time: the first cannot be anything
other than a tuple of three ints, and the second a list of three ints. It
seems to me that an implementation might provide a single byte-code to
build list literals, perhaps even LOAD_CONST itself. The byte-codes used
by the Python VM are not part of the language definition, and are subject
to change without warning.

And in fact, if we go all the way back to Python 1.5, even tuple literals
weren't handled by a single byte-code, they were assembled at runtime
like lists still are:

[steve@ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:18) [GCC 4.1.2 20080704 (Red Hat
4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam

          0 SET_LINENO               0

          3 SET_LINENO               1
          6 LOAD_CONST               0 (1)
          9 LOAD_CONST               1 (2)
         12 LOAD_CONST               2 (3)
         15 BUILD_TUPLE              3
         18 STORE_NAME               0 (x)
         21 LOAD_CONST               3 (None)
         24 RETURN_VALUE
 

Dave Angel

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.

The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by "constant" in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:

py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:

The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const (much like the interning of small
integers). The list, however, would always have to be copied from the
compile-time object. So that object itself would be a phantom, used
only as the template with which the list is to be made.
 

MRAB

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.

The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by "constant" in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:

py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:

The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const (much like the interning of small
integers). The list, however, would always have to be copied from the
compile-time object. So that object itself would be a phantom, used
only as the template with which the list is to be made.
The key point here is that the tuple is immutable, including its items.
 

Steven D'Aprano

On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.

The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by "constant" in
this context. I *think* you might be referring to the LOAD_CONST
byte-code, which in Python 3.3 understands tuples like (1, 2, 3), but
not lists. So a literal (1, 2, 3) gets created at compile-time with a
single LOAD_CONST call:

py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:
The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const (much like the interning of small
integers). The list, however, would always have to be copied from the
compile-time object. So that object itself would be a phantom, used
only as the template with which the list is to be made.
The key point here is that the tuple is immutable, including its items.

You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places. But that's not the case, as a simple test
will show you:


# Python 3.3

py> def f():
.... a = (1, 2, 3)
.... b = (1, 2, 3)
.... return a is b
....
py> f() # Are the tuples the same object?
False
py> from dis import dis
py> dis(f)
  2           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_FAST               0 (a)

  3           6 LOAD_CONST               5 ((1, 2, 3))
              9 STORE_FAST               1 (b)

  4          12 LOAD_FAST                0 (a)
             15 LOAD_FAST                1 (b)
             18 COMPARE_OP               8 (is)
             21 RETURN_VALUE



So even though both a and b are created by the same LOAD_CONST byte-code,
the object is not re-used (although it could be!) and two distinct tuples
are created.

Re-use of objects (caching or interning) is an implementation
optimization, not a language feature. An implementation might choose to
cache ints, tuples, strings, floats, or none of them at all. That in no
way affects whether the LOAD_CONST byte-code is used.

In fact, LOAD_CONST may behave differently inside functions than outside:
in CPython, functions will cache some literals in the function attribute
__code__.co_consts, and LOAD_CONST may use that, while outside of a
function, only small ints and identifier-like strings are cached but very
little else. Other implementations may do differently -- I'm not sure
whether __code__ is a public language feature or an implementation
feature, but what goes into co_consts certainly isn't fixed. IronPython
2.6 doesn't appear to put anything in co_consts except None.
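A quick way to see this for yourself; the result shown is what the
3.3-era CPython under discussion gives, and other implementations may
differ or omit the attribute entirely:

def f():
    a = (1, 2, 3)
    return a

print(f.__code__.co_consts)   # CPython 3.3: (None, (1, 2, 3))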

And of course, *all of this is subject to change*, since it is not
language semantics but implementation details. If HyperPython8.5
aggressively caches every tuple it can, while SimplePython1.1 uses
BUILD_TUPLE rather than LOAD_CONST, both are still perfectly compliant
Python implementations.

(HyperPython probably will require a huge amount of memory, and
SimplePython will probably be slow, but those are quality of
implementation issues.)


Aside: a sufficiently smart optimizing compiler could optimize function f
above to either

LOAD_CONST (True)
RETURN_VALUE

or

LOAD_CONST (False)
RETURN_VALUE


and still be a correct Python implementation. Since Python the language
doesn't specify when, if ever, objects should be cached, it could even
randomly choose between those two options at compile time, or even at
runtime, and still be correct.
 

Terry Reedy

On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.

To be specific: in

for i in [1,2,3]: print(i)

it looks like it might be safe to pre-compile the list and just use it
when the code is run. But if the above is in a function object whose
code object is ever introspected, the list object could be accessed and
mutated. Letting the compiler know that it can do the optimization is
one reason to use tuples in situations like the above.
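The contrast is easy to observe with dis. The exact bytecode varies by
CPython version, but on the 3.3-era interpreter discussed here the
tuple arrives in a single LOAD_CONST while the list is assembled
element by element:

from dis import dis

dis(compile("for i in (1, 2, 3): print(i)", '', 'exec'))  # constant tuple
dis(compile("for i in [1, 2, 3]: print(i)", '', 'exec'))  # built at run time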

I am referring to constant-value objects included in the code object.

(None, 1, 2, 3, (1, 2, 3))

None is present as the default return, even if not needed for a
particular function. Every literal is also tossed in, whether needed or not.

The byte-code does not understand anything about types. LOAD_CONST n
simply loads the (n+1)st object in .co_consts onto the top of the stack.

So a literal (1, 2, 3) gets created at compile-time with a single
LOAD_CONST call:

py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:

The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const (much like the interning of small
integers). The list, however, would always have to be copied from the
compile-time object. So that object itself would be a phantom, used
only as the template with which the list is to be made.
The key point here is that the tuple is immutable, including its items.

The items of the tuple I gave as an example are all constant. If they
were not, the tuple would not be a constant for the purpose of
compile-time creation.
 

Terry Reedy

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.

The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by "constant" in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:

Answered in another response.
py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:


py> dis(compile("x = [1, 2, 3]", '', 'exec'))
  1           0 LOAD_CONST               0 (1)
              3 LOAD_CONST               1 (2)
              6 LOAD_CONST               2 (3)
              9 BUILD_LIST               3
             12 STORE_NAME               0 (x)
             15 LOAD_CONST               3 (None)
             18 RETURN_VALUE


But I don't think this is a necessary language limitation. Both (1, 2, 3)
and [1, 2, 3] are known at compile time: the first cannot be anything
other than a tuple of three ints, and the second a list of three ints.

Given introspectable code objects, the list must be built at runtime
from the three ints to guarantee that.

It seems to me that an implementation might provide a single byte-code
to build list literals, perhaps even LOAD_CONST itself.

There are list displays, but not list literals. The distinction is
important. The BUILD_LIST byte code is used above.

LOAD_CONST 4 (1,2,3)
BUILD_LIST_FROM_TUPLE_CONSTANT

would be possible for the special case but hardly worthwhile.
The byte-codes used by the Python VM are not part of the language
definition,

which is why I specified CPython as the context, with 'current' as the
default.
and are subject to change without warning.

And in fact, if we go all the way back to Python 1.5, even tuple literals
weren't handled by a single byte-code, they were assembled at runtime
like lists still are:

[steve@ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:18) [GCC 4.1.2 20080704 (Red Hat
4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam

          0 SET_LINENO               0

          3 SET_LINENO               1
          6 LOAD_CONST               0 (1)
          9 LOAD_CONST               1 (2)
         12 LOAD_CONST               2 (3)
         15 BUILD_TUPLE              3
         18 STORE_NAME               0 (x)
         21 LOAD_CONST               3 (None)
         24 RETURN_VALUE

Extending pre-compilation to tuples with nested constant tuples is even
more recent. In 3.3.2, we have
(None, 1, 2, 3, 4, 5, (1, 2, 3), (4, 5), ((1, 2, 3), (4, 5)))

but I am sure if you go back you can find versions that lack the last item.

--
The language is as conservative about mandating optimizations as the
implementation is about doing them. I consider making None, False, True
un-rebindable keywords to be an optimization. This is not even done for
the other singletons Ellipsis and NotImplemented. I cannot think of too
much else. Tuple constant optimization is not mandated. It would be as
out of character for the language to require tail-recursion optimization
as for CPython to do it.
 

Chris Angelico

py> def f():
... a = (1, 2, 3)
... b = (1, 2, 3)
... return a is b
...
py> f() # Are the tuples the same object?
False

That just means the compiler doesn't detect reuse of the same tuple.
But compare:

>>> def f():
...     return (1,2,3)
...
>>> f() is f()
True

Every time the function's called, it returns the same tuple (which
obviously can't be done with lists). And of course if that would be
dangerous, it's not done:

>>> def f():
...     return (1,[2],3)
...
>>> f()[1].append("Hello")
>>> f()
(1, [2], 3)
>>> import dis
>>> dis.dis(f)
  2           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 BUILD_LIST               1
              9 LOAD_CONST               3 (3)
             12 BUILD_TUPLE              3
             15 RETURN_VALUE

ChrisA
 

random832

The key point here is that the tuple is immutable, including its items.

Hey, while we're on the subject, can we talk about frozen(set|dict)
literals again? I really don't understand why this discussion fizzles
out whenever it's brought up on python-ideas.
 

random832

Steven D'Aprano said:
You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places. But that's not the case, as a simple test
will show you:
[...]

It does, in fact, re-use it when it is _the same LOAD_CONST
instruction_.
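A sketch of both halves of the point; the False result is what Steven's
CPython 3.3 shows, and newer CPythons may merge the equal constants:

def same():
    return (1, 2, 3)        # one LOAD_CONST, executed on every call

def two_slots():
    a = (1, 2, 3)           # one co_consts slot
    b = (1, 2, 3)           # a second slot on CPython 3.3
    return a is b

print(same() is same())     # True: the same instruction loads the
                            # same co_consts entry every time
print(two_slots())          # False on 3.3; newer CPythons may print True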
 

Neil Cerutti

That isn't actually sufficient reason.

It isn't hard to imagine adding a TAIL_CALL opcode to the
interpreter that checks whether the function to be called is
the same as the current function and, if it is, just updates the
arguments and jumps to the start of the code block.

Tail call optimization doesn't involve verification that the
function is calling itself; you just have to verify that the
call is in tail position.

The current frame would be removed from the stack and
replaced with the one that results from calling the function.
There is an issue that you would lose stack frames in any
traceback.

I don't think that's a major issue. Frames that got replaced
would be quite uninteresting.
Also it means code for this modified Python wouldn't run on
other non-modified interpreters, but it is at least
theoretically possible without breaking Python's assumptions.

In any case it's so easy to implement yourself I'm not sure
there's any point.
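One way to "implement it yourself" is a trampoline: tail calls return a
zero-argument thunk instead of calling, and a driver loop keeps invoking
thunks until a real value comes back. A common recipe, sketched here
rather than taken from the thread:

def trampoline(f):
    def run(*args):
        result = f(*args)
        while callable(result):  # convention: thunks are callables
            result = result()
        return result
    return run

def _fact(n, acc=1):
    if n <= 1:
        return acc
    return lambda: _fact(n-1, n * acc)  # a thunk, not a call

fact = trampoline(_fact)
print(fact(5000) > 0)  # True, well past the default recursion limit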
 

Terry Reedy

You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places.

No, I did not. To save tuple creation time, a pre-compiled tuple is
reused when its display expression is re-executed. If I had been
interested in multiple occurrences of the same display, I would have
tested.
>>> def f():
...     a = 1,'a',3333, 'bbb'; x = 1,'a',3333, 'bbb'
...     b = 1,'a',3333, 'bbb'
...     c = 'a'
...     d = 3333 + 3333
...
>>> f.__code__.co_consts
(None, 1, 'a', 3333, 'bbb', (1, 'a', 3333, 'bbb'), (1, 'a', 3333,
'bbb'), (1, 'a', 3333, 'bbb'), 6666)

Empirically, ints and strings are checked for prior occurrence in
co_consts before being added. I suspect None is too, but will not assume.

How is the issue of multiple occurrences of constants relevant to my
topic statement? Let me quote it:

"CPython core developers have been very conservative about what
transformations they put into the compiler."

Aha! Your example and that above reinforce this statement. Equal tuples
are not necessarily identical and cannot necessarily be substituted for
each other in all code.

>>> (1, 2) == (1.0, 2.0)
True

But replacing (1.0, 2.0) with (1, 2), by only storing the latter, would
not be valid without some possibly tricky context analysis. The same is
true for equal numbers, and the optimizer pays attention.

>>> def g():
...     a = 1
...     b = 1.0
...
>>> g.__code__.co_consts
(None, 1, 1.0)

For numbers, the proper check is relatively easy:

for item in const_list:
    if type(x) is type(item) and x == item:
        break  # identical item already in working list
else:
    const_list.append(x)
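A quick driver for that check, to show the intended behavior (the
sample values are mine):

const_list = []
for x in [1, 1.0, 1, 'a', 'a']:
    for item in const_list:
        if type(x) is type(item) and x == item:
            break  # identical item already in working list
    else:
        const_list.append(x)

print(const_list)  # [1, 1.0, 'a']: 1 and 1.0 stay distinct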

Writing a valid recursive function to do the same for tuples, and
proving its validity to enough other core developers to make it
accepted, is much harder and hardly seems worthwhile.

It would probably be easier to compare the parsed AST subtrees for the
displays rather than the objects created from them.

---
py> def f():
...     a = (1, 2, 3)
...     b = (1, 2, 3)
[snip]
So even though both a and b are created by the same LOAD_CONST
byte-code,

I am not sure what you mean by 'created'. LOAD_CONST puts the address of
an object in co_consts on the top of the virtual machine stack.
the object is not re-used (although it could be!)

It can be reused, in this case, because the constant displays are
identical, as defined above.
and two distinct tuples are created.

Because it is not easy to make the compiler see that only one is needed.
 

Steven D'Aprano

I am referring to constant-value objects included in the code object.
(None, 1, 2, 3, (1, 2, 3))

Okay, now that's more clear. I didn't understand what you meant before.
So long as we understand we're talking about a CPython implementation
detail.

None is present as the default return, even if not needed for a
particular function. Every literal is also tossed in, whether needed or
not.


The byte-code does not understand anything about types. LOAD_CONST n
simply loads the (n+1)st object in .co_consts onto the top of the stack.

Right, this is more clear to me now.

As I understand it, the contents of code objects are implementation
details, not required for implementations. For example, IronPython
provides a co_consts attribute, but it only contains None. Jython doesn't
provide a co_consts attribute at all. So while it's interesting to
discuss what CPython does, we should not be fooled into thinking that
this is guaranteed by every Python.

I can imagine a Python implementation that compiles constants into some
opaque object like __closure__ or co_code. In that case, it could treat
the list in "for i in [1, 2, 3]: ..." as a constant too, since there is
no fear that some other object could reach into the opaque object and
change it.

Of course, that would likely be a lot of effort for very little benefit.
The compiler would have to be smart enough to see that the list was never
modified or returned. Seems like a lot of trouble to go to just to save
creating a small list.

More likely would be implementations that didn't re-use constants, than
implementations that aggressively re-used everything possible.
 
