On 7/28/2015 5:28 PM, Paul Rubin wrote:
> Chris Angelico was asking for examples of tail recursion that didn't
> have obvious looping equivalents.

Since there is a mechanical procedure for producing the equivalent *under the assumption that the function name will not be rebound*, he is effectively asking for a unicorn.
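To be concrete, here is a minimal sketch of that mechanical rewrite on a toy function of my own (not from the thread): because the recursive call is the last thing evaluated, it can be replaced by rebinding the parameters and continuing at the top.

# Tail-recursive form: the recursive call is the final expression.
def total(lst, acc=0):
    if not lst:
        return acc
    return total(lst[1:], acc + lst[0])

# Mechanical loop equivalent: rebind the parameters and loop back to the top.
def total_loop(lst, acc=0):
    while lst:
        lst, acc = lst[1:], acc + lst[0]
    return acc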

> Here's an Euler problem solution using memoization and
> (except that Python doesn't implement it)
> tail recursion with an accumulator.

Python does tail recursion (and tail calls) just fine until stack space runs out. What it does not do is automatic tail call or tail recursion conversion/elimination. The example below illustrates the point.

     # Solution to Euler problem 14, using memoization
     # https://projecteuler.net/problem=14

     import sys
     sys.setrecursionlimit(10000)

     def memoize(func):
         def mf(n, a, func=func, memo={}):
             if n in memo: return memo[n]+a
             return memo.setdefault(n,func(n,0))+a
         return mf

     @memoize
     def col(n, a):
         if n==1: return a
         if n%2==0: return col(n//2, 1+a)
         return col(3*n+1, 1+a)

     def main():
         print max((col(i,0),i) for i in xrange(1,1000001))

(col(13, 0) returns 9, the number of steps, while the chain has 10 items. This makes no difference for the problem as stated.)

Because Python allows names in python-coded modules to be rebound, including from other modules, and because names in python-coded functions are resolved to objects only when the name is encountered at runtime, 'recursiveness' is not an operational property of python-coded functions. It is only an operational property of a particular call at a particular time.
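A tiny illustration (my example, not from either post): the name 'fact' in the body is looked up only when the call executes, so rebinding the module-level name changes what the apparently recursive call does.

def fact(n):
    return 1 if n <= 1 else n * fact(n - 1)

original = fact
fact = lambda n: 0    # rebind the module-level name
print(original(5))    # 0, not 120: the inner call found the new binding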

If one wants to guarantee that a function operates 'recursively', in the sense of continuing at the top of the function with parameters rebound to altered arguments, then one should use iterative rather than recursive syntax.
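For the Collatz chain length, such an iterative version of the recursive body (setting the memoization aside) might look like this sketch of mine:

def col_iter(n):
    # Count the steps needed to reach 1, as col(n, 0) does above.
    a = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        a += 1
    return a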

> If I have it right, this should result in fewer calls to the col (short
> for Collatz) function than turning it into the obvious explicit loop.

So you should be glad that Python does *not* automatically replace the apparently-but-not-really-recursive tail call with an internal jump (ie, tail recursion elimination).
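One way to see this concretely (my check, assuming the memoize and col definitions above are in scope and running under Python 3): the module-level name col is bound to the wrapper mf, so each step of a chain re-enters the wrapper and is cached, whereas an internal jump inside the undecorated function would cache only the outermost argument.

col(27, 0)
memo = col.__defaults__[1]   # the memo dict, captured as mf's default argument
print(len(memo))             # 112: one entry for every value on the chain for 27, not just 27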

> To get the same gain you'd have to thread the memoization through the
> function in a probably ugly way.

Yes and no.

If one is computing a typical f(n) = g(f(n-1)) recursion, such as factorial, for 0, 1, 2, ..., the cache should be a list rather than a dict. A smart cache will skip over values already computed.

Here is an intertwined version that you will probably consider ugly. It directly accesses a nonlocal cache from a closure instead of bouncing back and forth between the function and the memo wrapper.

def _fac():
    cache = [1, 1]      # cache[i] == i! for every i computed so far
    csize = 2           # number of entries already in the cache
    def _(n):
        nonlocal csize
        for i in range(csize, n+1):   # extend the cache only as far as needed
            cache.append(cache[-1] * i)
            csize += 1
        return cache[n]
    return _
fac = _fac()

However, when one wants to generate and save an indefinite number of values for a function in f(0), f(1), ... order, then in Python one should use a generator. It is easy to combine a specific generator with a generic cache.

class Gen_memo(list):
    # A list subclass that caches successive values from a generator
    # and serves f(n) lookups by index.
    def __init__(self, genf):
        self.next = genf().__next__
    def __call__(self, n):
        for i in range(len(self), n+1):   # generate any values not yet cached
            self.append(self.next())
        return self[n]

@Gen_memo
def fac2():
    n, fac = 0, 1
    while True:
        yield fac
        n += 1
        fac *= n

fac and fac2 pass this test:

for n in (1,2,6,3,5,7,4,5): print(n, fac(n), fac2(n))
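A slightly stricter version of the same test (my addition) checks both against math.factorial:

import math
for n in (1, 2, 6, 3, 5, 7, 4, 5):
    assert fac(n) == fac2(n) == math.factorial(n)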

> It's certainly doable, but as the
> saying goes, why trouble your beautiful mind over something like that.

I consider fac2 above to be more 'pythonic' in using a generator rather than a pseudo-tail-recursive function. Some people like faster functions.

> The above solution seems pretty natural to me.

I agree for this function, which is somewhat unusual.


--
Terry Jan Reedy
