On Fri, Oct 4, 2013 at 4:16 AM, Duncan Booth <duncan.booth@invalid.invalid> wrote:
> Neil Cerutti <ne...@norwich.edu> wrote:
>> On 2013-10-03, Duncan Booth <duncan.booth@invalid.invalid> wrote:
>>> It isn't hard to imagine adding a TAIL_CALL opcode to the
>>> interpreter that checks whether the function to be called is
>>> the same as the current function and, if it is, just updates the
>>> arguments and jumps to the start of the code block.
>>
>> Tail call optimization doesn't involve verification that the
>> function is calling itself; you just have to verify that the
>> call is in tail position.
>
> You misunderstood me. As usually implemented, tail call recursion doesn't
> require verifying that the function is calling itself, but in Python the
> function could be rebound, so a check would be a necessary protection
> against this unlikely situation. Consider this Python:
>
> @some_decorator
> def fact(n, acc=1):
>     if n <= 1:
>         return acc
>     return fact(n-1, n*acc)
>
> Is that tail recursion or not?
>
> You cannot tell whether or not that is tail recursion unless you know the
> definition of 'some_decorator', and even then the answer may vary from run
> to run.
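For concreteness, here is one possible definition of some_decorator (a sketch of my own; nothing in the quoted post says this is what it does) under which the decorated fact is no longer plain self-recursion:

    import functools

    def some_decorator(func):
        # Hypothetical decorator: the wrapper does work *after* calling the
        # decorated function, so that call is not in tail position and the
        # wrapper frame could never be eliminated.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)   # not a tail call: work remains below
            print("call returned", result)   # the work that follows the call
            return result
        return wrapper

    @some_decorator
    def fact(n, acc=1):
        if n <= 1:
            return acc
        # Still a tail call, but the name 'fact' is now bound to 'wrapper',
        # so each recursion level pushes a wrapper frame and a fact frame.
        return fact(n - 1, n * acc)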
There is no doubt that the recursive call in the quoted example is a tail call. Whether it is recursion is irrelevant to optimizing it. The reason we talk about "tail call recursion" specifically is that the recursive case is the one that makes the optimization worthwhile, not that recursion is necessary to perform the optimization.

It is possible that fact is recursive but that some_decorator adds additional stack frames that are not tail calls and so cannot be optimized. In that case the recursion would still eventually hit the stack limit, but optimizing the fact frames would still be a benefit: it would increase the depth that can be reached before the limit is hit.

So I see no reason to check the identity of the called function before optimizing the tail call.
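To illustrate that last point, here is a minimal pure-Python trampoline, a sketch of tail-call elimination that never asks whether the next target is the same function. The names tail_call and trampoline, and the use of a __wrapped__ attribute, are my own conventions for this sketch, not anything proposed for CPython:

    import functools

    class _TailCall:
        # Carries a pending call instead of performing it, so the caller's
        # frame is gone by the time the call is actually made.
        def __init__(self, func, args, kwargs):
            self.func, self.args, self.kwargs = func, args, kwargs

    def tail_call(func, *args, **kwargs):
        return _TailCall(func, args, kwargs)

    def trampoline(func):
        # Run func and keep "bouncing" on _TailCall results in a loop.
        # Note that it never checks which function the next bounce targets.
        @functools.wraps(func)
        def run(*args, **kwargs):
            result = func(*args, **kwargs)
            while isinstance(result, _TailCall):
                # If the target is itself trampolined, call the undecorated
                # function so bounces don't nest fresh run() frames.
                target = getattr(result.func, "__wrapped__", result.func)
                result = target(*result.args, **result.kwargs)
            return result
        run.__wrapped__ = func   # expose the undecorated function explicitly
        return run

    @trampoline
    def fact(n, acc=1):
        if n <= 1:
            return acc
        return tail_call(fact, n - 1, n * acc)   # the tail call, made explicit

    print(fact(5000))   # well past the default recursion limit, constant stack depth

A hypothetical TAIL_CALL opcode could presumably do the equivalent inside the interpreter: discard the current frame before making the call, whatever function the call happens to target.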