On 2008-02-12, Paul Hankin <[EMAIL PROTECTED]> wrote:
> On Feb 12, 10:17 pm, Ben C <[EMAIL PROTECTED]> wrote:
>> On 2008-02-12, Paul Rubin <> wrote:
>>
>> > Paul Hankin <[EMAIL PROTECTED]> writes:
>> >>     def genDescendants(self):
>> >>         return chain([self], *[child.genDescendants()
>> >>                                for child in self.children])
>>
>> > That is scary. It generates an in-memory list the size of the
>> > whole subtree, at every level. Total memory consumption is maybe
>> > even quadratic, depending on the tree shape, but even if it's
>> > only linear, it's way ugly.
>>
>> This would probably be better (in terms of memory if not beauty):
>>
>>     def genDescendants(self):
>>         return chain([self], *(child.genDescendants()
>>                                for child in self.children))
>
> No, it's the same I think.
Yes, I think you're right. The problem is really the *. The generator
expression (child.genDescendants() ...) has to be run to completion to
build the argument tuple passed to chain, so no laziness there.

> forgot that a generator using yield isn't the same as the same
> function that returns an equivalent generator, because one is lazy and
> the other isn't. To get the lazy behaviour back, you'd have to
> write....
>
>     def genDescendants(self):
>         for g in chain([self], *(child.genDescendants()
>                                  for child in self.children)):
>             yield g

Is that lazy? It still seems like (child.genDescendants() ...) would
have to be run all the way to the end before chain could be called.
Unless * is lazy. I don't think it is, but I don't know for sure.
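You can check this empirically. The sketch below (my own test, not from
the thread; the noisy() helper is made up for illustration) logs when
each argument to chain is produced. It shows that * unpacking itself is
eager, but putting the call inside a generator function at least delays
that eager unpacking until the first next():

```python
from itertools import chain

calls = []

def noisy(i):
    """Stand-in for child.genDescendants(): record when it is called."""
    calls.append(i)
    return iter([i])

# Plain call: the genexp is exhausted immediately to build chain's
# argument tuple, before we ever iterate over the result.
g1 = chain([0], *(noisy(i) for i in range(1, 4)))
eager_calls = list(calls)      # already [1, 2, 3]

calls.clear()

def wrapped():
    # The yield-based version from the thread: the same chain call,
    # but nothing inside runs until the first next().
    for x in chain([0], *(noisy(i) for i in range(1, 4))):
        yield x

g2 = wrapped()
delayed_calls = list(calls)    # still [] -- unpacking hasn't happened
next(g2)
after_first = list(calls)      # [1, 2, 3] -- it all ran on first next()

print(eager_calls, delayed_calls, after_first)
# -> [1, 2, 3] [] [1, 2, 3]
```

So no, * is not lazy: the generator expression is fully consumed when
chain is called. But note that each noisy()/genDescendants() call only
*creates* a generator; it doesn't walk the subtree, so the memory cost
at each level is proportional to the number of children, not the size
of the whole subtree.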