Joseph Garvin wrote:
> Wolfgang Keller wrote:
>> If this is actually also true in the general case, and not due to possible
>> non-representativeness of the test mentioned above, is it simply due to a
>> less-than-optimal implementation of generators in the current Python
>> interpreter, and thus likely to change in the future, or is this a matter of
>> principle and will consequently remain like this forever?
>
> I am not a CPython or PyPy hacker, but I would guess that it will always
> be slower as a matter of principle. When resuming a generator you have
> to resetup the state the function was in when it was last called, which
> I think should always be more costly than calling the function with a
> clean state.
>
> Someone want to correct me?
Sure. "You have to resetup the state of the function"... depending on what "resetup" means (not a usual English word, so we might all imagine different meanings for it), either the first or the second part of the last sentence is false. More precisely, the state of the function is *saved* when a yield occurs, so you certainly don't *recreate* it from scratch, but merely restore the state, and this should definitely be faster than creating it from scratch in the first place. I haven't looked at the source, but this wouldn't have to involve much beyond a little memory copying, or even a few pointer changes, whereas the original could involve a lot of work, depending on how many arguments were passed, how many locals exist, and so on. -Peter -- http://mail.python.org/mailman/listinfo/python-list