I just measured the overhead again, to see whether things have changed. The attached file has the exact implementation of the compose functions, plus another copy that does the proper arity reduction and renaming. For the correct version I'm getting a ~5x slowdown relative to the current code, which in turn is 20x slower than just using a lambda. That's a combined ~100x, big enough to justify a comment along the lines of "don't use this if you care about speed" -- which pretty much defeats the point of having the function at all.
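(For reference, here's a minimal sketch of the kind of measurement I mean. `naive-compose' below is a hypothetical stand-in for illustration -- the actual variants are in the attached x.rkt.)

  #lang racket/base
  ;; naive-compose is a hypothetical stand-in, not the library code:
  ;; it builds a rest-args closure and goes through `apply', which is
  ;; where the overhead comes from.
  (define (naive-compose f g)
    (lambda args (f (apply g args))))

  (define c (naive-compose add1 add1))     ; compose-style
  (define l (lambda (x) (add1 (add1 x))))  ; explicit lambda

  (time (for ([i (in-range 10000000)]) (c i)))
  (time (for ([i (in-range 10000000)]) (l i)))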
Notes:

1. Yes, the docs should say that already, but I still consider a jump to a slowdown of two orders of magnitude a major problem.

2. This is not a theoretical point -- I *have* seen `compose' used in tight loops, and I have personally written explicit lambdas in code where I really should have just used it.

3. There are definitely other cases where things like `apply' are used in a way that doesn't preserve arity, so this is not an exceptional problem (or a new one).

4. I'll shut up now.

20 minutes ago, Robby Findler wrote:
> I think the right approach is to make the function behave correctly
> first and then find the places to optimize second.
>
> In this case, using procedure-reduce-arity (or something else with
> the same effect) is not merely "nice" but, IMO, necessary. That is,
> we have procedure-arity in our PL and our primitives/core libraries
> should respect that.
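For concreteness, a sketch of what Robby describes might look like the following. (Again, this is my illustration, not the actual library code; `correct-compose' is a hypothetical name, and the real fix would go into the library's `compose'.)

  #lang racket/base
  ;; Sketch of the arity-correct version: reduce the composed
  ;; procedure's arity to that of g (the first function applied),
  ;; and rename it so it doesn't show up as an anonymous procedure.
  (define (correct-compose f g)
    (procedure-rename
     (procedure-reduce-arity
      (lambda args (f (apply g args)))
      (procedure-arity g))
     'composed))

  (procedure-arity (correct-compose add1 add1))
  ;; => 1, instead of (arity-at-least 0) for the naive version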
[Attachment: x.rkt]
--
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                    http://barzilay.org/                   Maze is Life!