Hi,

On 24.12.2010 at 02:08, Ken Wesson wrote:
> It's also possible that -server vs. -client is an issue here, also
> running it a few times in a row so JIT will have kicked in. I used
> -server and ran each test a few times until the numbers settled down
> before posting my timings here; I'm not sure if everyone else did
> likewise.

One might want to use criterium [1] for such things (see the short example at the end of this mail). It tries to avoid exactly such pitfalls, and even if it isn't perfect, it would give everyone doing the benchmark the same basis. What is most interesting anyway is the relation between the different versions on a given machine; the raw numbers for a single algorithm aren't really comparable on their own, I guess (different machine, different load, different phase of the moon, who knows...).

Sincerely
Meikel

[1]: http://hugoduncan.org/post/2010/benchmarking_clojure_code_with_criterium.xhtml
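
P.S. A minimal sketch of how criterium could be used, assuming it is on the classpath (e.g. pulled in as a Leiningen dependency); the summed range is only a placeholder workload, not any of the algorithms from this thread:

    ;; quick-bench does a short run for a first look; bench runs longer,
    ;; with JIT warm-up and statistical reporting of execution times.
    (require '[criterium.core :refer [quick-bench bench]])

    (quick-bench (reduce + (range 100000)))
    (bench (reduce + (range 100000)))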