On Fri, Feb 23, 2018 at 1:16 PM, Chris Angelico <ros...@gmail.com> wrote:
> On Sat, Feb 24, 2018 at 6:09 AM, Python <pyt...@bladeshadow.org> wrote:
>> On Sat, Feb 24, 2018 at 05:56:25AM +1100, Chris Angelico wrote:
>>> No, not satisfied. Everything you've said would still be satisfied if
>>> all versions of the benchmark used the same non-recursive algorithm.
>>> There's nothing here that says it's testing recursion, just that (for
>>> consistency) it's testing the same algorithm. There is no reason to
>>> specifically test *recursion*, unless that actually aligns with what
>>> you're doing.
>>
>> It seems abundantly clear to me that testing recursion is the point of
>> writing a benchmark implementing recursion (and very little of
>> anything else). Again, you can decide for yourself the suitability of
>> the benchmark, but I don't think you can really claim it doesn't
>> effectively test what it means to.
>
> Where do you get that it's specifically a recursion benchmark though?
> I can't find it anywhere in the explanatory text.
I hope I am not missing something blatantly obvious, but on the page Python linked to (https://julialang.org/benchmarks/), the opening paragraph states:

<quote>
These micro-benchmarks, while not comprehensive, do test compiler performance on a range of common code patterns, such as function calls, string parsing, sorting, numerical loops, random number generation, recursion, and array operations.
</quote>

Recursion is listed there as one of the code patterns the micro-benchmarks specifically target. I did not go through every language's code exhaustively, but the authors of these benchmarks do appear to implement, as exactly as possible, the same algorithm for each of these code patterns in each language.

To me the real question is whether the Julia people were trying to be intellectually honest in their choices of coding patterns and algorithms, or whether they deliberately cherry-picked them to make Julia look better than the competition. Myself, I would rather be charitable than accusatory about the benchmarkers' intentions. For instance, the authors were aware of numpy and used it for some of the Python code -- for the array operations they were targeting, IIRC. If they had been deliberately dishonest, they could instead have come up with some really contrived Python code.

-- 
boB
-- 
https://mail.python.org/mailman/listinfo/python-list
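P.S. For anyone following along, the "same algorithm in every language" point is easiest to see with the recursion pattern: a naive, deliberately un-memoized recursive Fibonacci stresses function-call overhead rather than algorithmic cleverness. A rough Python sketch of what such a micro-benchmark looks like (my own illustration -- the function name, input size, and timing harness are mine, not the Julia suite's exact code):

```python
import time

def fib(n):
    # Naive doubly-recursive Fibonacci. Deliberately NOT memoized:
    # the point is to exercise recursive function calls, not to
    # compute Fibonacci numbers efficiently.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(20)
elapsed = time.perf_counter() - start

print(result)   # 6765
print(f"fib(20) took {elapsed:.6f} s")
```

The key property is that a tricked-out implementation (an iterative loop, or functools.lru_cache) would be "cheating" here, because then the benchmark would no longer be measuring recursion at all -- which is exactly the consistency constraint being discussed above.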