On Tue, Sep 25, 2012 at 6:51 AM, Justin Lebar <justin.le...@gmail.com> wrote:
>
> One of the intriguing things about this benchmark is that it's open
> source, and they're committed to changing it over time.
>
> FWIW Paul Irish agrees the sieve is a bad test, although he doesn't
> hate it to the extent you or I would think is deserved.
> https://github.com/robohornet/robohornet/issues/20#issuecomment-8837867
>  So maybe all hope is not lost.

I'm less optimistic than you are.  Microbenchmarks are a completely
flawed basis for a benchmark suite, so they'd have to be willing to
throw away everything they currently have and completely redo it from
scratch with real apps (which is *much* harder than writing
microbenchmarks).  But I could be wrong.
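
(As an aside, to illustrate why I find the sieve-style tests so
unrepresentative: a microbenchmark of that kind is usually just one tight
numeric loop, nothing like the DOM-heavy, layout-heavy, GC-heavy work a real
web app does.  The sketch below is my own hypothetical TypeScript version,
not RoboHornet's actual code, but it captures the shape of the thing.)

    // Hypothetical sketch (mine, not RoboHornet's actual code) of a
    // sieve-style microbenchmark: one tight numeric loop, no DOM, no layout,
    // and essentially no GC pressure, so it says little about real apps.
    function sieve(limit: number): number {
      const isComposite = new Uint8Array(limit + 1);
      let count = 0;
      for (let i = 2; i <= limit; i++) {
        if (isComposite[i]) continue;
        count++;
        for (let j = i * i; j <= limit; j += i) {
          isComposite[j] = 1;
        }
      }
      return count;
    }

    // The "score" is just how fast the same tiny loop runs over and over,
    // which an engine can special-case without getting any faster at
    // running actual web pages.
    const start = Date.now();
    let primes = 0;
    for (let run = 0; run < 100; run++) {
      primes = sieve(100_000);
    }
    console.log(`primes up to 100,000: ${primes}, ` +
                `elapsed: ${Date.now() - start} ms`);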


> Regardless, my name is off the list, and I never knew it would be used that 
> way.

Thanks, Daniel!


> In the meantime, I would prefer to have someone who has been involved in
> benchmark design decide our position with respect to this benchmark.

I've never been involved with benchmark *design*, but I've used plenty
of benchmarks, the topic is a hobby-horse of mine, I've read chapter 1
of Hennessy and Patterson(!), and I've been in Mozilla's JS team long
enough to know how much bad benchmarks can hurt the web.

I'd be happy to write an article explaining all this in some detail
(I've been marshalling thoughts for such an article for the past 24
hours).  The gist of the article would be "good benchmarks use real
apps;  microbenchmarks cannot result in a good benchmark;  bad
benchmarks hurt the web;  RoboHornet needs to be rebuilt from scratch
if it is to become a good benchmark".

As for whether or not that would serve as Mozilla's official response,
I don't mind either way.  I'd be happy just to post such an article on
my blog and make it clear that it's only my opinion, if that would make
people happier.

Nick
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
