> While I appreciate Phoronix as a booster site, their benchmarking
> practice often seems very dodgy; I'd take the results with a large grain
> of salt....

The main reason I posted the link in the first place was that it
reflected my own empirical evidence for the application I am working
on: the march/mtune flags (on various corei* CPUs) are actually
detrimental to performance - only by 10% at most, but still not the
performance boost I was hoping for.

For the last few weeks I have been trying to strip my application
down into a mini-benchmark I could file a PR from; however it is
proving tougher than expected, and I was hoping the Phoronix article
would offer a quicker route to finding performance regressions than
my own code, as their coverage is a lot wider.  Anyway, apparently
this is not the case, so back to my original work...

Would it not be in the best interests of both GCC and Phoronix to
rectify the problems in their benchmarks?  Could we not contact the
authors of the article and point out how they can make their
benchmarks better?  I would be happy to contact them myself, but I
think it would be far more effective (and carry more weight) coming
from a GCC maintainer.

Points they have apparently missed so far are:
 - clarify which compiler flags they are using
 - don't run make -j when looking at compile times
 - ensure they are using --enable-checking=release when benchmarking
a snapshot (a rough sketch of these two points follows the list)
 - in general, provide as much information as possible about their
setup and usage, to make the results easily repeatable
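To make the build-related points concrete, something along these
lines is what I have in mind (directory layout and the project being
timed are placeholders, not their actual setup):

    # configure the GCC snapshot with release checking, so the
    # compiler itself isn't slowed down by the extra internal
    # consistency checks that snapshot builds enable by default
    ../gcc-snapshot/configure --enable-checking=release
    make && make install

    # when timing compilation of a test project, build serially so
    # the numbers are comparable across runs and machines
    time make -j1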

Out of interest, has there been much communication in the past
between GCC and Phoronix to address any of these issues in their
previous benchmarks?

Tony
