Quoting Ivan Voras <[EMAIL PROTECTED]> (from Tue, 25 Nov 2008
21:46:35 +0100):
> 2008/11/25 Adrian Chadd <[EMAIL PROTECTED]>:
>> 2008/11/25 Ivan Voras <[EMAIL PROTECTED]>:
>>> I believe most of the difference in the synthetic numbers (mp3
>>> encoding etc.) comes from the different versions of gcc the
>>> different OSes use...
>> You're very likely right. Ubuntu 8.10 has gcc 4.3.x - it could make for
>> the small difference in gzip and 7z compression performance.
> Well, that should be a reasonably easy thing to test and feed back to
> the author.
> OTOH if the goal is to measure "operating system" performance, this
If you want to test OS performance and use Java programs to do so,
you would use the same Java version on each system, wouldn't you? They
didn't.
If you want to run some high-performance Java software and you want to
know on which OS it performs best, you would test the same Java
version on the OSes in question (or at least you should, so as not to
compare apples and oranges).
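The same-toolchain point generalizes: before comparing results across
systems, record the environment each run used, so you can tell at a glance
whether two numbers are comparable at all. A minimal sketch in Python; the
helper name and the choice of tools to probe are mine, not from the thread:

```python
import platform
import shutil
import subprocess

def environment_fingerprint():
    """Collect the facts that must match before two benchmark results
    from different machines are directly comparable."""
    fp = {
        "os": platform.system(),
        "release": platform.release(),
        "machine": platform.machine(),
    }
    # Record toolchain versions, but only for tools actually installed.
    # (java prints its version banner to stderr, gcc to stdout.)
    for tool, args in (("gcc", ["--version"]), ("java", ["-version"])):
        if shutil.which(tool):
            out = subprocess.run([tool, *args], capture_output=True, text=True)
            lines = (out.stdout or out.stderr).splitlines()
            if lines:
                fp[tool] = lines[0]
    return fp

if __name__ == "__main__":
    for key, value in environment_fingerprint().items():
        print(f"{key}: {value}")
```

If the gcc or java lines differ between two fingerprints, the comparison is
apples to oranges and should say so.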
If you want to run number-crunching software, you are interested in
high computing throughput for your application, so you use the compiler
which performs best for the code in question (which would probably mean
the Intel or Portland compiler on Linux, maybe the Sun compiler on
Solaris, and probably gcc on FreeBSD). You also want to optimize the
code for your CPU (it makes a difference, if you do floating-point
calculations, whether you are allowed to use the SSEx or similar
instructions), not use whatever generic settings the OS comes with.
The "benchmark" presented there is flawed in a lot of ways. No
descrition what they really want to benchmark, no description what
each subtest benchmarks (e.g. lame is performing on one CPU and
occasionally performs IO, what does this benchmark mean? That your
multi-CPU system is mostly idle and can be used to browse the net
without that you notice any impact). Only absolute numbers and no
relative performance comparision (percentage of difference).
Inconsistent starting point (not the same compiler, not the same java
version, ...) in case you want to promote an OS for specialized tasks
(there are comments which tell FreeBSD would be good for raytracing,
as the corresponding subtest was the fastest on FreeBSD), and so on.
Did I overlook some part where they tell how they test? Do they
calculate the average of several runs?
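For what that missing methodology could look like: average several runs,
report the spread, and compare relative rather than raw absolute numbers.
A minimal sketch; the timings below are invented for illustration and are
not taken from the article:

```python
from statistics import mean, stdev

def summarize(runs):
    """Average several benchmark runs instead of trusting a single number."""
    avg = mean(runs)
    spread = stdev(runs) if len(runs) > 1 else 0.0
    return avg, spread

def relative_difference(baseline, other):
    """Percentage by which `other` differs from `baseline`.

    For run times, a negative result means `other` is faster.
    """
    return 100.0 * (other - baseline) / baseline

# Hypothetical gzip-subtest timings in seconds (not real data):
freebsd_runs = [41.8, 42.1, 42.0]
ubuntu_runs = [39.9, 40.2, 40.1]

fb_avg, fb_sd = summarize(freebsd_runs)
ub_avg, ub_sd = summarize(ubuntu_runs)
print(f"FreeBSD: {fb_avg:.2f}s +/- {fb_sd:.2f}s")
print(f"Ubuntu:  {ub_avg:.2f}s +/- {ub_sd:.2f}s")
print(f"Ubuntu vs FreeBSD: {relative_difference(fb_avg, ub_avg):+.1f}%")
```

A percentage with an error bar tells the reader far more than the bare
"OS A: 42s, OS B: 40s" tables the article presents.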
> must also include the compiler, libraries and all. (For example, what
> does Solaris default to nowadays? I think it ships with gcc, but not as
> the default.) The hold on gcc 4.3 in FreeBSD is, after all, political
> (licensing).
Users most of the time don't care what the reasons are; they use what
is there, and complain or switch if it works better somewhere else.
People who care about compute-intensive stuff will install their
preferred compiler anyway.
Bye,
Alexander.
--
So so is good, very good, very excellent good:
and yet it is not; it is but so so.
-- William Shakespeare, "As You Like It"
http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137
_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "[EMAIL PROTECTED]"