On 22.12.11 12:50, Igor Mozolevsky wrote:
On 22 December 2011 10:12, Daniel Kalchev<dan...@digsys.bg>  wrote:

As for how fast to get from point A to point B. If you observe speed limits,
that will depend only on the pilot, no? :)
Both cars are sufficiently faster than the imposed speed limits.
You are ignoring acceleration, handling, and other factors... Besides,
you're missing the point: *given the same conditions* a benchmark allows
one to show how A performs compared to B, which is why I said it is
important to keep everything else constant! At the end of the day,
what users, sysadmins,&c want to know is given hardware configuration
H and requirement R will software X outperform software Y or Z. The
components and the bells and whistles of X, Y or Z are, quite often,
irrelevant (unless one has some silly ideological reason, for
example).

None of the benchmarks measure 'comfort'.
None of the benchmarks measure how the system 'feels' while performing a numerical computation.
The benchmarks measure how soon the computations are finished.

You typically achieve that by tuning the OS to, say, ignore interactivity and devote all resources to compute-intensive tasks. On the same hardware the CPU can only do so much; any differences will come from whatever else the OS asks the CPU to do besides the task you benchmark.
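As a rough illustration of that trade-off (the loop and the use of nice(1) are my own example; real tuning happens in scheduler policy, not userland priorities), the same CPU-bound task can be timed at normal and at the lowest scheduling priority. On an idle machine the times are close; with competing load, the deprioritized run loses:

```shell
#!/bin/sh
# Sketch only: contrast a CPU-bound task at normal vs. lowest priority.
LOOP='i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'

time sh -c "$LOOP"              # default priority
time nice -n 19 sh -c "$LOOP"   # lowest priority
```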

You need to define your criteria. Otherwise the benchmark cannot be used to make comparisons.

On very specific hardware, such as systems with many CPUs and lots of
memory, you may see one better than the other -- this in most cases will be
relevant to tuning, but also to overall system architecture.
Are you saying that careful tuning will give you _orders of magnitude_
performance increase? Got numbers to back that up? ;-)

Ah.. now we are talking :)

Two things:

Someone once said that you can have a very fast computation if only you don't need to make sure the results are correct. So yes, you can! :)

It is all too easy to make things worse than the theoretical baseline. So often we measure not how 'good' an OS is, but how 'badly' it actually manages the hardware.

Well... some hardware also has limitations, and you need to define the benchmark specifically so as not to touch them. Or you may have a specific OS trying to avoid touching them -- and thus providing you with 'performance'.

You may make a very "scientific", well-documented and repeatable benchmark,
such as this one:

time dd if=/dev/zero of=/dev/null

.. then optimize your particular OS to run it at the highest possible
rate... and so what? Do you know what this benchmark measures? :)
Yes, do you? I hope you are not being deliberately obtuse here...

I know that different people will see different things being measured here. Let's see if someone else jumps in (which is the purpose of this example).

Besides, I would criticise your test in this example: have you tried
running that with, say, bs=1g count=1000?

That would measure different things. :)
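To make the contrast concrete (the sizes here are my own illustration, and bs=1m is the BSD dd spelling; GNU dd wants bs=1M): with the default 512-byte blocks the run is dominated by read/write syscall overhead, while with large blocks it is dominated by in-kernel data movement. Adding count= makes both runs terminate.

```shell
#!/bin/sh
# Two dd runs that stress different things on the same hardware.
time dd if=/dev/zero of=/dev/null count=1000000      # tiny blocks: syscall rate
time dd if=/dev/zero of=/dev/null bs=1m count=500    # big blocks: copy bandwidth
```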

Is there a difference how fast FreeBSD completes that vs how fast a Linux box 
does the same?

Why not? I would expect there to be a difference in how fast different versions of FreeBSD complete it as well.

It could also be interesting to measure (although it's somewhat subjective) how interactive both systems stay during this task.
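One crude way to put a number on that (the 2-second window and the /bin/true probe are arbitrary choices of mine, not a rigorous latency benchmark): count how many trivial commands complete in a fixed interval, first idle and then with dd saturating a CPU.

```shell
#!/bin/sh
# Sketch: a crude interactivity probe under background load.
probe() {
    start=$(date +%s); n=0
    while [ $(( $(date +%s) - start )) -lt 2 ]; do
        /bin/true; n=$((n+1))
    done
    echo "$n"
}

echo "idle:   $(probe) probe runs"
dd if=/dev/zero of=/dev/null &
BGPID=$!
echo "loaded: $(probe) probe runs"
kill "$BGPID"
```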

Daniel
_______________________________________________
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
