Mike Meyer wrote:
In <[EMAIL PROTECTED]>, Gary Corcoran <[EMAIL PROTECTED]> typed:
The confusing thing is that I thought 'real' time should be >= 'user' + 'sys'.
But here 'user' is much greater than 'real' for both machines! The sense I
got from the other messages in this thread is that 'user' time is somewhat
meaningless (i.e. unreliable as a measure) in a multi-CPU and/or hyperthreading
environment. Can you clarify?
'real' is wall clock time. 'user' and 'sys' are cpu time. If your
process gets all of some cpu, then user + sys will be the same as real
time. It's not possible to get more than all of a cpu, so that's a
maximum *per cpu*. If you have multiple cpus, the formula you want is
'real' * ncpu >= 'user' + 'sys'.
Thanks to all of you for the responses. The thing that was not clear is
that, despite the printed labels, user (and sys) time are *not* measures
of elapsed time. IMO it would be much easier to understand if the message
said they were so-many cpu-seconds, rather than just seconds. Then it
would be fairly obvious that in a multiprocessor environment the real time
could be less than the sum of user + sys. I know, once you understand
the true meaning of user/sys time it's "obvious", but not to the first-time
multiprocessor observer... :-)
I made the comment about freebsd's measure of user time being skewed
by hyperthreading. That was a bit vague. The problem is that stalls
caused by hyperthreading (two logical CPUs contending for one core's
execution resources) are charged to the instruction that is waiting,
which inflates the user-time figures. But as Kris pointed out, other
things have that property too, so this is just one more complication
when it comes to figuring the performance of modern CPUs.
;-)
Thanks,
Gary
_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"