Hi Zeljko,
Zeljko Vrba wrote:
> Windows 32-bit results:
>    user  system elapsed
>   21.66    0.02   21.69
>
> Linux 64-bit results:
>    user  system elapsed
>  27.242   0.004  27.275
> Using the wall-clock metric is not "two different ways" of timing. He
> could just as well have measured the time with a stop-watch, and the
> results would be equally valid (given that his reaction time is ~0.5
> seconds :-)) Also, "user"
True, but I'd be surprised if anyone using a stop-watch could manage
three-digit accuracy. :-)
> time is time spent executing instructions in user mode, which does
> *not* account for waiting on disk I/O. "system" time is time spent in
> the OS kernel, and that number would also be higher if there were
> intensive I/O going on. If I/O pauses were to blame (fragmentation, as
> you suggest, does increase the latency of serving disk requests),
> there would be a lot of idle time, which would manifest as a large
> difference between user+system and elapsed (the latter being
> wall-clock time).
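The distinction above is easy to demonstrate in R itself. A minimal
sketch (not from the thread): Sys.sleep() waits without consuming CPU,
so user + system stays near zero while elapsed grows, which is the same
signature heavy I/O waiting would leave.

```r
# Sys.sleep() is pure waiting: no user-mode or kernel work to speak of.
t <- system.time(Sys.sleep(2))
t[["elapsed"]]                        # ~2 seconds of wall-clock time
t[["user.self"]] + t[["sys.self"]]    # near 0: the process was idle
```

In the OP's numbers, user+system is almost equal to elapsed on both
machines, so the gap really is CPU work, not waiting.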
Yes, you're correct. Fragmentation was just an example (albeit a poor
one). My point is that too many factors differ between the two systems
to draw a conclusion such as "the 32-bit Windows version is faster than
the 64-bit Linux version". I wouldn't be surprised if another machine
with different specs (say, cache size, etc.) gave the opposite results.
I suppose, to be fair, one would have to use the same compiler, etc.,
and make sure the only difference is the OS. One compiler might have
options turned on that make one operation faster than another, and that
operation might be heavily used in the calculations performed.
I guess my point is that it is a big jump from "21.69 seconds vs 27.275
seconds" to "system A is faster than system B"...
Of the two systems, one thing that usually gets me is that a "typical"
Windows machine loads a lot of things into memory. Because of
differences like these, timing results that differ by 6 seconds should
be taken with caution -- at least until various data sizes are
considered, or multiple runs with the same data are executed and
averaged.
> The numbers displayed above suggest that the Windows version performs
> a *lot* less work[*] than the Linux version (6 seconds of CPU time is
> a *lot* of work given today's CPU frequencies).
:-) Well, I think it's obvious that I don't agree with this last
statement. I'd be more inclined to believe it if the run were repeated
10 times on each system and averaged and we still saw a 6-second
difference, though... (I don't suggest the OP do this, of course; and
in fact, I wouldn't be surprised if the difference did persist.)
Ray
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.