The idea behind the speedup from n > number of CPUs is that you use otherwise-idle CPU cycles during disk activity. Obviously this only works on systems with 'low CPU usage storage', such as SCSI, FireWire, or, more prominently, NFS for the sources. Of course, this also assumes you don't pay a performance penalty through increased swap usage...
So in a 'normal' IDE environment it probably won't make a difference.
Regarding your test: I'd say don't trust your results unless you rebooted the machine between the runs, otherwise the cache will skew your numbers.
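As a minimal sketch of the "n = number of CPUs" rule: on FreeBSD the CPU count is available via sysctl(8) as hw.ncpu. The nproc fallback below is only there so the snippet also runs on non-FreeBSD systems; it is my addition, not part of the original advice.

```shell
# Minimal sketch: use the machine's CPU count as the -j value.
# "hw.ncpu" is the FreeBSD sysctl for the number of CPUs; the
# nproc fallback only exists so the snippet runs elsewhere too.
NCPU=$(sysctl -n hw.ncpu 2>/dev/null || nproc)
echo "make -j${NCPU} buildworld"
```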
Kind regards,
Alex.


Rob wrote:

Laurence Sanford wrote:

Rob wrote:

----------------------------------------

With these simple tests, I come to the conclusion that
"make -j$n buildworld" is best with n = number of CPUs.
Does that make sense?

Rob.


This is what I've been telling people and using myself for years. However, I've been shot down on this several times, so I just leave everyone alone and let them do their own thing. You and I will be getting it done a little faster though.


Not really faster, but higher values do not make a difference,
well, as long as the extra processes do not force the use of
swap. Intensified swapping because of a high -j value slows
down the build considerably.

I don't understand why this is a matter for debate. My test gave
obvious results on several of my PCs, and was very quickly done:
I wrote a script with a loop that built the world again and
again, doing a 'touch' to a file immediately before and after
each build. Got all my data within a day or so.
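The loop described above might be sketched like this (a hypothetical reconstruction, not the original script; the stamp-file names and the set of -j values are my own):

```shell
#!/bin/sh
# Sketch of the timing loop described above: touch a stamp file
# immediately before and after each build, so the stamps' mtimes
# record the wall-clock duration for each -j value tried.
for n in 1 2 4; do
    touch "stamp.start.$n"
    make -j"$n" buildworld > "build.$n.log" 2>&1
    touch "stamp.end.$n"
done
```

Comparing the mtimes of each start/end pair then gives the per-value build time.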

Rob.


_______________________________________________
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


