On Sun, Jul 02, 2006 at 11:57:50AM +0200, Wouter Verhelst wrote:
> Additionally, it puzzles me how you think a maintainer will be able to
> accurately predict how much RAM a certain build is going to use. There
> are so many variables, that I think anything but 'this is the fastest
> way to build it on my machine' is going to be unfeasible.
Let's say program X consists of a number of C files; compiling each file
takes around 24MB, with the final link taking much more [1]. I guess this
can be called typical: in C you need to keep in memory just the current
file and the headers it includes.

Now, let's assume we use not the simple snippet I wrote [2], but a
"concurrency-helper" with the interface Goswin described, with some
unknown logic inside. The maintainer declares that package X takes 24MB
per compile and says it's good to use heavy concurrency:

    concurrency-helper --ram-estimate 24 --more-concurrent

Say the machine is a mid-range user box with 512MB of RAM. If the helper
decides to go with -j4, that's roughly 96MB of estimated use against
512MB available: a safety margin of a factor of _5_. I guess you can
trust people to be at least within _that_ error range. And even if they
fail, you can always force the build back to -j1.

If, on the other hand, the machine is a high-end one with 2GB of RAM that
runs 4 buildds at once, and the admin didn't specify his preferences,
using -j4 won't be any worse than on the user box mentioned above.

[1]. I was once forced to do a kernel compile on a critically
memory-starved box. Going from .c to .o went quite smoothly, but the
final link was an unholy swappeathon that took hours.

[2]. My idea was simply to go with -j1 if the machine has less than X
memory, or with a given constant otherwise.

Cheers,
-- 
1KB
// Microsoft corollary to Hanlon's razor:
//      Never attribute to stupidity what can be
//      adequately explained by malice.
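P.S. For the curious, the kind of logic I have in mind for such a helper
could be sketched in a few lines of shell. Everything here is
illustrative — the halving heuristic, the CPU cap, and the argument
handling are my own invention, not the interface of any existing tool:

```shell
#!/bin/sh
# Hypothetical concurrency-helper sketch (all names and heuristics
# are illustrative, not an existing Debian tool).

# Per-process RAM estimate in MB, as declared by the maintainer.
RAM_ESTIMATE_MB=${1:-24}

# Total physical RAM in MB, read from the kernel.
TOTAL_RAM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

# Use at most half the RAM for the build, then cap at the number
# of online CPUs so we don't oversubscribe the machine.
JOBS=$(( (TOTAL_RAM_MB / 2) / RAM_ESTIMATE_MB ))
NCPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
[ "$JOBS" -gt "$NCPUS" ] && JOBS=$NCPUS
[ "$JOBS" -lt 1 ] && JOBS=1    # fall back to -j1 when memory is tight

echo "-j$JOBS"
```

On the 512MB box from the example above, a 24MB estimate gives
(256 / 24) = 10 jobs before the CPU cap kicks in — plenty of margin.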