It seems like the new compiler gets up to ~200+ MB resident when building even basic things in our tree.

Unfortunately this causes smaller machines (VMs) to take days because of swap thrashing.

Doesn't our make(1) have anything to mitigate this? I'd expect it to be a bit smarter: look at its children's swap/page/fault counts and take the machine's total RAM into account before forking off new processes. I know gmake has some heuristics (load-average limiting via -l), although last I checked they were very naive and didn't work well.

Any ideas? I mean, even a really simple algorithm would beat what we appear to have now (which is nothing).

Even if a good algorithm can't be devised, why not at least throttle the maximum number of c++/g++ processes spawned? Or maybe I'm missing a trick I can pull off with some make.conf knobs?
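Failing a make.conf knob, even a dumb userland hack would help: wrap the compiler in a counting "semaphore" built from lock directories so no more than MAX instances run at once. This is a hypothetical wrapper of my own, not an existing knob; you'd point CXX at it yourself:

```shell
# throttle: run "$@" while holding one of $MAX lock-directory slots.
# mkdir is atomic, so it doubles as a crude cross-process mutex.
throttle() {
    MAX=${MAX:-4}
    LOCKBASE=${TMPDIR:-/tmp}/cxx-throttle
    i=0
    # Scan slots 0..MAX-1; if all are taken, sleep and rescan.
    until mkdir "$LOCKBASE.$i" 2>/dev/null; do
        i=$(( (i + 1) % MAX ))
        [ "$i" -eq 0 ] && sleep 1
    done
    "$@"; st=$?                 # run the real compiler
    rmdir "$LOCKBASE.$i"        # release the slot
    return $st
}
```

Crude, but it bounds total compiler RSS at roughly MAX * 200 MB regardless of how many jobs make forks.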

Idk, summer of code idea? Anyone mentoring someone they want to have a look at this?

-Alfred
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[email protected]"
