On Fri, 2006-01-13 at 19:53 +0900, Kalin KOZHUHAROV wrote:
> > Make this distributed tool for tar, zip, bzip2, and gzip and I'm in; I
> > don't think it would be useful with anything other than Gigabit Ethernet.
As far as I can tell, one 2GHz CPU can't even saturate a 100Mbit line with
bzip2: 100Mbit/s is roughly 12MB/s, while bzip2 on such a CPU typically
compresses only a few MB/s. So although the speedups won't be extreme, it
could just work.

> > We might want to have 2 separate variables in make.conf: one saying how
> > many threads can be run on the local machine, and one saying how many
> > threads/processes can be run across a cluster.
> > 
> > For example, my Dual Xeon EM64T file server can do make -j4 locally,
> > as in make install, make docs, etc., but for compiling I can use -j20,
> > though it's really not useful beyond -j8 anyway. The point is, it would
> > be useful to separate the load distribution on the local machine from
> > the cluster nodes.
> 
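If something like that went in, portage itself would have to pick the right
-jN per phase. A minimal sketch in Python (portage's language) of how that
choice could look - MAKEOPTS_LOCAL and MAKEOPTS_CLUSTER are hypothetical
names here, not variables portage actually supports:

# Hypothetical make.conf entries (not real portage variables):
#   MAKEOPTS_LOCAL="-j4"     # install, docs, other strictly local work
#   MAKEOPTS_CLUSTER="-j20"  # compile phases that distcc can spread out

def makeopts_for_phase(settings, phase):
    # Only the compile phase can be farmed out via distcc; everything
    # else runs on the local box and gets the smaller job count.
    if phase == "compile" and "distcc" in settings.get("FEATURES", ""):
        return settings.get("MAKEOPTS_CLUSTER", settings.get("MAKEOPTS", "-j1"))
    return settings.get("MAKEOPTS_LOCAL", settings.get("MAKEOPTS", "-j1"))

# e.g. with FEATURES="distcc": "compile" -> "-j20", "install" -> "-j4"
print(makeopts_for_phase({"FEATURES": "distcc",
                          "MAKEOPTS_LOCAL": "-j4",
                          "MAKEOPTS_CLUSTER": "-j20"}, "compile"))
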
> As the discussion started...
> 
> I would like to be able to limit the -jN when there is no distcc host
> available or when compiling C++ code; otherwise my poor laptop is dead with
> -j5 compiling pwlib when the network is down...
As far as I can tell, distcc isn't smart enough for dynamic load balancing.
One could hack portage to "test" each server in the distcc host list and
drop unreachable servers before each run - doesn't look elegant to me.
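
A rough sketch of what that hack would look like, in Python (assuming the
distcc daemons listen on their default TCP port 3632 and DISTCC_HOSTS holds
plain hostnames, no /N limits or ssh syntax):

import os
import socket

DISTCC_PORT = 3632  # distccd's default TCP port

def reachable(host, timeout=1.0):
    # True if something is accepting connections on host:3632.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, DISTCC_PORT))
        return True
    except socket.error:
        return False
    finally:
        s.close()

# Keep only the servers that actually answer, so one dead node
# doesn't stall every compile.
hosts = os.environ.get("DISTCC_HOSTS", "").split()
alive = [h for h in hosts if reachable(h)]
os.environ["DISTCC_HOSTS"] = " ".join(alive)
print("usable distcc hosts:", " ".join(alive) or "(none - local only)")

Something like this would have to run before every emerge, and it still
can't notice a host dropping out mid-build - which is why it feels like a
band-aid rather than real load balancing.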

> It is a particular example, but being able to limit portage in some way,
> such as total CPU or total MEM, might be interesting (just nice-ing is not
> enough)
Very difficult - usually gcc uses ~25M per process (small source files), but
I've seen >100M (mostly larger C++ files) and heard of ~600M per process for
MySQL.
Limiting that is beyond the scope of portage.
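
For anyone who wants to experiment outside portage anyway: a per-process cap
is easy enough with setrlimit. A minimal sketch - the 512M figure is an
arbitrary example, and run_capped is just an illustrative helper, nothing
portage provides:

import os
import resource

LIMIT = 512 * 1024 * 1024  # example cap: 512M of address space

def run_capped(argv):
    # Fork, cap RLIMIT_AS in the child, then exec the compiler; gcc
    # then dies with an out-of-memory error instead of dragging the
    # whole box into swap.
    pid = os.fork()
    if pid == 0:
        resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))
        os.execvp(argv[0], argv)
    return os.waitpid(pid, 0)[1]

# e.g. run_capped(["gcc", "-O2", "-c", "bigfile.c"])

But that only bounds each compiler process; capping the total across all of
portage's children at once is the hard part, and that really is out of scope.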

wkr,
Patrick
-- 
Stand still, and let the rest of the universe move
