On 04/11/17 04:45, Wol's lists wrote:
> On 03/11/17 18:02, Rich Freeman wrote:
>> My understanding is that the preprocessing is all done on the target
>> machine, and the remote workers take all their marching orders from
>> there.  The contents of CFLAGS, libraries, and so on don't matter on
>> the workers.  You can build a program that requires qt using distcc
>> workers that don't have qt installed, because all the includes are
>> already resolved by the time the source code reaches them and linking
>> is done later.
> 
> Yup. If you're cross-compiling (like I was - a mix of 32 and 64 bit),
> provided all machines are set up correctly it works fine. BUT. That's if
> the "master" is compiling for itself and just sharing out the work.
> 
> But if all the machines are similar architecture (like mine are now all
> x86_64) you can do what I do - the fastest machine builds everything and
> makes binary install files, and the slower machines just install the
> binaries. (Or actually, the slower machine builds everything, because
> the faster machine has a habit of falling over during builds :-(
> 
> Cheers,
> Wol
> 


I use ccache and distcc on a Raspberry Pi (512M RAM), pointed at a
64-bit Intel VM set up to cross-compile.

Set MAKEOPTS to -j1, and in the distcc hosts list allow either one or
zero local jobs.
The Pi is 32-bit, so I use 2x 2G swapfiles for 4G of swap - I have never
seen the second swapfile in use, but until I added it the last GCC build
failed with an out-of-memory error.
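For anyone wanting to replicate this, a minimal sketch of the setup
described above, assuming a Gentoo Pi with Portage - the exact values
and the "intel-vm" hostname are illustrative, not BillK's actual config:

```shell
# /etc/portage/make.conf on the Pi (illustrative values):
#   MAKEOPTS="-j1"                 # one build job on the Pi itself
#   FEATURES="distcc ccache"       # hand compiles off to distcc, cache results

# /etc/distcc/hosts - zero or one local job, the rest go to the VM:
#   intel-vm/8 localhost/1         # "intel-vm" is a placeholder hostname

# Creating one of the two 2G swapfiles (run as root; repeat for the second):
dd if=/dev/zero of=/swapfile1 bs=1M count=2048
chmod 600 /swapfile1
mkswap /swapfile1
swapon /swapfile1
```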

Most problematic package is GCC itself.

Sllooowwww :)

Suggestion - use an LCD (Lowest Common Denominator) host to build
packages, then emerge them on the other hosts as binary packages.
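One common way to do that with Portage - a sketch, not necessarily the
exact workflow anyone in this thread uses; the NFS/rsync transport is an
assumption:

```shell
# On the LCD build host: build everything and keep the binary packages
emerge --buildpkg @world          # binpkgs land under $PKGDIR

# Share $PKGDIR with the other hosts (NFS, rsync, or a binhost web server),
# then on each of the other hosts install from binaries where available:
emerge --usepkg @world
```

Because the build host targets the lowest common denominator of CPU
features, the resulting binaries run on every machine in the set.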

BillK
