On 2 January 2016 11:56:58 GMT+00:00, Andrew Savchenko <birc...@gentoo.org> 
wrote:
> On Sat, 2 Jan 2016 10:42:31 +0000 Neil Bothwick wrote:
> > On Fri, 1 Jan 2016 22:11:34 -0500, waltd...@waltdnes.org wrote:
> > 
> > >   I'm trying to run a distcc server in a 32-bit VM on a 64-bit
> > > host, for the benefit of my ancient 32-bit-only netbook.  Yeah,
> > > "it'll work" using the native 64-bit host OS.  But any stuff that
> > > links against 32-bit libraries is going to be sent back to the
> > > netbook to compile locally.  That defeats the whole purpose of
> > > distcc.  This is why I want the 32-bit VM to compile for the
> > > 32-bit Atom.  Here's the launch script for the 32-bit VM on the
> > > i3 machine...
> > 
> > I used to take a different approach. Instead of a VM I used a chroot
> > that was a clone of the netbook, except that make.conf in the chroot
> > included buildpkg in FEATURES and the netbook's make.conf had
> > --usepkg in EMERGE_DEFAULT_OPTS. PKGDIR was an NFS share accessible
> > to both.
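> > 
> > Something like this, in other words (untested sketch; the NFS mount
> > point is just an example):
> > 
> >   # make.conf inside the build chroot: produce binary packages
> >   FEATURES="${FEATURES} buildpkg"
> >   PKGDIR="/mnt/packages"
> > 
> >   # make.conf on the netbook: prefer those binary packages
> >   EMERGE_DEFAULT_OPTS="${EMERGE_DEFAULT_OPTS} --usepkg"
> >   PKGDIR="/mnt/packages"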
> 
> Similar solution here, but instead of cloning, I NFS-mount the root
> of the slow system, using cachefilesd (FS-Cache) to speed up I/O, and
> place all volatile data (/tmp, /var/tmp) either in local memory or on
> fast local storage. This way there is no need to make manual
> modifications twice or to synchronize them somehow (e.g. when
> package.use or package.license has to be modified during an update).
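> 
> Roughly like this (untested sketch; hostname, mount points and tmpfs
> sizes are just examples, and cachefilesd must already be running for
> the fsc option to do anything):
> 
>   # NFS-mount the slow box's root with FS-Cache enabled
>   mount -t nfs -o fsc netbook:/ /mnt/netbook
>   # keep volatile data in local RAM instead of on NFS
>   mount -t tmpfs -o size=1G tmpfs /mnt/netbook/tmp
>   mount -t tmpfs -o size=8G tmpfs /mnt/netbook/var/tmp
>   # the usual chroot plumbing, then build; linux32 makes uname
>   # report i686 inside the 32-bit root
>   mount -t proc  proc /mnt/netbook/proc
>   mount --rbind /sys  /mnt/netbook/sys
>   mount --rbind /dev  /mnt/netbook/dev
>   linux32 chroot /mnt/netbook emerge --update --deep --newuse @world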
> 
> I must warn that such an approach should not be used for packages
> that do build-time profiling, like sci-libs/atlas or any ebuild with
> USE="pgo" enabled; otherwise the profiling will be wrong, targeted at
> the helper system instead of the target box. In such cases distcc may
> be used.
> 
> For 32-bit distcc on a 64-bit host there is no need for a chroot or
> a VM (hey, they're hellishly slow!). Just add -m32 to your *FLAGS to
> force a 32-bit arch. In some rare cases an ebuild ignores
> {C,CXX,F,FC}FLAGS; while this is a bug and should be fixed, it can
> be worked around on the distcc server by forcing -m32 for each gcc
> call.
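> 
> For instance (untested sketch; paths are just examples): on the
> 32-bit client,
> 
>   # /etc/portage/make.conf
>   CFLAGS="${CFLAGS} -m32"
>   CXXFLAGS="${CXXFLAGS} -m32"
> 
> and on the 64-bit server, a wrapper such as
> /usr/local/distcc-wrap/gcc, placed before the real compiler in
> distccd's PATH (one wrapper per compiler name the clients invoke):
> 
>   #!/bin/sh
>   # force 32-bit output even if an ebuild dropped the client's flags
>   exec /usr/bin/gcc -m32 "$@"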
> 
> Best regards,
> Andrew Savchenko

I tried that too.  I stuck with containers because I could have a script that 
rsynced /etc/portage with the slow machine and then entered the container and 
ran emerge @world. That script ran on the build host in the early hours, after 
the host had run emerge --sync, so by the time I crawled out of bed, all the 
packages were ready to install, without requiring the slow machines to even be 
running.
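
In outline, the script looked something like this (untested
reconstruction; hostnames and paths are invented, and chroot stands in
for whatever container entry command is in use):

  #!/bin/sh
  # run from cron on the build host, after its own emerge --sync
  rsync -a --delete netbook:/etc/portage/ /srv/netbook/etc/portage/
  chroot /srv/netbook emerge --update --deep --newuse @world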
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
