On Mon, January 29, 2007 13:11, Neil Bothwick wrote:
> On Mon, 29 Jan 2007 11:50:34 +0200, Alan McKinnon wrote:
>
>> I already use a fairly complicated solution with emerge -pvf and wget in
>> a cron on one of the fileservers, but it's getting cumbersome. And I'd
>> rather not maintain an entire gentoo install on a server simply to act
>> as a proxy. Would I be right in saying that I'd have to keep
>> the "proxy" machine up to date to avoid the inevitable blockers that
>> will happen in short order if I don't?
>>
>> I've been looking into kashani's suggestion of http-replicator, this
>> might be a good interim solution till I can come up with something
>> better suited to our needs.
>
> I was suggesting the emerge -uDNf world in combination with
> http-replicator. The first request forces http-replicator to download the
> files; all other requests for those files are then handled locally. So if
> you run this on a suitable cross-section of machines overnight,
> http-replicator's cache will be primed by the time you stumble
> bleary-eyed into the office.
>
> If all your machines run a similar mix of software, say KDE desktops, you
> only need to run the cron task on one of them.
>
> I use a slightly different approach here, with an NFS mounted $DISTDIR
> for all machines and one of them doing emerge -f world each morning. It's
> simpler to set up than http-replicator but is less scalable, since you'll
> get problems if one machine tries to download a file while another is
> partway through downloading it.
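A minimal sketch of that shared-$DISTDIR setup; "fileserver" and the export path are placeholders for your own NFS server:

```shell
# /etc/fstab on each client -- the share must be mounted read-write so
# portage can fetch into it. "fileserver:/var/distfiles" is a placeholder.
fileserver:/var/distfiles  /usr/portage/distfiles  nfs  rw,hard,intr  0 0

# /etc/make.conf -- make sure DISTDIR points at the mounted share
# (this is portage's default location anyway).
DISTDIR="/usr/portage/distfiles"
```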
Portage uses locking for distfiles, so if your share is writable you
shouldn't need http-replicator at all. The locks are kept in
$DISTDIR/.locks/

I'm sharing my distfiles over NFS myself and I haven't had any problems.
Portage also takes care of stale lockfiles: the master client truncates the
lockfile while the other clients fill it with data; if a threshold is met,
the lock is discarded.
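For the curious, the general technique that makes this safe over NFS is hardlink locking: link(2) is atomic even on NFS, where flock() historically wasn't reliable. A rough illustrative sketch of the idea (not portage's actual implementation; the lock directory and filename are stand-ins):

```shell
#!/bin/sh
# Hardlink-lock sketch: create a uniquely named temp file, then try to
# hardlink it to the shared lock name. The link either succeeds
# atomically (we hold the lock) or fails (someone else does).
lockdir=$(mktemp -d)            # stand-in for $DISTDIR/.locks
tmp="$lockdir/$$.$(hostname)"   # unique per process and host
echo $$ > "$tmp"
if ln "$tmp" "$lockdir/somefile.tar.bz2.lock" 2>/dev/null; then
    echo "lock acquired"
else
    echo "lock held by someone else"
fi
rm -f "$tmp"                    # the hardlinked lock name keeps the lock alive
```

Releasing the lock is just removing the lock name; a second client running the same sequence against the same lock name will see its ln fail and back off.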
-- 
gentoo-user@gentoo.org mailing list