On Sat, Nov 12, 2011 at 07:40:08PM +0700, Pandu Poluan wrote:
> On Nov 12, 2011 7:00 PM, "Mick" <michaelkintz...@gmail.com> wrote:
> >
> > I've been using boa just for this purpose for years:
> >
> > * www-servers/boa
> >     Available versions:
> >                ~       0.94.14_rc21 "~x86 ~sparc ~mips ~ppc ~amd64" [doc]
> >     Homepage:            http://www.boa.org/
> >     Description:         A very small and very fast http daemon.
> >
> > It can be easily locked down for internet facing roles.
> >
> > I've also used thttpd (you can throttle its bandwidth if that's important
> in
> > your network), but it's probably more than required for this purpose:
> >
> > * www-servers/thttpd
> >     Available versions:
> >                        2.25b-r7 "amd64 ~hppa ~mips ppc sparc x86
> ~x86-fbsd" [static]
> >                ~       2.25b-r8 "~amd64 ~hppa ~mips ~ppc ~sparc ~x86
> ~x86-fbsd"
> > [static]
> >     Homepage:            http://www.acme.com/software/thttpd/
> >     Description:         Small and fast multiplexing webserver.
> 
> Thanks for all the input!
> 
> During my drive home, something hit my brain: why not have the 'master'
> server share the distfiles dir via NFS?
> 
> So, the question now becomes: what's the drawback/benefit of NFS-sharing vs
> HTTP-sharing? The scenario is back-end LAN at the office, thus, a trusted
> network by definition.

NFS doesn't like losing its connection to the server. The only
problems I've ever had with NFS were because I forgot to unmount it
before a server restart, or when I took a computer (laptop) off to
another network...
Otherwise it works well, esp. when mounted ro on the clients. For
distfiles, though, it might make sense to let the clients download and
save tarballs that aren't there yet ;). I've never used it with many
computers emerging/downloading the same stuff, so I can't say whether
the locking etc. works correctly...
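(For what it's worth, Portage has a FEATURES flag for exactly this: with
"distlocks" enabled it takes a per-file lock before fetching, which should
keep two machines from clobbering the same tarball in a shared DISTDIR.
A sketch for the clients -- the paths below are just examples, and it
only helps if the NFS server actually supports file locking:)

```shell
# /etc/portage/make.conf on each client (sketch; adjust paths to taste)

# distlocks makes Portage create a .<tarball>.portage_lockfile before
# downloading, so concurrent emerges on different machines serialize
# their fetches into the shared distfiles directory.
FEATURES="distlocks"

# Point DISTDIR at the NFS-mounted shared directory:
DISTDIR="/usr/portage/distfiles"
```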

And with NFS the clients won't duplicate the files in their own
distfiles directories ;)
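A minimal setup for that might look like this (hostnames, subnet and
paths are made up -- and note the rw export plus "soft" mount, so a
client can save its own downloads but won't hang forever if the master
goes away, at the cost of fetches failing instead of blocking):

```shell
# On the 'master' server, /etc/exports -- export only the distfiles
# directory, read-write, so clients can add tarballs they fetch:
/usr/portage/distfiles  192.168.1.0/24(rw,sync,no_subtree_check)

# Re-read the exports table after editing:
#   exportfs -ra

# On each client, /etc/fstab -- "soft" returns an error instead of
# hanging if the server disappears; "hard" is safer for data but will
# block processes until the server comes back:
master:/usr/portage/distfiles  /usr/portage/distfiles  nfs  rw,soft,intr  0 0
```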

yoyo

