It's a single point of failure, but I have the Puppet Masters NFS-mount
their modules and manifests from another host. Rsync out of cron would
be fine if you can tolerate your Puppet Masters being briefly out of
sync. If you use SVN or some other version control system, you could
have a cron job (or a Puppet Exec resource) automatically check out the
latest modules.
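As a rough sketch of the Exec approach, something like the following
could pull the latest modules on each run (the working-copy path and
the use of svn are assumptions; adjust to your environment):

```puppet
# Hypothetical working copy at /etc/puppet/modules, already checked out
# from your repo. 'onlyif' guards against running in a non-working-copy.
exec { 'update-modules':
  command => '/usr/bin/svn update /etc/puppet/modules',
  onlyif  => '/usr/bin/test -d /etc/puppet/modules/.svn',
}
```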

Serving the entire modules dir out of Puppet as File resources might be
a bad idea due to the performance implications of such a large
recursive directory. There are a number of threads about it, but
basically there are two problems. First, Puppet will MD5-sum all the
files by default to check whether they need to be updated (this can be
changed to copy based on mtime). Second, Puppet builds in-memory Ruby
objects when doing file recursion, which translates to a lot of CPU and
RAM when the directory you are recursing over is very large.
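If you do go the File-resource route anyway, the MD5 overhead can be
reduced by switching the checksum type to mtime. A sketch (the mount
name and target path are hypothetical):

```puppet
# Hypothetical fileserver mount 'modules_mirror'. checksum => mtime
# avoids MD5-summing every file on each run, at the cost of missing
# content changes that leave the mtime untouched. It does not help
# with the in-memory object overhead of large recursive directories.
file { '/etc/puppet/modules':
  ensure   => directory,
  recurse  => true,
  source   => 'puppet:///modules_mirror',
  checksum => mtime,
}
```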

On Jan 25, 4:46 pm, CraftyTech <hmmed...@gmail.com> wrote:
> Hello All,
>
>      For those who run multiple Puppetmasters; what's your method of syncing
> the modules directory? NFS, rsync, etc?  I'm asking, because I'd like to use
> puppet itself to sync up the modules.  I know that normally the modules
> dir gets shared automatically, but what would be the implications to file
> serve the entire modules dir via /etc/puppet/fileserver.conf, to sync up
> with other masters? What's the best practices way of syncing modules dir
> across masters?
>
> Thanks,

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.
