On 06/07/2012 18:27, Luke Kanies wrote:
On Jul 6, 2012, at 9:24 AM, DEGREMONT Aurelien wrote:
On 06/07/2012 18:07, Luke Kanies wrote:
On Jul 6, 2012, at 1:40 AM, DEGREMONT Aurelien wrote:
On 05/07/2012 19:00, Daniel Pittman wrote:
That would ... probably not show a lot of short-term performance gain
for you. The static compiler,
We tested it (and proposed some fixes in pull request #769), and it looks
interesting, but the static compiler has some bad side effects that remove some
nice aspects of Puppet.
We like that Puppet, through the fileserver, can filter file access based on
certificate information. We use this to strictly prevent clients from accessing
files they should not.
With the static compiler, the puppet agent now accesses files through the
filebucket, which does not have such separation. Any client can access all files
in the filebucket; we cannot filter this.
It would be nice if the static compiler could insert the file metadata checksums
into the catalog, as it already does to reduce agent/master traffic, but still
keep a file source that the agent can use to retrieve the file from the
fileserver when needed.
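The proposal might look roughly like this in a compiled catalog resource: the normal fileserver source URL is kept, and the checksum the master computed at compile time is added alongside it. This is a hypothetical sketch of the idea, not the current static compiler output; the way the checksum is carried (here via `content`) is an assumption.

```puppet
# Hypothetical mixed-mode resource: both the fileserver source
# (so per-node access control still applies) and the checksum
# precomputed by the master (so the agent can skip the metadata
# request and only fetch when the checksum differs).
file { '/etc/app/app.conf':
  ensure  => file,
  # Normal fileserver URL, filtered by certificate as usual
  source  => 'puppet:///global/app/app.conf',
  # Checksum computed at compile time (illustrative value)
  content => '{md5}d41d8cd98f00b204e9800998ecf8427e',
}
```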
In order to retrieve a file from a filebucket, you must first know the checksum
of that file's content, and to know that, you must (generally) know the actual
content.
We can list the filebucket content (ticket #4871). The filebucket is much more
useful with that.
But you can also brute-force the filebucket and get all its content.
Ah. Well that makes it a bit less useful as a security mechanism, doesn't it?
That's why we rely on the fileserver only, not a remote filebucket (without the
static compiler).
Filebucket is much nicer for that. This is one of the reasons we chose to use
Puppet. Anyway, it does not seem very difficult to return, in the catalog, the
source list AND the computed checksum, instead of only one of them (depending on
whether the static compiler is enabled). The puppet agent can check the checksum
and retrieve the file from the fileserver as it usually does. It seems 99% of
the code is already there :) just a mix of both modes :)
I expect we'd accept that patch, but it would likely defeat the point of the
static compiler, unless you verified the file contents when the file got
downloaded (to confirm the URL contents hadn't changed).
No. Puppet makes a lot of RPC calls to ask for the metadata of each file it has
to manage from the catalog. Most of the time, 95% of your files are already up
to date; they have not changed since the latest puppet run.
With the static compiler, the file metadata is already known to the agent, so
you can avoid 95% of those requests. That's a lot! Even more if you have 5000
agents like us.
You'd also have to turn off the filebucket for it to be secure, because you'd
be locking out the file by URL but you'd have the file by content and that
wouldn't be locked out.
Sure. We are only interested in filebucket in local mode.
Seems like it'd be easier, and maybe better, to allow certain hosts to have
full list rights to the filebucket, but block that for most hosts.
It's better to show an example. Here is our fileserver.conf:
[global]
    path /etc/puppet/production/files/global
    allow *.mydomain

[domain]
    path /etc/puppet/production/files/%d/domain
    allow *.mydomain

[node]
    path /etc/puppet/production/files/%d/nodes/%h
    allow *.mydomain
Each of our File objects is declared as either 'global', 'domain' or 'node'. If
a file is in 'node' mode, that means we have a different one per server (like
private keys), and I'm sure node A will never be able to access node B's files,
but they have the same declaration in the catalog and modules. This cannot be
done with the filebucket.
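With those mounts, one shared declaration can serve a different file to every node, because %d and %h in the mount path expand to the requesting client's domain and hostname. A sketch of such a declaration (the file paths are illustrative assumptions):

```puppet
# One declaration shared by all nodes. The fileserver resolves
# puppet:///node/... against files/%d/nodes/%h for the requesting
# client, so each host can only ever receive its own copy of the
# file (e.g. its own private key).
file { '/etc/ssh/ssh_host_rsa_key':
  ensure => file,
  mode   => '0600',
  source => 'puppet:///node/ssh/ssh_host_rsa_key',
}
```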
Aurélien
--
You received this message because you are subscribed to the Google Groups "Puppet
Developers" group.
To post to this group, send email to puppet-dev@googlegroups.com.
To unsubscribe from this group, send email to
puppet-dev+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/puppet-dev?hl=en.