On Wed, 10 May 2006, Brendan Strejcek wrote:
...
> I do something similar, though I do not use ssh-keyscan. I keep copies
> of all my ssh key pairs on a central host. If a new machine with a
> previously unused host name is built, its key pair needs to be copied
> to the central location. If a new machine is built using a host name
> that we already have a key for, the old key is copied to the new
> machine, so that end users do not notice any changes. The global
> ssh_known_hosts is then aggregated into a single file and distributed
> by cfengine.
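For what it's worth, the aggregation step can be a trivial script run on the central host before cfengine distributes the result. A minimal sketch, assuming one `<host>.pub` file per machine in a collection directory (the layout and paths are hypothetical, not from either setup):

```shell
#!/bin/sh
# aggregate_known_hosts KEYDIR OUTFILE
# Builds a global ssh_known_hosts from per-host public key files.
# Each "<host>.pub" in KEYDIR becomes one "<host> <keytype> <key>" line.
aggregate_known_hosts() {
    keydir=$1
    out=$2
    : > "$out.tmp"
    for f in "$keydir"/*.pub; do
        [ -f "$f" ] || continue
        host=$(basename "$f" .pub)
        # ssh_known_hosts format: host name, then the key as published
        printf '%s %s\n' "$host" "$(cat "$f")" >> "$out.tmp"
    done
    # Atomic rename, so cfengine never picks up a half-written file.
    mv "$out.tmp" "$out"
}
```

Writing to a temp file and renaming matters here: if cfagent copies the file mid-rebuild you would otherwise push a truncated known_hosts to every client.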
I also do something similar. In my case any (re)installed host gets new
keys; these are copied to a common location, a new ssh_known_hosts file
is generated, and it is then distributed to the clients on the next
cfagent run.

Now, if only the copy keyword could copy *to* the server instead of only
from it, I would not have had to use a common NFS-(auto)mounted
directory for the copy. That lack is annoying, especially as my NFS
server's mountd tends to drop requests due to system/net load from time
to time. A "solution" that has been suggested is to run cfservd on all
clients, but I do not consider that practical (security, performance,
and more cfengine public keys to keep in sync are issues that spring to
mind). I've not had time to look at these parts of the code yet, but
hopefully the design of 'copy' is not busted enough to disallow
bidirectional copies? Anyone?

Anyway, the above method works, kind of, even though I may have to wait
an extra iteration due to mount problems.

A possible alternative would be for the cfagent script to contain some
other method of distribution. A web server on the central server with
the cfagents doing 'HTTP PUT' would likely work, for instance; or scp
with a restricted shell, perhaps. There are others, none "good".

/H

_______________________________________________
Help-cfengine mailing list
Help-cfengine@cfengine.org
http://cfengine.org/mailman/listinfo/help-cfengine