On Sunday, May 25, 2014 11:21:20 AM UTC-7, Jakov Sosic wrote:
>
> Can you somehow get list of active nodes from balancer? You could use 
> that list in a daily cron to do a 'puppet cert clean' and remove all 
> other certificates? 
>
I can get a list of active nodes in the constellation; the instances in 
the constellation have a constant embedded in the instance name that 
identifies which constellation they belong to. That's how my Nagios 
instance works, after all -- it queries AWS for the list of active nodes 
and reconfigures Nagios to watch them. Otherwise Nagios would be 
completely out of date after the first scaling event. I'm somewhat 
reluctant to embed AWS credentials into the puppetmaster, though.
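
For reference, the Nagios-side query boils down to something roughly 
like the following. This is a sketch rather than my actual script; it 
assumes the constellation constant is embedded in the Name tag and that 
the AWS CLI has read-only credentials available:

    #!/bin/bash
    # Sketch only: list running instances belonging to one constellation.
    # Assumes the constellation constant appears in the Name tag.
    CONSTELLATION="$1"    # e.g. "web-prod" (made-up value)

    aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=*${CONSTELLATION}*" \
                  "Name=instance-state-name,Values=running" \
        --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]' \
        --output text
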
The other thing that someone mentioned in another forum was to look at 
the nodes reporting in the reports directory: if a node hasn't reported 
for over an hour (mine check in at least every twenty minutes, so they 
should have checked in by then), do a 'puppet cert clean' on that node 
and then a 'puppet certificate_revocation_list destroy' just in case it 
comes back to life and checks in again.
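
If I go that route, the cron job would look roughly like the sketch 
below -- untested, and it assumes a stock master storing YAML reports 
under /var/lib/puppet/reports with one subdirectory per certname:

    #!/bin/bash
    # Untested sketch of the reports-directory cleanup.
    REPORT_DIR=/var/lib/puppet/reports
    MAX_AGE_MIN=60

    for dir in "$REPORT_DIR"/*/; do
        node=$(basename "$dir")
        # Never touch the master's own certificate.
        [ "$node" = "$(hostname -f)" ] && continue
        # Any report newer than MAX_AGE_MIN minutes?
        recent=$(find "$dir" -name '*.yaml' -mmin -"$MAX_AGE_MIN" | head -n 1)
        if [ -z "$recent" ]; then
            puppet cert clean "$node"
            # ...plus the 'puppet certificate_revocation_list destroy'
            # step, once I've worked out the exact invocation.
            rm -rf "$dir"    # so it isn't cleaned again on the next run
        fi
    done
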
Another suggestion has been to change the hostname of the instances as 
part of their cloud-init stage, so that the hostname includes the 
instance ID as well as the IP address. That would actually work fairly 
well, I suspect, since the chance of both the instance ID *and* the IP 
address being reused for the same instance is pretty much non-existent, 
and it also gives me more information on my Splunk server about which 
instance a given Splunk event applies to. But it will require a lot of 
time to debug on my part, because the only way to debug it is to run 
CloudFormation time... after... time... after... time... creating and 
destroying constellations until I get it right.
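
If I do go the hostname route, the cloud-init piece boils down to 
something like this -- again untested, and the hostname format is just 
a first guess on my part:

    #!/bin/bash
    # Untested sketch, run from cloud-init (user-data script or runcmd).
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

    # e.g. "i-0abc1234-ip-10-0-1-23"
    NEW_HOSTNAME="${INSTANCE_ID}-ip-${LOCAL_IP//./-}"

    hostname "$NEW_HOSTNAME"
    echo "$NEW_HOSTNAME" > /etc/hostname
    # Make sure the name resolves locally before puppet runs.
    grep -q "$NEW_HOSTNAME" /etc/hosts || \
        echo "127.0.1.1 $NEW_HOSTNAME" >> /etc/hosts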

I don't want to go to a masterless configuration because I tweak 
constellations before rolling them into production, and it's easier to do 
that via a configuration master. For example, new constellations start out 
pointed at a testing database, and one of the things that happens when 
they're moved into production is that they get re-pointed at the production 
database. I might try migrating to a different configuration tool such as 
Chef in the future, but I have limited time to devote to this project. So 
right now the priority is just forcing Puppet to work the way it needs to 
work in the cloud, rather than the way the Puppet authors believe it should 
work, which is completely incompatible with cloud ops.
