> On 04 Sep 2015, at 22:29, Jason 'XenoPhage' Frisvold <xenoph...@godshell.com> wrote:
>
> Greetings all,
>
> I've been running a "simple" Icinga2 setup for a few months now. It took a
> bit to get things running, but I believe I understand how it's working right
> now. Currently I have a single Master node that runs icinga2 and icingaweb2.
> I have a number of clients also running icinga2 so that I can do local
> checks on them. I believe this is called command bridge execution?

Not necessarily. There are basically three modes which can be used to monitor clients: the command execution bridge, local configuration, or config sync throughout the cluster.
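For illustration, a check using the command execution bridge could look roughly like the sketch below. The host and endpoint names are borrowed from the configuration quoted further down; the vars.client_endpoint custom attribute is only an example, not something taken from your setup.

// as it might appear in zones.d/dc1.example.com/hosts.conf on the master
object Host "client1.dc1.example.com" {
  check_command = "hostalive"
  address = "10.0.1.10"
  // endpoint that should execute the checks for this host
  vars.client_endpoint = "client1.dc1.example.com"
}

// run the disk check on the client itself instead of on the local node
apply Service "disk" {
  check_command = "disk"
  command_endpoint = host.vars.client_endpoint
  assign where host.vars.client_endpoint
}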
> I'm looking to break this up a bit into a hierarchical configuration. I'd
> like to set up a satellite node that would exist in a second data center to
> handle all of the checks within that data center. But I still want clients
> to handle their own local checks, just passing them on to the satellite or
> master that's local to the data center. From there the various satellites
> push that up to the master. Setting up the satellite seems straightforward
> enough as it's identical to setting up one of the clients I already have.
>
> I've been reading the documentation, trolling the mailing lists, hanging out
> on IRC, and I think I have it working now, but I want to double check
> things. So I have a bunch of questions which might help me understand ...
>
> - Does the zones.conf file have to be synchronized between all masters and
> satellites? It seems like the answer is yes. In addition, does it need to
> be on all clients that are set up for command execution?

zones.conf (or whatever you like to name and include it) contains the required connection information for all your endpoints. It also describes the trust model between zones: for example, a satellite must know that it is in a child zone of the master instance in order to receive configuration or other events. So yes, you'll need to provide that information at each level. Clients do not necessarily need to know about the master two levels up, nor about the other clients connected to the satellite zone.

> - Does the ca.crt and ca.key need to be synchronized between all masters and
> satellites?

*Never* expose your CA's private key to other nodes. Keep it in a safe location and sign your certificates with the CA there. That is not necessarily the icinga2 master (only if you are using CSR auto-signing). The public ca.crt certificate must be put on each node, along with that node's signed public and private key files. More on that in the docs.

> - I see there's a repository command in the cli, should I be using this to
> sync between masters and satellites? Will it handle the ca as well?

No. At this stage the clients won't trust each other yet, so you'll need to distribute the certificates manually, e.g. with scp or your favourite config management tool.

> - Do clients that are performing command execution need to be reconfigured
> with the satellite listed as the "master" for that client?

They'll need a parent zone which the satellite is a member of.

> - What features need to be enabled on the satellite? I'm pretty sure API is
> needed, but what about command?

'command' enables the external command pipe, so if you're planning to use Icinga Web 2 on this satellite you'd also need to enable the command and ido-mysql features (and set up a database).
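To make the trust model a bit more concrete, here is a rough sketch of the client side, reusing the zone and endpoint names from this thread. The certificate paths and the exact ApiListener attributes depend on your Icinga 2 version, so treat this as an outline rather than a drop-in config.

// /etc/icinga2/zones.conf on client1.dc1.example.com
object Endpoint "satellite1.dc1.example.com" {
  host = "10.0.1.4"
}

object Zone "dc1.example.com" {
  endpoints = [ "satellite1.dc1.example.com" ]
}

object Endpoint "client1.dc1.example.com" {
}

object Zone "client1.dc1.example.com" {
  endpoints = [ "client1.dc1.example.com" ]
  // the satellite zone is the trusted parent; the client does not need to
  // know about the master zone two levels up
  parent = "dc1.example.com"
}

// /etc/icinga2/features-enabled/api.conf on the client
object ApiListener "api" {
  cert_path = SysconfDir + "/icinga2/pki/client1.dc1.example.com.crt"
  key_path = SysconfDir + "/icinga2/pki/client1.dc1.example.com.key"
  ca_path = SysconfDir + "/icinga2/pki/ca.crt"
  accept_commands = true  // allow the parent (satellite) zone to execute checks here
  accept_config = false   // not required for the command execution bridge
}

The important part is that the client only declares its own zone plus the satellite zone as its parent, and that the api feature accepts commands from that parent.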
> My zones.conf file looks like this at the moment:
>
> object Endpoint "master0.dc0.example.com" {
> }
>
> object Zone "dc0.example.com" {
>   endpoints = [ "master0.dc0.example.com" ]
> }
>
> object Zone "global-templates" {
>   global = true
> }
>
> object Endpoint "satellite1.dc1.example.com" {
>   host = "10.0.1.4"
> }
>
> object Zone "dc1.example.com" {
>   endpoints = [ "satellite1.dc1.example.com" ]
>   parent = "dc0.example.com"
> }
>
> object Endpoint "client0.dc0.example.com" {
>   host = "10.0.0.10"
> }
>
> object Zone "client0.dc0.example.com" {
>   endpoints = [ "client0.dc0.example.com" ]
>   parent = "dc0.example.com"
> }
>
> object Endpoint "client1.dc1.example.com" {
>   host = "10.0.1.10"
> }
>
> object Zone "client1.dc1.example.com" {
>   endpoints = [ "client1.dc1.example.com" ]
>   parent = "dc1.example.com"
> }
>
> And the zones.d structure looks like this:
>
> /etc/icinga2/zones.d
>   global-templates/
>     command-custom.conf
>     commands.conf
>     groups.conf
>     notifications.conf
>     satellite.conf
>     services.conf
>     templates.conf
>     timeperiods.conf
>     users.conf
>   dc0.example.com/
>     hosts.conf
>   dc1.example.com/
>     hosts.conf
>
> It seems that now that I have this config in place, everything is working.
> Is this the correct way to handle this?

Looks good to me. I guess hosts.conf contains the command_endpoint information used for the command execution bridge for all applied services?

One thing you should keep in mind: add cluster health checks so you know about the state of your setup and get alerted, and add dependencies for notification suppression (see the sketch after the signature).

Kind regards,
Michael

--
Michael Friedrich, DI (FH)
Senior Developer

NETWAYS GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg
Tel: +49 911 92885-0 | Fax: +49 911 92885-77
GF: Julian Hein, Bernd Erk | AG Nuernberg HRB18461
http://www.netways.de | michael.friedr...@netways.de

** OSBConf 2015 - September - osbconf.org **
** OSMC 2015 - November - netways.de/osmc **
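Regarding the health checks and dependencies mentioned above, a rough sketch could look like this, reusing the zone and host names from this thread (the object names are just examples):

// on the master: alert if the dc1 satellite zone is not connected
apply Service "cluster-zone-dc1" {
  check_command = "cluster-zone"
  vars.cluster_zone = "dc1.example.com"
  assign where host.name == "master0.dc0.example.com"
}

// suppress notifications for services of hosts in the dc1 zone while the
// satellite zone is unreachable
apply Dependency "dc1-connectivity" to Service {
  parent_host_name = "master0.dc0.example.com"
  parent_service_name = "cluster-zone-dc1"
  disable_notifications = true
  assign where host.zone == "dc1.example.com"
}

With the dependency in place, the checks in dc1 still run, but you won't be flooded with notifications while the satellite zone is disconnected.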