Hello all,
last night our Icinga2 master server went down very quietly because of a
network problem, so today I decided it is really about time to have at
least two masters.

I have a few questions though; any comments/advice appreciated.

From the documentation:
"Keep in mind that only one node acts as configuration master having the
configuration files in the zones.d directory."

So, the current master has all the config in zones.d; the others,
including the second master, get the configuration synced and saved
to /var/lib/icinga2/api/zones/.
If the current master fails, the second master takes over and everything
keeps going, no problem.
The second master is now the current master, so I can either move the
configs from /var/lib/icinga2/api/zones/ to zones.d, provision the
configs on that node by Chef, or fix the failed master…
Is that right?
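
For reference, my understanding is that both masters share one zone in
zones.conf on each node; something like this (hostnames are placeholders,
not our real ones):

```
object Endpoint "master1.example.cz" {
  host = "master1.example.cz"
}

object Endpoint "master2.example.us" {
  host = "master2.example.us"
}

object Zone "master" {
  // Both endpoints in the same zone: they share checks and fail over
  endpoints = [ "master1.example.cz", "master2.example.us" ]
}
```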

Another question is about the database; this is the main confusion for me
right now.
There is only one database server; it's not like each master within the
zone gets its own database instance, right?
From the docs, I understand that only one instance at a time writes to the
database.
So, the database server had better be located on a different machine than
the one the Icinga master is running on, right?

Also, in our case, we will have one Icinga2 master located in CZ and the
second master in US. My guess is that it would be good to also have a
database server in each location and set up some kind of master-master
replication (I will need a lot of help from our DB guy here).
But I don't know how to instruct each Icinga master to use the database
server in its nearest geolocation.
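
If I read the docs right, the IDO feature config is local to each node
(not synced through zones.d), so each master could simply point at its
nearby database in its own ido-mysql.conf. A sketch, with placeholder
hostnames and credentials:

```
// On the CZ master: /etc/icinga2/features-available/ido-mysql.conf
object IdoMysqlConnection "ido-mysql" {
  host      = "db.cz.example.com"   // the US master would use its local DB host
  user      = "icinga"
  password  = "secret"
  database  = "icinga"
  enable_ha = true   // only the currently active instance writes to the DB
}
```

With enable_ha the two instances should coordinate so only one writes at a
time, but whether that plays well with master-master replication is exactly
what I'm unsure about.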

Thanks for help, guys.
_______________________________________________
icinga-users mailing list
icinga-users@lists.icinga.org
https://lists.icinga.org/mailman/listinfo/icinga-users
