On Thu, Jul 24, 2014 at 11:31 AM, Michael Friedrich <
michael.friedr...@netways.de> wrote:

>  On 24.07.2014 17:20, Max Zhang wrote:
>
>
>
>
> On Thu, Jul 24, 2014 at 4:14 AM, Michael Friedrich <
> michael.friedr...@netways.de> wrote:
>
>>  On 24.07.2014 04:53, Max Zhang wrote:
>>
>>  What I have now:
>>
>>  *on monitoring master node*
>> under
>>
>>  zones.d
>>      master/local.conf
>>      satellite/hosts.conf
>>
>>
>>  The master node acts as the configuration master and synchronizes the
>> configuration to all child zones (given that your child zone endpoints
>> have 'accept_config = true' set in their ApiListener).
>>
>>
>>
>>  [...]
>>
>>
>> *on satellite check node    *
>> under
>>
>>  zones.d
>>       satellite/hosts.conf
>>
>>
>> That's wrong if the child zone endpoint is supposed to receive its
>> configuration from the parent master zone.
>>
>>
>>
>>  [...]
>>
>>
>>
>>  And in api.conf on both master node and satellite node, I set
>> "accept_config=true"
>>
>>
>> Only the child zone members should receive configuration.
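>>
>> A minimal sketch of the satellite's api.conf (reusing the certificate
>> names that appear later in this thread); the satellite is the only node
>> that needs accept_config here:
>>
>> object ApiListener "api" {
>>   cert_path = SysconfDir + "/icinga2/pki/icinga2testb.crt"
>>   key_path = SysconfDir + "/icinga2/pki/icinga2testb.key"
>>   ca_path = SysconfDir + "/icinga2/pki/ca.crt"
>>   accept_config = true  // child zone member accepts synced configuration
>> }
>>
>> On the master the attribute can simply be left out (it defaults to false).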
>>
>>       I guess this will be my problem: If I don't fill zones.d/ on the
> satellite node, how does the master node know about the satellite nodes?
>
>
>     Is zones.conf the same thing as configurations under zones.d? If I
define everything under zones.d, does that mean those can replace
zones.conf and I can comment everything out in zones.conf?


> That's CONFIGURATION. You do that on the master node below zones.d/ with a
> directory for each zone. Then you define the zones and endpoints that are
> connected (e.g. in zones.conf). zones.conf must be identical on all
> involved nodes.
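>
> For illustration, a minimal zones.conf sketch using the icinga2testa /
> icinga2testb names from this thread (the host value is the placeholder
> used elsewhere in the thread; adjust it to your environment); the same
> file would go on both nodes:
>
> object Endpoint "icinga2testa" {
>   host = "publicIP"   // an address the satellite can reach
> }
>
> object Endpoint "icinga2testb" {
>   // no host attribute: the master cannot reach the satellite,
>   // so the satellite initiates the connection
> }
>
> object Zone "master" {
>   endpoints = [ "icinga2testa" ]
> }
>
> object Zone "checker1" {
>   parent = "master"   // parent refers to the zone name, not an endpoint
>   endpoints = [ "icinga2testb" ]
> }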
>
>
>   Since the master node cannot reach the satellite node(s) by their
> IP/FQDN (private network), does that mean I'll have to wait until
> auto-discovered icinga2 agents are developed?
>
>
> That's a CONNECTION. It does not matter which direction wins. _After_ the
> CONNECTION is established, the CONFIGURATION sync is done, if enabled.
>
> Instead of vaguely guessing, you really should fix your configuration and
> just install a simple, dumb checker satellite connected to a master zone,
> receiving the configuration. That setup is also described in the
> documentation, and plenty of users have already implemented it successfully.
>
>
>
>
>   Ideally, I want the satellite nodes to do all the work, communicate
> with the master node and send back check results and so on, while the
> master node sits there collecting everything.
>
>
> That's the default as described previously.
>
> (On a related note - decide whether to use configuration sync or not)
>
>
>
>   Also, what if I have multiple satellite nodes I want to check within
> the same private network? Does that mean I'll need to assign a different
> port to each satellite node in the configuration on the master node? Does
> the following seem like a correct approach:
> configure a public/NAT IP for the network the satellite nodes reside in, and
> set up different port forwarding rules on the satellite network for
> different hosts, so the master node knows which host to talk to?
> I'm still confused...
>
>
> First, fix your initial issue before proceeding to more complex scenarios.
> It seems you generally don't understand how easy it is to install such a
> master-satellite setup. I'd like to see a documentation patch from you
> after you've resolved your initial issues.
>
>
>
>
>
>> Conclusion to that: Only edit the configuration on your master, and do
>> not fill zones.d/ on your satellite. It will automatically sync the
>> configuration in the background.
>> If you're putting configuration there as well, you'll get duplicated
>> configs (the synced one plus the one in zones.d/), which might lead to
>> confusion. That's something which cannot be prevented, and I am not sure
>> how to extend the docs so that users read and understand it.
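>>
>> As a sketch (hypothetical host name and address), configuration for the
>> satellite's zone would live only on the master, e.g. in
>> /etc/icinga2/zones.d/checker1/hosts.conf, and be synced from there:
>>
>> object Host "app-server-01" {      // hypothetical example host
>>   address = "10.0.1.50"
>>   check_command = "hostalive"      // ping check from the Icinga Template Library
>> }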
>>
>>
>>
>>  *Questions and confusion*
>>  The SSL handshake is successful, and both nodes have icinga2 listening
>> for incoming connections. I can see a connection originating from the
>> satellite node to the master node, but not from the master node to the
>> satellite node (that's expected, since the satellite node is in a private
>> network and not directly accessible from the internet). I've put dummy
>> FQDNs in my config files just to match the distributed monitoring setup.
>>
>>
>>  Please mention obfuscated FQDNs in the first place. That kind of thing
>> can cause confusion when we're debugging this together.
>>
>> (note that for your troubleshooting guide patch)
>>
>>
>>
>>  I don't quite understand how to use the cluster health check even after
>> reading the Icinga 2 documentation. I assume I need to create a host
>> profile for "icinga2testa" or "icinga2testb" and add cluster service checks
>> for them?
>>  And my second question is, how do I pass service check results back to
>> the master node from the satellite node? Do I need to install third party
>> tools like NSCA-ng?
>>
>>
>>  Consider that you've got a local host object for each node (monitoring
>> its status, disk, io, etc.). On this specific host you would then want to
>> check whether all endpoints are connected in the cluster setup. Assigning
>> the cluster check to that host object as a new service will make the
>> instance executing it run the check in-memory.
>>
>>
>> http://docs.icinga.org/icinga2/latest/doc/module/icinga2/toc#!/icinga2/latest/doc/module/icinga2/chapter/monitoring-remote-systems#cluster-health-check
>> uses the 'cluster' itl check command.
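>>
>> A minimal sketch, assuming the local host objects are named after the
>> nodes (so host.name matches NodeName from constants.conf) and the
>> generic-service template from the sample configuration is available:
>>
>> apply Service "cluster" {
>>   import "generic-service"
>>   check_command = "cluster"
>>   // run the check on the node's own host object only
>>   assign where host.name == NodeName
>> }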
>>
>>
>>
>>
>>
>>
>>  This was what confused me in the Icinga 2 documentation:
>> "Meanwhile remote checkers in a cluster setup could act as immediate
>> replacement, but without any local configuration - or pushing their
>> standalone configuration back to the master node including their check
>> result messages." Does that mean I need to install one of those agents
>> anyway on the client side? I haven't used passive checks in Icinga 1.x, so
>> setting up a distributed monitoring environment is still quite new to me.
>>
>>
>>  That's the Icinga 2 Agent section, which has nothing to do with your
>> request for a distributed setup with a master-satellite scenario. It's just
>> one possibility for treating a dumb checker instance as an "agent" until
>> there's a proper Icinga 2 Agent implemented (that includes setup,
>> documentation, configs, commands, etc.). The design concept and
>> implementation are still to be done and announced. Details at dev.icinga.org as always.
>>
>> Regards,
>> Michael
>>
>>
>>
>>  Thanks,
>> Max
>>
>>
>> On Wed, Jul 23, 2014 at 4:18 PM, Max Zhang <maxhyjals...@gmail.com>
>> wrote:
>>
>>>       I have a config tree on the master node like:
>>>
>>>  master
>>>      local.conf
>>>  checker
>>>      hosts.conf
>>>
>>>
>>>  and on the checker endpoint:
>>>
>>>  checker
>>>      hosts.conf  (has exactly the same contents as the hosts.conf on the
>>> master node)
>>>
>>>  Do I need to add the cluster health check to hosts.conf on the master node
>>> as well as to hosts.conf on the checker node?
>>>  I'm pretty confused here.
>>>
>>>  Thanks,
>>>  Max
>>>
>>>
>>>
>>>
>>> On Wed, Jul 23, 2014 at 3:19 PM, Max Zhang <maxhyjals...@gmail.com>
>>> wrote:
>>>
>>>>   OK, it looks like the cert error had something to do with the
>>>> permissions of the pki directory. I fixed that part, but now I'm seeing
>>>> [2014-07-23 19:09:17 +0000] warning/Service: Apply rule 'cluster' for
>>>> host does not match anywhere!   in the endpoint's
>>>> startup.log. The master now seems to be listening on port 5665, but the
>>>> port is not open on the endpoint side.
>>>>  I don't see any other error except for that one.
>>>>
>>>>  I'll continue troubleshooting...
>>>>
>>>>  Thanks
>>>>  Max
>>>>
>>>>
>>>> On Wed, Jul 23, 2014 at 2:13 PM, Michael Friedrich <
>>>> michael.friedr...@netways.de> wrote:
>>>>
>>>>> On 21.07.2014 20:59, Max Zhang wrote:
>>>>>
>>>>>> Hey guys,
>>>>>>
>>>>>> I tried to set up a test Icinga 2 master server and an endpoint check
>>>>>> instance as a distributed monitoring architecture test, but I ran into
>>>>>> some issues.
>>>>>>
>>>>>> on the master server in startup.log:
>>>>>> critical/ApiListener: Cannot make SSL context for cert path:
>>>>>> '/etc/icinga2/pki/icinga2testa.crt' key path:
>>>>>> '/etc/icinga2/pki/icinga2testa.key' ca path:
>>>>>> '/etc/icinga2/pki/ca.crt'.
>>>>>> though icinga2 process started successfully without other errors.
>>>>>>
>>>>>
>>>>>  There's an issue open for that error message:
>>>>> https://dev.icinga.org/issues/6682
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> on the endpoint.
>>>>>> Icinga 2 started successfully without any issue.
>>>>>>
>>>>>
>>>>>  The ApiListener runs asynchronously and shouldn't harm the operation
>>>>> (e.g. doing local checks, or anything else). If the SSL handshake
>>>>> fails, something is weird with one of the following:
>>>>>
>>>>> 1) your SSL certificates. Is 'icinga2testa' really your FQDN and the
>>>>> common name in your SSL certs?
>>>>> 2) how did you create these certificates? (1x ca, 2x keys?)
>>>>> 3) what are the permissions of the files in /etc/icinga2/pki?
>>>>>
>>>>> The question remains whether the Icinga 2 process should fail hard on this.
>>>>>
>>>>>
>>>>>
>>>>>> But I don't see a port open on the server side listening for the traffic
>>>>>> coming from the endpoint.
>>>>>>
>>>>>
>>>>>  That won't happen if the ssl init fails.
>>>>>
>>>>>
>>>>>
>>>>>  In my test setup, a firewall prevents the master instance from reaching
>>>>>> the endpoint in a different private network, but the endpoint is able to
>>>>>> reach the master instance.
>>>>>>
>>>>>
>>>>>  That's fine, and Icinga 2 will internally determine which way to
>>>>> connect.
>>>>>
>>>>> Please provide answers to the questions above.
>>>>>
>>>>> best regards,
>>>>> Michael
>>>>>
>>>>>
>>>>>
>>>>>> My config tree for the Icinga 2 master server:
>>>>>> */etc/icinga2/zones.d/checker1/checker1*
>>>>>>
>>>>>>
>>>>>> /*
>>>>>>  * Endpoint and Zone configuration for a cluster setup
>>>>>>  * This local example requires `NodeName` defined in
>>>>>>  * constants.conf.
>>>>>>  */
>>>>>> /*
>>>>>> object Endpoint NodeName {
>>>>>>   host = NodeName
>>>>>> }
>>>>>>
>>>>>> object Zone ZoneName {
>>>>>>   endpoints = [ NodeName ]
>>>>>> }
>>>>>> */
>>>>>>
>>>>>> /*
>>>>>>  * Defines a global zone containing templates,
>>>>>>  * etc. synced to all nodes, if they accept
>>>>>>  * configuration. All remote nodes need
>>>>>>  * this zone configured too.
>>>>>>  */
>>>>>>
>>>>>> /*
>>>>>> object Zone "global-templates" {
>>>>>>   global = true
>>>>>> }
>>>>>> */
>>>>>>
>>>>>> object ApiListener "api" {
>>>>>>   cert_path = SysconfDir + "/icinga2/pki/icinga2testb.crt"
>>>>>>   key_path = SysconfDir + "/icinga2/pki/icinga2testb.key"
>>>>>>   ca_path = SysconfDir + "/icinga2/pki/ca.crt"
>>>>>>   accept_config = true
>>>>>> }
>>>>>>
>>>>>> /*
>>>>>>  * Read the documentation on how to configure
>>>>>>  * a cluster setup with multiple zones.
>>>>>>  */
>>>>>>
>>>>>>
>>>>>> object Endpoint "icinga2testb" {
>>>>>>   host = "10.0.1.141"
>>>>>> }
>>>>>>
>>>>>> object Zone "checker1" {
>>>>>>   parent = "icinga2testa"
>>>>>>   endpoints = [ "icinga2testb" ]
>>>>>> }
>>>>>>
>>>>>>
>>>>>>  */etc/icinga2/zones.d/master/localhost.conf*
>>>>>>
>>>>>>
>>>>>> object Endpoint NodeName {
>>>>>>   host = NodeName
>>>>>> }
>>>>>>
>>>>>> object Zone ZoneName {
>>>>>>   endpoints = [ NodeName ]
>>>>>> }
>>>>>>
>>>>>> /*
>>>>>>  * Defines a global zone containing templates,
>>>>>>  * etc. synced to all nodes, if they accept
>>>>>>  * configuration. All remote nodes need
>>>>>>  * this zone configured too.
>>>>>>  */
>>>>>>
>>>>>> /*
>>>>>> object Zone "global-templates" {
>>>>>>   global = true
>>>>>> }
>>>>>> */
>>>>>>
>>>>>> /*
>>>>>>  * Read the documentation on how to configure
>>>>>>  * a cluster setup with multiple zones.
>>>>>>  */
>>>>>>
>>>>>> /*
>>>>>> object ApiListener "api" {
>>>>>>   cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
>>>>>>   key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
>>>>>>   ca_path = SysconfDir + "/icinga2/pki/ca.crt"
>>>>>>   accept_config = true
>>>>>> }
>>>>>> */
>>>>>>
>>>>>> (I defined NodeName as "icinga2testa" in constants.conf, and the
>>>>>> ApiListener was defined in api.conf)
>>>>>>
>>>>>>
>>>>>>  *And I pushed the checker1 config file to the endpoint node, with
>>>>>> configuration like:*
>>>>>>
>>>>>>
>>>>>> object ApiListener "api" {
>>>>>>   cert_path = SysconfDir + "/icinga2/pki/icinga2testb.crt"
>>>>>>   key_path = SysconfDir + "/icinga2/pki/icinga2testb.key"
>>>>>>   ca_path = SysconfDir + "/icinga2/pki/ca.crt"
>>>>>>   accept_config = true
>>>>>> }
>>>>>>
>>>>>>
>>>>>> object Endpoint "icinga2testa" {
>>>>>>   host = "publicIP"
>>>>>> }
>>>>>>
>>>>>> object Endpoint "icinga2testb" {
>>>>>>   host = "10.0.1.141"
>>>>>> }
>>>>>>
>>>>>> object Zone "master" {
>>>>>>   endpoints = [ "icinga2testa" ]
>>>>>> }
>>>>>>
>>>>>> object Zone "checker1" {
>>>>>>   parent = "icinga2testa"
>>>>>>   endpoints = [ "icinga2testb" ]
>>>>>> }
>>>>>>
>>>>>>
>>>>>> How should I test the communication between the master and endpoint
>>>>>> instances?
>>>>>> Is there anything I'm still missing?
>>>>>>
>>>>>> Thanks
>>>>>> Max
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
> --
> Michael Friedrich, DI (FH)
> Application Developer
>
> NETWAYS GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg
> Tel: +49 911 92885-0 | Fax: +49 911 92885-77
> GF: Julian Hein, Bernd Erk | AG Nuernberg HRB18461
> http://www.netways.de | michael.friedr...@netways.de
>
> ** Open Source Backup Conference 2014 - September - osbconf.org **
> ** Puppet Camp Duesseldorf 2014 - Oktober - netways.de/puppetcamp **
> ** OSMC 2014 - November - netways.de/osmc **
> ** OpenNebula Conf 2014 - Dezember - opennebulaconf.com **
>
>
>
_______________________________________________
icinga-users mailing list
icinga-users@lists.icinga.org
https://lists.icinga.org/mailman/listinfo/icinga-users
