[Icinga2] Automated Client Registration via Ansible, "zone not connected" but host checks are fine?

  • Hello,

    Last week I tried to automate the process of Icinga2 client registration via Ansible. The playbook itself is now working fine and the host checks are all green. The only problem is: the checks which should be executed from the master towards the client are failing.


    Both systems run Icinga2 2.8 on Ubuntu 16.04 LTS. The services that are failing right now are ping4, ping6 and cluster-zone. The output from icinga2 object list for the cluster-zone service is:


    Zones.conf:

    Services-apply.conf:

    A screenshot of the status in the web interface is attached. Both hosts are in the same subnet and can reach each other. What's strange to me is that the host checks are definitely working and are reporting back to the master. Maybe somebody can give me a pointer on what to look for. Any help is appreciated.



    Regards

  • That config path won't be synced to the satellite: /etc/icinga2/conf.d/testlab/services-apply.conf


    Therefore the service object does not exist on the satellite either. The master thinks this service object belongs to the zone "dns.example.net", which can be seen in your object list output, and therefore does not execute any checks. If you want the satellite to execute such service checks via its local scheduler, make sure to sync these service apply rules, e.g. via a global config sync zone.


    PS: I've edited the subject. I feel that '?!?' is not needed when asking a question.
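    A minimal sketch of such a global config sync zone (the zone name "global-templates" is an assumption; any name works as long as the same zone is defined on every endpoint):

    ```
    // zones.conf on the master AND on the satellite —
    // a global zone is synced to every endpoint that defines it
    object Zone "global-templates" {
      global = true
    }
    ```

    Service apply rules placed under /etc/icinga2/zones.d/global-templates/ on the config master would then be distributed to all connected endpoints, so the satellite can evaluate them with its local scheduler.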

  • Could you tell us what exactly the error message of the checks is?

    Linux is dead, long live Linux


    Remember to NEVER EVER use git repositories in a productive environment if you CAN NOT control them

  • That config path won't be synced to the satellite: /etc/icinga2/conf.d/testlab/services-apply.conf


    Therefore the service object does not exist on the satellite either. The master thinks this service object belongs to the zone "dns.example.net", which can be seen in your object list output, and therefore does not execute any checks. If you want the satellite to execute such service checks via its local scheduler, make sure to sync these service apply rules, e.g. via a global config sync zone.


    PS: I've edited the subject. I feel that '?!?' is not needed when asking a question.

    Thanks for the input. That cluster-zone config shouldn't be synced to the satellite at all. The check shall be executed from the master towards any client that is registered to it.

    master -> pings -> satellite; the master checks whether the client is connected, etc. The rest of the config is synced as intended and working fine, as I wrote.


    PS: Thanks, it was a "wtf" moment because it was late.


    could you tell us, what exactly the error message of the checks is?

    Can do later when I'm home. Thanks.


    Regards

  • So you need to tell the master where to execute those checks. The object list output doesn't have command_endpoint set for these services; that attribute is required in order to use the remote command execution bridge. Or am I mistaken with that assumption?
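    For checks that should run on the client via the command execution bridge, such an apply rule might look roughly like this (the custom variable vars.remote_client is taken from the hosts.conf shown later in this thread; the service name is just an example):

    ```
    // Hypothetical sketch: the master schedules the check,
    // but the endpoint named in command_endpoint executes it
    apply Service "disk" {
      check_command = "disk"
      command_endpoint = host.vars.remote_client  // client endpoint name
      assign where host.vars.remote_client
    }
    ```

    Note that this only works for services the client should execute itself; checks like cluster-zone that the master should run from its own scheduler need no command_endpoint at all.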

  • So you need to tell the master where to execute those checks. The object list output doesn't have command_endpoint set for these services; that attribute is required in order to use the remote command execution bridge. Or am I mistaken with that assumption?

    How can I tell the master that it should execute the check itself? If I try to set a variable like vars.remote_client = "icinga2.example.test" and reference it in the command_endpoint definition, I get:


    Code
    critical/config: Error: Validation failed for object 'dns.example.test!cluster zone' of type 'Service'; Attribute 'command_endpoint': Command endpoint must be in zone 'dns.example.test' or in a direct child zone thereof.

    The hosts.conf in the dns zone looks like this:


    Code
    object Host "dns.example.test" {
      import "satellite"
      check_command = "cluster-zone"
      address = "172.30.1.251"
      address6 = "fe80::250:56ff:fe89:5258"
      vars.os = "ubuntu"
      vars.remote_client = "icinga2.example.test"
    }
  • Btw:

    If I create an explicit Service object for the cluster zone, that check works fine. So please enlighten me, since I can't see the forest for the trees: which parameter do I need to set? The setup is meant to onboard ~100 clients automatically with just one Ansible run.
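    For reference, a sketch of what such an explicit Service object could look like, placed in the master zone so the master's local scheduler runs it (the host name is taken from the hosts.conf above; the vars.cluster_zone value is an assumption):

    ```
    // Runs on the master's own scheduler — no command_endpoint needed
    object Service "cluster zone" {
      host_name = "dns.example.test"
      check_command = "cluster-zone"
      // cluster-zone checks connectivity to the zone given here;
      // by default the ITL falls back to the host's name
      vars.cluster_zone = "dns.example.test"
    }
    ```

    The key point is where the object lives: as long as the service is defined in (and belongs to) the master zone, the master executes it locally, which is why the explicit object works while the apply rule in the client zone triggers the command_endpoint validation error above.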