Help getting started with top-down setup

  • I cannot make the upgrade to a top-down setup.

    So, I have upgraded the master (named "monitor") to r2.8.0-1, and all my clients. We used to have a bottom-up setup.

    Now I have defined two clients (accountCowboy and static) which I want to monitor. I have more; little by little I will upgrade them all.

    My problem now is that all my checks are run on monitor.

    This looks like the same server checked three times, when it should be three different servers:

    $ icingacli monitoring list --service=disk --verbose
    UP accountCowboy: PING OK - Packet loss = 0%, RTA = 0.22 ms
    OK └─ disk (Since Dec 4)
       DISK OK - free space: / 344030 MB (77% inode=97%):
    UP monitor: PING OK - Packet loss = 0%, RTA = 0.03 ms
    OK └─ disk (Since 2016-12)
       DISK OK - free space: / 344030 MB (77% inode=97%):
    UP static: PING OK - Packet loss = 0%, RTA = 0.18 ms
    OK └─ disk (For 40m 38s)
       DISK OK - free space: / 344030 MB (77% inode=97%):

    I am not 100% sure which files are important for this matter, so here they are:

    conf.d/hosts.conf where the monitor host itself is defined

    zones.d/monitor/accountCowboy.conf, where the Zone, Endpoint and Host are defined for accountCowboy (side question: should I create one new Zone and one new Endpoint for each Host?)

    zones.d/monitor/staticpages.conf, where the Zone, Endpoint and Host are defined for static
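    For reference, a minimal version of such a client file might look like the sketch below. The address and the vars.remote_client custom variable are assumptions for illustration, not values from this thread:

```
// zones.d/monitor/accountCowboy.conf (sketch; address assumed)
object Endpoint "accountCowboy" {
  host = "192.0.2.10"              // client address (assumed)
}

object Zone "accountCowboy" {
  endpoints = [ "accountCowboy" ]
  parent = "monitor"               // the master zone
}

object Host "accountCowboy" {
  import "generic-host"
  address = "192.0.2.10"          // assumed
  vars.remote_client = "accountCowboy"
}
```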

  • Take a look at this chapter in the documentation. It should tell you how to configure your clients so that checks are executed on them: …ring/#master-with-clients

    Linux is dead, long live Linux

    Remember to NEVER EVER use git repositories in a production environment if you CANNOT control them

  • I read the documentation. I cannot see what I have done wrong.

    Thanks for even taking a look at it.

    Is it recommended to have all the zones in one file and all endpoints in another?

    Right now I have each related zone and endpoint together in one file.

  • It is fine to put the client Zone/Endpoint definitions into the master zone; many users do so. Another recommended way is to collect them in zones.conf as a generic managed file. It doesn't matter much, though, as requirements differ - one user wants to deploy clients one by one and does not want to edit zones.conf in doing so; another just manages the entire file with Puppet or Ansible.

    Either way, your Host objects look fine. I wouldn't copy the example config from conf.d 1:1 but that is up to you. I'd rather start simple with just the Host object, a template import, a check_command and then look into specific services applied to this host.

    One thing you need to look into is where the service apply rules live in your setup, and how the services are executed. They probably need the "command_endpoint" attribute set - can you share such an example?

  • Thanks dnsmichi

    `command_endpoint` is never set. What did you want me to share?

    If I want to define a new apply rule (I don't know if that's the correct term), should that be done in /etc/icinga2/conf.d/custom_rules.conf or in /etc/icinga2/zones.d/custom_rules.conf?

  • The "command_endpoint" attribute tells Icinga where the service should be executed, e.g. you want the check_disk service for your client to be executed on the client itself.

    This way, the Service would be executed on the client "accountCowboy" and not on your master "monitor".
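    As a sketch, a disk service pinned to the client could look like this (vars.remote_client is an assumed custom variable holding the client's endpoint name, following the example shown elsewhere in this thread):

```
apply Service "disk" {
  import "generic-service"
  check_command = "disk"
  // execute on the client endpoint instead of the master
  command_endpoint = host.vars.remote_client
  assign where host.vars.remote_client
}
```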

  • Thanks! That does work. Now it looks like this:

    Does that look correct?

  • I have changed all Services in conf.d/services.conf to include command_endpoint:

    apply Service "ping4" {
      import "generic-service"
      check_command = "ping4"
      command_endpoint = host.vars.remote_client
      assign where host.address
    }

    It is working now. Thanks again! I have just one last question.

    I have upgraded from 2.6 to 2.8. All servers seem to be mostly working, but I think I am missing something - something to disable.

    I have a procs check which should have warning at 1600 and critical at 3100, but roughly once every 30 runs it returns with the defaults, warning 100 and critical 150. Why does that happen?

  • I reran icinga2 node wizard, and I think that did the job.

    I also commented out include_recursive "conf.d" in icinga2.conf.

    Is there anything else I should do when setting up top-down?

  • That's normally it: comment out the "conf.d" directory on your clients and do all the configuration changes on your master instance.
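    Concretely, on each client the include line in /etc/icinga2/icinga2.conf ends up commented out like this:

```
// Disabled for top-down: the master ships the configuration via the zone sync
// include_recursive "conf.d"
```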

  • Cool. For some of my servers I have a new check command:


    #!/bin/bash
    #
    systemctl status sidekiq > /dev/null
    if [ $? -eq 0 ]
    then
        echo "OK - Sidekiq is running"
        exit 0
    fi
    echo "CRITICAL - Sidekiq is NOT running"
    exit 2

    Where do I put this so that it gets synced?

    Then I have my service

    apply Service "sidekiq" {
      import "generic-service"
      check_command = "sidekiq"
      assign where host.name == NodeName
    }
  • Icinga is a monitoring tool, not a lifecycle-management tool that takes care of packages and dependencies. There is no plugin sync; such a task needs to be managed with an external tool or your weapon of choice (git, cron, spacewalk, foreman, puppet, ansible, etc.)
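    For example, a hand-rolled sync step can be as small as the sketch below, which installs a plugin file into a plugin directory and marks it executable. The paths and the stand-in plugin content are assumptions; configuration-management tools do the same thing more robustly:

```shell
#!/bin/sh
# Sketch of a manual plugin "sync" step, e.g. run from a git checkout via cron.
PLUGIN_DIR="${PLUGIN_DIR:-/tmp/demo-plugins}"   # normally your PluginDir

# Stand-in for the checked-out plugin file (assumed content):
printf '#!/bin/sh\necho "OK - Sidekiq is running"\nexit 0\n' > /tmp/check_sidekiq

install -d "$PLUGIN_DIR"                                 # create the directory
install -m 0755 /tmp/check_sidekiq "$PLUGIN_DIR/check_sidekiq"
"$PLUGIN_DIR/check_sidekiq"                              # smoke-test the plugin
```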

  • Ok, thanks again, both of you.

    I think I will go with a git repo and manual syncing. Where would you recommend to store such files? /etc/icinga ?

  • I have two sections which I think are both needed for the custom command to work, but where should I put them?


    #!/bin/bash
    #
    systemctl status sidekiq > /dev/null
    if [ $? -eq 0 ]
    then
        echo "OK - Sidekiq is running"
        exit 0
    fi
    echo "CRITICAL - Sidekiq is NOT running"
    exit 2


  • You need to place it in the "PluginDir" directory on the client where that check is executed (e.g. /usr/lib64/nagios/plugins/). It's defined in your constants.conf file (/etc/icinga2/constants.conf):

    const PluginDir = "/usr/lib64/nagios/plugins"
  • const ManubulonPluginDir = "/usr/lib/nagios/plugins"

    That is from constants.conf on the client.

    When I execute it from a terminal with /usr/lib/nagios/plugins/check_sidekiq I get:

    # /usr/lib/nagios/plugins/check_sidekiq
    OK - Sidekiq is running

    So I think that

    object CheckCommand "sidekiq" {
      import "plugin-check-command"
      command = [ PluginDir + "/check_sidekiq" ]
    }

    this section is not synced as it should be. Right now I have put it in /etc/icinga2/conf.d/services.conf.
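    For what it's worth, the usual way to get a CheckCommand synced to clients in a top-down setup is to place it in a global zone rather than in conf.d. The zone name "global-templates" below matches the default zones.conf shipped with Icinga 2, but treat the file layout as a sketch:

```
// /etc/icinga2/zones.conf - present on both master and client
object Zone "global-templates" {
  global = true
}

// /etc/icinga2/zones.d/global-templates/commands.conf - on the master only;
// the config sync distributes it to all endpoints that know the global zone
object CheckCommand "sidekiq" {
  import "plugin-check-command"
  command = [ PluginDir + "/check_sidekiq" ]
}
```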