icinga2 Load Distribution setup question

  • Hey all,


    I'd like to say this forum has been a big help in understanding icinga2 more and more, and I love what it can do. I do have a question related to setting up a Load Distribution cluster, and I'm working with a simple example right now:


    <M>
    |
    <C>
    |
    <H>


    M = master node
    C = checker node
    H = host


    So I'm able to get the master and checker nodes working using the following zones.conf file.
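
    The zones.conf for this layout boils down to roughly the following sketch (the master's endpoint name and the addresses are placeholders here; icinga-worker01 is the real worker name):


    Code: zones.conf (sketch)
    object Endpoint "icinga-master01" {
      host = "icinga-master01.example.org"   // placeholder FQDN of the master
    }

    object Endpoint "icinga-worker01" {
      host = "icinga-worker01.example.org"   // placeholder FQDN of the checker
    }

    object Zone "master" {
      endpoints = [ "icinga-master01" ]
    }

    object Zone "worker" {
      endpoints = [ "icinga-worker01" ]
      parent = "master"                      // the checker zone hangs below the master zone
    }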


    I'm just trying to use the default service checks for now, to be run on the checker node (icinga-worker01), and was able to sync the configs (api.conf below).
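
    The relevant bit on the worker is that its ApiListener accepts the synced config; a minimal sketch of that api.conf (cert paths as the node wizard generates them):


    Code: api.conf on icinga-worker01 (sketch)
    object ApiListener "api" {
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"

      accept_config = true     // accept config synced down from the master zone
      accept_commands = true   // accept commands from the parent zone (only needed for command_endpoint use)
    }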


    Now icinga-worker01 is able to do basic checks on itself and sync the results back to the master.


    So now that I have that set up, I've been trying to get the host set up using the client-agent, and was able to set that up against the master just fine. I'm trying to do a basic 'remote-user' check in this example (http://docs.icinga.org/icinga2…ration-command-bridge). My questions are the following for using the client-agent on the host:


    • After the client has been successfully connected to the master, does the host's 'zones.conf' need to know about the checker nodes (i.e. icinga-worker01), or is just the default config that's generated OK? Would it be better to update the host's 'zones.conf' file to look similar to the master/checker nodes'?
    • I'm assuming that the host doesn't need the feature 'checker' enabled if I don't want it to be a part of the cluster to take requests, correct?
    • Does the host need 'include_recursive "conf.d"' commented out in the icinga2.conf file?
    • A host.conf-like file needs to be on the master node, but does it go inside the 'zones.d/worker/' directory or some other place?


    Master file structure in the icinga2 dir (conf.d is commented out in icinga2.conf)
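
    Roughly like this (everything apart from zones.d/worker/ is the stock layout):


    Code: /etc/icinga2 on the master (sketch)
    icinga2.conf        <- include_recursive "conf.d" commented out
    constants.conf
    zones.conf
    conf.d/
    features-available/
    features-enabled/
    pki/
    zones.d/
        worker/
            host1.conf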


    host1 config on master under zones.d/worker/host1.conf
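
    It boils down to the Host object plus the service that should go over the command execution bridge; the address is a placeholder and "users" is simply the ITL check I'm testing with (as far as I understand, host1's own api.conf also needs accept_commands = true for this):


    Code: zones.d/worker/host1.conf (sketch)
    object Host "host1" {
      address = "192.0.2.10"        // placeholder address of the client-agent
      check_command = "hostalive"
    }

    object Service "remote-user" {
      host_name = "host1"
      check_command = "users"       // "users" CheckCommand from the ITL
      command_endpoint = "host1"    // execute the plugin on the agent itself (command bridge)
    }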


    The docs I read before posting this:



    I modeled this example/test on the following link - http://docs.icinga.org/icinga2…enarios-load-distribution


    If I'm missing any information needed, I will reply back with what I can :D


    Cheers and thanks again.

  • After the client has been successfully connected to the master, does the host's 'zones.conf' need to know about the checker nodes (i.e. icinga-worker01)?

    Each machine, regardless of its role, should have a zones.conf that contains all zones and endpoints on the path that machine itself is on - from the root down to the leaf.


    That is what gives that machine a view of the infrastructure.
    A Checker does not need to know about another Checker's children, because those are not on its path.
    A Child should know its Checker zone and its Master zone (when using Load Balancing: with all endpoints in those zones).



    I'm assuming that the host doesn't need the feature 'checker' enabled if I don't want it to be a part of the cluster to take requests, correct?

    You need the feature checker for scheduling checks. For running commands (command execution bridge), you do not need it (at least that is what my tests show).
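
    For completeness, that is just the usual feature CLI (followed by a restart):


    Shell-Script: toggling the checker feature
    # on a node that should schedule checks itself (master / checker)
    icinga2 feature enable checker

    # on a pure command-execution agent it can stay disabled
    icinga2 feature disable checker

    service icinga2 restart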



    Does the host need 'include_recursive "conf.d"' commented out in the icinga2.conf file?

    I suggest that because people may run into problems with that default config if they really want to do a command execution bridge - conf.d is not needed here.
    If you know what you are doing *and* are able to track down problems yourself, you may leave it as is.



    A host.conf-like file needs to be on the master node, but does it go inside the 'zones.d/worker/' directory or some other place?

    Putting it in zones.d/[zoneWhereTheHostIsIn] is considered best practice.
    Whatever is placed there will be replicated down to all endpoints in that zone.
    That may help you with managing your checker nodes' configuration from a central point.

  • Thanks for the reply back @sru


    To see if I'm getting this straight: to have the host checked by the master/checker setup, I will need to do the following:

    • add the host config to zones.d/[zoneWhereTheHostIsIn] on master
    • update the zones.conf of the new host on master (not sure what object Zone it needs to be in)
    • restart icinga2 service on master (commands sketched after this list)
    • wait for the new config to sync over to checker
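
    For the restart step I'm assuming roughly this on the master (validating first just to be safe):


    Shell-Script: restart on the master (sketch)
    # validate the configuration before reloading
    icinga2 daemon -C

    service icinga2 restart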


    So my follow-up questions are:

    • Would the 'master' of the host be the checker, and if so, what would the config on the host look like for the zones.conf? Does that mean I need the checker node's new CA as well?
    • Am I understanding the idea of a 'checker' for icinga? Is it a node with icinga2 installed that will help execute the checks for other hosts in that zone?

    The hard part is that the issue I'm seeing feels like just a wrong config (placement, missing something, etc.): I can set up the master and checker successfully, but I'm having a bit of trouble getting the checker to 'check' anything outside of itself.


    Just some other info I'm providing that might be related to my question (not sure anymore):




    Shell-Script: icinga2 feature list - worker
    root@icinga-worker01:/etc/icinga2# icinga2 feature list
    Disabled features: compatlog gelf graphite icingastatus livestatus notification opentsdb perfdata statusdata syslog
    Enabled features: api checker command debuglog mainlog


    Shell-Script: icinga2 feature list - host1
    [root@host1 ~]# icinga2 feature list
    Disabled features: checker command compatlog gelf graphite icingastatus livestatus notification opentsdb perfdata statusdata syslog
    Enabled features: api debuglog mainlog


    Cheers!

  • Correct.
    As a rule of thumb, every machine that speaks the icinga cluster protocol is an endpoint and gets its own zone -
    except in load-balancing scenarios, where all endpoints between which the load is distributed are in the same zone.


    Create .../zones.d/HostZoneExampleHost and within that an ExampleHost.conf where you define object Host ... and object Service ... .


    At

    • the master's zones.conf
    • the zones.conf of the particular checker that ExampleHost is a subzone of
    • ExampleHost's zones.conf

    create a new endpoint and zone definition:
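
    Along these lines (zone/endpoint names follow the example above; the address is a placeholder):


    Code: zones.conf addition (sketch)
    object Endpoint "ExampleHost" {
      host = "192.0.2.20"            // placeholder address of the agent
    }

    object Zone "HostZoneExampleHost" {
      endpoints = [ "ExampleHost" ]
      parent = "worker"              // the checker zone is the parent
    }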



    Would the 'master' of the host be the checker, and if so, what would the config on the host look like for the zones.conf? Does that mean I need the checker node's new CA as well?


    Am I understanding the idea of a 'checker' for icinga? Is it a node with icinga2 installed that will help execute the checks for other hosts in that zone?

    Upper question already answered.
    Parent zone of ExampleHost is zone "worker". The zones.conf of ExampleHost is the same as on icinga-worker01, with the above snippet appended to both *and* to the master. No additional CAs needed, one is fine for the whole cluster.


    Lower question:
    My problem is that we have no solid definition from "Upstream", so here is mine:

    • Master: An endpoint in a zone that has no parent. May or may not schedule and execute checks.
    • Satellite: An endpoint in a zone that has a parent. Normally schedules and executes checks. Therefore reduces the master's load.
    • Agent: An endpoint in a zone that has a parent. Will not schedule or execute checks itself, but runs commands provided by satellites. Often referred to as the Command Execution Bridge (CEB).

    I have not fought my way through your code snippets.

  • Thanks again, @sru, for explaining. No worries about the code snippets; those were there just in case.


    The information is helpful and I will continue on with my icinga2 setup. It still isn't working the way I want, but I will make a new topic if needed; you answered all the questions I needed answered.


    Cheers!