Posts by sru

If you really have succeeded in compiling the base products, you are faced with creating the plugins.

    Perhaps he should start with compiling the plugins.

    That would enable him to run the checks for the machines from a third machine that already *has* icinga2 installed, using by_ssh.

As the dependencies of the plugins are, from my point of view, easier to fulfill *and* he needs them anyhow (even for Nagios), would that not make sense?
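As a sketch of what such a by_ssh check could look like (host selector, plugin path and thresholds are assumptions, not taken from this thread):

```
apply Service "disk-by-ssh" {
  import "generic-service"
  check_command = "by_ssh"

  // run the locally compiled plugin on the remote machine via SSH
  vars.by_ssh_command = "/usr/local/nagios/libexec/check_disk -w 20% -c 10%"
  vars.by_ssh_logname = "icinga"

  // whatever selects the machines that only have the plugins
  assign where host.vars.agent == "by_ssh"
}
```

The remote machines then only need the compiled plugins and an SSH account; icinga2 itself runs solely on the third machine.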

    What's the purpose of ""

    Seems to be the master:

    object Endpoint "" {

    object Zone "core" {
    //this is the local node master named = "core"
    endpoints = [ "" ]


object Zone "" {
    endpoints = [ "" ]
    parent = "core"
}

    is some satellite / client.

    Should I have the same zones.conf file on both the master and the slave?

    Yes, but they slightly differ in which endpoints have the host= property set:

    Masters zones.conf:

    Slaves zones.conf:
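As a sketch (endpoint names and the IP are placeholders, not from this thread), the files are identical except for which Endpoint object carries the host= attribute:

```
// master's zones.conf: the master knows how to reach the slave
object Endpoint "master.example.org" { }
object Endpoint "slave.example.org" {
  host = "192.0.2.10"   // master actively connects to the slave
}

// slave's zones.conf: same Endpoint and Zone objects, but host=
// would instead be set on the master's Endpoint (or on neither,
// if only one side should establish the connection)
```

The Zone objects themselves are the same on both sides.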

    Do I need to have a zone named

    No, you may name it as you like.

But with my answer above I wanted to point out:

    • A host is an endpoint and not a zone
    • A zone may have multiple endpoints assigned to it (building a load-sharing scenario)
    • An endpoint has a zone as a parent, not an endpoint

This shows the differences between a fresh 2.6.0.sql schema migration and my current schema.

    That was the intention for that special thread, yes.

Cool that you found it!

    From the above, we delete all lines

    • starting with -- (comments)
    • containing AUTO_INCREMENT (these *must* differ)

What is left are some index-related lines that, if slightly reordered, match.
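The two deletions can be done mechanically before diffing; a sketch with grep (file name and sample content are made up for illustration):

```shell
# build a tiny sample dump to filter (contents assumed)
cat > sample.sql <<'EOF'
-- MySQL dump 10.13
CREATE TABLE icinga_hosts (
  host_id bigint(20) NOT NULL AUTO_INCREMENT,
  alias text
);
EOF

# drop comment lines (starting with --) and AUTO_INCREMENT lines
grep -vE '^--|AUTO_INCREMENT' sample.sql > sample-clean.sql
cat sample-clean.sql
```

Run the same filter over both schema dumps and diff the two cleaned files.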

Your structure matches 2.6.0 without errors.

You have that same zones.conf on both machines, right?

So your master will connect to your slave (because that one has an IP set).

Are you sure that your slave's firewall has port 5665 open?

Anything in /var/log/icinga2/icinga2.log or debug.log on the slave regarding certificate errors?

//this is the local node master named = "core"

    No it is not.

It is your master's zone name that is "core".

Your master's endpoint name is "".

    And that *must* be the common name of the certificate you created using icinga2 node wizard.

    Yes, they are needed.

    NodeName and ZoneName are constants that are defined in constants.conf.

If you prefer, you might replace them in zones.conf with strings like "MyEndpointName" and "MyZoneName".

But the common name of the certificate *must* match the endpoint name; take that into account!
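A sketch of the relation (names are placeholders): constants.conf defines the constants, zones.conf merely uses them:

```
/* constants.conf */
const NodeName = "master.example.org"   // must equal the certificate's CN
const ZoneName = "core"

/* zones.conf */
object Endpoint NodeName { }

object Zone ZoneName {
  endpoints = [ NodeName ]
}
```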

    I can run /usr/lib/nagios/plugins/check_ldap -H host.local -b dc=local,dc=com successfully

    Which is at least something.

    With icinga2 object list --type service --name ldap you can verify if the service exists or not.

    With icinga2 object list --type host --name host.local you can verify if the host exists or not.

You have that formatted with newlines, I guess:

apply Service "ldap" {
    import "generic-service"
    check_command = "ldap"
    assign where == "host.local"
}

    For me, it looks like you have mixed some things.
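For comparison, a version that would at least parse (the left-hand side of the assign expression and the ldap_base variable are assumptions based on the working manual call, not confirmed config):

```
apply Service "ldap" {
  import "generic-service"
  check_command = "ldap"

  // base DN taken from the manual check_ldap call above
  vars.ldap_base = "dc=local,dc=com"

  assign where host.name == "host.local"
}
```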


Master has just been set up and is able to execute local checks; it displays these in IcingaWeb2.

icinga2 node wizard has been run on the master. Mode has been given as master; the CN has been chosen to be the machine's FQDN.

Master has hostname and endpoint name "sentinel", as well as the IP

    The Client is named Widow.

    We want the Client to connect to the Master, not vice versa.

The process should be:

At the master, create/modify the zones.conf file to contain the client's zone:
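A sketch of what that addition could look like (the name of the master's own zone is an assumption; adjust it to your setup):

```
object Endpoint "widow" { }     // the client; no host= set, so the
                                // client connects to the master

object Zone "widow" {
  endpoints = [ "widow" ]
  parent = "master"             // name of the master's zone (assumed)
}
```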

At the master, run the commands below:

$ mkdir /etc/icinga2/zones.d/global-templates
$ mkdir /etc/icinga2/zones.d/widow
$ icinga2 pki ticket --cn 'widow'
2483cf6f158c06f362b2f2a7ea29b72b25d14d17
$ icinga2 feature list
Disabled features: compatlog debuglog gelf graphite influxdb livestatus opentsdb perfdata statusdata syslog
Enabled features: api checker command ido-mysql mainlog notification

    At the Client, run:

You see, creating the ticket at the master and pasting it here at the client is enough; you do not need to fiddle with api.conf and curl.

    Modifications At The Windows Client


    Now continue with the attached document, starting at:

Create An Interim Windows Service At The Master, To Verify Top-Down Replication

Yes, the document is for setting up a Windows client, but at that point we are past the Windows-specific things, so from that paragraph on it will work for both Linux and Windows.





    check_procs -w <range> -c <range> [-m metric] [-s state] [-p ppid]



    check_procs -w 2:2 -c 2:1024 -C portsentry

    Warning if not two processes with command name portsentry.

    Critical if < 2 or > 1024 processes

So in your case, probably:

    check_procs  -c 1:1 -C snmpd


vars.procs_user = "snmp"
//vars.procs_argument = ""
vars.procs_command = "snmpd"
vars.procs_critical = "1:1"

    my api.conf on client:

    accept_commands = true

That is the thing that matters, and it looks OK to me.
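For reference, a minimal client-side api.conf sketch (the certificate paths follow the usual 2.x layout and are an assumption, not taken from this thread):

```
object ApiListener "api" {
  cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
  key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
  ca_path = SysconfDir + "/icinga2/pki/ca.crt"

  accept_commands = true   // the line that matters here
  accept_config = true
}
```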

Did you reSTART icinga2 at the client so that it honors the setting?

    • Master's zones.conf looks good.
    • Client's zones.conf looks good, as long as NodeName and ZoneName in the client's constants.conf are set to the respective values used in the master's zones.conf.
    • The client's api.conf looks good.
    • The master's api.conf looks good.

Does icinga2 feature list show api enabled on both machines?

Is the following line present in icinga2.conf on both machines: include "features-enabled/*.conf"?

Certificates are OK? …r-unauthenticated-clients

    May be related:

    Invalid endpoint origin (client not allowed)