cluster-zone check command

  • Hi guys,


    I don't have an Icinga cluster zone. By default, when I register an Icinga agent client, the check command it is set up with is cluster-zone. I want to change that to hostalive. I know we can edit the file in repository.d on the Icinga master server, but I don't want to do that; I want the host to be registered with the hostalive check command in the first place. How can I change this? I also don't want the satellite-host template. How can I register with my own templates? Roughly what I have in mind is sketched at the end of this post.


    Like the following:


    object Host NodeName {
      import "generic-host"
      address = "192.168.14.156"
      address6 = "::1"
      vars.os = "Linux"
    }


    service.conf


    apply Service "ping4" {
      import "generic-service"
      check_command = "ping4"
      assign where host.address
    }


    apply Service "ping6" {
      import "generic-service"
      check_command = "ping6"
      assign where host.address6
    }


    apply Service "ssh" {
      import "generic-service"
      check_command = "ssh"
      assign where (host.address || host.address6) && host.vars.os == "Linux"
    }
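

    Something along these lines is roughly what I have in mind: a template of my own that pins the hostalive check. The template name "linux-agent-host" and the check intervals are just placeholders, not anything Icinga ships by default:


    /* placeholder template name, values are only examples */
    template Host "linux-agent-host" {
      import "generic-host"
      /* use the ping-based hostalive check instead of cluster-zone */
      check_command = "hostalive"
      max_check_attempts = 3
      check_interval = 1m
      retry_interval = 30s
    }


    The host object above would then import "linux-agent-host" instead of "generic-host".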

  • As your agent setup is similar to a cluster, the agent also has its own zone. The cluster-zone check monitors the connection between the agent and the master, so I think it's a good replacement for hostalive (there is a small example at the end of this post).


    I'm not sure about the following, as I don't use repository.d:


    If you use the bottom-up config with repository.d, the config for your host lives on the host itself and you need to change it there; after that it will be reported to the master. Since the check is executed on the host itself, a hostalive check is of little use, because a host pinging itself will succeed under most circumstances. I'm not sure whether it's possible to run a hostalive check from another host in this scenario.
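

    If it helps, this is roughly what such a cluster-zone check looks like as an apply rule. The assign condition is only an example, and it assumes the agent's zone has the same name as its host object:


    apply Service "agent-connection" {
      import "generic-service"
      check_command = "cluster-zone"
      /* assumes the agent zone is named after the host */
      vars.cluster_zone = host.name
      assign where host.vars.os == "Linux"
    }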

  • Hi unic,


    How do I install the agent as just an agent :) ? I don't have a cluster setup (at least for now). When I run the setup wizard I only get two choices: satellite-host or master setup. In fact, I don't fully understand the Icinga terminology yet (zone, endpoint, cluster-zone, etc.). I have a lot of servers and I'm using the Icinga API, which makes my life much easier. Yes, I know I can correct the files by hand the traditional way, but that shouldn't be necessary if the hosts are registered properly in the first place.

  • Remote client, agent and satellite are all the same thing; the only difference is the configuration.


    Your "agent" is an endpoint, and every endpoint needs a zone. In most scenarios the zone has the same name as the endpoint, but you can have more than one endpoint in a zone, e.g. for HA or load balancing.


    If you use Icinga 2 as an agent, I'm pretty sure you will find a zone for it in repository.d or somewhere else, but I can't say for certain, as I'm using zones.d with a top-down configuration for my agents (there's a sketch at the end of this post).


    If you have several agents or satellites in a zone, you have a zone cluster. At least that's my definition ;)
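

    To make that a bit more concrete, here is a rough sketch of the Zone and Endpoint objects in zones.conf on the master. All names are placeholders, and the address is the one from your example above:


    object Endpoint "master.example.com" {
    }

    object Zone "master" {
      endpoints = [ "master.example.com" ]
    }

    object Endpoint "agent1.example.com" {
      /* optional: lets the master connect out to the agent */
      host = "192.168.14.156"
    }

    object Zone "agent1.example.com" {
      endpoints = [ "agent1.example.com" ]
      parent = "master"
    }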

  • Hi unic,


    In my situation I have one zone and a lot of endpoints. On each agent server, should there be a line like this? --> const ZoneName = "master.example.com"

  • For automated generation of configs (and for the initial setup) the constants are nice to have. In real-world, large environments with multiple endpoints and levels I suggest using real strings instead. Keep the endpoint name the same as the FQDN and the SSL certificate CN, but use telling names for your zones. The latter is important for the trust relationship between zones.
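

    For example (all names here are placeholders):


    /* endpoint name matches the FQDN and the SSL certificate CN */
    object Endpoint "agent1.example.com" {
    }

    /* the zone gets a telling name instead of the FQDN */
    object Zone "webservers-dc1" {
      endpoints = [ "agent1.example.com" ]
      parent = "master"
    }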