Multiple clients in a single zone

This forum was archived to /woltlab and is now in read-only mode.
  • Hello everyone, 8)


    Here is what I'd like to do:




    It's simply a Master zone and another zone called Zone (which has Master as parent) with a global zone called global-templates.

    I'll use it with the config sync mode.

    In the zone Zone, I will have two endpoints: Endpoint A and Endpoint B.
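
    A minimal zones.conf sketch of that layout (the endpoint hostnames are placeholders I made up; adjust them to your setup):

    ```
    // /etc/icinga2/zones.conf (sketch; hostnames are placeholders)
    object Endpoint "master.example.org" { }
    object Endpoint "endpoint-a.example.org" { }
    object Endpoint "endpoint-b.example.org" { }

    object Zone "Master" {
      endpoints = [ "master.example.org" ]
    }

    object Zone "Zone" {
      parent = "Master"
      endpoints = [ "endpoint-a.example.org", "endpoint-b.example.org" ]
    }

    // global zone: its config is synced to every endpoint
    object Zone "global-templates" {
      global = true
    }
    ```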


    I have run a lot of tests, so I know how to configure all the files (certificates, config files, etc.) to make it work.


    Let's now consider that I want to monitor the two Endpoints as Hosts and then link services (disk, load, CPU, ...) to those hosts.

    So I'll create, in the folder /etc/icinga2/zones.d/Zone/, a hosts.conf file that will contain the object definitions for both hosts (Host A and Host B).
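
    A minimal sketch of such a hosts.conf (names and addresses are placeholders):

    ```
    // /etc/icinga2/zones.d/Zone/hosts.conf (sketch; names/addresses are placeholders)
    object Host "host-a" {
      check_command = "hostalive"
      address = "192.0.2.10"
    }

    object Host "host-b" {
      check_command = "hostalive"
      address = "192.0.2.11"
    }
    ```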

    Since config sync mode is the one chosen, this file will then be sent to both Endpoints, won't it?

    So: how will the host definitions be handled?


    I'm afraid that, for example, Endpoint A will receive the hosts.conf file, try to "instantiate/create" both hosts, and then run into config errors.

    Moreover, I'm afraid that 4 hosts will appear in Icinga Web (two for each endpoint).


    Are you using, or have you used, Icinga2 in such a way? If yes, how are you doing it? :p

    I'll be glad to read your answers.


    Regards


  • Well, after reading more docs and examples, I'm now realizing that it might be impossible to do what I wanted.

    I will have to use satellites, right?


    If someone could confirm that, I'd appreciate it.

  • Since config sync mode is the one chosen, this file will then be sent to both Endpoints, won't it?

    Correct. And Endpoints A and B build up a load-sharing scenario in which they decide who is checking what.


    Consider a ping check: it does not matter on which endpoint the check is run; a result can be produced by both endpoints.


    Consider a load check: it *must* be run on that endpoint that has the host object.

    (But it *may* be scheduled at any endpoint for execution; note the difference.)

    I would give that service the command_endpoint attribute, as described in the docs below, to tell the cluster that you need the check to run on a dedicated endpoint:

    https://docs.icinga.com/icinga…enarios-ha-master-clients
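
    A sketch of pinning a local check to the endpoint it must run on (the host and endpoint names are placeholders, not from the docs):

    ```
    // Sketch: the load check must run on the machine owning the host object,
    // so pin it there with command_endpoint (names are placeholders).
    object Service "load" {
      host_name = "host-a"
      check_command = "load"
      command_endpoint = "endpoint-a.example.org"
    }
    ```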


    I'm afraid that, for example, Endpoint A will receive the hosts.conf file, try to "instantiate/create" both hosts, and then run into config errors.

    Moreover, I'm afraid that 4 hosts will appear in Icinga Web (two for each endpoint).

    Sure it does.

    In an HA load-sharing scenario, you expect that one node is allowed to die and the remaining one then takes all the burden.

    Believe me - they agree about who is checking what - but to take that decision they both need consistent knowledge.


    For the same reason, you do not need to fear that 4 hosts will appear.

    Remember: Objects are not created in Endpoints - they are created in Zones.

    And the executive of a zone is an endpoint.


    Are you using, or have you used, Icinga2 in such a way? If yes, how are you doing it? :p


    I'll be glad to read your answers.

    I did not do it myself.

    But what you describe really seems to be the standard load-sharing scenario from the quoted docs - if I did not misunderstand you completely.

  • Thank you very much for your long answer. :thumbsup: I now understand some things that I hadn't understood before (HA, for example). I'll read more about the command_endpoint attribute.

    I think I wasn't clear enough when I explained. Sorry for that. :S


    Regarding what you said, here is my reformulated problem:

    • I have two servers I want to monitor (checks could be disk, load, process, ...).
    • I have one Master server where icinga2 is installed.
    • And I would like to use config sync mode between the Master and the Endpoints because, thanks to this mode, checks are scheduled locally and replay logs can be used.

    My first thought was to create two zones (one Master and one Zone), then to define two endpoints in the zone Zone (one for each server I have).

    However, as you said, HA will be enabled between the two endpoints, and I now guess it will not fit my architecture (because the two monitored servers could be separated, network-wise).


    So, to reformulate it:

    How could I logically group 2 hosts and use config sync mode with both of them (without enabling HA, because the servers could be network-separated and thus can't communicate)?

    Am I obliged to define a distinct zone for each server I want to monitor?

    EDIT: well, I think I found the answer:

    client nodes also have their own unique zone. By convention you can use the FQDN for the zone name.
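
    Following that convention, the layout could be sketched like this (the client FQDNs are placeholders):

    ```
    // Sketch: one dedicated zone per client, named after its FQDN
    // (the FQDNs below are placeholders).
    object Endpoint "client1.example.org" { }
    object Zone "client1.example.org" {
      parent = "Master"
      endpoints = [ "client1.example.org" ]
    }

    object Endpoint "client2.example.org" { }
    object Zone "client2.example.org" {
      parent = "Master"
      endpoints = [ "client2.example.org" ]
    }
    ```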



    Regards :):thumbsup:


  • Yes - use a dedicated zone for each client.

    That enables you to distribute different configs to them.

    And it separates them - due to the use of certificates - in a way that they are even unable to speak to each other.


    That is handy if you have to monitor machines of different customers.

  • Yes - use a dedicated zone for each client.

    That enables you to distribute different configs to them.

    And it separates them - due to the use of certificates - in a way that they are even unable to speak to each other.


    That is handy if you have to monitor machines of different customers.

    My "customers" could have more than one server. That's why my first idea was to create a zone for each customer with multiple endpoints within the zone.

    However, because of what we already discussed, it appears this doesn't fit my architecture.

    Thank you very much.


    So I'll probably use HostGroups.
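
    A HostGroup sketch for grouping one customer's hosts (the customer custom variable is my assumption, not something from this thread):

    ```
    // Sketch: group a customer's hosts logically, assuming each Host
    // sets vars.customer (placeholder convention).
    object HostGroup "customer-acme" {
      display_name = "Customer ACME"
      assign where host.vars.customer == "acme"
    }
    ```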


    I appreciate :) :thumbsup::thumbsup::thumbsup::thumbsup::thumbsup: