Multiple master setup without HA

Hi,
first of all I’d like to thank you all for the great monitoring software you are developing. I’ve been in the “monitoring business” since 2008; I started with Nagios 2 and then continued with Icinga Legacy and Naemon. Then I discovered Icinga2, and since then I’ve been using it on several separate installations, with the Director only.
In one of these installations I have 100+ monitored machines (mostly via the Icinga2 agent), both Linux and Windows, and about 700+ services, most of them with pnp4nagios graphs. All of it is done with just one master plus about 6 additional zones managed by 9 satellites. I’m using an Ansible script for Icinga agent deployment and check deployment.

Now I want to create an additional master (or two more, but I’m still not sure whether the bug that affects zones with more than 2 checkers is still an issue?). However, I’d like to avoid a complex setup with a DB cluster, keep the masters separate (enable_ha = false), and appoint one of the masters as the “configuration master” with the Director installed. Both masters would have Icinga Web installed.

Is this a recommended approach? What are the pitfalls of this kind of setup? Do I have to worry about anything special? I’m also considering moving from pnp4nagios to Graphite or Grafana. I know I have to take care of notifications on the “backup master” (hopefully I could switch them on via an event handler).

Thanks for all comments in advance.

Some answers, some personal thoughts:

Yes, it is.

A master is an endpoint at the root of the entire structure; it has all knowledge about it.
In HA / load sharing scenarios, you would deal with three different features (see the config sketch after the list):

  • HA / load sharing for the checker feature; you cannot disable that for endpoints in the same zone.
  • HA / load sharing for the notification feature; disabling it usually results in duplicate notifications being sent out.
  • HA / load sharing for the IDO DB feature; disabling it leads to (I think…) multiple databases with hopefully the same data, as long as these do not diverge due to e.g. an endpoint being down.
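
For illustration, a rough sketch of how the latter two features can be toggled per endpoint (the checker feature has no such switch); the paths and credentials below are only placeholders:

```
// /etc/icinga2/features-enabled/notification.conf (sketch)
object NotificationComponent "notification" {
  // with HA disabled, every endpoint in the zone sends notifications itself,
  // which is where the duplicate notifications come from
  enable_ha = false
}

// /etc/icinga2/features-enabled/ido-mysql.conf (sketch)
object IdoMysqlConnection "ido-mysql" {
  host      = "localhost"   // placeholder credentials and DB settings
  user      = "icinga"
  password  = "secret"
  database  = "icinga"
  // with HA disabled, each master keeps its own IDO database
  enable_ha = false
}
```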

That said, I remember posts where this has been done as you describe.
But what would be the point of such a solution?
I would vote for a single master at the top and demote the machines you plan to be the “masters” to satellites below it (see the zones.conf sketch after the list):

  • You do not want HA / load sharing for any feature (as far as I understand).
  • You avoid the “2 endpoints per zone” limitation.
  • You can still have icingaweb2 for a dedicated range of objects on these satellites (search for “multitenancy” in this forum).
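
A minimal zones.conf sketch of that layout, with made-up host names, could look roughly like this:

```
// zones.conf (sketch, hypothetical host names)
object Endpoint "master1.example.org" { }
object Zone "master" {
  endpoints = [ "master1.example.org" ]
}

// the two machines originally planned as masters become one satellite zone
object Endpoint "sat1.example.org" { host = "sat1.example.org" }
object Endpoint "sat2.example.org" { host = "sat2.example.org" }
object Zone "satellite" {
  endpoints = [ "sat1.example.org", "sat2.example.org" ]
  parent = "master"
}

// global zone for templates and commands synced to all endpoints
object Zone "global-templates" {
  global = true
}
```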

What would be the point of running multiple masters in one zone with HA / load sharing disabled?

The main point I’m trying to solve with multiple masters is to keep monitoring running even if one master (and maybe also the underlying physical ESXi machine) fails… The second reason would be to distribute load between the masters. You mentioned that a dual-master configuration with HA disabled would give no load distribution between the masters? How is that possible? I noticed that with dual satellite nodes in one zone, load distribution does work (some checks run on one satellite node, some on the other). If what you wrote is true, then I think something is missing in the documentation, because not all the cases are fully described there…

What you described is the HA approach: splitting notifications/checks between the two masters and failing over in the event that one of the nodes goes down. You can set accept_config to false on one of them and have that one be your configuration master. If you do not want a clustered DB approach, a separate database server is recommended (however, this is another point of failure, so perhaps not the best approach).
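
A rough sketch of the relevant ApiListener settings, assuming “master1” is the node you pick as configuration master with the Director on it:

```
// /etc/icinga2/features-enabled/api.conf on master1 (configuration master, sketch)
object ApiListener "api" {
  accept_config   = false   // this node is the source of truth and does not accept config from its peer
  accept_commands = true
}

// /etc/icinga2/features-enabled/api.conf on master2 (sketch)
object ApiListener "api" {
  accept_config   = true    // receives the configuration synced from master1
  accept_commands = true
}
```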

The Director stores its configuration in your DB, so it should be fine to add the module to the secondary master as well.

One consideration is that the icingaweb2 ini files at /etc/icingaweb2/*.ini are NOT synchronized between your icingaweb2 servers. You will need to figure out a solution for that (to name a few: rsync, lsyncd, glusterfs, network share).

Hello, I want to do the same as @majales

I currently have one master with Grafana and InfluxDB on one host and want to add another one with the exact same checks as the first. The idea is that if master1 fails, the checks and notifications keep working while master1 is down, that is all. I will install Icinga Web on both of them, each working with the local instance of Icinga.
What will I lose if they have separate DBs?
As I understand it, there will be no duplicate checks or notifications while both masters are UP, am I correct?

You’ll need to have some way to keep the two DB servers in sync with each other so that if one drops, the other isn’t working with out-of-date information. You’d probably be better off setting up a single DB on its own machine, separate from both Icinga instances, and simply taking periodic backups of the DB if you need some kind of DR for it.
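
As a sketch, both masters would then point their IDO feature at that dedicated DB machine (host name and credentials are made up here), leaving HA enabled so that only one endpoint writes at a time:

```
// /etc/icinga2/features-enabled/ido-mysql.conf on both masters (sketch)
object IdoMysqlConnection "ido-mysql" {
  host      = "db.example.org"   // dedicated database server, hypothetical name
  user      = "icinga"           // placeholder credentials
  password  = "secret"
  database  = "icinga"
  enable_ha = true               // only the active endpoint in the zone writes to the DB
}
```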

It’s worth noting that the DB being unavailable doesn’t prevent checks and notifications from running. Afaik, you only lose historical data and WebUI functionality if the DB goes offline or gets wiped.

Thanks for the answer @mamoru. I do not need “historical data” and will put Icinga Web on both masters. 99% of the time I will use the first master.

As long as the icingaweb instances can communicate with at least one up-to-date IDO mirror, you should be good to go. It will definitely be simpler to have just one running IDO instance configured for HA than to hassle with replication and the like directly in MySQL/PostgreSQL :slight_smile: