Performance data with multiple masters

This forum was archived to /woltlab and is now in read-only mode.
  • Hi all,

    I have a general question regarding performance data and graphing.

    I'm currently setting up a test/evaluation installation of Icinga2 with the following components:
    2 master endpoints in an HA zone, a local IDO MySQL database on each, and top-down config sync to 2 satellite zones. I have Icingaweb2 installed on both masters and I can see that HA is working.

    The masters do not execute any checks but they have the graphing features enabled.

    In our future production setup we plan to ship all performance data to opentsdb. That will be approx. 60,000 service checks with performance data in our future Icinga2 setup.
    We now have some concerns that this could become a performance bottleneck when that amount of metric data is processed by one (or two?) master nodes.
    My general understanding is that the satellites, which execute the checks, report the check results (and performance data) to all endpoints in the parent (master) zone.
    I'm seeing traffic from both master nodes to InfluxDB (which I'm currently using for testing). My influxdb.conf is the same on both master nodes.
    Is it true that both masters send the same performance data, or is this balanced somehow? Is there a way to modify this behavior?
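
    For reference, a minimal sketch of the kind of writer config that would be identical on both masters (the hostname and thresholds here are hypothetical, not from the original post) — since each master runs its own copy of this feature and both receive every check result, both end up writing the same points:

    ```
    /* /etc/icinga2/features-enabled/influxdb.conf -- identical on both masters */
    object InfluxdbWriter "influxdb" {
      host = "influxdb.example.com"   // hypothetical shared InfluxDB endpoint
      port = 8086
      database = "icinga2"
      flush_threshold = 1024          // buffer up to this many data points
      flush_interval = 10s            // ...or flush at least this often
    }
    ```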


  • Currently both masters will send the same check result perfdata as they receive it. There might be an addition in the future that adds HA support for more features, similar to what the DB IDO feature already has.

  • Thanks Michi!

    The main reason I was asking is that we have concerns about sending all performance metrics through a single pipe (the master node).
    I've just made a quick test and enabled the influxdb and graphite writers on the satellite nodes to let them ship the data directly. Looks good so far.

    Is there any downside of this approach?
    If not, this would be our preferred solution anyway because it adds a lot of flexibility in the setup, e.g. having a separate opentsdb (graphite/influxdb) storage for each datacenter.


  • You can simply enable the features on the satellite nodes themselves, but keep in mind that they only see the configuration objects from their own zone. That should fit your load-balancing requirement nicely (and probably also any permission concerns about who sees what).
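
    To sketch the per-datacenter split discussed above (hostnames are hypothetical): after running `icinga2 feature enable influxdb` and restarting Icinga 2 on each satellite, the local feature config can point at a datacenter-local backend. Because each satellite only sees objects in its own zone, it will only ship metrics for its zone's checks:

    ```
    /* Satellite in DC1 -- /etc/icinga2/features-enabled/influxdb.conf */
    object InfluxdbWriter "influxdb" {
      host = "influxdb.dc1.example.com"  // hypothetical DC-local InfluxDB
      port = 8086
      database = "icinga2"
    }
    ```

    A satellite in another datacenter would carry the same object with its own local endpoint, giving separate metric storage per datacenter.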