Understanding Icinga hostgroups and Nagvis - small issue


#1

Hi all,

I am using Icinga 2.4.2 and NagVis 1.1.1.

I have 6 production hosts that run Windows 2012. One of the hosts has a disk space issue, so it is in CRITICAL state.
Unfortunately, some of the other hosts in the same hostgroup (though not all of them) sometimes show as red (critical) on my NagVis map, with "1 critical" shown on hover, until I click 'refresh status'; then they go green, only to turn red again a few minutes later. Is it supposed to work like this? When I click through, everything is green. (I assume that once I fix the disk space issue, all the hosts will show green.)

How were the hosts added? I uploaded a background PNG, created a map, then added the hosts one by one.

Just adding an edit here: it might not be related to the hostgroups but rather to the hosts that share the same map. Some of the non-production hosts on that map are yellow and appear to affect the production hosts I mentioned. I have prod / staging / test on the same map. Why would a test host with an issue affect an unrelated prod host on the same map?


(Michael Friedrich) #2

Do you have some screenshots, configuration samples and details on the NagVis backend config? That would help us understand the problem.


#3

This is what I see in the nagvis.ini.php:

[global]
authmodule="CoreAuthModIcingaweb2"
authorisationmodule="CoreAuthorisationModIcingaweb2"
logonmodule="LogonIcingaweb2"

[paths]
base="/usr/share/nagvis/"
htmlbase="/nagvis/"
htmlcgi="/icingaweb2"

[defaults]
backend="live_1"
urltarget="_top"
hosturl="[htmlcgi]/monitoring/host/show?host=[host_name]"
hostgroupurl="[htmlcgi]/monitoring/list/hostgroups?hostgroup_name=[hostgroup_name]"
serviceurl="[htmlcgi]/monitoring/service/show?host=[host_name]&service=[service_description]"
servicegroupurl="[htmlcgi]/monitoring/list/servicegroups?servicegroup=[servicegroup_name]"
mapurl="[htmlcgi]/nagvis/show/map?map=[map_name]"
headermenu=1
stylesheet="icingaweb-nagvis-integration.css"

[index]
[automap]
[wui]
[worker]
interval=10

[backend_live_1]
backendtype="mklivestatus"
socket="unix:/var/run/icinga2/cmd/livestatus"

[backend_ndomy_1]
backendtype="ndomy"
dbhost="localhost"
dbname="name"
dbuser="user"
dbpass="pass"
dbprefix="icinga_"

[rotation_demo]
maps="demo-load,demo-muc-srv1,demo-geomap,demo-automap"
interval=15

[states]

===========

It's a bit difficult to show screenshots or host info without some editing. Hopefully there is something in that config that looks a little 'funky'…

thanks!


#4

Here you can see the issue. The red host in STAGING has a real critical: the disk space issue.

The red critical in PRODUCTION is not real. If I right-click on the host in PRODUCTION and refresh, it goes green. The host below the red one in PRODUCTION also goes red from time to time. There are actually no criticals in PRODUCTION, but other hosts on the map that do have issues seem to randomly affect how my PRODUCTION hosts are displayed.

Maybe I have missed something (e.g. a noob error), but I am scratching my head with this one…


#5

I'm not sure you need two data sources, so I'd comment out one of the [backend_*] sections (probably ndomy).
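For example, something like this in nagvis.ini.php (assuming the `;` comment character, as is standard for INI files, and that no map definition still references `backend_ndomy_1`):

```ini
[backend_live_1]
backendtype="mklivestatus"
socket="unix:/var/run/icinga2/cmd/livestatus"

; disabled: only one backend is actually in use
;[backend_ndomy_1]
;backendtype="ndomy"
;dbhost="localhost"
;dbname="name"
;dbuser="user"
;dbpass="pass"
;dbprefix="icinga_"
```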


#6

Thanks for the suggestion. Can one not have 2 backends? The problem is that most of my hosts use live_1, and when I changed some hosts from ndomy_1 to live_1 I lost the map, so I had to go and edit the config file to set those hosts back to ndomy_1. I was getting an "Undefined Offset - 6" error.


#7

I'd say you can have more than one backend. OTOH, the livestatus backend retrieves its information from memory, whereas ndomy uses database queries.
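If in doubt, you can also query Livestatus directly over the unix socket and compare its answer with what NagVis displays. A minimal sketch (the socket path is taken from the `[backend_live_1]` section above; `build_query` and `query_livestatus` are just illustrative helpers, not part of NagVis):

```python
import socket

# socket path from the [backend_live_1] section of nagvis.ini.php
SOCKET_PATH = "/var/run/icinga2/cmd/livestatus"

def build_query(table, columns):
    """Build a Livestatus (LQL) request for the given table and columns."""
    return "GET {}\nColumns: {}\nOutputFormat: json\n\n".format(
        table, " ".join(columns))

def query_livestatus(query, path=SOCKET_PATH):
    """Send an LQL query over the unix socket and return the raw response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(query.encode())
        s.shutdown(socket.SHUT_WR)  # signal that the query is complete
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# Example: list host names with their current state (0=UP, 1=DOWN, 2=UNREACHABLE)
q = build_query("hosts", ["name", "state"])
# print(query_livestatus(q))
```

If the states Livestatus reports are correct while the map flip-flops, the problem is on the NagVis side rather than in the backend data.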