'successful' Ping PENDING indefinitely

  • I have a basic top-down master-client configuration (connections going from master to client), where the client is a Windows Server 2012 host. I have a "cluster" health check and several service checks configured. In my Icinga Web 2 installation everything looks fine, including the cluster endpoint check.

    Except that the standard (locally executed) hostalive "self-ping" check of the client host hangs:

    - the status stays at PENDING forever

    - the check source is empty

    (see the Icinga Web 2 screen capture at the end)

    The host configuration is defined in a hosts.conf placed under zones.d/client/ on the master, and it syncs successfully to the client. It is literally nothing more than 'import "generic-host"' plus 'address = "x.x.x.x"'.
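    For reference, the synced host object described above would look roughly like this (the host name is a placeholder, as only the two statements mentioned are known):

    ```
    // zones.d/client/hosts.conf on the master (host name is hypothetical)
    object Host "windows-client" {
      import "generic-host"
      address = "x.x.x.x"
    }
    ```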

    The icinga2 debug log entries show that the default hostalive ping executes fine on the client:

    [2017-05-19 17:36:06 FLE Daylight Time] notice/Process: Running command '"C:\Program Files\ICINGA2\/sbin/check_ping" -H x.x.x.x -c 5000,100% -w 3000,80%': PID 4636

    [2017-05-19 17:36:11 FLE Daylight Time] notice/Process: PID 4636 ('"C:\Program Files\ICINGA2\/sbin/check_ping" -H x.x.x.x -c 5000,100% -w 3000,80%') terminated with exit code 0

    Yet it hangs. There are definitely no ping issues; I've verified manually that everything pings fine. I've tried both localhost and the "real" IP, and the same thing happens.

    Any idea what could cause this? I am not sure whether this is an icinga2 or icingaweb2 issue.

    Icinga Web 2 screen capture:

  • I would check whether the zone hierarchy is correct and the endpoint names match. Please add the zones.conf from both involved nodes so we can better understand your issue.
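    For comparison, a minimal top-down zones.conf typically looks like this on both nodes (all endpoint and zone names below are placeholders; the endpoint names must match the certificate common names of your hosts):

    ```
    // zones.conf (identical on master and client; names are placeholders)
    object Endpoint "master.example.com" {
    }

    object Endpoint "client.example.com" {
      host = "x.x.x.x"   // master connects to the client (top-down)
    }

    object Zone "master" {
      endpoints = [ "master.example.com" ]
    }

    object Zone "client" {
      endpoints = [ "client.example.com" ]
      parent = "master"
    }
    ```

    If the zone name referenced by zones.d/client/ on the master does not match a Zone object known to both nodes, synced checks can end up never being scheduled.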

    One thing you should also check: query /v1/objects/services on the REST API on both nodes and compare the last_check / next_check timestamps (you could also use the console commands explained in the troubleshooting docs).
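    A query along these lines should show the timestamps (the port is the Icinga 2 API default; the credentials and the service name are assumptions you will need to adapt to your API user and host):

    ```shell
    # Query one service object and pretty-print it; compare
    # attrs.last_check / attrs.next_check between master and client.
    # 'root:icinga' and 'windows-client!ping4' are placeholders.
    curl -k -s -u root:icinga \
      'https://localhost:5665/v1/objects/services?service=windows-client!ping4' \
      | python -m json.tool
    ```

    A next_check in the past that never advances on one node is a strong hint that the check is scheduled there but the result never makes it back up the zone hierarchy.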