Graphing disk usage with Grafana/Kibana using Elasticsearch as data source

(seccentral) #1

I have a working instance of Icinga 2 with Icingabeat that sends data to Elasticsearch. I can use the stock-provided Kibana dashboards and have also made a few Grafana graphs, but now I want to graph disk usage on a few hosts.

How can I configure Icingabeat to send perfdata to the Elasticsearch cluster so that I can graph it using Grafana or Kibana?
I see a field named check_result.performance_data in the index template (and I suppose this is where perfdata should be), but the field is missing from all documents in the icingabeat index.
My guess is that Icingabeat simply does not send perfdata to Elasticsearch.

Thanks in advance for any hints that can help me solve this problem.

(Michael Friedrich) #2

Icingabeat doesn’t send performance data metrics to Elasticsearch. If you want to (ab)use Elasticsearch as document storage for metrics, you might want to look into the elasticsearch feature in Icinga 2 v2.8.
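For reference, enabling that feature looks roughly like this (a sketch; the host and index name are placeholders to adapt to your setup, so check the Icinga 2 v2.8 documentation for the full attribute list):

```
# /etc/icinga2/features-available/elasticsearch.conf
object ElasticsearchWriter "elasticsearch" {
  host = "127.0.0.1"            # placeholder: your Elasticsearch node
  port = 9200
  index = "icinga2"             # index name used by the feature
  enable_send_perfdata = true   # also ship performance data metrics
}
```

Then enable it and restart: `icinga2 feature enable elasticsearch && systemctl restart icinga2`.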

Better yet, I suggest letting Icinga 2 write performance metrics to Graphite and using that in Grafana to visualize disk usage.
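The Graphite side is a small config change as well (a sketch; host and port are the usual Carbon plaintext defaults, adjust to your environment):

```
# /etc/icinga2/features-available/graphite.conf
object GraphiteWriter "graphite" {
  host = "127.0.0.1"   # placeholder: your Carbon cache host
  port = 2003          # Carbon plaintext protocol port
}
```

Enable it with `icinga2 feature enable graphite`, then add Graphite as a data source in Grafana and build the disk usage panels on top of it.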

As an aside: if you are going the trends route with Grafana, calculations such as AVG or current value are really hard with Elasticsearch.

(seccentral) #3

Hey, thanks.
So, for the first option, my indexes so far are useless for this, and I should enable the elasticsearch feature and start from scratch.
The second option would be to write perf metrics to Graphite and configure that as a data source for Grafana (I doubt Grafana can do multiple data sources per graph, and I want to aggregate several things).
The thing is, I would have preferred to have all data centralized in Elasticsearch.
Then what is the point of the check_result.performance_data field in the index template? As stated above, not one document has that field populated, and there are almost a million documents in the icingabeat index by now.
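One way to confirm that suspicion (a sketch; it assumes the default icingabeat-* index pattern and an Elasticsearch node reachable on localhost:9200) is to count the documents that actually contain the field:

```shell
# Documents in the icingabeat indexes that contain
# the check_result.performance_data field:
curl -s 'http://localhost:9200/icingabeat-*/_count?q=_exists_:check_result.performance_data'

# Total document count, for comparison:
curl -s 'http://localhost:9200/icingabeat-*/_count'
```

If the first count is 0 while the second is near a million, the field exists in the template but is never populated.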

(Michael Friedrich) #4

Prior to using Icingabeat you should have read about its features; performance data metrics writing isn’t one of them. If you need to clear out your index, there are most likely tools around for just that which talk to the Elasticsearch REST API.
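If you do decide to start from scratch, the old icingabeat indexes can also be dropped directly over the REST API (a sketch; the index pattern is an assumption, and this is irreversible), or managed with a tool such as Elasticsearch Curator:

```shell
# Delete all icingabeat-* indexes (destructive, irreversible!):
curl -XDELETE 'http://localhost:9200/icingabeat-*'
```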

As said, I wouldn’t use Elasticsearch as a TSDB. That is why I recommend Graphite (or InfluxDB, OpenTSDB, etc.), which deal with just metrics. And if you really want to go that route with Elasticsearch and metrics, you can use the Icinga 2 feature.

Experience-wise there’s not much more to say; writing metrics to Elasticsearch from Icinga 2 was a customer-sponsored feature request, and we’ll see how it goes.

(seccentral) #5

The topic “Attempt using icinga2’s elasticsearch feature fails with code 400” seems to shed some light, but doesn’t provide a fix yet.
The Icinga 2 backend is 2.8.1-1.stretch from packages.icinga.com/debian icinga-stretch/main (Debian 9.4).
The Elasticsearch cluster version is:
"number" : "6.2.2"
I am not a coder, so waiting for a fix on this in Icinga 2’s feature is probably the way to go.
This is the log:

[2018-03-21 11:23:15 +0100] debug/ElasticsearchWriter: Timer expired writing 22 data points
[2018-03-21 11:23:15 +0100] notice/ElasticsearchWriter: Connecting to Elasticsearch on host '' port '9200'.
[2018-03-21 11:23:15 +0100] debug/ElasticsearchWriter: Sending POST request to ''.
[2018-03-21 11:23:15 +0100] warning/ElasticsearchWriter: Unexpected response code 400
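When chasing a 400 like this, the response body (which the log above does not show) usually names the actual problem. Replaying a minimal bulk request by hand is one way to see it (a sketch; the index name, type name, and document are made up for illustration, and it assumes an ES node on localhost):

```shell
# Send a tiny bulk request and print Elasticsearch's error body,
# which contains the actual mapping/validation complaint:
curl -s -H 'Content-Type: application/x-ndjson' \
  -XPOST 'http://localhost:9200/_bulk' \
  --data-binary $'{"index":{"_index":"icinga2","_type":"checkresult"}}\n{"timestamp":"2018-03-21T11:23:15"}\n'
```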

(Michael Friedrich) #6

Oh, you are on 6.x already. That depends on a fix for . The fix for the 400 only shields the real breaking change with the indexes in 6.x. For now, the elasticsearch feature only works with 5.x.