64 changes: 64 additions & 0 deletions doc/09-object-types.md
@@ -1241,6 +1241,70 @@ for an example.
TLS for the HTTP proxy can be enabled with `enable_tls`. In addition to that
you can specify the certificates with the `ca_path`, `cert_path` and `cert_key` attributes.

### ElasticsearchDatastreamWriter <a id="objecttype-elasticsearchdatastreamwriter"></a>

Writes check result metrics and performance data to an Elasticsearch timeseries datastream.
This configuration object is available as the [elasticsearch datastream feature](14-features.md#elasticsearch-datastream-writer).

> **Note:**
>
> This feature is experimental right now and behind a compilation flag.

Example:

```
object ElasticsearchDatastreamWriter "datastreamwriter" {
  host = "127.0.0.1"
  port = 9200
  datastream_namespace = "production"

  enable_send_perfdata = true

  host_tags_template = ["icinga-production"]
  filter = {{ "datastream" in host.groups }}

  flush_threshold = 1024
  flush_interval = 10
}
```

Configuration Attributes:

Name | Type | Description
--------------------------|-----------------------|----------------------------------
host | String | **Required.** Elasticsearch host address. Defaults to `127.0.0.1`.
port | Number | **Required.** Elasticsearch port. Defaults to `9200`.
enable\_tls | Boolean | **Optional.** Whether to use a TLS stream. Defaults to `false`.
insecure\_noverify | Boolean | **Optional.** Disable TLS peer verification.
ca\_path | String | **Optional.** Path to CA certificate to validate the remote host. Requires `enable_tls` set to `true`.
enable\_ha | Boolean | **Optional.** Enable the high availability functionality. Only valid in a [cluster setup](06-distributed-monitoring.md#distributed-monitoring-high-availability-features). Defaults to `false`.
flush\_interval | Duration | **Optional.** How long to buffer data points before transferring to Elasticsearch. Defaults to `10s`.
flush\_threshold | Number | **Optional.** How many data points to buffer before forcing a transfer to Elasticsearch. Defaults to `1024`.
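
The interplay of `flush_interval` and `flush_threshold` — whichever limit is reached first triggers a transfer — can be sketched in Python (a hypothetical helper for illustration, not the writer's actual implementation):

```python
import time


class FlushBuffer:
    """Buffer data points and flush when either a count threshold
    or a time interval is exceeded, whichever comes first."""

    def __init__(self, flush_threshold=1024, flush_interval=10.0, now=time.monotonic):
        self.flush_threshold = flush_threshold
        self.flush_interval = flush_interval
        self.now = now          # injectable clock, eases testing
        self.items = []
        self.last_flush = self.now()

    def add(self, item):
        """Append an item; return the flushed batch, or [] if no flush was due."""
        self.items.append(item)
        if (len(self.items) >= self.flush_threshold
                or self.now() - self.last_flush >= self.flush_interval):
            batch, self.items = self.items, []
            self.last_flush = self.now()
            return batch
        return []


# With flush_threshold = 3, the third add() triggers the flush:
buf = FlushBuffer(flush_threshold=3, flush_interval=60)
assert buf.add("a") == [] and buf.add("b") == []
assert buf.add("c") == ["a", "b", "c"]
```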

Authentication:

Name | Type | Description
--------------------------|-----------------------|----------------------------------
username                  | String                | **Optional.** Basic auth username for Elasticsearch.
password                  | String                | **Optional.** Basic auth password for Elasticsearch.
api\_token                | String                | **Optional.** Authorization token for Elasticsearch.
cert\_path | String | **Optional.** Path to host certificate to present to the remote host for mutual verification. Requires `enable_tls` set to `true`.
key\_path | String | **Optional.** Path to host key to accompany the cert\_path. Requires `enable_tls` set to `true`.
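
The first two credential options map onto standard HTTP `Authorization` headers, while client certificates are presented at the TLS layer. A short Python sketch of the header construction (the `Bearer` scheme for `api_token` follows the shipped feature config; treat the details as assumptions, not the writer's code):

```python
import base64


def auth_header(username=None, password=None, api_token=None):
    """Build the HTTP Authorization header for the configured credentials.

    Client-certificate authentication (cert_path/key_path) happens at the
    TLS layer and therefore needs no header at all.
    """
    if api_token:
        # Bearer token authentication, as described in the feature config.
        return {"Authorization": f"Bearer {api_token}"}
    if username and password:
        # HTTP Basic authentication: base64("user:password").
        raw = f"{username}:{password}".encode()
        return {"Authorization": "Basic " + base64.b64encode(raw).decode()}
    return {}


print(auth_header(username="icinga2", password="changeme"))
# {'Authorization': 'Basic aWNpbmdhMjpjaGFuZ2VtZQ=='}
```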

Changing the behavior of the writer:

Name | Type | Description
--------------------------|-----------------------|----------------------------------
datastream\_namespace     | String                | **Required.** Suffix for the datastream names. Defaults to `default`.
manage\_index\_template | Boolean | **Optional.** Whether to create and manage the index template in Elasticsearch. This requires the user to have `manage_index_templates` permission in Elasticsearch. Defaults to `true`.
enable\_send\_perfdata | Boolean | **Optional.** Send parsed performance data metrics for check results. Defaults to `false`.
enable\_send\_thresholds  | Boolean               | **Optional.** Whether to send warn, crit, min & max performance data. Defaults to `false`.
host\_tags\_template      | Array                 | **Optional.** Allows adding [tags](https://www.elastic.co/docs/reference/ecs/ecs-base#field-tags) to the document for a Host check result.
service\_tags\_template   | Array                 | **Optional.** Allows adding [tags](https://www.elastic.co/docs/reference/ecs/ecs-base#field-tags) to the document for a Service check result.
host\_labels\_template    | Dictionary            | **Optional.** Allows adding [labels](https://www.elastic.co/docs/reference/ecs/ecs-base#field-labels) to the document for a Host check result.
service\_labels\_template | Dictionary            | **Optional.** Allows adding [labels](https://www.elastic.co/docs/reference/ecs/ecs-base#field-labels) to the document for a Service check result.
filter | Function | **Optional.** An expression to filter which check results should be sent to Elasticsearch. Defaults to sending all check results.

### ExternalCommandListener <a id="objecttype-externalcommandlistener"></a>

Implements the Icinga 1.x command pipe which can be used to send commands to Icinga.
115 changes: 115 additions & 0 deletions doc/14-features.md
@@ -439,6 +439,121 @@ The recommended way of running Elasticsearch in this scenario is a dedicated ser
where you either have the Elasticsearch HTTP API, or a TLS secured HTTP proxy,
or Logstash for additional filtering.


#### Elasticsearch Datastream Writer <a id="elasticsearch-datastream-writer"></a>

> **Note**
>
> This is a newer alternative to the Elasticsearch Writer above. The Elasticsearch Datastream Writer uses
> Elasticsearch's datastream feature and follows the Elastic Common Schema (ECS), providing better performance
> and data organization. Use this writer for new installations. The original Elasticsearch Writer is still
> available for backward compatibility.

This feature sends check results with performance data to an [Elasticsearch](https://www.elastic.co/products/elasticsearch) instance or cluster.

> **Note**
>
> This feature requires Elasticsearch to support time series data streams and to have the ECS component template installed.
> This feature was tested successfully with Elasticsearch 8.12 and 9.0.8.
> **Review comment (Member):** Do we need to explicitly mention that the ElasticsearchDatastreamWriter will not work with OpenSearch? Since the current ElasticsearchWriter works with OpenSearch, users might expect the same here.
>
> **Review comment (Member):** Did some testing with OpenSearch; luckily there is little that needs to change for this implementation to work with OpenSearch.
>
> * The index-template.json needs to be changed; the users can do that themselves.
> * OpenSearch responds with `charset=UTF-8`, not `charset=utf-8`.
>
> 0001-Opensearch.patch

Enable the feature and restart Icinga 2.

```bash
icinga2 feature enable elasticsearchdatastream
```

The default configuration expects an Elasticsearch instance running on `localhost` on port `9200`
and writes to datastreams with the pattern `metrics-icinga2.<check>-<datastream_namespace>`.

More configuration details can be found [here](09-object-types.md#objecttype-elasticsearchdatastreamwriter).

#### Current Elasticsearch Schema <a id="elasticsearch-datastream-writer-schema"></a>

The documents for the ElasticsearchDatastreamWriter try to follow the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/index.html)
version `8.0` as closely as possible, with some additional changes to fit the Icinga 2 data model.
All documents are written to a data stream of the format `metrics-icinga2.<check>-<datastream_namespace>`,
where `<check>` is the name of the check command being executed. This keeps the number of fields per index low
and groups documents with the same performance data together. `<datastream_namespace>` is an optional
configuration parameter to further separate documents, e.g. by environment like `production` or `development`.
The `datastream_namespace` can also be used to separate documents e.g. by hostgroups or zones: use the
`filter` function to filter the check results and configure several writers with different namespaces.
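
The naming scheme can be illustrated with a short Python sketch (the lowercasing is an assumption on our part — Elasticsearch requires lowercase data stream names, but Icinga 2's exact normalization may differ):

```python
def datastream_name(check_command: str, namespace: str = "default") -> str:
    """Build the backing data stream name for a check result.

    Follows the documented pattern metrics-icinga2.<check>-<datastream_namespace>.
    """
    # Lowercasing is an assumption: Elasticsearch data stream names
    # must be lowercase, so some normalization has to happen somewhere.
    return f"metrics-icinga2.{check_command.lower()}-{namespace.lower()}"


print(datastream_name("load", "production"))  # metrics-icinga2.load-production
print(datastream_name("ping4"))               # metrics-icinga2.ping4-default
```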

Icinga 2 automatically adds the following threshold metrics if they exist:

```
perfdata.<perfdata-label>.min
perfdata.<perfdata-label>.max
perfdata.<perfdata-label>.warn
perfdata.<perfdata-label>.crit
```
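
As an illustration of how a single plugin perfdata item maps onto these flattened fields, here is a minimal Python sketch (the `.value` field name and the simplified parsing are assumptions for illustration, not the writer's actual code):

```python
def perfdata_fields(label: str, perfdata: str) -> dict:
    """Map one plugin perfdata item onto flattened document fields.

    Expects the standard plugin format value[;warn[;crit[;min[;max]]]].
    Units and quoting are ignored here to keep the sketch short.
    """
    keys = ["value", "warn", "crit", "min", "max"]
    fields = {}
    for key, raw in zip(keys, perfdata.split(";")):
        if raw != "":  # empty slots (e.g. "0.1;;0.5") are simply omitted
            fields[f"perfdata.{label}.{key}"] = float(raw)
    return fields


print(perfdata_fields("load1", "0.5;1;2;0;4"))
```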

#### Adding additional tags and labels <a id="elasticsearch-datastream-writer-custom-tags-labels"></a>

Additionally, it is possible to configure custom tags and labels that are applied to the metrics via
`host_tags_template`/`service_tags_template` and `host_labels_template`/`service_labels_template`
respectively. Depending on whether the write event was triggered on a service or host object,
additional tags are added to the Elasticsearch entries.

A host metrics entry configured with the following templates:

```
host_tags_template = ["production", "$host.groups$"]
host_labels_template = {
  os = "$host.vars.os$"
}
```

will, in addition to the above mentioned fields, also contain:

```
"tags": ["production", "linux-servers;group-A"],
"labels": { "os": "Linux" }
```

#### Filtering check results <a id="elasticsearch-datastream-writer-filtering"></a>

It is possible to filter the check results that are sent to Elasticsearch by using the `filter`
parameter. This parameter takes a function that is called for each check result and must return
a boolean value. If the function returns `true`, the check result is sent to Elasticsearch,
otherwise it is skipped.

The function has access to the check result object in the `cr` variable, to the checkable object
(host or service) in the `checkable` variable, and to the host and service objects in the
`host` and `service` variables respectively.

An example configuration that only sends check results of services in the `linux-servers` hostgroup
and with a critical state:

```
object ElasticsearchDatastreamWriter "elasticsearchdatastream" {
  ...
  datastream_namespace = "production"
  filter = {{ service && "linux-servers" in host.groups && cr.state == 2 }}
}
```

#### Elasticsearch Datastream Writer in Cluster HA Zones <a id="elasticsearch-datastream-writer-cluster-ha"></a>

The Elasticsearch Datastream Writer feature supports [high availability](06-distributed-monitoring.md#distributed-monitoring-high-availability-features)
in cluster zones.

By default, all endpoints in a zone will activate the feature and start
writing events to the Elasticsearch HTTP API. In HA enabled scenarios,
it is possible to set `enable_ha = true` in all feature configuration
files. This allows each endpoint to calculate the feature authority,
and only one endpoint actively writes events, the other endpoints
pause the feature.

When the cluster connection breaks at some point, the remaining endpoint(s)
in that zone will automatically resume the feature. This built-in failover
mechanism ensures that events are written even if the cluster fails.

The recommended way of running Elasticsearch in this scenario is a dedicated server
where you either have the Elasticsearch HTTP API, or a TLS secured HTTP proxy,
or Logstash for additional filtering.


### Graylog Integration <a id="graylog-integration"></a>

#### GELF Writer <a id="gelfwriter"></a>
81 changes: 81 additions & 0 deletions etc/icinga2/features-available/elasticsearchdatastream.conf
@@ -0,0 +1,81 @@
/** The ElasticsearchDatastreamWriter feature writes Icinga 2 events to an Elasticsearch datastream.
* This feature requires Elasticsearch 8.12 or later.
*/

object ElasticsearchDatastreamWriter "elasticsearch" {
host = "127.0.0.1"
port = 9200

/* To enable a https connection, set enable_tls to true. */
// enable_tls = false

/* The datastream namespace to use. This can be used to separate different
 * Icinga instances, or to let multiple writers write to different
 * datastreams in the same Elasticsearch cluster by using the filter option.
 * The Elasticsearch datastream name will be
 * "metrics-icinga2.{check}-{datastream_namespace}".
 */

// datastream_namespace = "default"

/* You can authenticate Icinga 2 through three different methods:
 * 1. Basic authentication with username and password.
 * 2. Bearer token authentication with api_token.
 * 3. Client certificate authentication with cert_path and key_path.
 */
// username = "icinga2"
// password = "changeme"

// api_token = ""

// cert_path = "/path/to/cert.pem"
// key_path = "/path/to/key.pem"
// ca_path = "/path/to/ca.pem"

/* Enable sending the threshold values as additional fields
 * with the service check metrics. If set to true, it will
 * send warn and crit for every performance data item.
 */

// enable_send_thresholds = false

/* The flush settings control how often data is sent to Elasticsearch.
* You can either flush based on a time interval or the number of
* events in the buffer. Whichever comes first will trigger a flush.
*/
// flush_threshold = 1024
// flush_interval = 10s

/* By default, all endpoints in a zone will activate the feature and start
* writing events to the Elasticsearch HTTP API. In HA enabled scenarios,
* it is possible to set `enable_ha = true` in all feature configuration
* files. This allows each endpoint to calculate the feature authority,
* and only one endpoint actively writes events, the other endpoints
* pause the feature.
*/
// enable_ha = false

/* By default, the feature will create an index template in Elasticsearch
* for the datastreams. If you want to manage the index template yourself,
* set manage_index_template to false.
*/
// manage_index_template = true

/* Additional tags and labels can be added to the host and service
* documents by using the host_tags_template, service_tags_template,
* host_labels_template and service_labels_template options.
* The tags and labels are static and will be added to every document.
*/
// host_tags_template = [ "icinga", "$host.vars.os$" ]
// service_tags_template = [ "icinga", "$service.vars.id$" ]
// host_labels_template = { "env" = "production", "os" = "$host.vars.os$" }
// service_labels_template = { "env" = "production", "id" = "$service.vars.id$" }

/* The filter option can be used to filter which events are sent to
* Elasticsearch. The filter is a regular Icinga 2 filter expression.
* The filter is applied to both host and service events.
* If the filter evaluates to true, the event is sent to Elasticsearch.
* If the filter is not set, all events are sent to Elasticsearch.
* You can use any attribute of the host, service, checkable or
* checkresult (cr) objects in the filter expression.
*/
// filter = {{ host.name == "myhost" || service.name == "myservice" }}
}
12 changes: 12 additions & 0 deletions lib/perfdata/CMakeLists.txt
@@ -6,6 +6,7 @@ mkclass_target(influxdbcommonwriter.ti influxdbcommonwriter-ti.cpp influxdbcommo
mkclass_target(influxdbwriter.ti influxdbwriter-ti.cpp influxdbwriter-ti.hpp)
mkclass_target(influxdb2writer.ti influxdb2writer-ti.cpp influxdb2writer-ti.hpp)
mkclass_target(elasticsearchwriter.ti elasticsearchwriter-ti.cpp elasticsearchwriter-ti.hpp)
mkclass_target(elasticsearchdatastreamwriter.ti elasticsearchdatastreamwriter-ti.cpp elasticsearchdatastreamwriter-ti.hpp)
mkclass_target(opentsdbwriter.ti opentsdbwriter-ti.cpp opentsdbwriter-ti.hpp)
mkclass_target(perfdatawriter.ti perfdatawriter-ti.cpp perfdatawriter-ti.hpp)

@@ -18,6 +19,7 @@ set(perfdata_SOURCES
influxdb2writer.cpp influxdb2writer.hpp influxdb2writer-ti.hpp
opentsdbwriter.cpp opentsdbwriter.hpp opentsdbwriter-ti.hpp
perfdatawriter.cpp perfdatawriter.hpp perfdatawriter-ti.hpp
elasticsearchdatastreamwriter.cpp elasticsearchdatastreamwriter.hpp elasticsearchdatastreamwriter-ti.hpp
)

if(ICINGA2_UNITY_BUILD)
@@ -58,6 +60,15 @@ install_if_not_exists(
${ICINGA2_CONFIGDIR}/features-available
)

install_if_not_exists(
${PROJECT_SOURCE_DIR}/usr/elasticsearch/index-template.json
${ICINGA2_PKGDATADIR}/elasticsearch
)
install_if_not_exists(
${PROJECT_SOURCE_DIR}/etc/icinga2/features-available/elasticsearchdatastream.conf
${ICINGA2_CONFIGDIR}/features-available
)

install_if_not_exists(
${PROJECT_SOURCE_DIR}/etc/icinga2/features-available/opentsdb.conf
${ICINGA2_CONFIGDIR}/features-available
@@ -68,6 +79,7 @@ install_if_not_exists(
${ICINGA2_CONFIGDIR}/features-available
)

install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_SPOOLDIR}/perfdata\")")
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_SPOOLDIR}/tmp\")")
