Create a new ElasticsearchDatastreamWriter to more efficiently store data in Elasticsearch. #10577
base: master
@@ -0,0 +1,81 @@
/** The ElasticsearchDatastreamWriter feature writes Icinga 2 events to an Elasticsearch datastream.
 * This feature requires Elasticsearch 8.12 or later.
 */

object ElasticsearchDatastreamWriter "elasticsearch" {
  host = "127.0.0.1"
  port = 9200

  /* To enable an HTTPS connection, set enable_tls to true. */
  // enable_tls = false

  /* The datastream namespace to use. It can separate different Icinga
   * instances, or, combined with the filter option, let multiple writers
   * write to different datastreams in the same Elasticsearch cluster.
   * The resulting Elasticsearch datastream name is
   * "metrics-icinga2.{check}-{datastream_namespace}".
   */
Review comment on lines +12 to +17: The block comments in this file are a bit inconsistent. Can you please ensure they're in this format: Sorry for nitpicking again 😄
  // datastream_namespace = "default"

  /* You can authenticate Icinga 2 through three different methods:
   * 1. Basic authentication with username and password.
   * 2. Bearer token authentication with api_token.
   * 3. Client certificate authentication with cert_path and key_path.
   */
  // username = "icinga2"
  // password = "changeme"

  // api_token = ""

  // cert_path = "/path/to/cert.pem"
  // key_path = "/path/to/key.pem"
  // ca_path = "/path/to/ca.pem"

  /* Enable sending the threshold values as additional fields
   * with the service check metrics. If set to true, warn and crit
   * are sent for every performance data item.
   */
  // enable_send_thresholds = false

  /* The flush settings control how often data is sent to Elasticsearch.
   * You can flush based on either a time interval or the number of
   * events in the buffer; whichever comes first triggers a flush.
   */
  // flush_threshold = 1024
  // flush_interval = 10s

  /* By default, all endpoints in a zone activate the feature and write
   * events to the Elasticsearch HTTP API. In HA-enabled scenarios, set
   * `enable_ha = true` in all feature configuration files. Each endpoint
   * then calculates the feature authority, and only one endpoint actively
   * writes events while the other endpoints pause the feature.
   */
  // enable_ha = false

  /* By default, the feature creates an index template in Elasticsearch
   * for the datastreams. If you want to manage the index template
   * yourself, set manage_index_template to false.
   */
  // manage_index_template = true

  /* Additional tags and labels can be added to the host and service
   * documents with the host_tags_template, service_tags_template,
   * host_labels_template and service_labels_template options.
   * The tags and labels are static and are added to every document.
   */
  // host_tags_template = [ "icinga", "$host.vars.os$" ]
  // service_tags_template = [ "icinga", "$service.vars.id$" ]
  // host_labels_template = { "env" = "production", "os" = "$host.vars.os$" }
  // service_labels_template = { "env" = "production", "id" = "$host.vars.id$" }

  /* The filter option selects which events are sent to Elasticsearch.
   * The filter is a regular Icinga 2 filter expression and is applied
   * to both host and service events. If the filter evaluates to true,
   * the event is sent; if the filter is not set, all events are sent.
   * You can use any attribute of the host, service, checkable or
   * checkresult (cr) objects in the filter expression.
   */
  // filter = {{ "host.name == 'myhost' || service.name == 'myservice'" }}
Review comment: Why is this quoted inside the lambda? Furthermore, if you remove the outer quotes, the inner single-quotes are not valid DSL syntax.
}
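The flush behavior described in the config comments (send buffered events when either the event-count threshold or the flush interval is reached, whichever comes first) can be sketched as follows. This is an illustrative Python sketch of the buffering logic only, not the PR's actual C++ implementation; all names here are hypothetical:

```python
import time


class FlushBuffer:
    """Illustrative sketch: buffer events and flush when either the
    event-count threshold or the flush interval is hit first."""

    def __init__(self, flush_threshold=1024, flush_interval=10.0, now=time.monotonic):
        self.flush_threshold = flush_threshold   # mirrors flush_threshold in the config
        self.flush_interval = flush_interval     # mirrors flush_interval (seconds)
        self._now = now
        self._events = []
        self._last_flush = now()

    def add(self, event):
        """Buffer an event; return the flushed batch if a flush was triggered."""
        self._events.append(event)
        if self._should_flush():
            return self.flush()
        return None

    def _should_flush(self):
        # Whichever condition is met first triggers the flush.
        return (len(self._events) >= self.flush_threshold
                or self._now() - self._last_flush >= self.flush_interval)

    def flush(self):
        # In the real feature this would be a bulk request to Elasticsearch.
        batch, self._events = self._events, []
        self._last_flush = self._now()
        return batch
```

With `flush_threshold = 3`, for example, the third `add()` call returns the batch of three events and resets the buffer.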
@@ -6,6 +6,7 @@ mkclass_target(influxdbcommonwriter.ti influxdbcommonwriter-ti.cpp influxdbcommo
mkclass_target(influxdbwriter.ti influxdbwriter-ti.cpp influxdbwriter-ti.hpp)
mkclass_target(influxdb2writer.ti influxdb2writer-ti.cpp influxdb2writer-ti.hpp)
mkclass_target(elasticsearchwriter.ti elasticsearchwriter-ti.cpp elasticsearchwriter-ti.hpp)
mkclass_target(elasticsearchdatastreamwriter.ti elasticsearchdatastreamwriter-ti.cpp elasticsearchdatastreamwriter-ti.hpp)
mkclass_target(opentsdbwriter.ti opentsdbwriter-ti.cpp opentsdbwriter-ti.hpp)
mkclass_target(perfdatawriter.ti perfdatawriter-ti.cpp perfdatawriter-ti.hpp)

@@ -18,6 +19,7 @@ set(perfdata_SOURCES
influxdb2writer.cpp influxdb2writer.hpp influxdb2writer-ti.hpp
opentsdbwriter.cpp opentsdbwriter.hpp opentsdbwriter-ti.hpp
perfdatawriter.cpp perfdatawriter.hpp perfdatawriter-ti.hpp
elasticsearchdatastreamwriter.cpp elasticsearchdatastreamwriter.hpp elasticsearchdatastreamwriter-ti.hpp
)

if(ICINGA2_UNITY_BUILD)

@@ -58,6 +60,15 @@ install_if_not_exists(
${ICINGA2_CONFIGDIR}/features-available
)

install_if_not_exists(
${PROJECT_SOURCE_DIR}/usr/elasticsearch/index-template.json
${ICINGA2_PKGDATADIR}/elasticsearch
)
install_if_not_exists(
${PROJECT_SOURCE_DIR}/etc/icinga2/features-available/elasticsearchdatastream.conf
${ICINGA2_CONFIGDIR}/features-available
)

install_if_not_exists(
${PROJECT_SOURCE_DIR}/etc/icinga2/features-available/opentsdb.conf
${ICINGA2_CONFIGDIR}/features-available

@@ -68,6 +79,7 @@ install_if_not_exists(
${ICINGA2_CONFIGDIR}/features-available
)


Review comment: Superfluous newline.

install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_SPOOLDIR}/perfdata\")")
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_SPOOLDIR}/tmp\")")
Review comment: Do we need to explicitly mention that the ElasticsearchDatastreamWriter will not work with OpenSearch? Since the current ElasticsearchWriter works with OpenSearch, users might expect the same here.

Reply: Did some testing with OpenSearch; luckily, there is little that needs to change for this implementation to work with OpenSearch:

- index-template.json needs to be changed; users can do that themselves
- charset=UTF-8, not charset=utf-8

0001-Opensearch.patch
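For reference, the datastream naming scheme documented in the feature config above ("metrics-icinga2.{check}-{datastream_namespace}") can be illustrated with a small helper. This is a hypothetical sketch of the documented pattern, not code from the PR:

```python
def datastream_name(check: str, namespace: str = "default") -> str:
    """Build a datastream name following the pattern documented in the
    feature config: metrics-icinga2.{check}-{datastream_namespace}."""
    return f"metrics-icinga2.{check}-{namespace}"


# e.g. datastream_name("ping4") -> "metrics-icinga2.ping4-default"
```

Separating instances by namespace (e.g. `datastream_name("ping4", "site-a")`) keeps each writer's documents in its own datastream within the same cluster.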