- The minimum requirement for all software (Elasticsearch, Logstash, Kibana) is 7.17
- This is for many reasons, but one of many great reasons: ingest pipelines >= 7.11 support the set processor option "copy_from"
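For illustration, a minimal sketch of a set processor using copy_from; the field names here are placeholders, not the shipped pipeline config:

```json
{
  "set": {
    "field": "destination.ip",
    "copy_from": "id.resp_h"
  }
}
```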
- tag_with_exception_message for the Logstash ruby filter block
  - requires version >= 7.17
  - requires version >= 8.x
- Ingest pipelines are finally performant in >= 8.8
- Logstash is more performant in 8.x
- Many, many other reasons
- [@metadata][corelight_env_vars][disable_legacy_md5_fingerprint_hash]: if set to true, disables the legacy MD5 fingerprint hashes; otherwise 8811-corelight-ecs-network_fingerprints-enrich-legacy_md5-filter.conf is called and adds an array with the MD5 alongside the existing SHA1
- Changed the values for labels.corelight.ecs_method: values are now logstash_pipeline or ingest_pipeline; previously they were logstash or ingest_pipeline
- Added labels.corelight.ecs_pipeline_method
- Replaced the text analyzer with es_stk_analyzer
- Removed the stats index, replaced with metrics. event.category is diagnostics, the same as the other metrics (previously it was 'stats'). These were the logs:
- capture_loss
- corelight_cloud_stats
- corelight_overall_capture_loss
- corelight_profiling
- corelight_weird_stats
- namecache
- packet_filter
- reporter
- stats
- suricata_stats
- weird_stats
- Removed the netcontrol and iam indices, replaced with system. These were the logs:
- netcontrol
- netcontrol_drop
- netcontrol_shunt
- corelight_audit_log
- audit
- auditlog
- Changed the suricata_corelight dataset back to suricata_corelight from suricata
- parse-failures is now parse_failures, and parse-failure is now parse_failure
- Changed the intel log event.category from intrusion_detection to threat
- Changed VAR_$Corelight_LS_Index_Strategy to VAR_CORELIGHT_INDEX_STRATEGY
- Logs that do not explicitly have a pipeline/parser (new or unknown) now get an event.category of tbd (previously either temporary or the value of event.dataset was used). event.dataset is not used because we do not want to autocreate datastreams that would live outside ILM patterns and other controls, settings, and permissions. The logs can still be searched as-is despite the name in the index pattern, and event.dataset still gets set.
- You can set [@metadata][custom_temporary_metadata_index_name_namespace] or [data_stream][namespace] to set a desired namespace for multiple tenants/clients
  - if [@metadata][custom_temporary_metadata_index_name_namespace] is set, it overrides [data_stream][namespace] even if that field already exists
- For custom mappings, settings, ILM, etc., use the following @custom component templates (see the sketch after this list):
  - aliases
    - corelight-main_logs-aliases@custom
    - corelight-metrics_and_stats-aliases@custom
    - corelight-parse_failures-aliases@custom
  - base settings
    - corelight-main_logs-base-settings@custom
    - corelight-metrics_and_stats-base-settings@custom
    - corelight-parse_failures-base-settings@custom
  - ilm settings
    - corelight-main_logs-ilm-settings@custom
    - corelight-metrics_and_stats-ilm-settings@custom
    - corelight-parse_failures-ilm-settings@custom
  - mappings
    - corelight-main_logs-mappings@custom
    - corelight-metrics_and_stats-mappings@custom
    - corelight-parse_failures-mappings@custom
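For example, a minimal sketch of supplying custom mappings through one of these hooks; the added field is purely illustrative:

```json
PUT _component_template/corelight-main_logs-mappings@custom
{
  "template": {
    "mappings": {
      "properties": {
        "mycompany_ticket_id": { "type": "keyword" }
      }
    }
  }
}
```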
- Ingest pipeline custom additions, user controlled. If the pipelines do not exist they are simply ignored; in other words, they are optional for the user (see the sketch below). The names are:
  - corelight-ecs-main-pipeline@custom, used before the Corelight ingest pipelines start
  - corelight-ecs-postprocess-final-main-pipeline@custom, used after the Corelight ingest pipelines complete
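A minimal sketch of defining the pre-processing hook; the set processor body is only an example of what a user might add:

```json
PUT _ingest/pipeline/corelight-ecs-main-pipeline@custom
{
  "description": "User-defined steps that run before the Corelight pipelines",
  "processors": [
    { "set": { "field": "organization.name", "value": "example-org" } }
  ]
}
```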
- Date fields that are user/software/attacker controlled (smtp.date, for example) or that could commonly be invalid (like x509 certificate dates before/after epoch) are now always kept instead of removed. If the value is a valid date timestamp, it is copied as such to a new field.
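As a sketch of that pattern in an ingest pipeline: parse the kept raw value and only emit the new field when it parses. The field names follow the smtp.date handling listed later in this document; the formats list is illustrative:

```json
{
  "date": {
    "field": "smtp.date_non_formatted_date",
    "target_field": "email.origination_timestamp",
    "formats": ["EEE, d MMM yyyy HH:mm:ss Z", "ISO8601"],
    "ignore_failure": true
  }
}
```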
- Logstash: moved 0101-corelight-ecs-user_defined-set_indexing_strategy-filter.conf into 3100-corelight-ecs-common-set_index_prefix_and_suffix-filter.conf (it no longer needs to be edited manually anyway)
- Installer script: use last run
  - uses the last run, and allows a directory to be specified
  - can also be used just to upload things you generated elsewhere, or if you already have the files generated
- Additional changelog:
- Logstash network fingerprints have MD5 and SHA1; SHA1 will be used going forward so that ingest pipelines will be able to support it too. Ingest pipelines do not have a builtin MD5 callable from the script processor, only the fingerprint processor, which unfortunately has a bug of copying a null byte that makes the hash unusable with other things (like Logstash in this case)
- Single-quote invalid IPs in event.tag (previously they were not quoted)
- IPs beginning with "0" (i.e. 0.0.0.0/8) changed:
- ip_type from "reserved_as_a_source_address_only" to "reserved_local_this_network"
- ip_rfc from "RFC_1700" to "RFC_1122-3.2.1.3"
- Split out the RFC info for all of RFC6890, RFC2544, and RFC5737, which were previously tagged together
- 255.255.255.255 changed from RFC_8190 to RFC_919
- Added the RFC for 240.0.0.0/4 (RFC5735); it is no longer grouped under multicast
- Carrier-grade NAT 100.64.0.0-100.127.255.255: fixed the type label
- 0.0.0.0 set to type reserved_any_address
- Switched the ingest pipeline's main_pipeline to call the pipeline name using event.dataset
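Conceptually, the dispatch looks like a pipeline processor whose name is templated on event.dataset. The exact naming below is illustrative; ignore_missing_pipeline, where available, lets unknown datasets fall through instead of failing:

```json
{
  "pipeline": {
    "name": "corelight-ecs-{{event.dataset}}-pipeline",
    "ignore_missing_pipeline": true
  }
}
```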
- Changed the "general"/"common" corelight_general_pipeline to corelight_common_pipeline
- Renamed all pipelines (to match similar naming in Logstash):
- corelight_ to corelight-ecs-
- _pipeline to -pipeline
- Renamed pipelines (to match similar naming in Logstash):
- metrics_general to common-metrics
- netcontrol_general to common-netcontrol
- system_related_info_general to common-system
- stats_general to common-stats
- Extended the time fields for pulling out x509 and smtp date fields
- The ECS email field set is GA. Previously these fields were always copied; now they are renamed instead, as follows. Alias field mappings are provided for backwards compatibility (see the sketch after this list):
- smtp.cc > email.cc.address
- smtp.from > email.from.address
- smtp.mailfrom > email.sender.address
- smtp.msg_id > email.message_id
- smtp.reply_to > email.reply_to.address
- smtp.subject > email.subject
- smtp.to > email.to.address
- smtp.date > smtp.date_non_formatted_date (always kept)
- smtp.date_non_formatted_date COPIED to email.origination_timestamp (if valid date timestamp)
- smtp.subject_has_non_ascii > email.subject_has_non_ascii
- smtp.subject_length > email.subject_length
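For reference, a backwards-compatibility alias has this shape in the mappings (one field shown; the others follow the same pattern):

```json
{
  "properties": {
    "smtp": {
      "properties": {
        "subject": { "type": "alias", "path": "email.subject" }
      }
    }
  }
}
```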
- For corelight_weird_stats.log and weird_stats.log:
  - changed name from labels.weird_name to name
    - removed config in ingest pipeline; it was not in the logstash pipeline
  - changed num_seen from labels.weird_number_seen to num_seen
    - removed config in ingest pipeline; it was not in the logstash pipeline
- For conn_doctor.log:
  - changed bad from conn_doctor.bad to bad
    - removed config in logstash pipeline; it was not in the ingest pipeline
  - changed check from conn_doctor.check to check
    - removed config in logstash pipeline; it was not in the ingest pipeline
  - changed percent from conn_doctor.percent to percent
    - removed config in logstash pipeline; it was not in the ingest pipeline
- Removed labels.dns.query_length from mappings, which was just an alias to dns.question.name_length
- spcap fields for conn.log
  - ingest pipeline:
    - added variations for underscore fields (ie: spcap_rule, spcap_trigger, spcap_url)
    - spcap_rule from labels.corelight.spcap.rule to conn.spcap_rule
    - spcap_trigger from labels.corelight.spcap_trigger to conn.spcap_trigger
    - spcap_url from labels.corelight.spcap_url to conn.spcap_url
  - logstash pipeline:
    - spcap_rule from labels.corelight.spcap.rule to conn.spcap_rule
    - spcap_trigger from labels.corelight.spcap.trigger to conn.spcap_trigger
    - spcap_url from labels.corelight.spcap.url to conn.spcap_url
- Removed the labels.etl.elasticsearch_index_* fields:
  - labels.etl.elasticsearch_index_name_prefix
  - labels.etl.elasticsearch_index_name_suffix
  - labels.etl.elasticsearch_index_strategy
- suricata_corelight log
  - append the value from suricata.alert.action to event.type; previously alert.action was renamed to event.type
- For metric and system logs, convert '.' in field names to '_' in the ingest pipeline, the same as is done in the logstash pipelines (see the sketch below)
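A minimal sketch of that conversion for top-level dotted field names, assuming a script processor; nested objects and underscore-prefixed metadata fields are intentionally skipped:

```json
{
  "script": {
    "lang": "painless",
    "source": "def keys = new ArrayList(ctx.keySet()); for (def k : keys) { if (k instanceof String && !k.startsWith('_') && k.contains('.')) { ctx[k.replace('.', '_')] = ctx.remove(k); } }"
  }
}
```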
- http log
  - logstash pipeline:
    - username from username to url.username & copy to source.user.name
  - ingest pipeline:
    - username from source.user.name to url.username & copy to source.user.name
    - password from source.user.password to url.password & copy to source.user.password