I am running a web server in a docker container, managed by containerpilot inside the container and deployed via docker stack.
containerpilot captures stdout from the web server and prefixes it with additional fields before writing to its own stdout, which logagent eventually processes.
E.g. a sample output line as seen by logagent and recorded as 'message' in ELK (or as seen by the docker logs command).
I can handle this situation by copying the existing httpd pattern and adding additional fields corresponding to datestamp, process name, and loglevel.
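For illustration, that copied pattern could look roughly like this in patterns.yml; the prefix regex, the source name, and the cp_* field names are made up here, not the actual containerpilot output format:

```yaml
patterns:
  - sourceName: !!js/regexp /my-webserver/   # hypothetical container/source name
    match:
      - type: containerpilot_httpd
        # made-up prefix "<datestamp> <process-name> <loglevel>" followed by a common-log-format line
        regex: !!js/regexp /^(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+\S+\s+(\S+)\s+\[([^\]]+)\]\s+"(\S+)\s+(\S+)\s+\S+"\s+(\d+)\s+(\d+|-)/
        fields:
          - cp_timestamp:string
          - cp_process:string
          - cp_level:string
          - client_ip:string
          - user:string
          - ts
          - method:string
          - path:string
          - status_code:number
          - size:number
        dateFormat: DD/MMM/YYYY:HH:mm:ss ZZ
```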
However, I will have other processes deployed in a similar manner that won't be web servers. Rather than writing custom plugins just to handle the additional fields prefixed by containerpilot, I wonder if it would be possible to have some kind of globalTransform that runs BEFORE the existing pattern matching, on non-JSON input.
This could be something like the grep input filter, except with the added ability to:
a. transform the data before passing it on to the callback for subsequent processing
b. optionally capture meta-data at this stage of the pipeline (such as the log level, in my example)
In some sense, this could be a group of transforms that each try to match the raw input line; the first transform that "matches" would be the only one that gets to alter the raw input line, and then processing would continue just as it does now with input filters and patterns.
Maybe this could also handle the case where a container application outputs JSON that needs to be turned back into regular text for pattern matching (e.g. a transform from JSON to 'text'), or even nested JSON extraction, etc.
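To make the idea concrete, here is a rough sketch of how such a transform group could be prototyped today as a custom input-filter plugin. It assumes Logagent's documented input-filter interface `function (sourceName, config, data, callback)`; the prefix regex, the file name, and the cp_* field names are hypothetical, and the exported `context` object is just a stash that a companion output filter could read later:

```js
// containerpilot-prefix-filter.js -- hypothetical input filter, first-match-wins.
// Assumes Logagent's documented input-filter interface:
//   module.exports = function (sourceName, config, data, callback)
// The prefix regex below is illustrative, not the real containerpilot output format.
var transforms = [
  {
    // e.g. "<datestamp> <process-name> <loglevel> <original web server line>"
    match: /^(\S+)\s+(\S+)\s+(DEBUG|INFO|WARN|ERROR)\s+(.*)$/,
    apply: function (m) {
      return {
        fields: { cp_timestamp: m[1], cp_process: m[2], cp_level: m[3] },
        line: m[4]
      }
    }
  }
  // further transforms for other containerpilot-managed processes could be added here
]

var logContext = {} // captured meta-data, keyed by source name

function containerpilotPrefixFilter (sourceName, config, data, callback) {
  try {
    for (var i = 0; i < transforms.length; i++) {
      var m = transforms[i].match.exec(data)
      if (m) {
        var result = transforms[i].apply(m)
        logContext[sourceName] = result.fields
        // hand only the inner line to the parser, so existing patterns (e.g. httpd) still match
        return callback(null, result.line)
      }
    }
    callback(null, data) // no transform matched: pass the line through unchanged
  } catch (err) {
    callback(err, data)
  }
}

module.exports = containerpilotPrefixFilter
module.exports.context = logContext // exposed so an output filter can merge the fields back in
```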
+1 to creating an input filter for containerpilot. The input filter could set a log context object, and an output filter could add the fields from the log context back to the log message object. Just an idea for implementing what you want with the existing Logagent mechanisms.
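Building on the input-filter sketch above, the output-filter half of that idea might look like this; it assumes Logagent's documented output-filter interface `function (context, config, eventEmitter, data, callback)` and that `context.sourceName` identifies the source of the parsed event:

```js
// containerpilot-context-filter.js -- hypothetical output filter.
// Assumes Logagent's documented output-filter interface:
//   module.exports = function (context, config, eventEmitter, data, callback)
// Reads the fields stashed by the input-filter sketch above and merges them
// into the parsed log event before it is shipped.
var prefixFilter = require('./containerpilot-prefix-filter.js')

function addContainerpilotContext (context, config, eventEmitter, data, callback) {
  try {
    var fields = prefixFilter.context[context.sourceName]
    if (fields) {
      Object.keys(fields).forEach(function (key) {
        // don't overwrite fields the pattern matcher already extracted
        if (data[key] === undefined) {
          data[key] = fields[key]
        }
      })
    }
    callback(null, data)
  } catch (err) {
    callback(err, data)
  }
}

module.exports = addContainerpilotContext
```

Both modules would be referenced from the inputFilter and outputFilter sections of the Logagent configuration. Note that this relies on lines being parsed one at a time per source; if parsing is asynchronous, context captured from one line could end up attached to the next, which is why a true per-line globalTransform stage as requested above would still be the cleaner solution.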
If you have more questions, please feel free to reach out ...