I would like to save events to a separate index when the grok pattern fails to parse the event, and in that case I want to preserve the original event (message).
When the grok match succeeds, my pipeline deletes the original message because it is no longer needed; I already have all the fields I wanted.
But the same processors (like delete_entries) are also applied when grok fails, so the destination index for failed events no longer contains the original, untouched message.
There seems to be no way to conditionally execute delete_entries (which removes the original message) based on the tag set by the grok option tags_on_match_failure, so I'm a bit lost about how to achieve my goal.
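For context, here is a minimal sketch of the relevant part of my current processor chain; the grok pattern, field names, and the failure tag value are just placeholders:

```yaml
processor:
  - grok:
      match:
        message: ['%{COMMONAPACHELOG}']        # placeholder pattern
      tags_on_match_failure: ["_grok_failure"]  # example tag name
  - delete_entries:
      with_keys: ["message"]   # runs even when grok failed and only tagged the event
```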
Should I create sub-pipelines after the grok matching and route traffic based on the tags set by tags_on_match_failure?
Then subpipeline_grok_ok would process the properly parsed events (delete_entries etc.), while subpipeline_grok_fail would leave the message untouched and write to a different sink; a rough sketch of that idea follows below. That is more complex than my current pipeline, though, and I'd like to avoid the extra complexity if it isn't necessary.
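For reference, this is roughly what I have in mind, assuming pipeline-level route definitions, the routes option on sinks, and the hasTags() expression function are available in my Data Prepper version; all pipeline, tag, and index names are placeholders:

```yaml
# main pipeline: parse with grok, tag failures, route to sub-pipelines
log-pipeline:
  source:
    http:
  processor:
    - grok:
        match:
          message: ['%{COMMONAPACHELOG}']        # placeholder pattern
        tags_on_match_failure: ["_grok_failure"]
  route:
    - grok_ok: 'not hasTags("_grok_failure")'
    - grok_failed: 'hasTags("_grok_failure")'
  sink:
    - pipeline:
        name: "grok-ok-pipeline"
        routes:
          - grok_ok
    - pipeline:
        name: "grok-fail-pipeline"
        routes:
          - grok_failed

# successfully parsed events: drop the original message and index normally
grok-ok-pipeline:
  source:
    pipeline:
      name: "log-pipeline"
  processor:
    - delete_entries:
        with_keys: ["message"]
  sink:
    - opensearch:
        hosts: ["https://opensearch:9200"]
        index: "parsed-logs"

# failed events: keep the untouched message and send it to a separate index
grok-fail-pipeline:
  source:
    pipeline:
      name: "log-pipeline"
  sink:
    - opensearch:
        hosts: ["https://opensearch:9200"]
        index: "failed-logs"
```

This keeps grok in one place and only the post-processing and sinks differ, but it still doubles the number of pipelines I have to maintain.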