Handling grok matching failure in pipeline

I would like to save events to a separate index when the grok match pattern fails to parse the event, and I want to preserve the original event (message).

When the grok match succeeds, my pipeline deletes the original message, as it is no longer needed; I have all the fields I wanted.

But the same processors (like delete_entries) are applied when grok fails, so the destination index for failed events no longer contains the original, untouched message.

I found no way to conditionally execute delete_entries (where I delete the original message) based on the tag set by grok's tags_on_match_failure option, so I'm a bit lost as to how to reach my goal.

Should I create subpipelines after the grok matching and route traffic based on tags_on_match_failure? Then subpipeline_grok_ok would process the properly parsed events (delete_entries etc.), while subpipeline_grok_fail would leave the message untouched and use a different sink? That is more complex than my current pipeline, and I want to avoid complexity if it isn't necessary.

So I did it. The only way I found is routes/subpipelines: the routing condition uses the hasTags() function and, based on it, forks the flow into subpipelines. One serves the happy path (grok was fine), the other the failed-grok path. Each subpipeline stores the message in a different index, one for processed and one for failed events.
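For anyone hitting the same problem, the layout looks roughly like the sketch below. This is not my exact config: the source, grok pattern, hosts, and index names are placeholders, and the tag name "_grokparsefailure" is just the value I pass to tags_on_match_failure.

```yaml
# Sketch of a Data Prepper pipeline forking on grok failure.
# Names (raw-logs, parsed-events, failed-events, pattern) are placeholders.
raw-logs:
  source:
    http:
  processor:
    - grok:
        match:
          message: ["%{COMMONAPACHELOG}"]
        # tag events where no pattern matched
        tags_on_match_failure: ["_grokparsefailure"]
  route:
    - grok_ok: 'not hasTags("_grokparsefailure")'
    - grok_fail: 'hasTags("_grokparsefailure")'
  sink:
    - pipeline:
        name: "grok-ok-pipeline"
        routes: ["grok_ok"]
    - pipeline:
        name: "grok-fail-pipeline"
        routes: ["grok_fail"]

grok-ok-pipeline:
  source:
    pipeline:
      name: "raw-logs"
  processor:
    # safe to drop the original message here, parsing succeeded
    - delete_entries:
        with_keys: ["message"]
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        index: "parsed-events"

grok-fail-pipeline:
  source:
    pipeline:
      name: "raw-logs"
  # no processors: keep the event untouched for later inspection
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        index: "failed-events"
```

The key parts are the named route conditions on the entry pipeline and the routes list on each pipeline sink, which is what actually forks the flow.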

Error handling this way made the pipeline more complicated. If any of you have a better/simpler idea of how to solve this, I would appreciate hearing it.