I just changed from the Elasticsearch output plugin in Logstash 7.12.1 to the Opensearch output plugin in Logstash 7.16.1 to keep things safe from log4shell and I encountered a new behavior. I’m not sure if it’s in the output plugin or Logstash itself, though.
Before, I would see a log message like Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"audit", :routing=>nil}, #<LogStash::Event:0x4adef63b>]
Now instead of getting the event pointer, I get the entire event dumped into my log. This immense log line is then picked up by Filebeat and sent to Logstash.
This then turns into a Yo Dawg Joke of Death: Logstash fails to index the increasingly large log line, logs a new, even longer line, and that line gets shipped right back into Logstash.
Does this seem like a problem with the Opensearch plugin or Logstash?
I’m working on my own solutions to this situation, but figured I’m not the only one who’s run into this.
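One workaround I'm looking at is capping event size in the pipeline so a failed-index log line can't snowball. A sketch, assuming the logstash-filter-truncate plugin is installed and that truncating the message field at an arbitrary 16 KB is acceptable for your data:

```
filter {
  # Cap the message field so a re-ingested error line can't grow unbounded.
  # Requires the logstash-filter-truncate plugin; 16384 bytes is an
  # arbitrary limit I picked, not a recommended value.
  truncate {
    fields => ["message"]
    length_bytes => 16384
  }
}
```

This doesn't fix whatever is causing the 400s, but it should at least break the feedback loop while I dig into the root cause.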
Yeah, I’m using the Opensearch plugin now. The example I posted was the old behavior.
New behavior is Could not index event to OpenSearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ecs", :routing=>nil}, {"event"=>{"original"=>
Followed by more than 32KB of data from a single awful log line.
Ok. Well, we’ve eliminated a variable, but it looks like the problem is still there. Considering that, it seems like a configuration issue outside of the output plugin (or more specifically, the ES and OpenSearch outputs behave the same).
This seems like it’s related to the Logstash dead letter queue, which makes me think something is going on with the template in OpenSearch. So, you have something that doesn’t match the template, it gets sent to the DLQ (which you’re logging), and the cycle goes on. Maybe try disabling the DLQ in Logstash, or take a look at how your templates are matching the incoming data from Logstash.
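For reference, disabling the DLQ is a one-line change in logstash.yml (it's off by default, so only relevant if you've enabled it). A sketch:

```
# logstash.yml
# Turn off the dead letter queue so events that fail to index
# are dropped instead of being queued and re-logged.
dead_letter_queue.enable: false
```

That would confirm whether the DLQ is part of the loop, though the real fix is still making the events match the index template.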