Logstash OpenSearch Output Plugin - Leaking file descriptors

I am using Logstash 7.16.3 to send data to Humio via the OpenSearch output plugin, and I eventually start getting a flood of errors like this (generally within 10-20 minutes of starting Logstash):

logstash[38381]: [2022-08-21T15:11:01,085][ERROR][logstash.outputs.opensearch][main][0e21d7d163bfd9ae0bcb0995ed141b6720057119c2cea8281412c811173ef177] Encountered a retryable error (will retry with exponential backoff) {:code=>400, :url=>hxxps://cloud.us.humio.com:443/api/v1/ingest/elastic-bulk/_bulk, :content_length=>743911, :body=>"The request content was malformed:\nThe request's encoding is corrupt: Truncated GZIP stream"}

Once this starts happening, Logstash chews up file descriptors until it runs out of open files and then basically hangs. Sure, I could increase the "open files" ulimit, but that would only delay the issue (it craters in under an hour as it is now).
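For reference, the output section of the pipeline looks something like the sketch below (hosts and credentials are placeholders, and everything non-essential is trimmed). The gzipped bulk bodies that the error complains about come from http_compression being enabled:

```
output {
  opensearch {
    # Humio's Elastic-compatible bulk ingest endpoint (same URL as in the error above)
    hosts    => ["https://cloud.us.humio.com:443/api/v1/ingest/elastic-bulk"]
    user     => "my-repo"                    # placeholder
    password => "${HUMIO_INGEST_TOKEN}"      # placeholder ingest token
    ssl      => true
    # The plugin gzips the _bulk request bodies when this is on; that gzip
    # stream is what Humio reports as truncated in the 400 responses.
    http_compression => true
  }
}
```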

**EDIT:** It might not be file descriptors; it is more likely that Logstash is keeping all of these "retryable" batches around and then simply runs out of memory.

Now, if I enable Logstash trace-level logging, the problem never happens. So, to see the response body without turning on debug logging, I edited common.rb and commented out the `if` in this line:

log_hash[:body] = e.response_body #if @logger.debug? # Generally this is too verbose

That is what produced the error body shown above.

If I edit api_configs.rb and set target_bulk_bytes to a smaller value, I simply get the same error sooner, just with a lower content_length. (I wasn't sure how to change this in a config file.)
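(Since target_bulk_bytes is defined in api_configs.rb alongside the plugin's other parameters, it looks like it should also be settable directly in the output block instead of patching the gem. I haven't verified that against the 1.2.0 docs, so treat this as a sketch:)

```
output {
  opensearch {
    # ... same settings as above ...
    # Cap each _bulk request body at roughly 1 MB instead of the (much larger)
    # default -- assuming this option is actually exposed in plugin 1.2.0.
    target_bulk_bytes => 1048576
  }
}
```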

Any ideas on how I can troubleshoot this further or resolve it? The 400 error itself is one thing (that's probably a Humio issue), but running out of open files is a Logstash issue that I could really do without. Right now I just restart Logstash every hour.

OK, here is the error with Message, Class, and Backtrace included:

logstash[39127]: [2022-08-21T19:59:45,939][ERROR][logstash.outputs.opensearch][main][0e21d7d163bfd9ae0bcb0995ed141b6720057119c2cea8281412c811173ef177] Encountered a retryable error (will retry with exponential backoff) {:code=>400, :url=>"hxxps://cloud.us.humio.com:443/api/v1/ingest/elastic-bulk/_bulk", :content_length=>759476, :body=>"The request content was malformed:\nThe request's encoding is corrupt: Truncated GZIP stream", :message=>"Got response code '400' contacting OpenSearch at URL 'hxxps://cloud.us.humio.com:443/api/v1/ingest/elastic-bulk/_bulk'", :class=>LogStash::Outputs::OpenSearch::HttpClient::Pool::BadResponseCodeError, :backtrace=>["/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client/manticore_adapter.rb:169:in `perform_request'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client/pool.rb:270:in `perform_request_to_url'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client/pool.rb:257:in `block in perform_request'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client/pool.rb:346:in `with_connection'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client/pool.rb:256:in `perform_request'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client/pool.rb:264:in `block in Pool'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client.rb:164:in `bulk_send'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client.rb:129:in `block in bulk'", "org/jruby/RubyArray.java:1821:in `each'", "org/jruby/RubyEnumerable.java:1258:in `each_with_index'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch/http_client.rb:117:in `bulk'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/plugin_mixins/opensearch/common.rb:225:in `safe_bulk'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/plugin_mixins/opensearch/common.rb:157:in `submit'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/plugin_mixins/opensearch/common.rb:106:in `retrying_submit'", "/usr/share/logstash-7.16.3/vendor/bundle/jruby/2.5.0/gems/logstash-output-opensearch-1.2.0-java/lib/logstash/outputs/opensearch.rb:270:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:143:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'", "/usr/share/logstash-7.16.3/logstash-core/lib/logstash/java_pipeline.rb:299:in `block in start_workers'"]}