Logstash can't connect to Elasticsearch: java.net.SocketException: Connection reset

On a Windows machine, I am trying to add some data with Logstash to Elasticsearch, using the following output configuration in logstash.conf:

output {
    elasticsearch {
        hosts => "https://localhost:9200"
        ssl => true
        ssl_certificate_verification => false
        user => "logstash"
        password => "logstash"
        ilm_enabled => false
        index => "logstash"
    }
}

Running logstash.bat completes without errors, although with some warnings.

However, in Kibana the field audit_request_exception_stacktrace shows the following:

java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:345)
	at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:376)
	at org.elasticsearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:141)

Connection reset simply means that a TCP RST was received. This happens when your peer receives data that it can’t process, and there can be various reasons for that. The simplest is when you close the socket, and then write more data on the output stream. By closing the socket, you told your peer that you are done talking, and it can forget about your connection. When you send more data on that stream anyway, the peer rejects it with an RST to let you know it isn’t listening.
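To make that failure mode concrete, here is a minimal, self-contained Java sketch (the class name ConnectionResetDemo is made up for illustration) that provokes the same exception locally: the server side closes its socket with SO_LINGER set to 0, which makes the OS send a TCP RST instead of a normal FIN, and the client's next read then fails with java.net.SocketException: Connection reset.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

public class ConnectionResetDemo {
    public static void main(String[] args) throws Exception {
        // Local server that accepts one connection and closes it abruptly.
        // SO_LINGER with timeout 0 turns close() into a TCP RST, which is
        // one of the ways a peer ends up seeing "Connection reset".
        try (ServerSocket server = new ServerSocket(0)) {
            Thread acceptor = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.setSoLinger(true, 0); // force RST when the socket closes
                } catch (IOException ignored) {
                }
            });
            acceptor.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.getInputStream().read(); // blocks until the RST arrives
                System.out.println("read returned normally");
            } catch (SocketException e) {
                System.out.println("Caught SocketException: " + e.getMessage());
            }
            acceptor.join();
        }
    }
}
```

The exact exception message can vary by platform (e.g. "Connection reset" vs "Connection reset by peer"), but the exception class is the same one that appears in the stack traces above.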

Hey @Trelipks ,

did you manage to resolve this issue, or find its root cause? If so, could you maybe share what the reason for the Connection reset was?


What versions of Logstash and ODFE did you use?

Hey @pablo ,

in my case, I'm not using Logstash (no issues with Logstash so far); the client application uses the High Level REST Client, and the failing call is just a normal (benchmarking) request towards ODFE.
Maybe I can also give a bit more context: ODFE is set up via Docker Swarm on an overlay network using endpoint_mode: dnsrr. Raising the default TCP keepalive values brought some improvement and stopped the connection timeouts (this improvement may also be related to this fix https://github.com/opendistro-for-elasticsearch/security/pull/1061; I upgraded from ODFE 1.13.0 to the latest 1.13.1, but can't confirm this without further tests).
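For context on the keepalive tuning mentioned above: the application only opts in via the SO_KEEPALIVE socket option, while the actual probe timing comes from OS-level settings (e.g. net.ipv4.tcp_keepalive_time on Linux), which is what changing "the default values on TCP keepalive" adjusts. A minimal plain-Java sketch (the class name KeepAliveDemo is illustrative):

```java
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // SO_KEEPALIVE asks the OS to send periodic probes on an otherwise
        // idle connection, so a dead peer is detected instead of the
        // connection sitting silently until a write fails.
        Socket s = new Socket();
        s.setKeepAlive(true);
        System.out.println("keepAlive=" + s.getKeepAlive());
        s.close();
    }
}
```

Libraries such as the High Level REST Client typically expose a similar option through their HTTP client configuration rather than a raw Socket.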

Even though there aren’t any timeouts anymore, I’m still getting errors like this:

java.net.SocketException: Connection reset
	at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:367)
	at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:398)
	at org.elasticsearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:141)
	at org.elasticsearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:126)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.base/java.lang.Thread.run(Thread.java:832)

but it is hard to trace where it is coming from (it may also be influenced by the load balancer, but it isn’t happening on the other cluster).

I know it may even be specific to my environment :frowning:, but I wanted to share this so we can share the solution in case it happens to someone else.

Hi @GezimSejdiu,
Thanks for the update. I’ll see if I can repro that.