Logstash OSS v8.9.0 - bad certificate

I'm trying to set up Logstash OSS v8.9.0 against OpenSearch v2.9 and I'm receiving the following error message:

io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[netty-codec-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.94.Final.jar:4.1.94.Final]
        at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:365) ~[?:?]
        at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
        at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:297) ~[netty-handler-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1353) ~[netty-handler-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246) ~[netty-handler-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1295) ~[netty-handler-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[netty-codec-4.1.94.Final.jar:4.1.94.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[netty-codec-4.1.94.Final.jar:4.1.94.Final]

I'm not sure why this error is being raised, since the root CA and all of the other certificates are valid.

Hey @jsamuel12

Can you share your Logstash configuration?

This is the logstash.conf content:

input {
  beats {
    port => 5044
    client_inactivity_timeout => 120
    ssl             => true
    ssl_key         => "/usr/share/logstash/config/logstash.key"
    ssl_certificate => "/usr/share/logstash/config/logstash.crt"
  }
}


# If we find DOBs in MM/DD/YYYY format, rewrite them to
# YYYY-MM-DD and save the original
filter {
  ruby {
    code => "
      require 'date'
      dobstr = event.get('[body][dob]')
      if dobstr
        begin
          dob    = Date.strptime(dobstr, '%m/%d/%Y')
          dobISO = dob.strftime('%Y-%m-%d')
          event.set('[body][dob]', dobISO)
          event.set('[originalDOB]', dobstr)
        rescue
        end
      end
    "
  }
}

output {
  if [fields][index_name] {
    opensearch {
        hosts => ["opensearch01.staging.aws.ch.com:9200","opensearch02.staging.aws.ch.com:9200"]
        auth_type => {
          type => 'basic'
          user => "logstash"
          password => <redacted>
        }
        ecs_compatibility => disabled
        ssl => true
        ssl_certificate_verification => true
        cacert => "/usr/local/share/ca-certificates/opensearch-staging-root-ca.crt"
        index => "logstash-%{[fields][index_name]}-%{+YYYY.MM.dd}"
    }
  }
  else {
    opensearch {
        hosts => ["opensearch01.staging.aws.ch.com:9200","opensearch02.staging.aws.ch.com:9200"]
        auth_type => {
          type => 'basic'
          user => "logstash"
          password => <redacted>
        }
        ecs_compatibility => disabled
        ssl => true
        ssl_certificate_verification => true
        cacert => "/usr/local/share/ca-certificates/opensearch-staging-root-ca.crt"
        index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}

Just FYI, I added the root CA to the trust store…

Hey @jsamuel12

Have you tried using the following?

ssl_certificate_verification => false
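
That setting goes in the opensearch output block. Just as a quick test, using the hosts and CA path from your config (auth_type left out here for brevity), it would look roughly like:

output {
  opensearch {
    hosts => ["opensearch01.staging.aws.ch.com:9200","opensearch02.staging.aws.ch.com:9200"]
    ssl => true
    # keep the CA on hand, but skip certificate verification while testing
    ssl_certificate_verification => false
    cacert => "/usr/local/share/ca-certificates/opensearch-staging-root-ca.crt"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}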

Yes, I've tried that. I even downgraded Logstash to v7.16.2 and I'm still getting the same error. What's odd is that I was able to curl all the OpenSearch nodes over HTTPS without any issues.

I used the Search Guard online tools to generate all of the certificates.

Hey,

I was just reading about something similar to this issue here, with reports dating from 2016 to 2022.

I would double-check the keystore and make sure OpenSearch has access to, and the right permissions on, all of the certificates. If that doesn't help, I would test this out with self-signed certificates, following the example here.

If that works, then you know for sure where the issue is. Other than that, your configuration looks like mine.

Here is what mine looks like; I'm using self-signed certs.

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
    tags => [ 'beat' ]
  }
}
input {
  udp {
    port => 5144
    tags => ['syslog']
  }
}
input {
  http {
    port      => 12345
    tags => ['fluent']
    add_field => { "[@metadata][input-http]" => "" }
  }
}

filter {
  if [@metadata][input-http] {
    date {
      match => [ "date", "UNIX" ]
      remove_field => [ "date" ]
    }
    mutate {
      remove_field => ["headers","host"]
    }
  }
}


filter {
  if "syslog" in [tags] {
    grok {
      match => ["message", "%{SYSLOG5424PRI}%{GREEDYDATA:message}"]
      overwrite => [ "message" ]
    }
    kv {
      source => "message"
      value_split => "="
    }
  }
}

filter {
  if "syslog" in [tags] {
    mutate {
      remove_field => [ "addr","appcat","craction","crlevel","crscore","devtype","dstdevtype","dstosname","dstserver","dstserver","fazlograte","freediskstorage","interface","log.syslog.priority","masterdstmac","mastersrcmac","osname","policytype","poluuid","setuprate","srchwvendor","srcserver","total","totalsession","used","user","vd"]
    }
  }
}

output {
  if "beat" in [tags] {
    opensearch {
      hosts => ["https://domain.com:9200"]
      auth_type => {
        type => 'basic'
        user => 'admin'
        password => 'changeit'
      }
      ecs_compatibility => disabled
      ssl => true
      ssl_certificate_verification => false
      cacert => "/opt/logstash-8.6.1/root-ca.pem"
      #index => "winlogbeat-%{+YYYY.MM.dd}"
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
  if "syslog" in [tags] {
    opensearch {
      hosts => ["https://domain.com:9200"]
      auth_type => {
        type => 'basic'
        user => 'admin'
        password => 'changeit'
      }
      ecs_compatibility => disabled
      ssl => true
      ssl_certificate_verification => false
      cacert => "/opt/logstash-8.6.1/root-ca.pem"
      index => "firewall-%{+YYYY.MM.dd}"
    }
  }
  if "fluent" in [tags] {
    opensearch {
      hosts => ["https://domain.com:9200"]
      auth_type => {
        type => 'basic'
        user => 'admin'
        password => 'changeit'
      }
      ecs_compatibility => disabled
      ssl => true
      ssl_certificate_verification => false
      cacert => "/opt/logstash-8.6.1/root-ca.pem"
      index => "fluent-bit-%{+YYYY.MM.dd}"
    }
  }
}

I think the differences between our setups are:

  1. I'm not using certs on my Logstash input.
  2. I think you're using an AWS instance and I'm using a Hyper-V virtual machine.
  3. I created my own certificates and you're using the Search Guard online tools.

A while back I did use Certbot/Let's Encrypt.
Since we know it's a bad-certificate issue, I would work around that, but as far as your settings go I believe they're correct.

EDIT: @jsamuel12
I did find something here.

The one thing I'm not following is the keystore part. Right now I'm not adding any of the certificates (all in PKCS#8 format) to a keystore (which only accepts the JKS or PKCS12/PFX formats). I followed Generate certificates - OpenSearch documentation, which works for all of the OpenSearch/OpenSearch Dashboards nodes but not for the Logstash server.

Hey @jsamuel12

Does Logstash have permission to access the keystore that OS/OSD uses?
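
If you do want to test the keystore route, I believe the opensearch output can also point at a PKCS12 truststore instead of a PEM cacert. A rough sketch (the truststore path and password below are placeholders):

output {
  opensearch {
    hosts => ["opensearch01.staging.aws.ch.com:9200","opensearch02.staging.aws.ch.com:9200"]
    ssl => true
    ssl_certificate_verification => true
    # PKCS12 truststore holding the staging root CA (placeholder path and password)
    truststore => "/usr/share/logstash/config/opensearch-truststore.p12"
    truststore_password => "changeit"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}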

I figured out the issue, and it was self-inflicted: a rogue server with an older certificate was still pointing at the Logstash server.
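
In case it helps anyone else: since the rogue client was hitting the beats input, one option is to have that input also verify client certificates against the same root CA. A rough sketch, assuming a recent logstash-input-beats (option names can differ slightly between plugin versions):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/usr/share/logstash/config/logstash.crt"
    ssl_key => "/usr/share/logstash/config/logstash.key"
    # only accept clients presenting a certificate signed by this CA
    ssl_certificate_authorities => ["/usr/local/share/ca-certificates/opensearch-staging-root-ca.crt"]
    ssl_verify_mode => "force_peer"
  }
}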

Thank you so much for your help!
