OpenSearchException: Transport client authentication no longer supported

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch 2.3.0

Describe the issue:
I’m getting the following message in the logs, and I’m not sure how to troubleshoot it or even where to look. The documentation says: “TLS is optional for the REST layer and mandatory for the transport layer.” So an error message saying that the mandatory requirement is no longer supported seems contradictory.

[2022-11-01T15:07:23,402][ERROR][o.o.s.t.SecurityRequestHandler] [test-cluster-master-1] OpenSearchException[Transport client authentication no longer supported.]
[2022-11-01T15:07:23,581][ERROR][o.o.s.t.SecurityRequestHandler] [test-cluster-master-1] OpenSearchException[Transport client authentication no longer supported.]
[2022-11-01T15:07:23,848][WARN ][o.o.d.HandshakingTransportAddressConnector] [test-cluster-master-1] handshake failed for [connectToRemoteMasterNode[10.60.133.227:9300]]
org.opensearch.transport.RemoteTransportException: [test-cluster-master-2][10.60.133.227:9300][internal:transport/handshake]
Caused by: org.opensearch.OpenSearchException: Transport client authentication no longer supported.
        at org.opensearch.security.ssl.util.ExceptionUtils.createTransportClientNoLongerSupportedException(ExceptionUtils.java:63) ~[?:?]
        at org.opensearch.security.transport.SecurityRequestHandler.messageReceivedDecorate(SecurityRequestHandler.java:265) ~[?:?]
        at org.opensearch.security.ssl.transport.SecuritySSLRequestHandler.messageReceived(SecuritySSLRequestHandler.java:152) ~[?:?]
        at org.opensearch.security.OpenSearchSecurityPlugin$7$1.messageReceived(OpenSearchSecurityPlugin.java:658) ~[?:?]
        at org.opensearch.indexmanagement.rollup.interceptor.RollupInterceptor$interceptHandler$1.messageReceived(RollupInterceptor.kt:100) ~[?:?]
        at org.opensearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.messageReceived(PerformanceAnalyzerTransportRequestHandler.java:43) ~[?:?]
        at org.opensearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:106) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.InboundHandler.handleRequest(InboundHandler.java:249) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.InboundHandler.messageReceived(InboundHandler.java:132) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.InboundHandler.inboundMessage(InboundHandler.java:114) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.TcpTransport.inboundMessage(TcpTransport.java:769) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:175) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:150) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:115) ~[opensearch-2.3.0.jar:2.3.0]
        at org.opensearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:94) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1373) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
        at java.lang.Thread.run(Thread.java:833) [?:?]

I want to assume that this is just a case of confusing terminology, but I can’t seem to find anything about transport client configuration anywhere other than the plugins.security.ssl.transport settings, which, as the documentation states, are mandatory.

There is also the transport_enabled setting in the security config, but disabling that doesn’t help (and it’s enabled in the example configuration in the repo).

Configuration:
```

      dynamic:
        # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
        # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
        # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
        #filtered_alias_mode: warn
        #do_not_fail_on_forbidden: false
        kibana:
          # Kibana multitenancy
          multitenancy_enabled: true
          server_username: kibanaserver
          index: '.kibana'
        http:
          anonymous_auth_enabled: false
          xff:
            enabled: false
            #internalProxies: '10\.80\.\d+\.\d+' # regex pattern - Load balancer subnets.
            internalProxies: '.*' # trust all internal proxies, regex pattern
            remoteIpHeader:  'X-Forwarded-For'
            ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
            ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
            ###### and here https://tools.ietf.org/html/rfc7239
            ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
        authc:
          basic_internal_auth_domain:
            description: "Authenticate via HTTP Basic against internal users database"
            http_enabled: true
            transport_enabled: false
            order: 0
            http_authenticator:
              type: basic
              challenge: false
            authentication_backend:
              type: intern
```

```
clusterName: "test-cluster"
nodeGroup: "master"
masterService: "test-cluster-master"
replicas: 3
roles:
  - master

opensearchJavaOpts: "-Xmx512M -Xms512M"
resources:
  requests:
    cpu: "100m"
    memory: "768Mi"

persistence:
  enabled: true
  enableInitChown: false
  storageClass: "default"
  size: 8Gi
  accessModes:
    - ReadWriteOnce
# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch
# such as opensearch.yml and log4j2.properties
config:
  opensearch.yml: |
    cluster.name: test-cluster
    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: certs/node.crt
            pemkey_filepath: certs/node.key
            pemtrustedcas_filepath: certs/ca.crt
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: certs/node.crt
            pemkey_filepath: certs/node.key
            pemtrustedcas_filepath: certs/ca.crt
        allow_unsafe_democertificates: false
        allow_default_init_securityindex: true
        authcz:
          admin_dn: # Generated by generate-certs.sh
            - CN=admin,OU=test-cluster
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########

secretMounts:
  - secretName: test-cluster-certs
    name: cluster-certs
    path: /usr/share/opensearch/config/certs

extraEnvs:
  - name: DISABLE_INSTALL_DEMO_CONFIG
    value: "true"

securityConfig:
  config:
    securityConfigSecret: test-cluster-securityconfig
```


**Relevant Logs or Screenshots**:

@albgus I think you’ve faced the scenario described in this thread.

This issue is not present when demo certificates are in use. However, when you migrate to production you must configure plugins.security.nodes_dn.

Any node that is not in that list will produce the reported error in the OpenSearch logs.
Please remember that the values of plugins.security.nodes_dn have to match the DNs of the node certificates.
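For example (a sketch only, the CN/OU values below are placeholders modeled on the admin_dn from the config in this thread, not verified values), the node certificate DNs can be listed in opensearch.yml like this:

```yaml
# Sketch: list the DNs of all node certificates so the security plugin
# accepts inter-node transport connections. CN/OU values are assumptions.
plugins:
  security:
    nodes_dn:
      - "CN=test-cluster-master-1,OU=test-cluster"
      - "CN=test-cluster-master-2,OU=test-cluster"
      # Wildcards are also accepted, e.g.:
      # - "CN=test-cluster-*,OU=test-cluster"
```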

The documentation says that TLS is optional for the REST layer. That is correct: the REST connection is on port 9200 and can be either secured or not. However, since transport-layer authentication/authorization is no longer supported, securityadmin.sh has moved from port 9300 to port 9200. That forces port 9200 to be secured in order to run the securityadmin.sh script.
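As a concrete illustration (paths and hostname below are assumptions for a typical install, not taken from this thread), running securityadmin.sh against the secured REST port might look like:

```shell
# Sketch: apply the security configuration over the REST port (9200),
# since transport-layer admin access is no longer supported.
# All file paths and the hostname are assumptions for illustration.
./plugins/opensearch-security/tools/securityadmin.sh \
  -cd ./config/opensearch-security \
  -h localhost \
  -p 9200 \
  -cacert ./config/certs/ca.crt \
  -cert ./config/certs/admin.crt \
  -key ./config/certs/admin.key \
  -icl -nhnv
```

The -icl and -nhnv flags skip cluster-name and hostname verification, which is convenient in test setups but should be reconsidered in production.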

TLS always was, and still is, mandatory for the transport layer (ports 9300–9400).

To follow up on this, the root cause ended up being an error in the certificates. I had based the configuration on a setup that relied on setting subjectAltName=RID:1.2.3.4.5.5 on the certs instead of defining plugins.security.nodes_dn, which is apparently valid and works, but that wasn’t clear to me.
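For reference, a certificate carrying that registered-ID SAN (OID 1.2.3.4.5.5, which the security plugin recognizes as marking a node certificate) can be generated with OpenSSL 1.1.1 or later; the subject values below are made-up examples:

```shell
# Sketch: self-signed node certificate whose SAN carries the registered ID
# 1.2.3.4.5.5, so the security plugin can identify it as a node certificate
# without a matching nodes_dn entry. Subject values are assumptions.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout node.key -out node.crt \
  -subj "/CN=test-node/OU=test-cluster" \
  -addext "subjectAltName=RID:1.2.3.4.5.5"

# Verify the SAN is present in the generated certificate
openssl x509 -in node.crt -noout -text | grep "Registered ID"
```

In a real cluster the certificate would be signed by the cluster CA rather than self-signed, but the -addext line is the relevant part.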

Still, that error message is really bad, as it explicitly says that a required configuration has been removed and is no longer supported.