Cross-cluster search between two clusters not working

Hi, I have two clusters, each configured with its own separate TLS certificates.
I need to perform a cross-cluster search from one cluster to the other.
I followed the documentation, but when I search I get the following error in the logs:

[2022-01-06T07:29:25,608][INFO ][o.e.c.c.C.CoordinatorPublication] [prod-od-master-01] after [10s] publication of cluster state version [305368] is still waiting for {prod-od-data-11}{_wjj_8OBTo6sV2dTTWpOJA}{qxXbHz1TTnOA4dD466X3iA}{10.133.215.130}{10.133.215.130:9300}{dr}{mode=data_node, zone=zoneA, temp=hot} [SENT_APPLY_COMMIT]
[2022-01-06T07:29:25,966][INFO ][o.e.c.s.ClusterSettings  ] [prod-od-master-01] updating [cluster.remote.elk-prod-region-01.seeds] from [[]] to [["10.129.38.9:9300"]]
[2022-01-06T07:29:27,045][WARN ][o.e.t.SniffConnectionStrategy] [prod-od-master-01] fetching nodes from external cluster [elk-prod-region-01] failed
org.elasticsearch.transport.RemoteTransportException: [cybersec-fp-od-master-01][172.18.0.2:9300][internal:transport/handshake]
Caused by: org.elasticsearch.ElasticsearchException: Illegal parameter in http or transport request found.
This means that one node is trying to connect to another with
a non-node certificate (no OID or security.nodes_dn incorrect configured) or that someone
is spoofing requests. Check your TLS certificate setup as described here: See https://opendistro.github.io/for-elasticsearch-docs/docs/troubleshoot/tls/
        at com.amazon.opendistroforelasticsearch.security.ssl.util.ExceptionUtils.createBadHeaderException(ExceptionUtils.java:57) ~[?:?]
        at com.amazon.opendistroforelasticsearch.security.transport.OpenDistroSecurityRequestHandler.messageReceivedDecorate(OpenDistroSecurityRequestHandler.java:234) ~[?:?]
        at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceived(OpenDistroSecuritySSLRequestHandler.java:142) ~[?:?]
        at com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin$7$1.messageReceived(OpenDistroSecurityPlugin.java:639) ~[?:?]
        at com.amazon.opendistroforelasticsearch.indexmanagement.rollup.interceptor.RollupInterceptor$interceptHandler$1.messageReceived(RollupInterceptor.kt:124) ~[?:?]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:207) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:107) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:89) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:700) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1518) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1267) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]

My version of Open Distro is 1.13.2.

Can anyone help here?

Looks like this is a problem with your SSL/TLS config.

I’m going to move this to the security category.

@curiousmind If the root-ca.pem file is not the same for both clusters, you will need to combine the two clusters' root-ca.pem files into one.

Hope this helps

@Anthony
Any idea/resource on how to do that? I am kind of stuck on this.

@curiousmind you simply append the content of one CA certificate after the content of the other.

So for example, if you have two clusters, A and B:

Cluster A has its own CA (ca_cluster_A.pem) and the relevant certificates
Cluster B has its own CA (ca_cluster_B.pem) and the relevant certificates

You would need to extract both CAs and combine them into a single bundle:

cat ca_cluster_A.pem ca_cluster_B.pem > ca_combined.pem

(Appending the two files into each other in sequence would duplicate content, since the second append copies cluster A's CA back into its own file.) Then use ca_combined.pem as the root CA file on both clusters.

That way both clusters are aware of what the CA is for either cluster. Hope this helps
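As a quick sanity check, the combined bundle can be verified with openssl. This is a sketch only: the two throwaway self-signed CAs generated below just stand in for the clusters' real root certificates, and the file names follow the example above.

```shell
# Sketch: create two throwaway CAs standing in for the clusters' real roots
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout a.key -out ca_cluster_A.pem -subj "/CN=ca-cluster-A"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout b.key -out ca_cluster_B.pem -subj "/CN=ca-cluster-B"

# Build one combined bundle instead of appending the files into each other
cat ca_cluster_A.pem ca_cluster_B.pem > ca_combined.pem

# A certificate issued by either CA now verifies against the bundle
openssl verify -CAfile ca_combined.pem ca_cluster_A.pem
openssl verify -CAfile ca_combined.pem ca_cluster_B.pem
```

The same `openssl verify -CAfile ca_combined.pem <node-cert>.pem` check can be run against your actual node certificates before restarting anything.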

@curiousmind - were you able to verify that the approach suggested by @Anthony works?

It should do, but I have a very similar issue to yours, and when I tried this I got errors (which I won't reproduce at this stage - I need to investigate/check over my changes first). I am just curious whether you were able to get this to work and what changes you had to make.

Thanks,

As a follow-up to this: I got it working by adding in the remote cluster's root CA certs and also updating

opendistro_security.nodes_dn:

with the remote nodes.
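In yml terms that would look something like the sketch below. The DN strings are made-up examples; use the actual subject DNs from your own node certificates (wildcards are also accepted by the security plugin):

```yaml
# elasticsearch.yml (Open Distro 1.x) -- example DNs only
opendistro_security.nodes_dn:
  - "CN=prod-od-master-01,OU=Ops,O=Example,C=US"    # this cluster's nodes
  - "CN=remote-ingest-node,OU=Ops,O=Example,C=US"   # the remote cluster's nodes
```

On OpenSearch the equivalent setting is plugins.security.nodes_dn.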

I have to try this. Will update and post here after that.
Thanks for the information

Update 09/09/2022

Got this working.
Steps:

  1. Combine the two root-ca.pem files. Somewhere I read that the order matters, so put the coordinating cluster's root-ca.pem content first and then the remote cluster's.
  2. Copy the combined root-ca.pem file to /etc/opensearch.
    For the coordinating cluster I chose the Kibana node, and for the seed node I took an ingestion node in the remote cluster.
  3. Give the file the right permissions and group ownership.
  4. Update the plugins.security.nodes_dn security setting of the remote cluster, providing the coordinating cluster's node information in that setting. In my case the Kibana node's information went into the plugins.security.nodes_dn setting of the remote cluster's seed node.
  5. Update the persistent settings on the coordinating cluster:
{
  "persistent": {
    "cluster.remote": {
      "cluster-01": {
        "seeds": ["172.31.0.3:9300"]
      }
    }
  }
}

Documentation link : link here

  6. Restart the OpenSearch containers on the nodes in both the coordinating cluster and the remote cluster.
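For step 5, the persistent setting above can be applied with a curl call like the one below, and _remote/info then shows whether the remote cluster is actually connected. Host, port, and credentials here are placeholders for whatever your setup uses:

```shell
# Placeholder host/credentials; seeds value follows the example above
curl -sk -u admin:admin -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.remote": {
      "cluster-01": {
        "seeds": ["172.31.0.3:9300"]
      }
    }
  }
}'

# After the restart, confirm the remote shows "connected": true
curl -sk -u admin:admin "https://localhost:9200/_remote/info"
```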

Now, another challenge was to get the remote index pattern to show up in the Discover tab.

  1. Go to the remote cluster and note down the index pattern's unique ID.
    For example, let the remote cluster name be cluster-01, the index pattern be test-index, and its unique ID be d43c0cbc-9cd7-4876-ac00-162ba6a99449.
  2. Go to the coordinating cluster, then Stack Management → Index Patterns → Create index pattern.
  3. Type in the name of the index from the remote cluster, like below:
    cluster-01:test-index. The most important thing to note here is that there is an advanced setting named Custom index pattern ID. We need to fill it with the unique ID of the index pattern from the remote cluster (here d43c0cbc-9cd7-4876-ac00-162ba6a99449).

Now create the index pattern and the fields will be loaded. Without the unique ID, the index pattern will still be created, but the fields won't be loaded.
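Independently of Kibana, cross-cluster search itself can be confirmed with a direct query from the coordinating cluster using the cluster-name:index syntax. Host and credentials below are placeholders; cluster-01 and test-index follow the example above:

```shell
# Placeholder host/credentials; hits prove the remote index is reachable
curl -sk -u admin:admin \
  "https://localhost:9200/cluster-01:test-index/_search?size=1&pretty"
```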