Cross cluster search between two clusters not working

Hi, I have two clusters, each configured with its own separate TLS certificates.
I need to perform a cross-cluster search from one cluster to the other.
I followed the documentation, but when I search I get the following error in the logs:

[2022-01-06T07:29:25,608][INFO ][o.e.c.c.C.CoordinatorPublication] [prod-od-master-01] after [10s] publication of cluster state version [305368] is still waiting for {prod-od-data-11}{_wjj_8OBTo6sV2dTTWpOJA}{qxXbHz1TTnOA4dD466X3iA}{}{}{dr}{mode=data_node, zone=zoneA, temp=hot} [SENT_APPLY_COMMIT]
[2022-01-06T07:29:25,966][INFO ][o.e.c.s.ClusterSettings  ] [prod-od-master-01] updating [cluster.remote.elk-prod-region-01.seeds] from [[]] to [[""]]
[2022-01-06T07:29:27,045][WARN ][o.e.t.SniffConnectionStrategy] [prod-od-master-01] fetching nodes from external cluster [elk-prod-region-01] failed
org.elasticsearch.transport.RemoteTransportException: [cybersec-fp-od-master-01][][internal:transport/handshake]
Caused by: org.elasticsearch.ElasticsearchException: Illegal parameter in http or transport request found.
This means that one node is trying to connect to another with
a non-node certificate (no OID or security.nodes_dn incorrect configured) or that someone
is spoofing requests. Check your TLS certificate setup as described here: See
        at$7$1.messageReceived( ~[?:?]
        at$interceptHandler$1.messageReceived(RollupInterceptor.kt:124) ~[?:?]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundHandler.handleRequest( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundHandler.messageReceived( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundHandler.inboundMessage( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.TcpTransport.inboundMessage( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundPipeline.forwardFragments( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundPipeline.doHandleBytes( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.InboundPipeline.handleBytes( ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead( ~[?:?]
        at io.netty.handler.logging.LoggingHandler.channelRead( ~[?:?]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead( ~[?:?]
        at io.netty.handler.ssl.SslHandler.unwrap( ~[?:?]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible( ~[?:?]
        at io.netty.handler.ssl.SslHandler.decode( ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection( ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode( ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead( ~[?:?]
        ... (remaining frames omitted; their class names were stripped in the paste)

My Open Distro version is 1.13.2.

Can anyone help here?

Looks like this is a problem with your SSL/TLS config.

I’m going to move this to the security category.

@curiousmind If the root-ca.pem file is not the same for both clusters, you will need to combine the 2 root-ca.pem files from two clusters into one.

Hope this helps

Any idea/resource on how to do that? I am kind of stuck with this.

@curiousmind you simply append the content of one CA certificate after the other.

So for example, if you have 2 clusters, A and B

Cluster A has its own CA (ca_cluster_A.pem) and the relevant certificates
Cluster B has its own CA (ca_cluster_B.pem) and the relevant certificates

You would need to extract both CAs and concatenate them. Note that appending the two files into each other back to back would duplicate a certificate (the second cat would read the already-combined file), so build each bundle from the originals instead:

cat ca_cluster_A.pem ca_cluster_B.pem > combined_ca_for_clusterA.pem
cat ca_cluster_B.pem ca_cluster_A.pem > combined_ca_for_clusterB.pem

That way each cluster trusts the other cluster's CA. Hope this helps
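To sanity-check the result, you can list every certificate in the combined bundle. The sketch below is self-contained: it generates two throwaway CAs as stand-ins for the real cluster CAs, so the subjects and filenames here are demo values, not your production certs.

```shell
# Demo: create two throwaway CAs (stand-ins for the real cluster CAs).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout a.key -out ca_cluster_A.pem -subj "/CN=clusterA-CA"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout b.key -out ca_cluster_B.pem -subj "/CN=clusterB-CA"

# Combine them into one bundle.
cat ca_cluster_A.pem ca_cluster_B.pem > combined_ca.pem

# Print the subject of every certificate in the bundle; a correct
# two-cluster bundle shows both CA subjects.
openssl crl2pkcs7 -nocrl -certfile combined_ca.pem \
  | openssl pkcs7 -print_certs -noout
```

If only one subject is printed, the concatenation lost a certificate (usually a missing trailing newline in one of the PEM files).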

@curiousmind - were you able to verify that the approach suggested by @Anthony works?

It should, but I have a very similar issue to yours, and when I tried this I got errors (which I won't reproduce at this stage; I need to investigate and check over my changes first). I am just curious whether you were able to get this to work and what changes you had to make?


As a follow-up to this: I got it working by adding in the remote cluster's root-ca certs and also updating

plugins.security.nodes_dn

with the remote nodes.
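For reference, the setting being described here is plugins.security.nodes_dn in opensearch.yml. The DN values below are placeholders; use the actual subject DNs from the other cluster's node certificates:

```yml
# opensearch.yml on the remote cluster's nodes.
# DNs are placeholders -- substitute the subject DNs of the
# coordinating cluster's node certificates (wildcards are allowed).
plugins.security.nodes_dn:
  - "CN=prod-od-kibana-01,OU=Ops,O=Example,C=US"
  - "CN=prod-od-data-*,OU=Ops,O=Example,C=US"
```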

I have to try this. Will update and post here after that.
Thanks for the information

Update 09/09/2022

Got this working.
Steps:

  1. Combine the two root-ca.pem files. Somewhere I read that the order matters, so put the coordinating cluster's root-ca.pem content first and then the remote cluster's root-ca.pem.
  2. Copy the combined root-ca.pem file to /etc/opensearch.
    For the coordinating cluster I chose the Kibana node, and for the seed node I took an ingestion node in the remote cluster.
  3. Give the file the right permissions and group ownership.
  4. Update the plugins.security.nodes_dn security setting of the remote cluster with the coordinating cluster's node information. In my case, the Kibana node's information was provided to the plugins.security.nodes_dn setting of the remote cluster's seed node.
  5. Update the persistent setting in the coordinating cluster:
  "persistent": {
    "cluster.remote": {
      "cluster-01": {
        "seeds": [""]
      }
    }
  }

Documentation link: link here

  6. Restart the OpenSearch containers for the affected nodes in both the coordinating cluster and the remote cluster.
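The persistent setting from step 5 can be applied with a request along these lines. Everything environment-specific here is a placeholder: the host, the credentials, and the seed address (which should point at the remote seed node's transport port, 9300 by default):

```shell
# Settings body for the coordinating cluster; "remote-seed-node:9300"
# is a placeholder for the remote ingestion node's transport address.
PAYLOAD='{
  "persistent": {
    "cluster.remote": {
      "cluster-01": {
        "seeds": ["remote-seed-node:9300"]
      }
    }
  }
}'

# Apply it on the coordinating cluster (placeholder host and credentials):
#   curl -k -u admin:admin -X PUT "https://localhost:9200/_cluster/settings" \
#        -H 'Content-Type: application/json' -d "$PAYLOAD"

# Sanity-check the body before sending:
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"
```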

Now, another challenge was to get the remote index pattern in the Discover tab.

  1. Go to the remote cluster and note down the index pattern's unique ID.
    For example, let the remote cluster name be cluster-01, the index pattern be test-index, and its unique ID be d43c0cbc-9cd7-4876-ac00-162ba6a99449.
  2. Go to the coordinating cluster, then Stack Management → Index Patterns → Create index pattern.
  3. Type in the name of the index from the remote cluster, like below:
    cluster-01:test-index. The most important thing to note here is that there is an advanced setting named Custom index pattern ID. Fill it with the unique ID of the index pattern in the remote cluster (here d43c0cbc-9cd7-4876-ac00-162ba6a99449).

Now create the index pattern and the fields will be loaded. Otherwise, without the unique ID, the index pattern will be created but the fields won't be loaded.
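If you prefer to script the index pattern creation instead of using the UI, the same object can in principle be created through the OpenSearch Dashboards saved-objects API. This is a hedged sketch, not a verified recipe: the host and credentials are placeholders, the ID in the URL is the remote pattern's unique ID from step 1, and the timeFieldName attribute is an assumption (omit it if the pattern has no time field).

```shell
# The remote index pattern's unique ID (from step 1 above).
PATTERN_ID="d43c0cbc-9cd7-4876-ac00-162ba6a99449"

# Saved-object body; timeFieldName is an assumed attribute, drop it
# if your index pattern has no time field.
BODY='{"attributes":{"title":"cluster-01:test-index","timeFieldName":"@timestamp"}}'

# Create it on the coordinating cluster's Dashboards instance
# (placeholder host and credentials; the osd-xsrf header is required):
#   curl -k -u admin:admin -X POST \
#        "https://localhost:5601/api/saved_objects/index-pattern/$PATTERN_ID" \
#        -H 'osd-xsrf: true' -H 'Content-Type: application/json' -d "$BODY"

# Sanity-check the body:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body OK"
```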