Clusters in a different namespace fail to spin up when the operator is in the kube-system namespace

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
Opensearch-operator chart version: 2.7.0
opensearch-operator image version: 2.6.0
opensearch version: 2.15.0
opensearch-dashboards version: 2.15.0

Describe the issue:
I am trying to set up one OpenSearch operator managing multiple OpenSearch clusters in different namespaces.
I have deployed the operator in the kube-system namespace, and it is running successfully.
However, when I try to spin up an OpenSearch cluster in a separate namespace (app-ahm-dev), it fails.

Configuration:
Deploying via Argo CD with the following OpenSearch configuration:

sources:
   - helm:
       parameters:
         - name: cluster.dashboards.imagePullSecrets[0].name
           value: <pull-secret>
         - name: cluster.general.image
           value: "<registry>/opensearchproject/opensearch"
         - name: cluster.general.version
           value: "2.15.0"
         - name: cluster.dashboards.image
           value: "<registry>/opensearchproject/opensearch-dashboards"
         - name: cluster.dashboards.version
           value: "2.15.0"
         - name: cluster.general.serviceName
           value: "poc-opensearch-cluster"
         - name: cluster.general.pluginsList[0]
           value: "repository-s3"
       values: |
         cluster:
           initHelper:
             image: "<registry>/public.ecr.aws/opsterio/busybox:1.27.2-buildx"
             imagePullPolicy: "Always"
             imagePullSecrets:
             - name: <pull-secret>
             version: "1.27.2-buildx"
               
           general:
             imagePullPolicy: "Always"
             imagePullSecrets:
             - name: <pull-secret>
             setVMMaxMapCount: false
             securityContext:
               allowPrivilegeEscalation: false
               privileged: false
               runAsUser: 1000
             podSecurityContext:
               allowPrivilegeEscalation: false
               privileged: false
               runAsUser: 1000
           nodePools:
             - component: masters
               diskSize: 10Gi
               replicas: 2
               resources:
                  requests:
                     memory: "1Gi"
                     cpu: "500m"
                  limits:
                     memory: "2Gi"
                     cpu: "500m"
               roles:
                 - "data"
                 - "cluster_manager"
               persistence:
                  pvc:
                     storageClass: "default"
                     accessModes:
                       - ReadWriteOnce
           dashboards:
             enable: true
             tls:
               enable: true
             replicas: 1
             resources:
               requests:
                  memory: "512Mi"
                  cpu: "200m"
               limits:
                  memory: "512Mi"
                  cpu: "200m"
             securityContext:
               allowPrivilegeEscalation: false
               privileged: false
               runAsUser: 1000   
           ingress:
             dashboards:
               enabled: true
               annotations: 
                 nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
                 nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
                 nginx.ingress.kubernetes.io/backend-protocol: HTTPS
                 nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
               className: "ahm-dev-nginx"
               hosts:
                - host: <dashboard-url>
                  paths:
                    - path: /
                      pathType: ImplementationSpecific
               tls: 
                - hosts:
- <dashboard-url>
                  secretName: <tls-secret> 
     repoURL: 'https://opensearch-project.github.io/opensearch-k8s-operator/'
     targetRevision: 3.0.0
     chart: opensearch-cluster

Relevant Logs or Screenshots:

[2024-12-20T08:18:23,232][INFO ][o.o.s.l.BuiltinLogTypeLoader] [poc-opensearch-cluster-masters-0] Loaded [ad_ldap_logtype.json] log type
[2024-12-20T08:18:23,824][INFO ][o.o.t.TransportService   ] [poc-opensearch-cluster-masters-0] publish_address {poc-opensearch-cluster-masters-0/198.19.130.211:9300}, bound_addresses {[::]:9300}
[2024-12-20T08:18:24,461][INFO ][o.o.b.BootstrapChecks    ] [poc-opensearch-cluster-masters-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2024-12-20T08:18:24,533][INFO ][o.o.c.c.ClusterBootstrapService] [poc-opensearch-cluster-masters-0] skipping cluster bootstrapping as local node does not match bootstrap requirements: [poc-opensearch-cluster-bootstrap-0]
[2024-12-20T08:18:25,531][WARN ][o.o.t.OutboundHandler    ] [poc-opensearch-cluster-masters-0] send message failed [channel: Netty4TcpChannel{localAddress=/198.19.130.211:33164, remoteAddress=poc-opensearch-cluster-discovery/198.19.130.96:9300}]
javax.net.ssl.SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors
	at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
	at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:378) ~[?:?]
	at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321) ~[?:?]
	at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:316) ~[?:?]
	at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1318) ~[?:?]
	at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1195) ~[?:?]
	at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1138) ~[?:?]
	at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:393) ~[?:?]
	at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:476) ~[?:?]
	at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1273) ~[?:?]
	at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1260) ~[?:?]
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:714) ~[?:?]
	at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1205) ~[?:?]
	at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1651) [netty-handler-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1497) [netty-handler-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1338) [netty-handler-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1387) [netty-handler-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:530) [netty-codec-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:469) [netty-codec-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) [netty-codec-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1407) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:918) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994) [netty-common-4.1.110.Final.jar:4.1.110.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.110.Final.jar:4.1.110.Final]
	at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]

Hi Team,

Any update on this request? My deployment is pending because of this issue.

Hi @pablo / @Eugene7, could you please help here?

@Ghata According to the provided logs, the OpenSearch node poc-opensearch-cluster-masters-0 fails to connect to the discovery service (poc-opensearch-cluster-discovery) on port 9300 because SSL certificate validation fails: the certificate presented over the transport layer does not chain to any of the node's trust anchors.

Double-check that the same root CA signed the transport certificates (plugins.security.ssl.transport) of all OpenSearch nodes in the cluster, including the bootstrap pod.
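
One way to guarantee a consistent chain is to let the operator generate the transport certificates itself, so every node certificate (bootstrap pod included) is issued by the same operator-managed CA. Below is a minimal sketch of the relevant values; the security.tls fields come from the operator's OpenSearchCluster CRD, but whether your chart version exposes them under the cluster.security key is an assumption you should verify against the opensearch-cluster chart you deploy:

    # Sketch only: enable operator-generated TLS so all transport certs
    # share one CA. The `cluster.security` values path is an assumption;
    # the tls fields themselves match the operator's OpenSearchCluster CRD.
    cluster:
      security:
        tls:
          transport:
            generate: true   # operator issues per-node transport certs from its own CA
            perNode: true    # separate cert per node, all chained to the same CA
          http:
            generate: true   # operator also issues the REST-layer certificate

If you are supplying your own certificates instead, you can check the chain manually with openssl verify -CAfile ca.pem node.pem against the certificates mounted into the pods.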