Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
I am using OpenSearch 2.8.0 on a Kubernetes (K8s) cluster.
Describe the issue:
I am trying to set up cross-cluster replication between two clusters that run on K8s. When I follow the documentation I get:
"type" : "transport_exception",
"reason" : "handshake failed because connection reset"
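For context, the error appears while following the replication steps; the cross-cluster connection setup on the follower looks roughly like this (the connection alias and leader address are placeholders for my setup, 9300 being the transport port):

PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "my-connection-alias": {
          "seeds": ["<leader-transport-host>:9300"]
        }
      }
    }
  }
}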
Configuration:
According to the documentation, I need to get the DN of the follower with:
GET /_opendistro/_security/api/ssl/certs?pretty
and add it to the leader's whitelist. But when I try to get the certificates from the leader, I get a forbidden error. I checked my user's permissions and everything looks okay.
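One thing I considered (but have not confirmed is required) is calling the endpoint with the admin client certificate instead of the admin user, roughly like this, assuming the opensearch-admin-certs secret is mounted at /certs/admin inside a pod (paths and host are just examples):

curl -sk "https://<cluster-endpoint>:9200/_opendistro/_security/api/ssl/certs?pretty" \
  --cert /certs/admin/tls.crt \
  --key /certs/admin/tls.key \
  --cacert /certs/admin/ca.crt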
I have used the same certificates for both clusters, created via cert-manager as follows:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: demo
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ca-certificate
  namespace: demo
spec:
  secretName: ca-cert
  duration: 9000h # ~1 year
  renewBefore: 360h # 15d
  commonName: Test CA
  isCA: true
  privateKey:
    size: 2048
  usages:
    - digital signature
    - key encipherment
  issuerRef:
    name: selfsigned-issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: demo
spec:
  ca:
    secretName: ca-cert
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: opensearch-certs
  namespace: demo
spec:
  secretName: opensearch-certs
  duration: 9000h # ~1 year
  renewBefore: 360h # 15d
  isCA: false
  privateKey:
    size: 2048
    algorithm: RSA
    encoding: PKCS8
  dnsNames:
    - my-cluster-3
    - my-cluster-3-masters-0
    - my-cluster-3-masters-1
    - my-cluster-3-masters-2
    - my-cluster-3-nodes-0
    - my-cluster-3-nodes-1
    - my-cluster-3-coordinators-0
    - my-cluster-3-coordinators-1
    - my-cluster-3-bootstrap-0
    - my-cluster-4
    - my-cluster-4-masters-0
    - my-cluster-4-masters-1
    - my-cluster-4-masters-2
    - my-cluster-4-nodes-0
    - my-cluster-4-nodes-1
    - my-cluster-4-coordinators-0
    - my-cluster-4-coordinators-1
    - my-cluster-4-bootstrap-0
  usages:
    - signing
    - key encipherment
    - server auth
    - client auth
  commonName: Opensearch_Node
  issuerRef:
    name: ca-issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: opensearch-admin-certs
  namespace: demo
spec:
  secretName: opensearch-admin-certs
  duration: 9000h # ~1 year
  renewBefore: 360h # 15d
  isCA: false
  privateKey:
    size: 2048
    algorithm: RSA
    encoding: PKCS8
  commonName: OpenSearch_Admin
  usages:
    - signing
    - key encipherment
    - server auth
    - client auth
  issuerRef:
    name: ca-issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: opensearch-dashboards-certs
  namespace: demo
spec:
  secretName: opensearch-dashboards-certs
  duration: 9000h # ~1 year
  renewBefore: 360h # 15d
  isCA: false
  privateKey:
    size: 2048
    algorithm: RSA
    encoding: PKCS8
  dnsNames:
    - my-cluster-3-dashboards
    - my-cluster-4-dashboards
  usages:
    - signing
    - key encipherment
    - server auth
    - client auth
  issuerRef:
    name: ca-issuer
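To double-check which DN the node certificate actually carries (and therefore what should go into the leader's whitelist), the generated secret can be decoded directly, for example:

kubectl -n demo get secret opensearch-certs -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject
# prints something like: subject=CN = Opensearch_Node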
The two clusters were created with the Opster OpenSearch operator using the same configuration, differing only in their names (my-cluster-3 and my-cluster-4), so I am sharing one of them.
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-cluster-3
  namespace: demo
spec:
  general:
    version: 2.8.0
    httpPort: 9200
    vendor: opensearch
    serviceName: my-cluster-3
    additionalConfig:
      network.host: 0.0.0.0
      plugins.security.nodes_dn_dynamic_config_enabled: "true"
      indices.memory.index_buffer_size: 10% # default
    setVMMaxMapCount: true
  security:
    config:
      adminSecret:
        name: opensearch-admin-certs
    tls:
      transport:
        generate: false
        perNode: false
        secret:
          name: opensearch-certs
        nodesDn: ["CN=Opensearch_Node"]
        adminDn: ["CN=OpenSearch_Admin"]
      http:
        generate: false
        secret:
          name: opensearch-certs
  dashboards:
    version: 2.8.0
    enable: true
    replicas: 2
    resources:
      requests:
        memory: "2Gi"
        cpu: "500m"
      limits:
        memory: "2Gi"
        cpu: "500m"
    additionalConfig:
      server.host: 0.0.0.0
      opensearch_security.multitenancy.enabled: "true"
    pluginsList:
      - "https://github.com/lguillaud/osd_transform_vis/releases/download/2.8.0/transformVis-2.8.0.zip"
  confMgmt:
    smartScaler: true
  nodePools:
    - component: masters
      # no voting-only role, so 3 masters should be there
      replicas: 3
      jvm: "-Xmx2048M -Xms2048M"
      nodeSelector:
      diskSize: "5Gi"
      resources:
        requests:
          memory: "4Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "500m"
      roles:
        - "data"
        - "cluster_manager"
        - "remote_cluster_client"
      persistence:
        pvc:
          storageClass: nfs-client
          accessModes:
            - ReadWriteOnce
    - component: nodes
      replicas: 2
      jvm: "-Xmx2048M -Xms2048M"
      diskSize: "5Gi"
      nodeSelector:
      resources:
        requests:
          memory: "4Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "500m"
      roles:
        - "data"
        - "remote_cluster_client"
      persistence:
        pvc:
          storageClass: nfs-client
          accessModes:
            - ReadWriteOnce
    - component: coordinators
      replicas: 1
      jvm: "-Xmx2048M -Xms2048M"
      # tune down since coordinators usually
      # don't store a lot of information
      diskSize: "2Gi"
      nodeSelector:
      resources:
        requests:
          memory: "4Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "500m"
      roles:
        - "ingest"
        - "remote_cluster_client"
      persistence:
        pvc:
          storageClass: nfs-client
          accessModes:
            - ReadWriteOnce
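Since plugins.security.nodes_dn_dynamic_config_enabled is set to "true" in the config above, my understanding is that the follower DN can also be added to the leader at runtime through the nodes DN API, along these lines (the name "cross-cluster" is just a placeholder I chose):

PUT /_opendistro/_security/api/nodesdn/cross-cluster
{
  "nodes_dn": ["CN=Opensearch_Node"]
}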
Relevant Logs or Screenshots:
My current user information, retrieved via:
GET /_opendistro/_security/api/account
{
  "user_name" : "admin",
  "is_reserved" : true,
  "is_hidden" : false,
  "is_internal_user" : true,
  "user_requested_tenant" : null,
  "backend_roles" : [
    "admin"
  ],
  "custom_attribute_names" : [ ],
  "tenants" : {
    "global_tenant" : true,
    "admin_tenant" : true,
    "admin" : true
  },
  "roles" : [
    "own_index",
    "all_access"
  ]
}
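In case it is relevant to the forbidden error: I have not explicitly granted any roles access to the security REST API. If that is needed for the certs endpoint, I assume it would be something like the following in opensearch.yml (an assumption on my part, not verified):

plugins.security.restapi.roles_enabled: ["all_access"]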