Configuring LDAP with custom certificates in Kubernetes

I am getting the following error while running the script below with custom certificates.

./securityadmin.sh -cd …/securityconfig/ -icl -nhnv -cacert …/…/…/config/root-ca.pem -cert …/…/…/config/admin.pem -key …/…/…/config/admin-key.pem

ERR: Seems you use a node certificate which is also an admin certificate
That may have worked with older Open Distro Security versions but it indicates
a configuration error and is therefore forbidden now.
Unable to check whether cluster is sane
ERR: An unexpected ElasticsearchSecurityException occured: No user found for cluster:monitor/nodes/info
Trace:
ElasticsearchSecurityException[No user found for cluster:monitor/nodes/info]
at com.amazon.opendistroforelasticsearch.security.filter.OpenDistroSecurityFilter.apply0(OpenDistroSecurityFilter.java:280)
at com.amazon.opendistroforelasticsearch.security.filter.OpenDistroSecurityFilter.apply(OpenDistroSecurityFilter.java:151)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:177)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:155)
at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:64)
at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:60)
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.messageReceived(PerformanceAnalyzerTransportRequestHandler.java:49)
at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceivedDecorate(OpenDistroSecuritySSLRequestHandler.java:182)
at com.amazon.opendistroforelasticsearch.security.transport.OpenDistroSecurityRequestHandler.messageReceivedDecorate(OpenDistroSecurityRequestHandler.java:293)
at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceived(OpenDistroSecuritySSLRequestHandler.java:142)
at com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin$7$1.messageReceived(OpenDistroSecurityPlugin.java:639)
at com.amazon.opendistroforelasticsearch.indexmanagement.rollup.interceptor.RollupInterceptor$interceptHandler$1.messageReceived(RollupInterceptor.kt:124)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72)
at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:207)
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:107)
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:89)
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:700)
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:142)
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:117)
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:82)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:271)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1518)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1267)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:832)
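This error typically means the certificate used for the connection matches both DN lists: its DN is in admin_dn but also matches one of the nodes_dn patterns (note that a wildcard entry such as CN=*,OU=IT,O=VMware\, Inc.,... in the values below would also match CN=admin,OU=IT,...). A quick way to compare the two subjects, assuming the PEM files are mounted under /usr/share/elasticsearch/config as in the values file below (paths are illustrative):

```
# Print both subject DNs in RFC 2253 order, the format the security plugin compares against.
openssl x509 -subject -nameopt RFC2253 -noout -in /usr/share/elasticsearch/config/node.pem
openssl x509 -subject -nameopt RFC2253 -noout -in /usr/share/elasticsearch/config/admin.pem
# A certificate whose DN matches both nodes_dn and admin_dn triggers the
# "node certificate which is also an admin certificate" error above.
```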

Values.yaml


```
# Copyright 2019 Viasat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

kibana:
  enabled: true
  image: cnadatabase_prod/amazon/opendistro-for-elasticsearch-kibana
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""
  replicas: 1
  port: 5601
  externalPort: 443
  resources: {}
  #  limits:
  #    cpu: 2500m
  #    memory: 2Gi
  #  requests:
  #    cpu: 500m
  #    memory: 512Mi
  readinessProbe:
  livenessProbe:
  startupProbe:

  elasticsearchAccount:
    secret: ""
    keyPassphrase:
      enabled: false

  extraEnvs:

  extraVolumes:
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts:
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  extraInitContainers:
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraContainers:
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  ssl:
    kibana:
      enabled: true
      existingCertSecret: elasticseach-kibana-certs
      existingCertSecretCertSubPath: kibana-crt.pem
      existingCertSecretKeySubPath: kibana-key.pem
      existingCertSecretRootCASubPath: kibana-root-ca.pem
    elasticsearch:
      enabled: true
      existingCertSecret: elasticseach-rest-certs
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem

  configDirectory: "/usr/share/kibana/config"
  certsDirectory: "/usr/share/kibana/certs"

  ingress:
    ## Set to true to enable ingress record generation
    enabled: false
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    hosts:
      - chart-example.local
    tls:
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

  service:
    type: LoadBalancer
    annotations:
      avi_proxy: '{"virtualservice": {"east_west_placement": false, "vrf_context_ref": "/api/vrfcontext?name=VRF-WDC-TKGDEV-1", "se_group_ref": "/api/serviceenginegroup?name=SEG-WDC-TKGDEV-1"}, "pool": {"se_group_ref": "/api/serviceenginegroup?name=SEG-WDC-TKGDEV-1"}, "pool": {"vrf_ref": "/api/vrfcontext?name=VRF-WDC-TKGDEV-1"}}'
      ingress.kubernetes.io/proxy-body-size: 50m
      kubernetes.io/ingress.class: avi

  config:
    ## Default Kibana configuration from kibana-docker.
    # server.name: kibana
    server.host: "0.0.0.0"

    ## Replace with Elasticsearch DNS name picked during Service deployment
    # elasticsearch.hosts: ${ELASTIC_URL}
    elasticsearch.hosts: "https://10.166.118.231:9200"
    # elasticsearch.requestTimeout: 360000
    elasticsearch.username: kibanaserver
    elasticsearch.password: kibanaserver

    ## Kibana TLS Config
    server.ssl.enabled: true
    server.ssl.key: /usr/share/kibana/certs/kibana-key.pem
    server.ssl.certificate: /usr/share/kibana/certs/kibana-crt.pem
    elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/kibana-root-ca.pem
    elasticsearch.ssl.verificationMode: none
    # elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/elk-rest-root-ca.pem

    elasticsearch.requestHeadersWhitelist: ["securitytenant", "Authorization"]

    opendistro_security.multitenancy.enabled: true
    opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
    opendistro_security.readonly_mode.roles: ["kibana_read_only"]

    newsfeed.enabled: false
    telemetry.optIn: false
    telemetry.enabled: false
    security.showInsecureClusterWarning: false

    # opendistro_security.cookie.secure: true
    # opendistro_security.cookie.password: ${COOKIE_PASS}

  ## Node labels for pod assignment
  ## ref: Assigning Pods to Nodes | Kubernetes
  nodeSelector: {}

  ## Tolerations for pod assignment
  ## ref: Taints and Tolerations | Kubernetes
  tolerations:

  affinity: {}

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

  podAnnotations: {}

global:
  clusterName: "k8s-logs"

  psp:
    create: false

  rbac:
    enabled: false

  ## Optionally override the docker registry to use for images
  imageRegistry: harbor-repo.vmware.com

  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  imagePullSecrets:
    - harbor

elasticsearch:
  ## Used when deploying hot/warm architecture. Allows second aliased deployment to find cluster.
  ## Default {{ template opendistro-es.fullname }}-discovery.
  discoveryOverride: ""
  securityConfig:
    enabled: true
    path: "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig"
    actionGroupsSecret:
    configSecret: "security-config"
    internalUsersSecret:
    rolesSecret:
    rolesMappingSecret:
    tenantsSecret:
    ## The following option simplifies securityConfig by using a single secret and
    ## specifying the config files as keys in that secret, instead of creating
    ## different secrets for config, internal users, roles, roles mapping and tenants.
    ## Note that this is an alternative to the secrets above and shouldn't be used
    ## if those are used.
    config:
      securityConfigSecret:
      data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # rolesMapping.yml: |-
      # tenants.yml: |-

  ## securityContext to apply to the pod. Allows for running as non-root
  securityContextCustom: {}
  #  fsGroup: 1000
  #  runAsUser: 1000
  #  runAsGroup: 1000

  extraEnvs:

  extraInitContainers:
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraVolumes:
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts:
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  initContainer:
    image: cnadatabase_prod/busybox
    imageTag: 1.27.2

  ## Set optimal sysctls. This requires privilege. Can be disabled if
  ## the system has already been preconfigured.
  sysctl:
    enabled: true

  ## Give SYS_CHROOT capability to ES pods. This might not be necessary.
  sys_chroot:
    enabled: true

  ## Init container to chown the mount volume. Not necessary if setting
  ## an fsGroup in the securityContext.
  fixmount:
    enabled: true

  ssl:
    ## TLS is mandatory for the transport layer and can not be disabled
    transport:
      existingCertSecret: elasticseach-transport-certs
      existingCertSecretCertSubPath: node.pem
      existingCertSecretKeySubPath: node-key.pem
      existingCertSecretRootCASubPath: elk-transport-root-ca.pem
    rest:
      enabled: true
      existingCertSecret: elasticseach-rest-certs
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem
    admin:
      enabled: true
      existingCertSecret: elasticseach-admin-certs
      existingCertSecretCertSubPath: admin.pem
      existingCertSecretKeySubPath: admin-key.pem
      existingCertSecretRootCASubPath: admin-root-ca.pem

  master:
    enabled: true
    replicas: 3
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      storageClass: "vsan-bloc-sc"
      accessModes:
        - ReadWriteOnce
      size: 4Gi
      annotations: {}

    resources:
      limits:
        cpu: 2
        memory: 3Gi
      requests:
        cpu: 500m
        memory: 2Gi
    javaOpts: "-Xms1g -Xmx1g"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      - topologyKey: "kubernetes.io/hostname"
    #        labelSelector:
    #          matchLabels:
    #            role: master
    podAnnotations: {}

    extraInitContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

    extraContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

  data:
    enabled: true
    ## Enables dedicated statefulset for data. Otherwise master nodes act as data storage
    dedicatedPod:
      enabled: true
    replicas: 3
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      storageClass: "vsan-bloc-sc"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 500m
        memory: 3Gi
    javaOpts: "-Xms2g -Xmx2g"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: data
    podAnnotations: {}

  client:
    enabled: true
    ## Enables dedicated deployment for client/ingest. Otherwise master nodes act as client/ingest
    dedicatedPod:
      enabled: true
    service:
      type: LoadBalancer
      annotations:
        avi_proxy: '{"virtualservice": {"east_west_placement": false, "vrf_context_ref": "/api/vrfcontext?name=VRF-WDC-TKGDEV-1", "se_group_ref": "/api/serviceenginegroup?name=SEG-WDC-TKGDEV-1"}, "pool": {"se_group_ref": "/api/serviceenginegroup?name=SEG-WDC-TKGDEV-1"}, "pool": {"vrf_ref": "/api/vrfcontext?name=VRF-WDC-TKGDEV-1"}}'
        ingress.kubernetes.io/proxy-body-size: 50m
        kubernetes.io/ingress.class: avi
        # # Defined ELB backend protocol as HTTPS to allow connection to Elasticsearch API
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

        # # ARN of ACM certificate registered to the deployed ELB for handling connections over TLS
        # # ACM certificate should be issued to the DNS hostname defined earlier (elk.sec.example.com)
        # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111222333444:certificate/c69f6022-b24f-43d9-b9c8-dfe288d9443d"
        # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
        # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

        # # Annotation to create internal only ELB
        # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    replicas: 2
    javaOpts: "-Xms1g -Xmx1g"
    ingress:
      ## Set to true to enable ingress record generation
      enabled: false
      annotations: {}
      #  kubernetes.io/ingress.class: nginx
      #  kubernetes.io/tls-acme: "true"
      #  # Depending on your Ingress Controller you may need to set one of the two below annotations to have NGINX call the backend using HTTPS
      #  nginx.org/ssl-services: "{{ template "opendistro-es.fullname" . }}-client-service"
      #  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      labels: {}
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local
    resources:
      limits:
        cpu: 2
        memory: 3Gi
      requests:
        cpu: 500m
        memory: 2Gi
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Weighted anti-affinity to disallow deploying client node to the same worker node as master node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: client
    podAnnotations: {}

  config:
    ## Example Config
    # opendistro_security.allow_unsafe_democertificates: false
    # opendistro_security.allow_default_init_securityindex: true
    # opendistro_security.audit.type: internal_elasticsearch
    # opendistro_security.enable_snapshot_restore_privilege: true
    # opendistro_security.check_snapshot_restore_write_privileges: true
    # cluster.routing.allocation.disk.threshold_enabled: false
    # opendistro_security.audit.config.disabled_rest_categories: NONE
    # opendistro_security.audit.config.disabled_transport_categories: NONE
    # cluster:
    #   name: ${CLUSTER_NAME}
    # node:
    #   master: ${NODE_MASTER}
    #   data: ${NODE_DATA}
    #   name: ${NODE_NAME}
    #   ingest: ${NODE_INGEST}
    #   max_local_storage_nodes: 1
    #   attr.box_type: hot

    # processors: ${PROCESSORS:1}

    # network.host: ${NETWORK_HOST}

    # thread_pool.bulk.queue_size: 800

    # path:
    #   data: /usr/share/elasticsearch/data
    #   logs: /usr/share/elasticsearch/logs

    # http:
    #   enabled: ${HTTP_ENABLE}
    #   compression: true

    # discovery:
    #   zen:
    #     ping.unicast.hosts: ${DISCOVERY_SERVICE}
    #     minimum_master_nodes: ${NUMBER_OF_MASTERS}

    # # TLS Configuration Transport Layer
    opendistro_security.ssl.transport.pemcert_filepath: node.pem
    opendistro_security.ssl.transport.pemkey_filepath: node-key.pem
    opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    opendistro_security.ssl.transport.enforce_hostname_verification: false

    # # TLS Configuration REST Layer
    opendistro_security.ssl.http.enabled: true
    opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

    opendistro_security.allow_default_init_securityindex: true
    opendistro_security.authcz.admin_dn:
      - 'CN=admin,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
    opendistro_security.nodes_dn:
      - 'CN=*,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
      - 'CN=kibana,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
      - 'CN=,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
      - 'CN=k8s-logs*'
      - '/CN=.*regex/'

    opendistro_security.audit.type: internal_elasticsearch
    opendistro_security.enable_snapshot_restore_privilege: true
    opendistro_security.check_snapshot_restore_write_privileges: true
    opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
    cluster.routing.allocation.disk.threshold_enabled: false
    node.max_local_storage_nodes: 3

  log4jConfig: ""

  loggingConfig:
    ## Default config
    ## you can override this by setting a system property, for example -Des.logger.level=DEBUG
    es.logger.level: INFO
    rootLogger: ${es.logger.level}, console
    logger:
      ## log action execution errors for easier debugging
      action: DEBUG
      ## reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN
    appender:
      console:
        type: console
        layout:
          type: consolePattern
          conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  transportKeyPassphrase:
    enabled: false
    passPhrase:

  sslKeyPassphrase:
    enabled: false
    passPhrase:

  maxMapCount: 262144

  image: cnadatabase_prod/amazon/opendistro-for-elasticsearch
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""

  configDirectory: /usr/share/elasticsearch/config

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

nameOverride: ""
fullnameOverride: ""
```
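Since each ssl block above points at an existingCertSecret with sub-path file names, it is worth confirming that the secrets really carry those keys before re-running securityadmin.sh. A quick check, with the secret names taken from the values above (the output formatting is just for readability):

```
# List the data keys of each certificate secret; they must match the
# existingCertSecret*SubPath values (e.g. admin.pem, admin-key.pem, admin-root-ca.pem).
kubectl get secret elasticseach-admin-certs -o jsonpath='{.data}' | tr ',' '\n'
kubectl get secret elasticseach-transport-certs -o jsonpath='{.data}' | tr ',' '\n'
kubectl get secret elasticseach-rest-certs -o jsonpath='{.data}' | tr ',' '\n'
```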

How did you create the admin.pem certificate? What does its subject look like? Remember that the subject in the openssl output has to look like this: Subject: C = US, ST = CA, L = Palo Alto, O = VMware, Inc., OU = IT, CN = admin

Below are the commands I used to generate the admin certs.

openssl genrsa -out admin-key-temp.pem 2048
openssl pkcs8 -inform PEM -outform PEM -in admin-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out admin-key.pem
openssl req -new -key admin-key.pem -out admin.csr
openssl x509 -req -in admin.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out admin.pem -days 30

$ openssl x509 -subject -nameopt RFC2253 -noout -in admin.pem
subject=CN=admin,OU=IT,O=VMware, Inc.,L=Palo Alto,ST=CA,C=US
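For what it's worth, the subject can also be set non-interactively when creating the CSR, so the DN comes out exactly as intended; a sketch using the same DN values as above:

```
# Create the CSR with an explicit subject instead of answering the interactive prompts.
openssl req -new -key admin-key.pem \
  -subj "/C=US/ST=CA/L=Palo Alto/O=VMware, Inc./OU=IT/CN=admin" \
  -out admin.csr
```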

The fields in the subject need to be in the opposite order.

In the config:

```
'CN=admin,OU=IT,O=VMware, Inc.,L=Palo Alto,ST=CA,C=US'
```

In the cert:

```
C = US, ST = CA, L = Palo Alto, O = VMware, Inc., OU = IT, CN = admin
```

How can we do that? In the configuration below?

```
opendistro_security.authcz.admin_dn:
  - 'CN=admin,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
opendistro_security.nodes_dn:
  - 'CN=*,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
  - 'CN=kibana,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
  - 'CN=,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
  - 'CN=k8s-logs*'
  - '/CN=.*regex/'
```

Yes, you can change the order of the fields in opendistro_security.authcz.admin_dn in the elasticsearch.yml configuration file.
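For example, a sketch of what that could look like in elasticsearch.yml, with the same DN rewritten into the order openssl prints by default:

```
opendistro_security.authcz.admin_dn:
  - 'C=US,ST=CA,L=Palo Alto,O=VMware\, Inc.,OU=IT,CN=admin'
```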

How about opendistro_security.nodes_dn?

```
opendistro_security.nodes_dn:
  - 'CN=*,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
  - 'C=US,ST=CA,L=Palo Alto,O=VMware\, Inc.,OU=IT,CN=*'
  - 'CN=kibana,OU=IT,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
  - 'CN=,O=VMware\, Inc.,L=Palo Alto,ST=CA,C=US'
  - 'CN=k8s-logs*'
  - '/CN=.*regex/'
```

And the same for every DN in elasticsearch.yml?

Now I am getting the error below.

./securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert /usr/share/elasticsearch/config/admin-root-ca.pem -cert /usr/share/elasticsearch/config/admin.pem -key /usr/share/elasticsearch/config/admin-key.pem

Open Distro Security Admin v7
Will connect to localhost:9300 … done
Connected as CN=admin,OU=IT,O=VMware, Inc.,L=Palo Alto,ST=CA,C=US
ERR: CN=admin,OU=IT,O=VMware, Inc.,L=Palo Alto,ST=CA,C=US is not an admin user
Seems you use a client certificate but this one is not registered as admin_dn
Make sure elasticsearch.yml on all nodes contains:
opendistro_security.authcz.admin_dn:
  - "CN=admin,OU=IT,O=VMware, Inc.,L=Palo Alto,ST=CA,C=US"

I used the "files to generate certs for opendistro" script to generate all the certificates needed for Open Distro (Elasticsearch + Kibana + Keycloak). The securityadmin.sh script accepted my certificates and finished successfully.

Thanks Maenko.

In Kubernetes, do we need to run securityadmin.sh every time we upgrade or make changes to the yaml file?

Some changes to the configuration can be made through the GUI; others only through the various config files. Whenever you change anything in the config files, you need to run securityadmin.sh to import the data into the Elastic index where all the config settings live. This is true for both Kubernetes and Docker environments.

I have a question. As you mentioned, whenever a config file changes we need to run securityadmin.sh. In that case, might we lose the existing configuration that was set up through the GUI?

There are several areas where you have to go through the configuration files: authentication/authorization is the main one, plus some reserved roles, the user mappings for them, and the pre-defined permission groups. Other than that, you may change the configuration through the GUI or the corresponding API. I didn't test it, but it looks like running the securityadmin.sh script with only config.yml will not override the rest of your configuration.

The securityadmin.sh script overrides configurations that were made through the GUI.
Is there any solution?

Below are the commands I used:

kubectl exec -it k8s-es-opendistro-es-master-2 -- /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -nhnv -cacert /usr/share/elasticsearch/config/admin-root-ca.pem -cert /usr/share/elasticsearch/config/admin-crt.pem -key /usr/share/elasticsearch/config/admin-key.pem -cn k8s-es

/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert /usr/share/elasticsearch/config/admin-root-ca.pem -cert /usr/share/elasticsearch/config/admin-crt.pem -key /usr/share/elasticsearch/config/admin-key.pem

You can run the securityadmin.sh command to apply the changes from one particular file only:

./securityadmin.sh -f ../securityconfig/config.yml -icl -nhnv -cert /etc/elasticsearch/kirk.pem -cacert /etc/elasticsearch/root-ca.pem -key /etc/elasticsearch/kirk-key.pem -t config

In this case only the changes in that file will be applied. For instance: if you changed the Kibana authentication (implemented OIDC), running securityadmin.sh with config.yml only means all the users and roles you created will not be affected.
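If you are worried about overwriting changes made through the GUI, securityadmin.sh also has a -backup option that dumps the current security configuration from the index (including GUI-made changes) to local YAML files, which you can then edit and upload again; a sketch reusing the certificate paths from the commands above:

```
# Dump the live security configuration to a folder.
/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
  -backup /tmp/security-backup -icl -nhnv \
  -cacert /usr/share/elasticsearch/config/admin-root-ca.pem \
  -cert /usr/share/elasticsearch/config/admin-crt.pem \
  -key /usr/share/elasticsearch/config/admin-key.pem
# Edit the dumped files, then re-apply them with -cd /tmp/security-backup,
# so the uploaded config already contains the GUI changes.
```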