LDAP/LDAPS configuration using Helm with OpenSearch 3.2.0

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch 3.2.0
OpenSearch Dashboards 3.2.0

Describe the issue:
I have a Kubernetes cluster and I want to install OpenSearch 3.2.0 with LDAPS via the Helm chart. I started with plain LDAP first. It seems that OpenSearch does not pick up the configuration that I set inside the Helm values.yaml file.
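For reference, the chart itself comes from the official Helm repository (standard commands from the OpenSearch documentation):

helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update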

Configuration:

This is the values.yaml, taken from the official GitHub repository and edited (https://github.com/opensearch-project/helm-charts/blob/main/charts/opensearch/values.yaml):

USER-SUPPLIED VALUES:
antiAffinity: soft
antiAffinityTopologyKey: kubernetes.io/hostname
clusterName: opensearch-cluster
securityConfig:
  enabled: true
  config:
    data:
      config.yml: |-
        ---
        _meta:
          type: "config"
          config_version: 2
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
            authc:
              internal_auth:
                order: 0
                description: "HTTP basic authentication using the internal user database"
                http_enabled: true
                transport_enabled: true
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: internal
              ldap:
                order: 1
                description: "Authenticate using LDAP"
                http_enabled: true
                transport_enabled: true
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: ldap
                  config:
                    enable_ssl: false
                    enable_start_tls: false
                    enable_ssl_client_auth: false
                    verify_hostnames: false
                    hosts:
                      - <my-url>:3268
                    bind_dn: <secret>
                    password: <secret>
                    userbase: <secret>
                    usersearch: (sAMAccountName={0})
                    username_attribute: name
opensearch.yml: |
    cluster.name: opensearch-cluster

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0

    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # Implicitly done if ".singleNode" is set to "true".
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    # plugins:
    #   security:
    #     ssl:
    #       transport:
    #         pemcert_filepath: esnode.pem
    #         pemkey_filepath: esnode-key.pem
    #         pemtrustedcas_filepath: root-ca.pem
    #         enforce_hostname_verification: false
    #       http:
    #         enabled: true
    #         pemcert_filepath: esnode.pem
    #         pemkey_filepath: esnode-key.pem
    #         pemtrustedcas_filepath: root-ca.pem
    #     allow_unsafe_democertificates: true
    #     allow_default_init_securityindex: true
    #     authcz:
    #       admin_dn:
    #         - CN=kirk,OU=client,O=client,L=test,C=de
    #     audit.type: internal_opensearch
    #     enable_snapshot_restore_privilege: true
    #     check_snapshot_restore_write_privileges: true
    #     restapi:
    #       roles_enabled: ["all_access", "security_rest_api_access"]
    #     system_indices:
    #       enabled: true
    #       indices:
    #         [
    #           ".opendistro-alerting-config",
    #           ".opendistro-alerting-alert*",
    #           ".opendistro-anomaly-results*",
    #           ".opendistro-anomaly-detector*",
    #           ".opendistro-anomaly-checkpoints",
    #           ".opendistro-anomaly-detection-state",
    #           ".opendistro-reports-*",
    #           ".opendistro-notifications-*",
    #           ".opendistro-notebooks",
    #           ".opendistro-asynchronous-search-response*",
    #         ]
    ######## End OpenSearch Security Demo Configuration ########
customAntiAffinity: {}
enableServiceLinks: true
envFrom: []
extraContainers: []
extraEnvs:
- name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
  value: khf3jSPiwvkYmT5DY
extraInitContainers: []
extraObjects: []
extraVolumeMounts: []
extraVolumes: []
fsGroup: ""
fullnameOverride: ""
global:
  dockerRegistry: ""
hostAliases: []
httpHostPort: ""
httpPort: 9200
image:
  pullPolicy: IfNotPresent
  repository: opensearchproject/opensearch
  tag: ""
imagePullSecrets: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
livenessProbe: {}
majorVersion: ""
masterService: opensearch-cluster-master
masterTerminationFix: false
maxUnavailable: 1
metricsPort: 9600
nameOverride: ""
networkHost: 0.0.0.0
networkPolicy:
  create: false
  http:
    enabled: false
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
openSearchAnnotations: {}
opensearchHome: /usr/share/opensearch
opensearchJavaOpts: -Xmx512M -Xms512M
opensearchLifecycle: {}
persistence:
  accessModes:
  - ReadWriteOnce
  annotations: {}
  enableInitChown: false
  enabled: true
  labels:
    additionalLabels: {}
    enabled: false
  size: 512Gi
  storageClass: vsphere-csi
plugins:
  enabled: false
  installList: []
  removeList: []
podAffinity: {}
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
    - emptyDir
priorityClassName: ""
protocol: https
rbac:
  automountServiceAccountToken: true
  create: true
  serviceAccountAnnotations: {}
  serviceAccountName: opensearch-sa
readinessProbe:
  failureThreshold: 3
  periodSeconds: 5
  tcpSocket:
    port: 9200
  timeoutSeconds: 3
replicas: 3
resources:
  requests:
    cpu: 1000m
    memory: 100Mi
  limits:
    cpu: 2000m
    memory: 8Gi
roles:
- master
- ingest
- data
- remote_cluster_client
schedulerName: ""
secretMounts: []
securityConfig:
  actionGroupsSecret: null
  config:
    data: {}
    dataComplete: true
    securityConfigSecret: ""
  configSecret: null
  enabled: true
  internalUsersSecret: null
  path: /usr/share/opensearch/config/opensearch-security
  rolesMappingSecret: null
  rolesSecret: null
  tenantsSecret: null
securityContext:
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: false
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  externalTrafficPolicy: ""
  headless:
    annotations: {}
  httpPortName: http
  labels: {}
  labelsHeadless: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  metricsPortName: metrics
  nodePort: ""
  transportPortName: transport
  type: ClusterIP
serviceMonitor:
  basicAuth:
    enabled: false
  enabled: false
  interval: 10s
  labels: {}
  path: /_prometheus/metrics
  scheme: http
  tlsConfig: {}
sidecarResources: {}
singleNode: false
startupProbe:
  failureThreshold: 30
  initialDelaySeconds: 5
  periodSeconds: 10
  tcpSocket:
    port: 9200
  timeoutSeconds: 3
sysctl:
  enabled: false
sysctlInit:
  enabled: false
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
topologySpreadConstraints: []
transportHostPort: ""
transportPort: 9300
updateStrategy: RollingUpdate

helm install opensearch opensearch/opensearch -f values.yaml -n opensearch

I also deployed OpenSearch Dashboards: logging in with the local user works, but not with the LDAP users. If I open OpenSearch Dashboards and go to Security > Authentication, I see the default configuration, as shown in the screenshot below.
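A quick way to double-check which authc configuration the security plugin actually loaded (pod name, namespace and admin password below are placeholders for my environment; the REST call only works if the security REST API is enabled for the admin role, as in the demo config):

# File mounted into the pod by the chart
kubectl exec -n opensearch opensearch-cluster-master-0 -- \
  cat /usr/share/opensearch/config/opensearch-security/config.yml

# Configuration the security plugin is actually running with
kubectl exec -n opensearch opensearch-cluster-master-0 -- \
  curl -sk -u admin:<admin-password> https://localhost:9200/_plugins/_security/api/securityconfig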

Inside the pod there is only one log entry:
[2025-10-03T15:35:33,997][WARN ][o.o.s.a.BackendRegistry ] [opensearch-cluster-master-2] Authentication finally failed for ldapbind from 10.42.2.16:52694

**Relevant Logs or Screenshots**:

@alessandrofavale Just by looking at your config, it would appear you have "securityConfig" twice, at lines 5 and 215; the second block overwrites the first with the default (demo) config. Hope this helps.
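When the same top-level key appears twice in a YAML document, parsers typically keep only the last occurrence, which matches what the chart ended up rendering here. A minimal illustration (not taken from the chart, just the shape of the problem):

securityConfig:        # first definition - custom config.yml under config.data
  config:
    data:
      config.yml: |-
        ...
securityConfig:        # second definition - wins, config.data is empty again
  config:
    data: {}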

Thank you so much. I deleted the duplicate configuration at line 215, but now the pods will not start and go into CrashLoopBackOff. These are the pod logs:

Enabling OpenSearch Security Plugin
Enabling execution of install_demo_configuration.sh for OpenSearch Security Plugin
OpenSearch 2.12.0 onwards, the OpenSearch Security Plugin a change that requires an initial password for ‘admin’ user.
Please define an environment variable ‘OPENSEARCH_INITIAL_ADMIN_PASSWORD’ with a strong password string.
If a password is not provided, the setup will quit.
For more details, please visit: Docker - OpenSearch Documentation
OpenSearch Security Demo Installer
Warning: Do not use on production or public reachable systems
OpenSearch install type: rpm/deb on Linux 5.14.0-503.23.1.el9_5.x86_64 amd64
OpenSearch config dir: /usr/share/opensearch/config/
OpenSearch config file: /usr/share/opensearch/config/opensearch.yml
OpenSearch bin dir: /usr/share/opensearch/bin/
OpenSearch plugins dir: /usr/share/opensearch/plugins/
OpenSearch lib dir: /usr/share/opensearch/lib/
Detected OpenSearch Version: 3.2.0
Detected OpenSearch Security Version: 3.2.0.0

This is the new values.yaml:

USER-SUPPLIED VALUES:
antiAffinity: soft
antiAffinityTopologyKey: kubernetes.io/hostname
clusterName: opensearch-cluster
securityConfig:
  enabled: true
  config:
    data:
      config.yml: |-
        ---
        _meta:
          type: "config"
          config_version: 2
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
            authc:
              internal_auth:
                order: 0
                description: "HTTP basic authentication using the internal user database"
                http_enabled: true
                transport_enabled: true
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: internal
              ldap:
                order: 1
                description: "Authenticate using LDAP"
                http_enabled: true
                transport_enabled: true
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: ldap
                  config:
                    enable_ssl: false
                    enable_start_tls: false
                    enable_ssl_client_auth: false
                    verify_hostnames: false
                    hosts:
                      - <my-url>:3268
                    bind_dn: <secret>
                    password: <secret>
                    userbase: <secret>
                    usersearch: (sAMAccountName={0})
                    username_attribute: name
opensearch.yml: |
    cluster.name: opensearch-cluster

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0

    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # Implicitly done if ".singleNode" is set to "true".
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    # plugins:
    #   security:
    #     ssl:
    #       transport:
    #         pemcert_filepath: esnode.pem
    #         pemkey_filepath: esnode-key.pem
    #         pemtrustedcas_filepath: root-ca.pem
    #         enforce_hostname_verification: false
    #       http:
    #         enabled: true
    #         pemcert_filepath: esnode.pem
    #         pemkey_filepath: esnode-key.pem
    #         pemtrustedcas_filepath: root-ca.pem
    #     allow_unsafe_democertificates: true
    #     allow_default_init_securityindex: true
    #     authcz:
    #       admin_dn:
    #         - CN=kirk,OU=client,O=client,L=test,C=de
    #     audit.type: internal_opensearch
    #     enable_snapshot_restore_privilege: true
    #     check_snapshot_restore_write_privileges: true
    #     restapi:
    #       roles_enabled: ["all_access", "security_rest_api_access"]
    #     system_indices:
    #       enabled: true
    #       indices:
    #         [
    #           ".opendistro-alerting-config",
    #           ".opendistro-alerting-alert*",
    #           ".opendistro-anomaly-results*",
    #           ".opendistro-anomaly-detector*",
    #           ".opendistro-anomaly-checkpoints",
    #           ".opendistro-anomaly-detection-state",
    #           ".opendistro-reports-*",
    #           ".opendistro-notifications-*",
    #           ".opendistro-notebooks",
    #           ".opendistro-asynchronous-search-response*",
    #         ]
    ######## End OpenSearch Security Demo Configuration ########
customAntiAffinity: {}
enableServiceLinks: true
envFrom: []
extraContainers: []
extraEnvs:
- name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
  value: khf3jSPiwvkYmT5DY
extraInitContainers: []
extraObjects: []
extraVolumeMounts: []
extraVolumes: []
fsGroup: ""
fullnameOverride: ""
global:
  dockerRegistry: ""
hostAliases: []
httpHostPort: ""
httpPort: 9200
image:
  pullPolicy: IfNotPresent
  repository: opensearchproject/opensearch
  tag: ""
imagePullSecrets: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
livenessProbe: {}
majorVersion: ""
masterService: opensearch-cluster-master
masterTerminationFix: false
maxUnavailable: 1
metricsPort: 9600
nameOverride: ""
networkHost: 0.0.0.0
networkPolicy:
  create: false
  http:
    enabled: false
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
openSearchAnnotations: {}
opensearchHome: /usr/share/opensearch
opensearchJavaOpts: -Xmx512M -Xms512M
opensearchLifecycle: {}
persistence:
  accessModes:
  - ReadWriteOnce
  annotations: {}
  enableInitChown: false
  enabled: true
  labels:
    additionalLabels: {}
    enabled: false
  size: 512Gi
  storageClass: vsphere-csi
plugins:
  enabled: false
  installList: []
  removeList: []
podAffinity: {}
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
    - emptyDir
priorityClassName: ""
protocol: https
rbac:
  automountServiceAccountToken: true
  create: true
  serviceAccountAnnotations: {}
  serviceAccountName: opensearch-sa
readinessProbe:
  failureThreshold: 3
  periodSeconds: 5
  tcpSocket:
    port: 9200
  timeoutSeconds: 3
replicas: 3
resources:
  requests:
    cpu: 1000m
    memory: 100Mi
  limits:
    cpu: 2000m
    memory: 8Gi
roles:
- master
- ingest
- data
- remote_cluster_client
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: false
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  externalTrafficPolicy: ""
  headless:
    annotations: {}
  httpPortName: http
  labels: {}
  labelsHeadless: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  metricsPortName: metrics
  nodePort: ""
  transportPortName: transport
  type: ClusterIP
serviceMonitor:
  basicAuth:
    enabled: false
  enabled: false
  interval: 10s
  labels: {}
  path: /_prometheus/metrics
  scheme: http
  tlsConfig: {}
sidecarResources: {}
singleNode: false
startupProbe:
  failureThreshold: 30
  initialDelaySeconds: 5
  periodSeconds: 10
  tcpSocket:
    port: 9200
  timeoutSeconds: 3
sysctl:
  enabled: false
sysctlInit:
  enabled: false
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
topologySpreadConstraints: []
transportHostPort: ""
transportPort: 9300
updateStrategy: RollingUpdate

If I install it with securityConfig.enabled: false, all pods come up fine.

@alessandrofavale by specifying files under securityConfig.config.data you are overriding the rest of the security configuration, such as roles.yml, internal_users.yml, etc.

Have a look at the instructions listed in the repo here:

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it. This is best option to choose if you want to override all the
    #   existing yml files at once.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it. This is best option to choose if your intention is to
    #   only update a single yml file.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    dataComplete: true
    data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-

@alessandrofavale To close the loop on this, you can use the following configuration:

First create a secret with the config.yml file:

kubectl create secret generic os-securityconfig \
  --from-file=config.yml=./config.yml
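To sanity-check the secret before rolling the pods (assuming the opensearch namespace; the backslash escapes the dot in the key name for jsonpath):

kubectl get secret os-securityconfig -n opensearch \
  -o jsonpath='{.data.config\.yml}' | base64 -d | head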

Update the values.yml as follows:

clusterName: opensearch-cluster
securityConfig:
  enabled: true
  configSecret: os-securityconfig
  config:
    securityConfigSecret: ""
    dataComplete: true 
    data: {}
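Then apply it to the existing release with the same chart and namespace as before (sketch):

helm upgrade opensearch opensearch/opensearch -f values.yaml -n opensearch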

Thank you for your help. I figured out the problem: I also have to define the other files:
internal_users.yml
roles.yml
roles_mapping.yml
action_groups.yml
tenants.yml

So now the values.yaml is:

So I defined the admin and kibanaserver users in the internal_users.yml file; if I don't define kibanaserver and assign the admin role to this user, the OpenSearch pods log the error "Authentication finally failed for kibanaserver". Is this approach correct?
Another problem: now I can log in with an LDAP user, but OpenSearch displays this error:

And the OpenSearch pod logs show this error:
No cluster-level perm match for User … No permissions for [indices:data/write/bulk]

@alessandrofavale with the latest configuration you have essentially removed all built-in roles, as your roles.yml file is empty. You have also mapped kibanaserver to all_access, which is not recommended and is therefore not part of the demo configuration.
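For reference, the demo configuration does not give kibanaserver all_access: it defines the user in internal_users.yml and maps it to the built-in kibana_server role in roles_mapping.yml. A minimal sketch (the hash is a placeholder, generated with the bundled plugins/opensearch-security/tools/hash.sh):

# internal_users.yml (excerpt)
kibanaserver:
  hash: "<bcrypt hash from hash.sh -p <password>>"
  reserved: true
  description: "OpenSearch Dashboards server user"

# roles_mapping.yml (excerpt)
kibana_server:
  reserved: true
  users:
    - "kibanaserver"

The password used to generate the hash has to match the opensearch.password that the Dashboards instance uses to connect.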

If you only want to replace config.yml but keep all the other demo configuration (roles, tenants, internal_users, etc.), please use the configuration from my last reply:

First create a secret with the config.yml file:

kubectl create secret generic os-securityconfig \
  --from-file=config.yml=./config.yml

Update the values.yml as follows:

clusterName: opensearch-cluster
securityConfig:
  enabled: true
  configSecret: os-securityconfig
  config:
    securityConfigSecret: ""
    dataComplete: true 
    data: {}

Otherwise, start the cluster without any security configuration (securityConfig.enabled: false), extract the generated security configuration files from any pod, and then use these files in the values.yml (with config.yml updated as needed).
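A sketch of how that extraction could look (the pod name is just one of the masters; repeat for whichever files you need):

for f in config.yml internal_users.yml roles.yml roles_mapping.yml action_groups.yml tenants.yml; do
  kubectl exec -n opensearch opensearch-cluster-master-0 -- \
    cat /usr/share/opensearch/config/opensearch-security/$f > "$f"
done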

Thank you for your help. It works.
But now, which privileges do users that log in via LDAP have?


@alessandrofavale Roles are mapped to users in the roles_mapping.yml file, based on the roles/groups received from LDAP as backend roles. I would recommend going through the docs here.
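As a sketch, an LDAP/AD group received as a backend role can then be mapped to a built-in role like this (the group DN is a placeholder, and it assumes an ldap authz section in config.yml so that group membership is actually resolved into backend roles):

# roles_mapping.yml (excerpt)
all_access:
  reserved: false
  backend_roles:
    - "CN=opensearch-admins,OU=Groups,DC=example,DC=com"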
