OpenSearch Security not initialized error in master node

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

Helm Chart version: opensearch-2.17.3 and opensearch-dashboards-2.15.1

OpenSearch version: 2.11.1

Describe the issue:

I am trying to configure a 3-node cluster (master, data, and client nodes running in separate pods on Kubernetes), but I am getting the below error on startup.

Master logs:
[2024-03-28T07:03:23,607][ERROR][o.o.s.c.ConfigurationRepository] [opensearch-cluster-master-0] Cannot apply default config (this is maybe not an error!)
java.security.PrivilegedActionException: null
at java.security.AccessController.doPrivileged(AccessController.java:573) ~[?:?]
at org.opensearch.security.support.ConfigHelper.uploadFile(ConfigHelper.java:86) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.support.ConfigHelper.uploadFile(ConfigHelper.java:65) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.configuration.ConfigurationRepository.lambda$new$0(ConfigurationRepository.java:147) [opensearch-security-2.11.1.0.jar:2.11.1.0]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.io.FileNotFoundException: /usr/share/opensearch/config/opensearch-security/roles.yml (No such file or directory)
at java.io.FileInputStream.open0(Native Method) ~[?:?]
at java.io.FileInputStream.open(FileInputStream.java:216) ~[?:?]
at java.io.FileInputStream.(FileInputStream.java:157) ~[?:?]
at java.io.FileInputStream.(FileInputStream.java:111) ~[?:?]
at java.io.FileReader.(FileReader.java:103) ~[?:?]
at org.opensearch.security.support.ConfigHelper.fromYamlFile(ConfigHelper.java:181) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.support.ConfigHelper.lambda$uploadFile$0(ConfigHelper.java:88) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at java.security.AccessController.doPrivileged(AccessController.java:569) ~[?:?]
... 4 more
[2024-03-28T07:03:23,617][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-03-28T07:03:23,618][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-03-28T07:03:23,637][INFO ][o.o.s.l.LogTypeService ] [opensearch-cluster-master-0] Indexing [429] fieldMappingDocs
[2024-03-28T07:03:23,638][INFO ][o.o.s.l.LogTypeService ] [opensearch-cluster-master-0] Indexing [429] fieldMappingDocs
[2024-03-28T07:03:23,652][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-03-28T07:03:23,653][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)

And on the URL I am getting the "OpenSearch Security not initialized" error from the title.

Configuration:
config.yml

  config.yml: |-
    _meta:
      type: "config"
      config_version: 2
    config:
      dynamic:
        http:
          anonymous_auth_enabled: false
          xff:
            enabled: false
            internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
            remoteIpHeader:  'x-forwarded-for'
        authc:
          basic_internal_auth_domain:
            description: "Authenticate via HTTP Basic against internal users database"
            http_enabled: true
            transport_enabled: true
            order: 1 # Must set to 1 to prevent logs flooding with warnings
            http_authenticator:
              type: basic
              challenge: false
            authentication_backend:
              type: internal
          openid_auth_domain:
            http_enabled: true
            transport_enabled: true
            order: 0 # Must set to 0 to precede basic authentication or logs will flood with warnings
            http_authenticator:
              type: openid
              challenge: false
              config:
                openid_connect_idp:
                  enable_ssl: true # Without this JWT cookie will fail and token won't be passed to engine
                  pemtrustedcas_filepath: /usr/share/opensearch/config/root-ca.pem # Full chain pem file for you HTTPS certificate
                  verify_hostnames: false # If SNI certificate is used and hostnames match, set to true
                  subject_key: email # Users will be able to login with email and show it as username
                  roles_key: roles # Use roles from App registrations, do not set to groups.
                openid_connect_url: https://login.microsoftonline.com/<OPENSEARCH_TENANT_ID>/v2.0/.well-known/openid-configuration # App registration IDP metadata file URL
            authentication_backend:
              type: noop # Here we disable authentication backend since we will be using Azure AD for that

opensearch.yml
opensearch.yml: |
cluster.name: opensearch-cluster-1

# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0
#network.bind_host: 0.0.0.0
bootstrap.memory_lock: false
# Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
# Implicitly done if ".singleNode" is set to "true".
# discovery.type: single-node

# Start OpenSearch Security Demo Configuration
# WARNING: revise all the lines below before you go into production
plugins:
  security:
    #disabled: true
    ssl:
      transport:
        pemcert_filepath: esnode.pem
        pemkey_filepath: esnode-key.pem
        pemtrustedcas_filepath: root-ca.pem
        enforce_hostname_verification: false
      http:
        enabled: true
        pemcert_filepath: esnode.pem
        pemkey_filepath: esnode-key.pem
        pemtrustedcas_filepath: root-ca.pem
    nodes_dn:
      - 'CN=kirk,OU=client,O=client,L=test,C=de'
    allow_unsafe_democertificates: false
    allow_default_init_securityindex: true
    cache:
      ttl_minutes: 0
    authcz:
      admin_dn:
        - CN=kirk,OU=client,O=client,L=test,C=de
          #- "CN=kirk,OU=client,O=client,L=test,C=de"
    audit.type: internal_opensearch
    enable_snapshot_restore_privilege: true
    check_snapshot_restore_write_privileges: true
    restapi:
      roles_enabled: ["all_access", "security_rest_api_access"]
    system_indices:
      enabled: true
      indices:
        [
          ".opendistro-alerting-config",
          ".opendistro-alerting-alert*",
          ".opendistro-anomaly-results*",
          ".opendistro-anomaly-detector*",
          ".opendistro-anomaly-checkpoints",
          ".opendistro-anomaly-detection-state",
          ".opendistro-reports-*",
          ".opendistro-notifications-*",
          ".opendistro-notebooks",
          ".opendistro-asynchronous-search-response*",
        ]

opendistro_security.audit.config.disabled_rest_categories: NONE
opendistro_security.audit.config.disabled_transport_categories: NONE
cluster.routing.allocation.disk.threshold_enabled: false

Note

It's related to: OpenSearch OpenID Connect Integration with Azure AD not working

Hi @nawaz,

A few things stand out in your config.yml; could you test with the below:

              openid_auth_domain:
                http_enabled: true
                transport_enabled: true
                order: 0 # Must set to 0 to precede basic authentication or logs will flood with warnings
                http_authenticator:
                  type: openid
                  challenge: false
                  config:
                    subject_key: email # Users will be able to login with email and show it as username
                    roles_key: roles # Use roles from App registrations, do not set to groups.
                    openid_connect_url: https://login.microsoftonline.com/<OPENSEARCH_TENANT_ID>/v2.0/.well-known/openid-configuration # App registration IDP metadata file URL
                authentication_backend:
                  type: noop # Azure AD handles authentication, so the backend is disabled

Best,
mj

I have noticed a few more things:

the "allow_unsafe_democertificates: false" option will prevent you from using "demo" certificates (which I am assuming you are using at the moment).

Do you have roles.yml in your config directory? Would you mind sharing your values.yml file?

Hi @Mantas ,

I don't have roles.yml in values.yml.

values.yml

---
clusterName: "opensearch-cluster"
nodeGroup: "master"

singleNode: false

masterService: "opensearch-cluster-master"

roles:
  - master
  - ingest: "false"
  - data: "false"
  - remote_cluster_client: "false"

replicas: 1

majorVersion: ""

global:
  dockerRegistry: ""

opensearchHome: /usr/share/opensearch

config:
  opensearch.yml: |
    cluster.name: opensearch-cluster-1

    network.host: 0.0.0.0
    bootstrap.memory_lock: false

    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        nodes_dn:
          - 'CN=kirk,OU=client,O=client,L=test,C=de'
        allow_unsafe_democertificates: false
        allow_default_init_securityindex: true
        cache:
          ttl_minutes: 0
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]


extraEnvs: []

envFrom: []

secretMounts: []

hostAliases: []

image:
  repository: "opensearchproject/opensearch"
  tag: ""
  pullPolicy: "IfNotPresent"

podAnnotations: {}

openSearchAnnotations: {}

labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  requests:
    cpu: "1000m"
    memory: "2048Mi"

initResources: {}

sidecarResources: {}

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  automountServiceAccountToken: false

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  enableInitChown: true
  labels:
    enabled: false
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

extraVolumes: []

extraVolumeMounts: []

extraContainers: []

extraInitContainers: []

priorityClassName: ""

antiAffinityTopologyKey: "kubernetes.io/hostname"

antiAffinity: "soft"

customAntiAffinity: {}

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
        - key: poolname
          operator: In
          values:
          - prodopenscrh

podAffinity: {}

topologySpreadConstraints: []

podManagementPolicy: "Parallel"

enableServiceLinks: true

protocol: https
httpPort: 9200
transportPort: 9300
metricsPort: 9600
httpHostPort: ""
transportHostPort: ""


service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  metricsPortName: metrics
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  config:
    securityConfigSecret: ""
    dataComplete: true
    data:
      config.yml: |-
        _meta:
          type: "config"
          config_version: 2
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
              xff:
                enabled: false
                remoteIpHeader:  'x-forwarded-for'
            authc:
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: internal
              openid_auth_domain:
                http_enabled: true
                transport_enabled: true
                http_authenticator:
                  type: openid
                  challenge: false
                  config:
                    openid_connect_idp:
                authentication_backend:
      

      

terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30

livenessProbe: {}

readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

schedulerName: ""

imagePullSecrets: []
nodeSelector: {}
tolerations:
  - key: "sku"
    operator: "Equal"
    value: "d16adsv5"
    effect: "NoSchedule"

ingress:
  enabled: false

  annotations: {}
  ingressLabels: {}
  path: /
  hosts:
    - chart-example.local
  tls: []

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

opensearchLifecycle: {}

lifecycle: {}

keystore: []

networkPolicy:
  create: false

  http:
    enabled: false

fsGroup: ""

sysctl:
  enabled: false

sysctlInit:
  enabled: false

plugins:
  enabled: false
  installList: []

extraObjects: []

Hi @Mantas ,

I tried this but still getting same error.

OpenSearch Security not initialized.

Yes, I am using the demo certificates.

You will have to provide content for roles.yml in your values.yml,
as per: helm-charts/charts/opensearch/values.yaml at main · opensearch-project/helm-charts · GitHub

You should specify the following (in your values.yml):

    data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-

Please see some guidance (demo) for the .yml files here: Modifying the YAML files - OpenSearch Documentation
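For illustration, a populated data: block might look roughly like the below. This is a trimmed sketch following the chart layout shown earlier in this thread; the role mapping and the file contents are placeholders based on the demo files, not values to ship as-is:

```yaml
securityConfig:
  config:
    dataComplete: true
    data:
      roles.yml: |-
        _meta:
          type: "roles"
          config_version: 2
      roles_mapping.yml: |-
        _meta:
          type: "rolesmapping"
          config_version: 2
        all_access:
          reserved: false
          backend_roles:
            - "admin"
      internal_users.yml: |-
        _meta:
          type: "internalusers"
          config_version: 2
```

Note that every file carries its own _meta block with the matching type.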

Best,
mj

In this case you will have to set allow_unsafe_democertificates: true

Hi @Mantas ,

As I mentioned earlier, I am trying to integrate Azure Active Directory using Azure app registrations. I followed multiple articles and came to the conclusion that after the master, data and client pods are up we need to run "./securityadmin.sh -cd ../../../config/opensearch-security/ -rev -cacert ../../../root-ca.pem -cert ../../../kirk.pem -key ../../../kirk.key.pem". After running this command I am getting the below error. Please find the other config files below for reference as well.

It would be very helpful if you could provide proper documentation or help to set up my requirement.

./securityadmin.sh run output

**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to localhost:9200 ... done
Connected as "CN=kirk,OU=client,O=client,L=test,C=de"
ERR: Seems you use a node certificate which is also an admin certificate
     That may have worked with older OpenSearch Security versions but it indicates
     a configuration error and is therefore forbidden now.
OpenSearch Version: 2.11.1
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: opensearch-cluster
Clusterstate: GREEN
Number of nodes: 3
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/opensearch/config/opensearch-security
Will update '/config' with ../../../config/opensearch-security/config.yml 
   SUCC: Configuration for 'config' created or updated
Will update '/roles' with ../../../config/opensearch-security/roles.yml 
   SUCC: Configuration for 'roles' created or updated
ERR: Seems ../../../config/opensearch-security/roles_mapping.yml is not in OpenSearch Security 7 format: java.io.IOException: A version of 2 must have a _meta key for ROLESMAPPING
ERR: Seems ../../../config/opensearch-security/internal_users.yml is not in OpenSearch Security 7 format: java.io.FileNotFoundException: ../../../config/opensearch-security/internal_users.yml (No such file or directory)
ERR: Seems ../../../config/opensearch-security/action_groups.yml is not in OpenSearch Security 7 format: java.io.FileNotFoundException: ../../../config/opensearch-security/action_groups.yml (No such file or directory)
ERR: Seems ../../../config/opensearch-security/tenants.yml is not in OpenSearch Security 7 format: java.io.FileNotFoundException: ../../../config/opensearch-security/tenants.yml (No such file or directory)
ERR: Seems ../../../config/opensearch-security/nodes_dn.yml is not in OpenSearch Security 7 format: java.io.FileNotFoundException: ../../../config/opensearch-security/nodes_dn.yml (No such file or directory)
ERR: Seems ../../../config/opensearch-security/whitelist.yml is not in OpenSearch Security 7 format: java.io.FileNotFoundException: ../../../config/opensearch-security/whitelist.yml (No such file or directory)
ERR: cannot upload configuration, see errors above
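Since securityadmin.sh tries to upload every file in the directory, the missing-file and missing-_meta errors above can be caught up front with a quick check. This is only a sketch: the expected file list is taken from the output above, and the scratch directory with a sample roles.yml just demonstrates the loop (on the pod, CONF_DIR would be /usr/share/opensearch/config/opensearch-security):

```shell
# Check that every file securityadmin.sh expects exists and carries a
# top-level _meta key, before running the upload.
CONF_DIR="$(mktemp -d)"   # scratch dir for demonstration only
printf '_meta:\n  type: "roles"\n  config_version: 2\n' > "$CONF_DIR/roles.yml"

missing=0
for f in config.yml roles.yml roles_mapping.yml internal_users.yml \
         action_groups.yml tenants.yml nodes_dn.yml whitelist.yml; do
  if [ ! -f "$CONF_DIR/$f" ]; then
    echo "MISSING: $f"; missing=1
  elif ! grep -q '^_meta:' "$CONF_DIR/$f"; then
    echo "NO _meta KEY: $f"; missing=1
  fi
done
if [ "$missing" -eq 0 ]; then echo "all files look OK"; else echo "fix the files above first"; fi
```

Running this against the real directory before securityadmin.sh avoids the partial upload seen above.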

config.yml

_meta:
  type: "config"
  config_version: 2
config:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        remoteIpHeader:  'x-forwarded-for'
    authc:
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 1 # Must set to 1 to prevent logs flooding with warnings
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 0 # Must set to 0 to precede basic authentication or logs will flood with warnings
        http_authenticator:
          type: openid
          challenge: false
          config:
            subject_key: email # Users will be able to login with email and show it as username
            roles_key: roles # Use roles from App registrations, do not set to groups.
            openid_connect_url: https://login.microsoftonline.com/<tenant_id>/v2.0/.well-known/openid-configuration # App registration IDP metadata file URL
            skip_users:
              - admin
              - kibanaserver
              - kibanaro
              - logstash
              - readall
              - snapshotrestore
        authentication_backend:
          type: noop # Here we disable authentication backend since we will be using Azure AD for that

roles.yml

complex-role:
  reserved: false
  hidden: false
  cluster_permissions:
  - "read"
  - "cluster:monitor/nodes/stats"
  - "cluster:monitor/task/get"
  index_permissions:
  - index_patterns:
    - "opensearch_dashboards_sample_data_*"
    dls: "{\"match\": {\"FlightDelay\": true}}"
    fls:
    - "~FlightNum"
    masked_fields:
    - "Carrier"
    allowed_actions:
    - "read"
  tenant_permissions:
  - tenant_patterns:
    - "analyst_*"
    allowed_actions:
    - "kibana_all_write"
  static: false
_meta:
  type: "roles"
  config_version: 2        

roles_mapping.yml

all_access:
  reserved: false
  backend_roles:
    - "admin"
  users:
    - 'username@test.com'
  description: "Maps admin to all_access"

opensearch.yml

cluster.name: opensearch-cluster-1
network.host: 0.0.0.0
#network.bind_host: 0.0.0.0
bootstrap.memory_lock: false
#Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
#Implicitly done if ".singleNode" is set to "true".
#discovery.type: single-node

#Start OpenSearch Security Demo Configuration
#WARNING: revise all the lines below before you go into production
plugins:
  security:
    #disabled: true
    ssl:
      transport:
        pemcert_filepath: esnode.pem
        pemkey_filepath: esnode-key.pem
        pemtrustedcas_filepath: root-ca.pem
        enforce_hostname_verification: false
      http:
        enabled: true
        pemcert_filepath: esnode.pem
        pemkey_filepath: esnode-key.pem
        pemtrustedcas_filepath: root-ca.pem
    nodes_dn:
      - 'CN=kirk,OU=client,O=client,L=test,C=de'
    allow_unsafe_democertificates: true
    allow_default_init_securityindex: true
    cache:
      ttl_minutes: 0
    authcz:
      admin_dn:
        - CN=kirk,OU=client,O=client,L=test,C=de
          #- "CN=kirk,OU=client,O=client,L=test,C=de"
    audit.type: internal_opensearch
    enable_snapshot_restore_privilege: true
    check_snapshot_restore_write_privileges: true
    restapi:
      roles_enabled: ["all_access", "security_rest_api_access"]
    system_indices:
      enabled: true
      indices:
        [
          ".opendistro-alerting-config",
          ".opendistro-alerting-alert*",
          ".opendistro-anomaly-results*",
          ".opendistro-anomaly-detector*",
          ".opendistro-anomaly-checkpoints",
          ".opendistro-anomaly-detection-state",
          ".opendistro-reports-*",
          ".opendistro-notifications-*",
          ".opendistro-notebooks",
          ".opendistro-asynchronous-search-response*",
        ]

        #opendistro_security.audit.config.disabled_rest_categories: NONE
        #opendistro_security.audit.config.disabled_transport_categories: NONE
        #cluster.routing.allocation.disk.threshold_enabled: false
######## End OpenSearch Security Demo Configuration ########

######## Start OpenSearch Security Demo Configuration ########
#WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-config", ".plugins-ml-connector", ".plugins-ml-model-group", ".plugins-ml-model", ".plugins-ml-task", ".plugins-ml-conversation-meta", ".plugins-ml-conversation-interactions", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".ql-datasources", ".opendistro-asynchronous-search-response*", ".replication-metadata-store", ".opensearch-knn-models", ".geospatial-ip2geo-data*"]
node.max_local_storage_nodes: 3
######## End OpenSearch Security Demo Configuration ########

@nawaz

Can you please share your full (latest) values.yml file?

please use backticks ("`") to wrap any code.

Thanks,
mj

The error above is caused by the nodes_dn: setting in your opensearch.yml. Is there a reason why you set your admin_dn as a nodes_dn entry?
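For reference, the DN of the admin certificate (under authcz.admin_dn) should not also be listed under nodes_dn; that overlap is what triggers securityadmin's "node certificate which is also an admin certificate" error. With separate certificates, the relevant part of opensearch.yml would look roughly like this (both DNs below are placeholders):

```yaml
plugins:
  security:
    nodes_dn:
      - 'CN=opensearch-node,OU=node,O=example,L=test,C=de'  # transport/node certificate
    authcz:
      admin_dn:
        - 'CN=admin,OU=client,O=example,L=test,C=de'        # admin client certificate only
```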

As per my comment above:

NOTE: please make sure to include the _meta key in all of the files (see samples below):

            _meta:
              type: "roles"
              config_version: 2
            _meta:
              type: "rolesmapping"
              config_version: 2

Best,
mj

There is no particular reason; I just took it from one of the articles I referenced. Should I comment it out?

Hi @Mantas ,

It failed again with the below error.

[2024-04-03T06:45:58,992][INFO ][o.o.s.s.ConfigHelper ] [opensearch-cluster-master-0] Will update 'roles' with /usr/share/opensearch/config/opensearch-security/roles.yml and populate it with empty doc if file missing and populateEmptyIfFileMissing=false
[2024-04-03T06:45:58,994][ERROR][o.o.s.c.ConfigurationRepository] [opensearch-cluster-master-0] Cannot apply default config (this is maybe not an error!)
java.security.PrivilegedActionException: null
at java.security.AccessController.doPrivileged(AccessController.java:573) ~[?:?]
at org.opensearch.security.support.ConfigHelper.uploadFile(ConfigHelper.java:86) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.support.ConfigHelper.uploadFile(ConfigHelper.java:65) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.configuration.ConfigurationRepository.lambda$new$0(ConfigurationRepository.java:147) [opensearch-security-2.11.1.0.jar:2.11.1.0]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: com.fasterxml.jackson.core.JsonParseException: Duplicate field '_meta'
at [Source: (FileReader); line: 27, column: 6]
at com.fasterxml.jackson.core.json.JsonReadContext._checkDup(JsonReadContext.java:243) ~[jackson-core-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.core.json.JsonReadContext.setCurrentName(JsonReadContext.java:237) ~[jackson-core-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:482) ~[jackson-dataformat-yaml-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.core.JsonParser.nextFieldName(JsonParser.java:1091) ~[jackson-core-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer._deserializeContainerNoRecursion(JsonNodeDeserializer.java:536) ~[jackson-databind-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:100) ~[jackson-databind-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:25) ~[jackson-databind-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:323) ~[jackson-databind-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4867) ~[jackson-databind-2.15.2.jar:2.15.2]
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:3208) ~[jackson-databind-2.15.2.jar:2.15.2]
at org.opensearch.security.support.ConfigHelper.fromYamlReader(ConfigHelper.java:166) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.support.ConfigHelper.fromYamlFile(ConfigHelper.java:181) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.support.ConfigHelper.lambda$uploadFile$0(ConfigHelper.java:88) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at java.security.AccessController.doPrivileged(AccessController.java:569) ~[?:?]
... 4 more
[2024-04-03T06:45:59,010][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-04-03T06:45:59,010][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-04-03T06:45:59,034][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2024-04-03T06:45:59,034][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
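The "Duplicate field '_meta'" parse failure above suggests the file being uploaded (roles.yml, per the preceding "Will update 'roles'" log line) contains two top-level _meta blocks. A few lines of plain Python can flag this before restarting the node. This is only a sketch, not OpenSearch tooling; it is deliberately line-based so it needs no YAML library and only inspects unindented "key:" lines:

```python
# Flag duplicate top-level keys (e.g. a second "_meta" block) in a
# security YAML file before handing it to the plugin.

def top_level_keys(text):
    """Yield (line_number, key) for every unindented 'key:' line."""
    for lineno, line in enumerate(text.splitlines(), 1):
        if line and not line[0].isspace() and not line.startswith("#"):
            key, sep, _rest = line.partition(":")
            if sep:
                yield lineno, key.strip()

def find_duplicate_keys(text):
    """Return [(key, first_line, duplicate_line), ...] for repeated keys."""
    seen, dups = {}, []
    for lineno, key in top_level_keys(text):
        if key in seen:
            dups.append((key, seen[key], lineno))
        else:
            seen[key] = lineno
    return dups

roles_yml = """\
complex-role:
  reserved: false
_meta:
  type: "roles"
  config_version: 2
_meta:
  type: "roles"
"""
print(find_duplicate_keys(roles_yml))  # → [('_meta', 3, 6)]
```

Running it over each file in the securityConfig data block points straight at the offending line.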

values.yml

clusterName: "opensearch-cluster"
nodeGroup: "master"

# If discovery.type in the opensearch configuration is set to "single-node",
# this should be set to "true"
# If "true", replicas will be forced to 1
singleNode: false

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest: "false"
  - data: "false"
  - remote_cluster_client: "false"

replicas: 1

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: ""

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch

# such as opensearch.yml and log4j2.properties
config:
  # Values must be YAML literal style scalar / YAML multiline string.
  # <filename>: |
  #   <formatted-value(s)>
  # log4j2.properties: |
  #   status = error
  #
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #
  #   rootLogger.level = info
  #   rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster-1

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0
    #network.bind_host: 0.0.0.0
    bootstrap.memory_lock: false
    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # Implicitly done if ".singleNode" is set to "true".
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        #disabled: true
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
              #nodes_dn:
              #- 'CN=kirk,OU=client,O=client,L=test,C=de'
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        cache:
          ttl_minutes: 0
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
              #- "CN=kirk,OU=client,O=client,L=test,C=de"
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]

            #opendistro_security.audit.config.disabled_rest_categories: NONE
            #opendistro_security.audit.config.disabled_transport_categories: NONE
            #cluster.routing.allocation.disk.threshold_enabled: false
    ######## End OpenSearch Security Demo Configuration ########
  # log4j2.properties:

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

image:
  repository: "opensearchproject/opensearch"
  # override image tag, which is .Chart.AppVersion by default
  tag: ""
  pullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# OpenSearch Statefulset annotations
openSearchAnnotations: {}

# additional labels
labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  requests:
    cpu: "1000m"
    memory: "2048Mi"

initResources: {}
#  limits:
#     cpu: "25m"
#     memory: "128Mi"
#  requests:
#     cpu: "25m"
#     memory: "128Mi"

sidecarResources: {}
#   limits:
#     cpu: "25m"
#     memory: "128Mi"
#   requests:
#     cpu: "25m"
#     memory: "128Mi"

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  # Controls whether or not the Service Account token is automatically mounted to /var/run/secrets/kubernetes.io/serviceaccount
  automountServiceAccountToken: false

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  # Set to false to disable the `fsgroup-volume` initContainer that will update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner.  (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  #
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers: []
  # - name: do-somethings
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort".
# Setting this to custom will use what is passed into customAntiAffinity.
antiAffinity: "soft"

# Allows passing in custom anti-affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity
# Using this parameter requires setting antiAffinity to custom.
customAntiAffinity: {}

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
#nodeAffinity: {}
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
        - key: poolname
          operator: In
          values:
          - test

# This is the pod affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity
podAffinity: {}

# This is the pod topology spread constraints
# https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true

protocol: https
httpPort: 9200
transportPort: 9300
metricsPort: 9600
httpHostPort: ""
transportHostPort: ""


service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  # The IP family and IP families options are to set the behaviour in a dual-stack environment
  # Omitting these values will let the service fall back to whatever the CNI dictates the defaults
  # should be
  #
  # ipFamilyPolicy: SingleStack
  # ipFamilies:
  # - IPv4
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  metricsPortName: metrics
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
    #command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it. This is best option to choose if you want to override all the
    #   existing yml files at once.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it. This is best option to choose if your intention is to
    #   only update a single yml file.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    dataComplete: true
    data:
      config.yml: |-
        _meta:
          type: "config"
          config_version: 2
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
              xff:
                enabled: false
                internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
                remoteIpHeader:  'x-forwarded-for'
            authc:
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                order: 1 # Must set to 1 to prevent logs flooding with warnings
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: internal
              openid_auth_domain:
                http_enabled: true
                transport_enabled: true
                order: 0 # Must set to 0 to precede basic authentication or logs will flood with warnings
                http_authenticator:
                  type: openid
                  challenge: false
                  config:
                    subject_key: email # Users will be able to login with email and show it as username
                    roles_key: roles # Use roles from App registrations, do not set to groups.
                    openid_connect_url: https://login.microsoftonline.com/tenantid/v2.0/.well-known/openid-configuration # App registration IDP metadata file URL
                    skip_users:
                      - admin
                      - kibanaserver
                      - kibanaro
                      - logstash
                      - readall
                      - snapshotrestore
                authentication_backend:
                  type: noop # Here we disable authentication backend since we will be using Azure AD for that

      # internal_users.yml: |-
      roles.yml: |-
        _meta:
          type: "roles"
          config_version: 2
        complex-role:
          reserved: false
          hidden: false
          cluster_permissions:
          - "read"
          - "cluster:monitor/nodes/stats"
          - "cluster:monitor/task/get"
          index_permissions:
          - index_patterns:
            - "opensearch_dashboards_sample_data_*"
            dls: "{\"match\": {\"FlightDelay\": true}}"
            fls:
            - "~FlightNum"
            masked_fields:
            - "Carrier"
            allowed_actions:
            - "read"
          tenant_permissions:
          - tenant_patterns:
            - "analyst_*"
            allowed_actions:
            - "kibana_all_write"
          static: false
        _meta:
          type: "roles"
          config_version: 2        
        
      roles_mapping.yml: |-
        _meta:
          type: "rolesmapping"
          config_version: 2
        all_access:
          reserved: false
          backend_roles:
            - "admin"
          users:
            - 'nawaz.shaikh@presight.ai'
          description: "Maps admin to all_access"
      # action_groups.yml: |-
      # tenants.yml: |-

# How long to wait for opensearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144
#sysctlVmMaxMapCount: 65000

startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30

livenessProbe: {}
  # periodSeconds: 20
  # timeoutSeconds: 5
  # failureThreshold: 10
  # successThreshold: 1
  # initialDelaySeconds: 10
  # tcpSocket:
  #   port: 9200

readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: {}
#tolerations: []
tolerations:
  - key: "sku"
    operator: "Equal"
    value: "adadasda"
    effect: "NoSchedule"

# Enabling this will publicly expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx

  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  ingressLabels: {}
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

opensearchLifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the preStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'

keystore: []
# To add secrets to the keystore:
#  - secretName: opensearch-encryption-key

networkPolicy:
  create: false
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access OpenSearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## opensearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## opensearch-master-transport-client: "true"

  http:
    enabled: false

# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

## Set optimal sysctl's through securityContext. This requires privilege. Can be disabled if
## the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
## Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false
    #- name: vm.max_map_count
    #value: "262200"

## Set optimal sysctl's through privileged initContainer.
sysctlInit:
  enabled: false
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:

## Enable to add 3rd Party / Custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: []
  # - example-fake-plugin

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client_id"
  #               - path: "client_secret"
  #                 objectAlias: "client_secret"
  #     secretObjects:
  #     - data:
  #       - key: client_id
  #         objectName: client_id
  #       - key: client_secret
  #         objectName: client_secret
  #       secretName: argocd-secrets-store
  #       type: Opaque
  #       labels:
  #         app.kubernetes.io/part-of: argocd
  # - |
  #    apiVersion: policy/v1
  #    kind: PodDisruptionBudget
  #    metadata:
  #      name: {{ template "opensearch.uname" . }}
  #      labels:
  #        {{- include "opensearch.labels" . | nindent 4 }}
  #    spec:
  #      minAvailable: 1
  #      selector:
  #        matchLabels:
  #          {{- include "opensearch.selectorLabels" . | nindent 6 }}

Hi @nawaz ,

In your roles.yml you have _meta twice; could you please delete one?

Best,
mj

Here is some sample data, if that helps:

data: 
  config.yml: |-
    _meta:
      type: "config"
      config_version: 2
    config:
      dynamic:
        http:
          anonymous_auth_enabled: false
          xff:
            enabled: false
            internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
            remoteIpHeader:  'x-forwarded-for'
        authc:
          basic_internal_auth_domain:
            description: "Authenticate via HTTP Basic against internal users database"
            http_enabled: true
            transport_enabled: true
            order: 1 # Must set to 1 to prevent logs flooding with warnings
            http_authenticator:
              type: basic
              challenge: false
            authentication_backend:
              type: internal
          openid_auth_domain:
            http_enabled: true
            transport_enabled: true
            order: 0 # Must set to 0 to precede basic authentication or logs will flood with warnings
            http_authenticator:
              type: openid
              challenge: false
              config:
                subject_key: email # Users will be able to login with email and show it as username
                roles_key: roles # Use roles from App registrations, do not set to groups.
                openid_connect_url: https://login.microsoftonline.com/tenantid/v2.0/.well-known/openid-configuration # App registration IDP metadata file URL
                skip_users:
                  - admin
                  - kibanaserver
                  - kibanaro
                  - logstash
                  - readall
                  - snapshotrestore
            authentication_backend:
              type: noop # Here we disable authentication backend since we will be using Azure AD for that
  internal_users.yml: |-
        ---
        # This is the internal user database
        # The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh

        _meta:
          type: "internalusers"
          config_version: 2

        # Define your internal users here

        ## Demo users

        admin:
          hash: "$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG"
          reserved: true
          backend_roles:
          - "admin"
          description: "Demo admin user"

        anomalyadmin:
          hash: "$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3."
          reserved: false
          opendistro_security_roles:
          - "anomaly_full_access"
          description: "Demo anomaly admin user, using internal role"

        kibanaserver:
          hash: "$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H."
          reserved: true
          description: "Demo OpenSearch Dashboards user"

        kibanaro:
          hash: "$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC"
          reserved: false
          backend_roles:
          - "kibanauser"
          - "readall"
          attributes:
            attribute1: "value1"
            attribute2: "value2"
            attribute3: "value3"
          description: "Demo OpenSearch Dashboards read only user, using external role mapping"

        logstash:
          hash: "$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2"
          reserved: false
          backend_roles:
          - "logstash"
          description: "Demo logstash user, using external role mapping"

        readall:
          hash: "$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2"
          reserved: false
          backend_roles:
          - "readall"
          description: "Demo readall user, using external role mapping"

        snapshotrestore:
          hash: "$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W"
          reserved: false
          backend_roles:
          - "snapshotrestore"
          description: "Demo snapshotrestore user, using external role mapping"
    
  roles.yml: |-
        ---
        _meta:
          type: "roles"
          config_version: 2
        alerting_ack_alerts:
          reserved: true
          cluster_permissions:
          - "cluster:admin/opendistro/alerting/alerts/*"
        alerting_full_access:
          reserved: true
          cluster_permissions:
          - "cluster_monitor"
          - "cluster:admin/opendistro/alerting/*"
          index_permissions:
          - index_patterns:
            - "*"
            allowed_actions:
            - "indices_monitor"
            - "indices:admin/aliases/get"
            - "indices:admin/mappings/get"
        anomaly_read_access:
          reserved: true
          cluster_permissions:
          - "cluster:admin/opendistro/ad/detector/info"
          - "cluster:admin/opendistro/ad/detector/search"
          - "cluster:admin/opendistro/ad/detectors/get"
          - "cluster:admin/opendistro/ad/result/search"
        anomaly_full_access:
          reserved: true
          cluster_permissions:
          - "cluster_monitor"
          - "cluster:admin/opendistro/ad/*"
          index_permissions:
          - index_patterns:
            - "*"
            allowed_actions:
            - "indices_monitor"
            - "indices:admin/aliases/get"
            - "indices:admin/mappings/get"
        reports_instances_read_access:
          reserved: true
          cluster_permissions:
          - "cluster:admin/opendistro/reports/instance/list"
          - "cluster:admin/opendistro/reports/instance/get"
          - "cluster:admin/opendistro/reports/menu/download"
        reports_read_access:
          reserved: true
          cluster_permissions:
          - "cluster:admin/opendistro/reports/definition/get"
          - "cluster:admin/opendistro/reports/definition/list"
          - "cluster:admin/opendistro/reports/instance/list"
          - "cluster:admin/opendistro/reports/instance/get"
          - "cluster:admin/opendistro/reports/menu/download"
        reports_full_access:
          reserved: true
          cluster_permissions:
          - "cluster:admin/opendistro/reports/definition/create"
          - "cluster:admin/opendistro/reports/definition/update"
          - "cluster:admin/opendistro/reports/definition/on_demand"
          - "cluster:admin/opendistro/reports/definition/delete"
          - "cluster:admin/opendistro/reports/definition/get"
          - "cluster:admin/opendistro/reports/definition/list"
          - "cluster:admin/opendistro/reports/instance/list"
          - "cluster:admin/opendistro/reports/instance/get"
          - "cluster:admin/opendistro/reports/menu/download"
        SYSTEME_ABC-XYZ:
          reserved: false
          hidden: false
          cluster_permissions:
          - 'cluster_composite_ops'
          index_permissions:
          - index_patterns:
            - 'abc*logs-socles*'
            fls: []
            dls: ""
            masked_fields: []
            allowed_actions:
            - 'read'
            - 'kibana_all_read'
          - index_patterns:
            - 'abc*logs-2*'
            fls: []
            dls: ""
            masked_fields: []
            allowed_actions:
            - 'read'
            - 'kibana_all_read'
          - index_patterns:
            - '.kibana'
            fls: []
            dls: ""
            masked_fields: []
            allowed_actions:
            - 'read'
            - 'kibana_all_read'
    
  roles_mapping.yml: |-
        ---
        # In this file users, backendroles and hosts can be mapped to Security roles.
        # Permissions for OpenSearch roles are configured in roles.yml

        _meta:
          type: "rolesmapping"
          config_version: 2

        # Define your roles mapping here

        ## Demo roles mapping

        all_access:
          reserved: false
          backend_roles:
          - "admin"
          description: "Maps admin to all_access"

        own_index:
          reserved: false
          users:
          - "*"
          description: "Allow full access to an index named like the username"

        logstash:
          reserved: false
          backend_roles:
          - "logstash"

        kibana_user:
          reserved: false
          backend_roles:
          - "kibanauser"
          description: "Maps kibanauser to kibana_user"

        readall:
          reserved: false
          backend_roles:
          - "readall"

        manage_snapshots:
          reserved: false
          backend_roles:
          - "snapshotrestore"

        kibana_server:
          reserved: true
          users:
          - "kibanaserver"
    
  action_groups.yml: |-
    _meta:
      type: "actiongroups"
      config_version: 2

  tenants.yml: |-
        ---
        _meta:
          type: "tenants"
          config_version: 2

        # Define your tenants here

        ## Demo tenants
        admin_tenant:
          reserved: false
          description: "Demo tenant for admin user"
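As a quick sanity check before redeploying, a small script along these lines can flag a top-level key that appears twice, which is exactly the duplicated _meta problem above. This is an illustrative helper, not part of the chart or the security plugin, and it does a naive line scan rather than full YAML parsing:

```python
def top_level_duplicates(yaml_text):
    """Return top-level mapping keys that appear more than once.

    Naive scan: a top-level key is any non-indented, non-comment line
    containing ':'. Good enough to catch a doubled '_meta:' block.
    """
    seen, dups = set(), []
    for line in yaml_text.splitlines():
        if line and not line[0].isspace() and not line.startswith("#") and ":" in line:
            key = line.split(":", 1)[0].strip()
            if key in seen and key not in dups:
                dups.append(key)
            seen.add(key)
    return dups


# Shape of the broken roles.yml from the original post: _meta twice.
broken_roles = """\
_meta:
  type: "roles"
  config_version: 2
complex-role:
  reserved: false
_meta:
  type: "roles"
  config_version: 2
"""
print(top_level_duplicates(broken_roles))  # ['_meta']
```

Running it over each file under securityConfig.config.data before a helm upgrade catches this class of mistake without waiting for the plugin to fail at startup.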

Hi @Mantas ,

I was on leave, so I didn't respond. After applying your config I am getting the error below.

[2024-04-24T08:09:00,123][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] java.lang.IllegalArgumentException: No implementation class found for INTERNALUSERS and config version 0

[2024-04-24T08:09:00,124][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] Exception while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
java.lang.IllegalArgumentException: No implementation class found for INTERNALUSERS and config version 0
at org.opensearch.security.securityconf.impl.SecurityDynamicConfiguration.fromJson(SecurityDynamicConfiguration.java:84) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.configuration.ConfigurationLoaderSecurity7.toConfig(ConfigurationLoaderSecurity7.java:339) ~[opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.configuration.ConfigurationLoaderSecurity7$2.onResponse(ConfigurationLoaderSecurity7.java:255) [opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.security.configuration.ConfigurationLoaderSecurity7$2.onResponse(ConfigurationLoaderSecurity7.java:244) [opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:113) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:107) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.get.TransportMultiGetAction$1.finishHim(TransportMultiGetAction.java:168) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.get.TransportMultiGetAction$1.onResponse(TransportMultiGetAction.java:151) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.get.TransportMultiGetAction$1.onResponse(TransportMultiGetAction.java:143) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:113) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:107) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$2.handleResponse(TransportSingleShardAction.java:298) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$2.handleResponse(TransportSingleShardAction.java:284) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.security.transport.SecurityInterceptor$RestoringTransportResponseHandler.handleResponse(SecurityInterceptor.java:396) [opensearch-security-2.11.1.0.jar:2.11.1.0]
at org.opensearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1516) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundHandler.doHandleResponse(InboundHandler.java:411) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundHandler.handleResponse(InboundHandler.java:403) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundHandler.messageReceived(InboundHandler.java:168) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundHandler.inboundMessage(InboundHandler.java:123) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.TcpTransport.inboundMessage(TcpTransport.java:770) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:175) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:150) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:115) [opensearch-2.11.1.jar:2.11.1]
at org.opensearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:95) [transport-netty4-client-2.11.1.jar:2.11.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) [netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1471) [netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1334) [netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1383) [netty-handler-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) [netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) [netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) [netty-codec-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.100.Final.jar:4.1.100.Final]

at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.100.Final.jar:4.1.100.Final]

at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.100.Final.jar:4.1.100.Final]

at java.lang.Thread.run(Thread.java:833) [?:?]

[2024-04-24T08:09:00,141][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] java.lang.IllegalArgumentException: No implementation class found for ROLESMAPPING and config version 0
[2024-04-24T08:09:00,142][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-0] Exception while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)

Hi @nawaz, it looks like the configurations for INTERNALUSERS and ROLESMAPPING were not applied. Could you check your logs for errors from when the cluster starts up and see if there are any clues?

Would you mind sharing your latest and full values.yml file?
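For comparison, in the official opensearch Helm chart the security configuration files (roles.yml, internal_users.yml, roles_mapping.yml, and the rest) are normally mounted into the pod via the `securityConfig` section of values.yml. A minimal sketch of that section is below; the secret name is hypothetical and the exact keys can vary between chart versions, so verify them against the values.yaml shipped with your chart:

```yaml
# Sketch only -- check against the values.yaml of your opensearch chart version.
securityConfig:
  enabled: true
  # Directory the security plugin reads its YAML files from inside the pod.
  # This matches the path in the FileNotFoundException in your master logs.
  path: "/usr/share/opensearch/config/opensearch-security"
  config:
    # Hypothetical name of a Kubernetes Secret containing the security
    # config files to mount at the path above.
    securityConfigSecret: "opensearch-security-config"
    # Only set this to true if the secret contains ALL required files;
    # otherwise the plugin will look for the missing ones on disk.
    dataComplete: true
```

If that section is missing or points at a secret that doesn't exist, the plugin finds no roles.yml on disk, which would explain the FileNotFoundException and the "Security not initialized" state.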

Thanks,
mj