OpenSearch integration with Keycloak using the k8s Opster operator

**On behalf of a user from Slack**

"Hi Team,
I am trying to integrate Keycloak with OpenSearch using the k8s Opster operator. I have added dashboards.additionalConfig and config.yml (openid_auth_domain).

dashboards.additionalConfig:
      logging.verbose: "true"
      server.name: my-cluster1-dashboards
      server.host: 0.0.0.0
      server.ssl.enabled: "true"
      server.ssl.key: /usr/share/opensearch-dashboards/config/opensearch.example.org.key
      server.ssl.certificate: /usr/share/opensearch-dashboards/config/opensearch.example.org.cert
      opensearch_security.auth.type: openid
      opensearch_security.openid.connect_url: https://x.x.x.x/.well-known/openid-configuration
      opensearch_security.openid.base_redirect_url: http://osdash.test.com/
      opensearch_security.openid.client_id: xxxxx
      opensearch_security.openid.client_secret: xxxxxxxxxxxxxxxxxxxx
      opensearch_security.openid.scope: openid profile email
      opensearch_security.openid.header: Authorization
      opensearch_security.openid.verify_hostnames: "false"
      opensearch_security.openid.trust_dynamic_headers: "true"
      opensearch.hosts: "https://my-cluster1:9200/"
      opensearch.username: "kibanaserver"
      opensearch.password: "kibanaserver"
      opensearch.ssl.verificationMode: none
      opensearch.requestHeadersWhitelist:  |
        ["securitytenant","Authorization"]
config.yml
        config:
          dynamic:
            authz: {}
            authc:
              openid_auth_domain:
                http_enabled: true
                transport_enabled: true
                order: 0
                http_authenticator:
                  type: openid
                  challenge: false
                  config:
                    openid_connect_idp:
                      enable_ssl: true
                      verify_hostnames: false
                      pemtrustedcas_filepath: "https://x.x.x.x/protocol/openid-connect/certs"
                    subject_key: preferred_username
                    roles_key: roles
                    openid_connect_url: "https://x.x.x.x/.well-known/openid-configuration"
                authentication_backend:
                  type: noop

The OpenSearch pods are running fine and cluster health is 100%, but the dashboards pod is not running. The dashboards logs show:

Unable to retrieve version information from OpenSearch nodes.
{
  "level": "unavailable",
  "summary": "[59] services are unavailable",
  "detail": "See the status page for more information",
  "meta": {
    "affectedServices": {
      "opensearch": {
        "level": "unavailable",
        "summary": "Waiting for OpenSearch",
        "meta": {
          "warningNodes": [],
          "incompatibleNodes": []
        }
      }
      ...
    }
  }
}

"

Would you mind sharing your Opster config file with the node configuration or the full cluster configuration?

Hi @pablo,

Please refer to the YAML files below.

opensearch.yaml

apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-cluster1
  namespace: opensearch
spec:
  security:
    config:
      adminCredentialsSecret:
        name: a-admin-credentials-secret
      securityConfigSecret:
        name: a-securityconfig-secret
    tls:
      transport:
        generate: true
      http:
        generate: true
  general:
    serviceName: my-cluster1
    version: 2.8.0
    pluginsList: ["repository-s3"]
    drainDataNodes: true
    setVMMaxMapCount: true
  dashboards:
    additionalConfig:
      logging.verbose: "true"
      server.name: my-cluster1-dashboards
      server.host: 0.0.0.0
      server.ssl.enabled: "true"
      server.ssl.key: /usr/share/opensearch-dashboards/config/opensearch.example.org.key
      server.ssl.certificate: /usr/share/opensearch-dashboards/config/opensearch.example.org.cert
      opensearch_security.auth.type: "openid"
      opensearch_security.auth.multiple_auth_enabled: "True"
      opensearch_security.openid.connect_url: https://x.x.x.x/.well-known/openid-configuration
      opensearch_security.openid.base_redirect_url: <dashboard-url>
      opensearch_security.openid.client_id: xxxxxx
      opensearch_security.openid.client_secret: xxxxxxxxxx
      opensearch_security.openid.scope: openid profile email
      opensearch_security.openid.header: Authorization
      opensearch_security.openid.trust_dynamic_headers: "true"
      opensearch.optimizedHealthcheckId: "my-cluster1"
      opensearch.hosts: "https://<opensearch-svc-ip>:9200"
      opensearch.username: "kibanaserver"
      opensearch.password: "kibanaserver"
      opensearch_security.openid.verify_hostnames: "false"
      opensearch_security.cookie.secure: "false"
      opensearch.ssl.verificationMode: none
      opensearch.requestHeadersWhitelist:  |
        ["securitytenant","Authorization","security_tenant"]
    opensearchCredentialsSecret:
      name: a-admin-credentials-secret
    enable: true
    version: 2.8.0
    replicas: 1
    resources:
      requests:
         memory: "512Mi"
         cpu: "200m"
      limits:
         memory: "512Mi"
         cpu: "200m"
  nodePools:
    - component: masters
      replicas: 3
      diskSize: "5Gi"
      jvm: "-Dopensearch.allow_insecure_settings=true"
      resources:
         requests:
            memory: "2Gi"
            cpu: "500m"
         limits:
            memory: "3Gi"
            cpu: "1000m"
      roles:
        - "data"
        - "master"
        - "ingest"
#      persistence:
#        pvc:
#          storageClass: mystorageclass
#          accessModes: # You can change the accessMode
#          - ReadWriteOnce

admin-credentials-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: a-admin-credentials-secret
  namespace: opensearch
type: Opaque
data:
  # admin
  username: YWRtaW4=
  # admin123
  password: YWRtaW4xMjMK

securityconfig-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: a-securityconfig-secret
  namespace: opensearch
type: Opaque
stringData:
      internal_users.yml: |-
        _meta:
          type: "internalusers"
          config_version: 2
        admin:
          hash: "$2a$12$vZHSkfDt1zB/rSLZ/GtIsuEpQ2O9aP8QorCXVSQMmptAD/9XpPSnK"
          reserved: true
          backend_roles:
          - "admin"
          description: "Demo admin user"

        anomalyadmin:
          hash: "$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3."
          reserved: false
          opendistro_security_roles:
          - "anomaly_full_access"
          description: "Demo anomaly admin user, using internal role"

        kibanaserver:
          hash: "$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H."
          reserved: true
          description: "Demo OpenSearch Dashboards user"

        kibanaro:
          hash: "$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC"
          reserved: false
          backend_roles:
          - "kibanauser"
          - "readall"
          attributes:
            attribute1: "value1"
            attribute2: "value2"
            attribute3: "value3"
          description: "Demo OpenSearch Dashboards read only user, using external role mapping"

        logstash:
          hash: "$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2"
          reserved: false
          backend_roles:
          - "logstash"
          description: "Demo logstash user, using external role mapping"

        readall:
          hash: "$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2"
          reserved: false
          backend_roles:
          - "readall"
          description: "Demo readall user, using external role mapping"

        snapshotrestore:
          hash: "$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W"
          reserved: false
          backend_roles:
          - "snapshotrestore"
          description: "Demo snapshotrestore user, using external role mapping"
      config.yml: |-
        _meta:
          type: "config"
          config_version: 2
        config:
          dynamic:
            authz: {}
            authc:
              basic_internal_auth_domain:
                http_enabled: true
                transport_enabled: true
                order: 0
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: intern

              openid_auth_domain:
                http_enabled: true
                transport_enabled: true
                order: 1
                http_authenticator:
                  type: openid
                  challenge: false
                  config:
                    openid_connect_idp:
                      enable_ssl: true
                      verify_hostnames: false
                      pemtrustedcas_filepath: "https://x.x.x.x/protocol/openid-connect/certs"
                    subject_key: preferred_username
                    roles_key: roles
                    openid_connect_url: "https://x.x.x.x/.well-known/openid-configuration"
                authentication_backend:
                  type: noop

The pemtrustedcas_filepath must point to Keycloak’s SSL certificate file on disk, not to a URL.
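For example, something along these lines, where the path is only a placeholder for wherever the certificate ends up inside the OpenSearch pods:

    pemtrustedcas_filepath: "/usr/share/opensearch/config/keycloak-ca.pem"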

Also, you don’t need this part.

@pablo,

Kindly suggest a way to copy the Keycloak SSL certificate file into the OpenSearch pods and point pemtrustedcas_filepath to it.

Since it is the Opster operator, I am not sure how to copy it.

Does this operator support adding custom init containers?

@rmssath I’m not an Opster expert; however, have you tried additional volumes?
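For reference, a minimal sketch of what that could look like in the OpenSearchCluster spec, assuming a pre-created secret named keycloak-ca-cert that holds the Keycloak CA as keycloak-ca.pem (the secret name, key, and mount path are placeholders, and the exact additionalVolumes fields may vary between operator versions):

  general:
    additionalVolumes:
      # mount the secret into every OpenSearch node under the given path
      - name: keycloak-ca
        path: /usr/share/opensearch/config/keycloak-ca
        secret:
          secretName: keycloak-ca-cert

pemtrustedcas_filepath in config.yml would then point to /usr/share/opensearch/config/keycloak-ca/keycloak-ca.pem.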

@pablo,

It is working now after adding the additional volumes.

But the Keycloak user does not have the privileges to view the logs/dashboards. This user is mapped to the role “own_index”.

How do I map roles for users who authenticate through Keycloak?

@rmssath You need to take the Keycloak roles assigned to the user and map them as backend roles in the roles_mapping.yml file.

Could you share your current roles_mapping.yml?

@pablo,

Please find the roles_mapping.yml below.

roles_mapping.yml: |-
  _meta:
    type: "rolesmapping"
    config_version: 2

  # Define your roles mapping here

  ## Demo roles mapping

  all_access:
    reserved: false
    backend_roles:
    - "admin"
    description: "Maps admin to all_access"

  own_index:
    reserved: false
    users:
    - "*"
    description: "Allow full access to an index named like the username"

  logstash:
    reserved: false
    backend_roles:
    - "logstash"

  kibana_user:
    reserved: false
    backend_roles:
    - "kibanauser"
    description: "Maps kibanauser to kibana_user"

  readall:
    reserved: false
    backend_roles:
    - "readall"

  manage_snapshots:
    reserved: false
    backend_roles:
    - "snapshotrestore"

  kibana_server:
    reserved: true
    users:
    - "kibanaserver"

@rmssath As per the shared file, there are no Keycloak roles configured.
The reason why users get own_index assigned in OpenSearch is that all authenticated users are mapped to that role.

If you’d like to add your Keycloak user to the all_access group, you should create a role in Keycloak, assign it to the user, and configure it in roles_mapping.yml or via OpenSearch Dashboards.

For example, if the Keycloak role is called custom_role_1, the mapping should look like the one below:

  all_access:
    reserved: false
    backend_roles:
    - "admin"
    - "custom_role_1"
    description: "Maps admin to all_access"

Hi @pablo,

In Keycloak, users are mapped by default to the role default-roles-test in our case.

We configured default-roles-test as a backend role under all_access in the OpenSearch roles_mapping.yml, but we are unable to access any dashboards.

We also created a new role in Keycloak and mapped the user to it, then configured the same in roles_mapping.yml. The same issue occurs again.
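
For reference, the all_access entry we tried looks roughly like the sketch below (the description text is just illustrative):

  all_access:
    reserved: false
    backend_roles:
    - "admin"
    - "default-roles-test"
    description: "Maps admin and the Keycloak default role to all_access"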

@rmssath Did you configure user realm roles in Keycloak’s client?

@pablo

It is working fine after creating the user realm roles mapper.

In the cases above, Keycloak was running over HTTP. After changing Keycloak to HTTPS, we are getting 401 Unauthorized.

When we hit the dashboards URL, it redirects properly to the Keycloak login. After a successful login, we get this error while being redirected from Keycloak back to OpenSearch Dashboards.

What do you mean by that? According to the config in your initial message, you were already using HTTPS in Keycloak.