SAML/Okta login to Kibana not working with roles

I’m using v1.3.0 of the Docker images.

I can successfully log in to Kibana if I configure Kibana/ES for SAML/Okta login without roles:

# config.yml
    authc:
      saml_auth_domain:
        http_enabled: true
        transport_enabled: false
        order: 1
        http_authenticator:
          type: saml
          challenge: true
          config:
            idp:
              entity_id: <entity_id>
              metadata_file: okta-metadata.xml
            sp:
              entity_id: kibana-saml
            kibana_url: <kibana_url>
            #roles_key: Roles
            exchange_key: <exchange_key>
        authentication_backend:
          type: noop
    # removed authz

But if I add roles_key: Roles and change roles_mapping.yml to

# roles_mapping.yml
    kibana_user:
      reserved: false
      backend_roles:
      - "kibanauser"
      - "myrole2"
      description: "Maps kibanauser to kibana_user"
      users:
      - "*"

and try to log in, I get redirected to the login page again, without an error message in the UI.

In the logs I can see that the roles are returned by Okta and parsed correctly by the security plugin.

    [TRACE][c.a.o.s.a.BackendRegistry] [myhost] Try to extract auth creds from saml http authenticator
    [DEBUG][c.a.o.s.a.BackendRegistry] [myhost] Rest user 'User [name=myusername, backend_roles=[myrole1, myrole2, .... ], requestedTenant=null]' is authenticated

It seems like the roles are not evaluated.
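What the security plugin actually maps can also be checked directly over the ES REST API (a sketch; the host, port, and admin basic-auth credentials are assumptions, and this requires that an internal basic-auth domain is still enabled on the REST layer):

    # Show the authenticated user's backend_roles and the mapped security roles
    curl -s -k -u admin:admin "https://localhost:9200/_opendistro/_security/authinfo?pretty"

    # Show the kibana_user role mapping as currently loaded in the security index
    curl -s -k -u admin:admin "https://localhost:9200/_opendistro/_security/api/rolesmapping/kibana_user?pretty"

If `backend_roles` shows the Okta roles but `roles` does not include `kibana_user`, the mapping itself is not being applied.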

Kibana is configured with

# kibana.yml
    opendistro_security.auth.type: "saml"
    server.xsrf.whitelist: ["/_opendistro/_security/saml/acs", "/_opendistro/_security/saml/logout"]

For debugging I added

# log4j2.properties
    logger.token.name = com.amazon.dlic.auth.http.saml.Token
    logger.token.level = debug
    logger.opendistro_security_br.name = com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry
    logger.opendistro_security_br.level = trace

# elasticsearch.yml
    opendistro_security.audit.config.log4j.logger_name: auditlogger
    opendistro_security.audit.config.log4j.level: INFO
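Log levels can also be raised at runtime, without editing files and restarting, via the cluster settings API (a sketch; the host and admin credentials are placeholders for your environment):

    curl -s -k -u admin:admin -X PUT "https://localhost:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{"transient": {"logger.com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry": "trace"}}'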

What am I doing wrong? How can I debug this any further?

Hi Clifford,

If you remember, we recently discussed this on my post as well (Unable to configure SAML with Jumpcloud and Opendistro); I was using Jumpcloud instead of Okta.
However, I wanted to try your approach and see whether it works.

  1. I created a new Okta trial account; I will upload pictures of the configuration.
  2. On the Opendistro side, my configuration is exactly the same, except for the secrets of course (a 32-character exchange_key, Okta's IdP entity_id, kibana_url). I tried “roles_key: Roles” both commented and uncommented.
  3. I have not changed roles_mapping.yml.
  4. I have also followed these two articles:
    Add Single Sign-On (SSO) to Open Distro for Elasticsearch Kibana using SAML and Okta | AWS Open Source Blog
    Add Single Sign-On to Open Distro for Elasticsearch Kibana Using SAML and ADFS | AWS Open Source Blog
    especially the second one, on the role-mapping part, to ensure that my Okta group (the one containing my user) has the same name as the backend role for the all_access role. (Not sure if that can help with your Roles issue.)

However, I am still not able to log in to Kibana with SAML. I am using Opendistro version 1.4.

Do you perhaps have any additional configuration, or any advice on what I should look for, since I am also using Okta?
I also cannot find any useful logs in the ES logs, even though I added additional log levels.


Hi Jovan,

sorry for not answering earlier.

I also created a test account to debug the role permission issue I have.

To my surprise, I also can't log in to Kibana when it is configured to use this test account. I compared the configuration with our working production account and couldn't find any differences. This is really weird.


Hi Clifford,

I have finally managed to set up SAML. I created a new topic that can be used as a reference for the setup.
My roles mapping also works properly.
I assume my biggest issue was not having the same ES config on all nodes.
However, you can try to look into it here; I hope it will help you!

Hi Jovan,

glad you worked it out and thank you for your detailed description of the solution.

Your remark

IMPORTANT NOTE: make sure that you have the same configuration on all 3 Elasticsearch nodes; this cost me serious time. The security script does update your security config index, but for some reason you still need to have the configuration on all nodes.

made me think. When I tried to enable roles, I bootstrapped a cluster on our test environment with the current working configuration (roles disabled), changed config.yml to enable roles on one of the nodes, and executed the admin script.
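For completeness, the admin script invocation I mean is roughly the following (a sketch based on the default Open Distro Docker layout; the paths and certificate names are assumptions and will differ if you use custom admin certificates):

    cd /usr/share/elasticsearch/plugins/opendistro_security/tools
    ./securityadmin.sh \
      -cd ../securityconfig/ \
      -icl -nhnv \
      -cacert ../../../config/root-ca.pem \
      -cert ../../../config/kirk.pem \
      -key ../../../config/kirk-key.pem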

So after reading your remark, I bootstrapped the cluster with roles enabled from the beginning. Sadly, this didn't solve the issue for me, so I still have to investigate further.


Since we upgraded to 1.12.0, it works.