I have been making progress getting ODFE 1.13.2 configured with respect to security. I have successfully set up LDAP and mapped a user (testadmin) who is in a specific group (odfe-admins) to the built-in “all_access” role. By successful, I mean that I can see the kibana webpage, log in, and see the UI, as well as use the securityadmin tool to apply the security settings.

My next step is to move the kibana connection to elasticsearch over to PKI. For the moment, I have kibana and elasticsearch (both installed from the ODFE repos on CentOS 8) on the same machine. That seems to have gone pretty well after a few not-too-hard-to-fix issues. (NOTE: I have replaced my actual domain with testdomain.com in the rest of this post, but I am using the ‘real’ domain in my configs.) In kibana.yml, I have changed:
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
to
elasticsearch.ssl.certificate: /etc/kibana/ssl/kibana.crt
elasticsearch.ssl.key: /etc/kibana/ssl/kibana.key
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/ssl/ca.crt"]
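For completeness, the connection itself in kibana.yml also points at elasticsearch over HTTPS, roughly like this (a sketch; the exact URL is an assumption based on my single-machine setup):
# Hostname/port here are assumed from my layout; verificationMode "full" also checks the hostname
elasticsearch.hosts: ["https://testodfe.testdomain.com:9200"]
elasticsearch.ssl.verificationMode: full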
The cn of the certificate issued above is testodfe.testdomain.com. In roles_mapping.yml, I have changed the kibana_server role to the following:
kibana_server:
  reserved: true
  hosts: ['testodfe.testdomain.com']
  users:
    - "kibanaserver"
    - "testodfe.testdomain.com"
When I restart kibana with the above settings applied (and of course, after running securityadmin.sh), I immediately start getting this in the ES cluster log:
[2021-05-29T15:16:25,938][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [testodfe.testdomain.com] Tenant global_tenant is not allowed for user testodfe.testdomain.com
[2021-05-29T15:16:28,439][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [testodfe.testdomain.com] Tenant global_tenant is not allowed for user testodfe.testdomain.com
[2021-05-29T15:16:30,942][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [testodfe.testdomain.com] Tenant global_tenant is not allowed for user testodfe.testdomain.com
It seems like it is finding the right user, as the log correctly identifies the user ‘testodfe.testdomain.com’. As a test, I checked what error I get if I use simple authentication and specify a bad user (I get ‘Authentication finally failed for kibanaserver2’ instead). If I skew the config the same way for PKI and intentionally change the roles_mapping to some other host (testodfe2.testdomain.com, for example), I get:
No cluster-level perm match for User [name=testodfe2.testdomain.com, backend_roles=[], requestedTenant=null] Resolved [aliases=[*], allIndices=[*], types=[*], originalRequested=[*], remoteIndices=[]] [Action [cluster:monitor/nodes/info]] [RolesChecked []]. No permissions for [cluster:monitor/nodes/info]
So, it seems like it is correctly reading the cn from the cert and matching the role in roles_mapping. However, I still get the ‘Tenant global_tenant is not allowed for user’ warning, even though I think it is matching the exact same mapping that the simple-authentication kibanaserver user does.
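In case it helps with diagnosing, here is a sketch of how the resolved identity can be checked directly against the security plugin's authinfo endpoint, using the same client cert (host/port are assumed from my single-machine layout):
# The response includes user_name, backend_roles, roles, and a tenants map,
# which, as I understand it, is where global_tenant should show up
curl --cacert /etc/kibana/ssl/ca.crt \
  --cert /etc/kibana/ssl/kibana.crt \
  --key /etc/kibana/ssl/kibana.key \
  'https://testodfe.testdomain.com:9200/_opendistro/_security/authinfo?pretty'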
For reference, my authc in config.yml looks like this:
clientcert_auth_domain:
  description: "Authenticate via SSL client certificates"
  http_enabled: true
  transport_enabled: true
  order: 2
  http_authenticator:
    type: clientcert
    config:
      username_attribute: cn # optional; if omitted, the DN becomes the username
    challenge: false
  authentication_backend:
    type: noop
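If more detail would help, the security plugin's own logging can be turned up to DEBUG via the cluster settings API (a sketch; the logger name is inferred from the c.a.o.s package prefix in the log lines above, and the admin cert paths are placeholders for wherever your securityadmin.sh certs live):
# Admin cert/key paths below are placeholders, not my actual layout
curl --cacert /etc/kibana/ssl/ca.crt \
  --cert /path/to/admin.crt \
  --key /path/to/admin.key \
  -X PUT 'https://testodfe.testdomain.com:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"logger.com.amazon.opendistroforelasticsearch.security": "DEBUG"}}'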
I am running out of ideas as to what is causing the issue. Does anyone have any suggestions? Thanks!