I was seeing inconsistent access to Kibana for LDAP users. When I queried the .kibana index through the REST API on each node, behavior differed across the cluster: two nodes responded with a permissions error and two nodes returned the index details. The same LDAP user is resolved to different roles depending on which node handles the request. The securityadmin.sh tool was run on the es01 node to add and update the LDAP configuration. Since I have multiple Elasticsearch hosts defined in kibana.yml, a short-term workaround was to limit that list to the hosts that resolve LDAP roles correctly.
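For reference, the securityadmin.sh run on es01 looked roughly like the sketch below. The config directory and certificate paths are assumptions based on a default Open Distro layout (kirk.pem is the demo admin cert); adjust them to your install. Note that securityadmin.sh uploads the security configuration to a cluster-wide index, so in principle every node should pick up the same LDAP settings.

```shell
# Hypothetical invocation; paths are assumptions for a default
# Open Distro install. -cd points at the security config directory,
# -icl ignores the cluster name, and -nhnv skips hostname
# verification against the admin certificate.
./securityadmin.sh -cd ../securityconfig/ \
  -cacert ../../../config/root-ca.pem \
  -cert ../../../config/kirk.pem \
  -key ../../../config/kirk-key.pem \
  -icl -nhnv
```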
root@test:~# curl -k https://es01:9200/_opendistro/_security/authinfo?pretty -u testuser
Enter host password for user 'testuser':
{
  "user" : "User [name=Joe, roles=[], requestedTenant=null]",
  "user_name" : "Joe",
  "user_requested_tenant" : null,
  "remote_address" : "10.1.1.82:37734",
  "backend_roles" : [ ],
  "custom_attribute_names" : [
    "attr.ldap.primaryGroupID",
    …
    "attr.ldap.lastLogonTimestamp"
  ],
  "roles" : [
    "own_index"
  ],
  "tenants" : {
    "Joe" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}
root@test:~# curl -k https://es03:9200/_opendistro/_security/authinfo?pretty -u testuser
Enter host password for user 'testuser':
{
  "user" : "User [name=Joe, roles=[admins, network-rw, net-admins], requestedTenant=null]",
  "user_name" : "Joe",
  "user_requested_tenant" : null,
  "remote_address" : "10.1.1.82:42124",
  "backend_roles" : [
    "admins",
    "network-rw",
    "net-admins"
  ],
  "custom_attribute_names" : [
    "attr.ldap.primaryGroupID",
    …
    "attr.ldap.lastLogonTimestamp"
  ],
  "roles" : [
    "kibana_user",
    "ldap_admin",
    "own_index",
    "readall",
    "security_rest_api_access"
  ],
  "tenants" : {
    "Joe" : true,
    "global_tenant" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}
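With four nodes, comparing authinfo responses by eye gets tedious, so a small script can group nodes by the role set they resolve for the same user and make the inconsistent ones stand out. This is just a sketch: `diff_roles` is a hypothetical helper, and the `responses` dict is abbreviated from the output above rather than fetched live (fetching would need per-node curl/requests calls against the authinfo endpoint).

```python
def diff_roles(authinfo_by_node):
    """Group nodes by their (backend_roles, roles) pair so that
    inconsistent LDAP role resolution across the cluster stands out.
    authinfo_by_node maps a node name to its parsed authinfo JSON."""
    groups = {}
    for node, info in authinfo_by_node.items():
        key = (tuple(sorted(info.get("backend_roles", []))),
               tuple(sorted(info.get("roles", []))))
        groups.setdefault(key, []).append(node)
    return groups

# Responses abbreviated from the authinfo output above:
responses = {
    "es01": {"backend_roles": [],
             "roles": ["own_index"]},
    "es03": {"backend_roles": ["admins", "network-rw", "net-admins"],
             "roles": ["kibana_user", "ldap_admin", "own_index",
                       "readall", "security_rest_api_access"]},
}

# es01 and es03 land in different groups, confirming the mismatch.
for (backend, roles), nodes in diff_roles(responses).items():
    print(nodes, "->", list(backend) or "no backend roles")
```

A healthy cluster would produce a single group containing every node; more than one group means at least one node is resolving LDAP differently.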