Inconsistent permissions across nodes

I was seeing inconsistent access to Kibana for LDAP users. I tried accessing the .kibana index on multiple nodes with the REST API and saw different behavior across the cluster: two nodes respond with no permissions and two nodes return the index details. These users are assigned LDAP roles and get different results depending on which node they connect to. The securityadmin.sh tool was run on the es01 node to add and update the LDAP configuration. I do have multiple Elasticsearch hosts defined in kibana.yml, so a short-term workaround was to change that list to only the hosts that return valid LDAP queries.
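
For reference, this is roughly the per-node check I was running (a minimal sketch; the es01-es04 hostnames and the testuser account are placeholders for my environment):

# Query the .kibana index on each node directly to compare behavior.
# Nodes with working LDAP role mapping return the index details;
# the broken ones return a permissions error.
for node in es01 es02 es03 es04; do
  echo "== ${node} =="
  curl -k -u testuser "https://${node}:9200/.kibana?pretty"
done

The authinfo output below is what I used to compare the backend roles each node resolves for the same user.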

root@test:~# curl -k https://es01:9200/_opendistro/_security/authinfo?pretty -u testuser
Enter host password for user 'testuser':
{
  "user" : "User [name=Joe, roles=[], requestedTenant=null]",
  "user_name" : "Joe",
  "user_requested_tenant" : null,
  "remote_address" : "10.1.1.82:37734",
  "backend_roles" : [ ],
  "custom_attribute_names" : [
    "attr.ldap.primaryGroupID",
    "attr.ldap.lastLogonTimestamp"
  ],
  "roles" : [
    "own_index"
  ],
  "tenants" : {
    "Joe" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}
root@test:~# curl -k https://es03:9200/_opendistro/_security/authinfo?pretty -u testuser
Enter host password for user 'testuser':
{
  "user" : "User [name=Joe, roles=[admins, network-rw, net-admins], requestedTenant=null]",
  "user_name" : "Joe",
  "user_requested_tenant" : null,
  "remote_address" : "10.1.1.82:42124",
  "backend_roles" : [
    "admins",
    "network-rw",
    "net-admins"
  ],
  "custom_attribute_names" : [
    "attr.ldap.primaryGroupID",
    "attr.ldap.lastLogonTimestamp"
  ],
  "roles" : [
    "kibana_user",
    "ldap_admin",
    "own_index",
    "readall",
    "security_rest_api_access"
  ],
  "tenants" : {
    "Joe" : true,
    "global_tenant" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}

It seems that es01 now sees the correct backend_roles from LDAP. I don't know whether I did something to propagate the changes, but es02 still shows no backend_roles for the user. I am using Puppet and working on getting my manual changes back into our Puppet code, but my understanding is that config.yml only has to be applied to one of the nodes with securityadmin.sh, since the settings are stored in the security index and shared with the rest of the cluster. I have Puppet writing out config.yml, but not executing securityadmin.sh to apply it.
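
For completeness, this is the kind of securityadmin.sh invocation I run on es01 after config.yml is updated (a minimal sketch; the plugin directory and certificate paths are assumptions for my layout, adjust them for yours):

# Apply the security configuration directory to the cluster.
# Because the settings end up in the security index, this only needs
# to run against a single node.
cd /usr/share/elasticsearch/plugins/opendistro_security/tools
./securityadmin.sh \
  -cd ../securityconfig/ \
  -icl -nhnv \
  -cacert /etc/elasticsearch/root-ca.pem \
  -cert /etc/elasticsearch/admin.pem \
  -key /etc/elasticsearch/admin-key.pem \
  -h es01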

Solution: the truststore was incorrectly named on that host. I have already filed a docs bug noting that it is required when using LDAP for authz. The documentation presents the truststore as an alternative to the key/PEM files, but I found that LDAP authz requires it to be present.
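
As a quick sanity check on each node (a sketch assuming the truststore is referenced from /etc/elasticsearch/elasticsearch.yml; the path and file names will differ per install), confirm that the file named in the configuration actually exists under exactly that name:

# Show whatever truststore settings the node is configured with...
grep -i truststore /etc/elasticsearch/elasticsearch.yml
# ...and verify a file with that exact name is present in the config directory.
ls -l /etc/elasticsearch/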

Good afternoon @kiowajoe.
I have a similar problem to yours. I have a three-node cluster, and only one node - the one where I ran securityadmin.sh - responds with index details, whereas the other two report no permissions.

Could you provide more details on how you solved this issue?

I managed to solve this: