Describe the issue:
curl -k -u user:password https://ip_address:9200/_cluster/health
Unauthorized
Configuration:
I configured docker-compose to authenticate against an LDAP backend,
but did not configure the authz part, since my LDAP has no groups.
I would like every authenticated user to be authorized by default.
I thought that not providing the authz part of the security config.yml would do the trick.
Is it possible to authorize every successfully authenticated user with all access?
Hi @dnyamy
It is possible to skip the authz configuration part, but you need to add a username from LDAP to the role mappings.
You can configure this in OpenSearch Dashboards. To do that, open the Security section → Roles → choose one of the roles, for example the "all_access" role → Mapped users → Manage mapping. In the "Users" field, enter the username from LDAP, or write * for all users. After that, click "Map".
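If you do not have Dashboards, the same mapping can also be done through the security plugin REST API. A minimal sketch (the host, port, and admin credentials are placeholders for whatever admin access your cluster accepts):

# Map every user (*) to the built-in all_access role.
curl -k -u admin:admin -X PUT "https://localhost:9200/_plugins/_security/api/rolesmapping/all_access" \
  -H 'Content-Type: application/json' \
  -d '{"users": ["*"]}'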
Hello @Eugene7,
Thanks a lot for replying
I have not yet installed Dashboards
I first want OpenSearch to be running fine with security set up correctly
Dashboards will then be configured to authenticate with an SSH key
My solution was to write a roles_mapping.yml file directly
This is roughly what I wrote (the essential part maps every authenticated user to the all_access role):
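_meta:
  type: "rolesmapping"
  config_version: 2

# Map every authenticated user ('*') to the built-in all_access role
all_access:
  reserved: false
  users:
    - "*"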
Hello,
I am using OpenLDAP
I have an idea, now that you mention the IdP:
I am using Docker for my OpenSearch,
but the LDAP server is not a Docker container; it is running directly on the host system.
Could it be that the container is failing to get an authentication reply from LDAP?
How can I check that?
Thanks a lot for taking your time to reply to me
---
# This is the main OpenSearch Security configuration file where authentication
# and authorization is defined.
#
# You need to configure at least one authentication domain in the authc of this file.
# An authentication domain is responsible for extracting the user credentials from
# the request and for validating them against an authentication backend like Active Directory for example.
#
# If more than one authentication domain is configured, the first one which succeeds wins.
# If all authentication domains fail then the request is unauthenticated.
# In this case an exception is thrown and/or the HTTP status is set to 401.
#
# After authentication, authorization (authz) will be applied. There can be zero or more authorizers which collect
# the roles from a given backend for the authenticated user.
#
# Both authc and authz can be enabled/disabled separately for the REST and TRANSPORT layers. Default is true for both.
# http_enabled: true
# transport_enabled: true
#
# For HTTP it is possible to allow anonymous authentication. If that is the case then the HTTP authenticators try to
# find user credentials in the HTTP request. If credentials are found then the user gets regularly authenticated.
# If none can be found the user will be authenticated as an "anonymous" user. This user always has the username "anonymous"
# and one role named "anonymous_backendrole".
# If you enable anonymous authentication, none of the HTTP authenticators will send a challenge.
#
#
# Note: If you define more than one HTTP authenticator, make sure to put non-challenging authenticators like "proxy" or "clientcert"
# first and the challenging one last.
# Because it's not possible to challenge a client with two different authentication methods (for example
# Kerberos and Basic) only one can have the challenge flag set to true. You can cope with this situation
# by using pre-authentication, e.g. sending a HTTP Basic authentication header in the request.
#
# Default value of the challenge flag is true.
#
#
#   HTTP
#     basic (challenging)
#     proxy (not challenging, needs xff)
#     kerberos (challenging)
#     clientcert (not challenging, needs https)
#     jwt (not challenging)
#     host (not challenging) #DEPRECATED, will be removed in a future version.
#                            host based authentication is configurable in roles_mapping
#   Authc
#     internal
#     noop
#     ldap
#   Authz
#     ldap
#     noop
_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
    # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
    # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
    #filtered_alias_mode: warn
    #do_not_fail_on_forbidden: false
    #kibana:
      # Kibana multitenancy
      #multitenancy_enabled: true
      #server_username: kibanaserver
      #index: '.kibana'
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        #internalProxies: '.*' # trust all internal proxies, regex pattern
        #remoteIpHeader: 'x-forwarded-for'
        ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
        ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
        ###### and here https://tools.ietf.org/html/rfc7239
        ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
    authc:
      clientcert_auth_domain:
        description: "Authenticate via SSL client certificates"
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn #optional, if omitted DN becomes username
          challenge: false
        authentication_backend:
          type: noop
      ldap:
        description: "Authenticate via LDAP or Active Directory"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: ldap
          config:
            enable_ssl: true
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
              - vpsxxxxxxx.ovh.net:389
            bind_dn: 'cn=admin,dc=vps-xxxxxxx,dc=vps,dc=ovh,dc=net'
            password: 'xxxxxxxx'
            userbase: 'ou=Users,dc=vps-xxxxxxx,dc=vps,dc=ovh,dc=net'
            usersearch: '(sAMAccountName={0})'
            username_attribute: 'uid'
    # authz:
    #   ldap:
    #     description: "Authorize via LDAP or Active Directory"
    #     http_enabled: true
    #     transport_enabled: true
    #     authorization_backend:
    #       type: ldap
    #       config:
    #         # enable ldaps
    #         enable_ssl: false
    #         # enable start tls, enable_ssl should be false
    #         enable_start_tls: false
    #         # send client certificate
    #         enable_ssl_client_auth: false
    #         # verify ldap hostname
    #         verify_hostnames: true
    #         hosts:
    #           - localhost:389
    #         bind_dn: null
    #         password: null
    #         rolebase: 'ou=groups,dc=example,dc=com'
    #         # Filter to search for roles (currently in the whole subtree beneath rolebase)
    #         # {0} is substituted with the DN of the user
    #         # {1} is substituted with the username
    #         # {2} is substituted with an attribute value from user's directory entry, of the authenticated user. Use userroleattribute to specify the name of the attribute
    #         rolesearch: '(member={0})'
    #         # Specify the name of the attribute which value should be substituted with {2} above
    #         userroleattribute: null
    #         # Roles as an attribute of the user entry
    #         userrolename: disabled
    #         #userrolename: memberOf
    #         # The attribute in a role entry containing the name of that role, Default is "name".
    #         # Can also be "dn" to use the full DN as rolename.
    #         rolename: cn
    #         # Resolve nested roles transitive (roles which are members of other roles and so on ...)
    #         resolve_nested_roles: true
    #         userbase: 'ou=people,dc=example,dc=com'
    #         # Filter to search for users (currently in the whole subtree beneath userbase)
    #         # {0} is substituted with the username
    #         usersearch: '(uid={0})'
    #         # Skip users matching a user name, a wildcard or a regex pattern
    #         #skip_users:
    #         #  - 'cn=Michael Jackson,ou*people,o=TEST'
    #         #  - '/\S*/'
I have changed to enable_ssl: false.
I would like to make things work with plain LDAP, without SSL, first.
But the result is no better: I am still getting the Unauthorized reply.
Here is the output of the securityadmin.sh run that pushed the configuration:
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755 **
**************************************************************************
Security Admin v7
Will connect to localhost:9200 ... done
Connected as "CN=A,OU=IT,O=XXXXXXXXXX,L=PARIS,C=FR"
OpenSearch Version: 2.3.0
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: opensearch-cluster
Clusterstate: YELLOW
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/opensearch/config/opensearch-security/
Will update '/config' with /usr/share/opensearch/config/opensearch-security/config.yml
SUCC: Configuration for 'config' created or updated
Will update '/roles' with /usr/share/opensearch/config/opensearch-security/roles.yml
SUCC: Configuration for 'roles' created or updated
Will update '/rolesmapping' with /usr/share/opensearch/config/opensearch-security/roles_mapping.yml
SUCC: Configuration for 'rolesmapping' created or updated
Will update '/internalusers' with /usr/share/opensearch/config/opensearch-security/internal_users.yml
SUCC: Configuration for 'internalusers' created or updated
Will update '/actiongroups' with /usr/share/opensearch/config/opensearch-security/action_groups.yml
SUCC: Configuration for 'actiongroups' created or updated
Will update '/tenants' with /usr/share/opensearch/config/opensearch-security/tenants.yml
SUCC: Configuration for 'tenants' created or updated
Will update '/nodesdn' with /usr/share/opensearch/config/opensearch-security/nodes_dn.yml
SUCC: Configuration for 'nodesdn' created or updated
Will update '/whitelist' with /usr/share/opensearch/config/opensearch-security/whitelist.yml
SUCC: Configuration for 'whitelist' created or updated
Will update '/audit' with /usr/share/opensearch/config/opensearch-security/audit.yml
SUCC: Configuration for 'audit' created or updated
Will update '/allowlist' with /usr/share/opensearch/config/opensearch-security/allowlist.yml
SUCC: Configuration for 'allowlist' created or updated
SUCC: Expected 10 config types for node {"updated_config_types":["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","internalusers","actiongroups","config"],"updated_config_size":10,"message":null} is 10 (["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","internalusers","actiongroups","config"]) due to: null
Done with success
Hello @Eugene7,
Thank you so much !!!
I have tested the connection to LDAP from my container and realized that it was failing.
The fact is that I cannot reach the host from the container through the VPS's domain name (vpsxxxxxxx.ovh.net). I finally changed to my own domain name (example.net) and it worked.
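For anyone hitting the same problem, the check I ran was along these lines (the container name is an example; adjust it to your docker-compose setup):

# From the Docker host: check that the LDAP hostname resolves inside
# the OpenSearch container and that port 389 is reachable from there.
docker exec opensearch-node1 getent hosts vpsxxxxxxx.ovh.net
docker exec opensearch-node1 timeout 5 bash -c 'echo > /dev/tcp/vpsxxxxxxx.ovh.net/389' \
  && echo "LDAP port reachable" || echo "cannot reach LDAP port"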
So I have updated the conf file and it is working fine now.
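As a side note, an alternative might have been to keep the VPS hostname and make it resolve to the Docker host directly from docker-compose, something like this (the service name is an example, and host-gateway requires Docker 20.10 or newer):

# docker-compose.yml fragment (sketch): make the VPS hostname resolve
# to the Docker host from inside the container.
services:
  opensearch-node1:
    extra_hosts:
      - "vpsxxxxxxx.ovh.net:host-gateway"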
Thank you very much !!!