OpenSearch client certificate authentication

Hi,

I’m trying to set up Dashboards → OpenSearch communication using a client certificate instead of basic auth (I have a few clusters where I’d like to replace the ELK stack with OpenSearch, and I’d prefer client cert auth over having to generate new dashboard and fluentbit users in each cluster).

For now I’m trying to make it work with a Python script like this:

import requests

base_url = 'https://localhost:9200/'
headers = {
  'Content-Type': 'application/json'
}
cert_file_path = "..."
key_file_path = "..."
root_ca_path = "..."

# Send the request with the client certificate and key.
path = 'movies/_doc/3'
url = base_url + path
response = requests.get(url, headers=headers, cert=(cert_file_path, key_file_path), verify=False)
print(response.text)

(I’m passing verify=False because I’m port-forwarding from my machine to a node in the k8s cluster.)
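As a side note, here is a sketch of how verification could be kept on instead, assuming the node certificate’s SANs cover the hostname being connected to (the file names below are placeholders, not the thread’s actual paths):

```python
import requests

# Sketch: configure a session to validate the server cert against the
# cluster's root CA instead of disabling verification entirely.
# The file paths are placeholders.
session = requests.Session()
session.verify = "root-ca.pem"                    # cluster root CA bundle
session.cert = ("client.pem", "client-key.pem")   # client cert + key
```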

but all I’m getting is Unauthorized, and the audit log doesn’t say much more than:

[2022-05-06T13:14:30,774][INFO ][stdout                   ] [opensearch-cluster-master-0] AUDIT_LOG: {
[2022-05-06T13:14:30,774][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_cluster_name" : "opensearch-cluster",
[2022-05-06T13:14:30,774][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_rest_request_params" : {
[2022-05-06T13:14:30,775][INFO ][stdout                   ] [opensearch-cluster-master-0]     "index" : "movies",
[2022-05-06T13:14:30,775][INFO ][stdout                   ] [opensearch-cluster-master-0]     "id" : "3"
[2022-05-06T13:14:30,775][INFO ][stdout                   ] [opensearch-cluster-master-0]   },
[2022-05-06T13:14:30,775][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_node_name" : "opensearch-cluster-master-0",
[2022-05-06T13:14:30,775][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_rest_request_method" : "GET",
[2022-05-06T13:14:30,775][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_category" : "FAILED_LOGIN",
[2022-05-06T13:14:30,777][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_request_origin" : "REST",
[2022-05-06T13:14:30,778][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_node_id" : "mBwCH71SSqKIyBkyyBK30Q",
[2022-05-06T13:14:30,778][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_request_layer" : "REST",
[2022-05-06T13:14:30,778][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_rest_request_path" : "/movies/_doc/3",
[2022-05-06T13:14:30,778][INFO ][stdout                   ] [opensearch-cluster-master-0]   "@timestamp" : "2022-05-06T13:14:30.757+00:00",
[2022-05-06T13:14:30,778][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_request_effective_user_is_admin" : false,
[2022-05-06T13:14:30,778][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_format_version" : 4,
[2022-05-06T13:14:30,779][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_request_remote_address" : "127.0.0.1",
[2022-05-06T13:14:30,779][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_node_host_address" : "XXX",
[2022-05-06T13:14:30,779][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_rest_request_headers" : {
[2022-05-06T13:14:30,784][INFO ][stdout                   ] [opensearch-cluster-master-0]     "content-length" : [
[2022-05-06T13:14:30,784][INFO ][stdout                   ] [opensearch-cluster-master-0]       "0"
[2022-05-06T13:14:30,784][INFO ][stdout                   ] [opensearch-cluster-master-0]     ],
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     "Accept" : [
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]       "*/*"
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     ],
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     "User-Agent" : [
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]       "python-requests/2.27.1"
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     ],
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     "Connection" : [
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]       "keep-alive"
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     ],
[2022-05-06T13:14:30,785][INFO ][stdout                   ] [opensearch-cluster-master-0]     "Host" : [
[2022-05-06T13:14:30,786][INFO ][stdout                   ] [opensearch-cluster-master-0]       "localhost:9200"
[2022-05-06T13:14:30,786][INFO ][stdout                   ] [opensearch-cluster-master-0]     ],
[2022-05-06T13:14:30,786][INFO ][stdout                   ] [opensearch-cluster-master-0]     "Accept-Encoding" : [
[2022-05-06T13:14:30,786][INFO ][stdout                   ] [opensearch-cluster-master-0]       "gzip, deflate"
[2022-05-06T13:14:30,786][INFO ][stdout                   ] [opensearch-cluster-master-0]     ]
[2022-05-06T13:14:30,788][INFO ][stdout                   ] [opensearch-cluster-master-0]   },
[2022-05-06T13:14:30,788][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_request_effective_user" : "<NONE>",
[2022-05-06T13:14:30,788][INFO ][stdout                   ] [opensearch-cluster-master-0]   "audit_node_host_name" : "XXX"
[2022-05-06T13:14:30,788][INFO ][stdout                   ] [opensearch-cluster-master-0] }

Is there any way to get more details about failed logins than just this? I’m sure my configuration must be broken somewhere, but I’m not sure how to find where.

@jkosecki I assume this has no connection to Dashboards, as you are calling OpenSearch on localhost:9200 directly from code.

Can you share your config.yaml file?

Regarding the logging, assuming you still have basic auth enabled with the demo admin account, you should be able to enable ‘debug’ or ‘trace’ logging with the command below:

curl -u admin:admin --insecure -X PUT "https://localhost:9200/_cluster/settings" -H "Content-Type:application/json" -d '
{"transient": {
"logger._root": "trace"
}
}'

This will be very verbose, however.
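Once done, the transient override can be removed by setting it back to null, which restores the default log level. A sketch of the payload (send it with the same curl PUT as above):

```python
import json

# Sketch: setting a transient cluster setting to null removes the
# override, restoring the default root log level.
reset_payload = {"transient": {"logger._root": None}}
print(json.dumps(reset_payload))  # → {"transient": {"logger._root": null}}
```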

Indeed it was, but I didn’t really see anything that would suggest why the login attempt is rejected.

And yes, for now I’m just trying to connect using a Python client with the code above, to see if I can make client cert auth work.

So I’m using a cert:

dashboard cert details:
subject= CN=dashboard,O=ORG,L=London,C=UK

My security config.yaml is:

_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
    # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
    # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
    #filtered_alias_mode: warn
    #do_not_fail_on_forbidden: false
    #kibana:
    # Kibana multitenancy
    #multitenancy_enabled: true
    #server_username: kibanaserver
    #index: '.kibana'
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        #internalProxies: '.*' # trust all internal proxies, regex pattern
        #remoteIpHeader:  'x-forwarded-for'
        ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
        ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
        ###### and here https://tools.ietf.org/html/rfc7239
        ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
    authc:
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern
      jwt_auth_domain:
        description: "Authenticate via Json Web Token"
        http_enabled: false
        transport_enabled: false
        order: 0
        http_authenticator:
          type: jwt
          challenge: false
          config:
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            jwt_header: "Authorization"
            jwt_url_parameter: null
            roles_key: null
            subject_key: null
        authentication_backend:
          type: noop
      clientcert_auth_domain:
        description: "Authenticate via SSL client certificates"
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn #optional, if omitted DN becomes username
          challenge: false
        authentication_backend:
          type: noop
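As an aside on the `username_attribute: cn` setting above: conceptually, the CN value is pulled out of the certificate’s subject DN and used as the username, falling back to the full DN if the attribute is absent. A rough illustration (not the plugin’s actual code):

```python
# Rough illustration (not the security plugin's actual code) of how a
# username is derived from a certificate subject DN when
# username_attribute: cn is set.
def username_from_dn(dn: str, attribute: str = "CN") -> str:
    for rdn in dn.split(","):
        key, _, value = rdn.strip().partition("=")
        if key.upper() == attribute.upper():
            return value
    return dn  # attribute not present: the full DN becomes the username

print(username_from_dn("CN=dashboard,O=ORG,L=London,C=UK"))  # → dashboard
```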

and roles_mapping.yaml:

_meta:
  type: "rolesmapping"
  config_version: 2

# Define your roles mapping here

## Demo roles mapping

all_access:
  reserved: false
  backend_roles:
  - "admin"
  description: "Maps admin to all_access"

own_index:
  reserved: false
  users:
  - "*"
  description: "Allow full access to an index named like the username"

logstash:
  reserved: false
  backend_roles:
  - "logstash"

kibana_user:
  reserved: false
  backend_roles:
  - "kibanauser"
  description: "Maps kibanauser to kibana_user"

readall:
  reserved: false
  backend_roles:
  - "readall"

manage_snapshots:
  reserved: false
  backend_roles:
  - "snapshotrestore"

kibana_server:
  reserved: true
  users:
  - "kibanaserver"
  - "dashboard"

(I added the dashboard user, i.e. the CN of my cert, to kibana_server.)

And in case it helps my opensearch.yml:

cluster.name: opensearch-cluster

# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0

# # minimum_master_nodes need to be explicitly set when bound on a public IP
# # set to 1 to allow single node clusters
# discovery.zen.minimum_master_nodes: 1

# Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
# discovery.type: single-node

# Start OpenSearch Security Demo Configuration
# WARNING: revise all the lines below before you go into production
plugins:
  security:
    ssl:
      transport:
        pemcert_filepath: certs/nodes/nodes-crt.pem
        pemkey_filepath: certs/nodes/nodes-key.pem
        pemtrustedcas_filepath: certs/ca/root-ca.pem
        enforce_hostname_verification: false
      http:
        enabled: true
        pemcert_filepath: certs/nodes/nodes-crt.pem
        pemkey_filepath: certs/nodes/nodes-key.pem
        pemtrustedcas_filepath: certs/ca/root-ca.pem
        clientauth_mode: OPTIONAL
    allow_unsafe_democertificates: false
    allow_default_init_securityindex: true
    authcz:
      admin_dn:
        - CN=admin,O=ORG,L=London,C=UK
    nodes_dn:
    - CN=nodes,O=ORG,L=London,C=UK
    # audit.type: internal_opensearch
    audit.type: debug
    enable_snapshot_restore_privilege: true
    check_snapshot_restore_write_privileges: true
    restapi:
      roles_enabled: ["all_access", "security_rest_api_access"]
    system_indices:
      enabled: true
      indices:
        [
          ".opendistro-alerting-config",
          ".opendistro-alerting-alert*",
          ".opendistro-anomaly-results*",
          ".opendistro-anomaly-detector*",
          ".opendistro-anomaly-checkpoints",
          ".opendistro-anomaly-detection-state",
          ".opendistro-reports-*",
          ".opendistro-notifications-*",
          ".opendistro-notebooks",
          ".opendistro-asynchronous-search-response*",
        ]

@jkosecki You have the basic auth challenge flag set to true, so authentication never falls through to the next auth domain (clientcert). Can you change this flag to false in the basic auth domain and try again?
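That is, in config.yaml the basic auth domain would look like this, with only the challenge flag changed:

```yaml
basic_internal_auth_domain:
  description: "Authenticate via HTTP Basic against internal users database"
  http_enabled: true
  transport_enabled: true
  order: 1
  http_authenticator:
    type: basic
    challenge: false   # don't send a 401 challenge; let clientcert run
  authentication_backend:
    type: intern
```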

Also, kibana_server should only be used by Kibana/Dashboards itself. While testing, can you map your cert user to all_access (just to get this working)? Later a dedicated role would need to be created for this.
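For the testing step, a roles_mapping.yaml sketch that maps the cert user to all_access (to be replaced with a narrower role later):

```yaml
all_access:
  reserved: false
  backend_roles:
  - "admin"
  users:
  - "dashboard"   # CN of the client cert, for testing only
  description: "Maps admin to all_access"
```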

Hi @Anthony,

thanks for the great hint! It definitely helped move things forward. I initially mapped dashboard to kibana_server because that’s what I was trying to achieve, but after hours of just seeing ResponseError logs in Dashboards I gave up on it and decided to try something easier, i.e. a Python script.

Disabling the challenge changed the response to:

Forbidden
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [indices:data/read/get] and User [name=dashboard, backend_roles=[], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [indices:data/read/get] and User [name=dashboard, backend_roles=[], requestedTenant=null]"},"status":403}

so it looks like the client cert is now correctly read, and I just need to map it to the right role.
Thank you!