OpenSearch Dashboards cannot connect to OpenSearch

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

opensearch:2.3.0 (Docker installation)

Describe the issue:

I am able to curl the OpenSearch cluster from within the Dashboards container:

[opensearch-dashboards@a220ae8e54a5 ~]$ curl --cacert /usr/share/opensearch-dashboards/config/root-ca.pem \
>      --cert /usr/share/opensearch-dashboards/config/client.pem \
>      --key /usr/share/opensearch-dashboards/config/client-key.pem \
>      https://mydomainname.net:9200
{
  "name" : "opensearch-node1",
  "cluster_name" : "opensearch-cluster",
  "cluster_uuid" : "x5sA6WCgS9CohDJDDgAn4Q",
  "version" : {
    "distribution" : "opensearch",
    "number" : "2.3.0",
    "build_type" : "tar",
    "build_hash" : "6f6e84ebc54af31a976f53af36a5c69d474a5140",
    "build_date" : "2022-09-09T00:07:12.137133581Z",
    "build_snapshot" : false,
    "lucene_version" : "9.3.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}

But Dashboards seems to be failing to connect to the cluster with the same security parameters:

opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:34Z","tags":["info","plugins-system"],"pid":1,"message":"Setting up [46] plugins: [alertingDashboards,usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,embeddable,legacyExport,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,reportsDashboards,securityDashboards,indexManagementDashboards,anomalyDetectionDashboards,dashboard,visualizations,visTypeMarkdown,visTypeTimeline,timeline,visTypeVega,visTypeTable,tileMap,regionMap,customImportMapDashboards,inputControlVis,ganttChartDashboards,visualize,queryWorkbenchDashboards,notificationsDashboards,bfetch,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,observabilityDashboards,discover,savedObjectsManagement]"}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:34Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:34Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:34Z","tags":["error","savedobjects-service"],"pid":1,"message":"Unable to retrieve version information from OpenSearch nodes."}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:37Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:39Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:42Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
opensearch-dashboards  | {"type":"log","@timestamp":"2023-03-17T15:12:44Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}

And the browser displays the message attached in the screen capture: "OpenSearch Dashboards server is not ready yet".

Configuration:

Here is the Dashboards config:

opensearch.hosts: ["https://mydomainname.net:9200"]
opensearch.ssl.verificationMode: certificate
opensearch.username: "kibanaserver"
opensearch.requestHeadersAllowlist: [ authorization,securitytenant ]
server.ssl.enabled: true
server.ssl.certificate: /usr/share/opensearch-dashboards/config/client.pem
server.ssl.key: /usr/share/opensearch-dashboards/config/client-key.pem
opensearch.ssl.certificateAuthorities: [ "/usr/share/opensearch-dashboards/config/root-ca.pem"]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: ["Private", "Global"]
opensearch_security.readonly_mode.roles: ["kibana_read_only"]
opensearch_security.cookie.secure: true

And for now my roles_mapping.yml maps the all_access role to all users:

all_access:
  reserved: false
#  backend_roles:
#  - "admin"
  users:
  - "*"

Any ideas, please?
Thanks

Relevant Logs or Screenshots:

(Screenshot: browser showing "OpenSearch Dashboards server is not ready yet")

@dnyamy this might be a question for the #security channel, since it looks like you are using certificate verification for OpenSearch

Hello @ashwinpc,
Thank you for your remark.
I have updated the tag.

@dnyamy Could you share your docker-compose.yml and config.yml files?

Hello @pablo,
Thanks for your attention

Here is my docker-compose.yml file

version: '3'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:2.3.0
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # disables execution of install_demo_configuration.sh bundled with security plugin, which installs demo certificates and security configurations to OpenSearch
      #- "DISABLE_SECURITY_PLUGIN=true" # disables security plugin entirely in OpenSearch by setting plugins.security.disabled: true in opensearch.yml
      - "discovery.type=single-node" # disables bootstrap checks that are enabled when network.host is set to a non-loopback address
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      #- opensearch-data1:/usr/share/opensearch/data
      - ./root-ca.pem:/usr/share/opensearch/config/root-ca.pem
      - ./admin.pem:/usr/share/opensearch/config/admin.pem
      - ./admin-key.pem:/usr/share/opensearch/config/admin-key.pem
      - ./node1.pem:/usr/share/opensearch/config/node1.pem
      - ./node1-key.pem:/usr/share/opensearch/config/node1-key.pem
      - ./custom-opensearch.yml:/usr/share/opensearch/config/opensearch.yml
      - ./custom-security-config.yml:/usr/share/opensearch/config/opensearch-security/config.yml
      - ./roles_mapping.yml:/usr/share/opensearch/config/opensearch-security/roles_mapping.yml
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net

  opensearch-dashboards:
    build:
       context: .
       dockerfile: Dockerfile
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    restart: always
    environment:
      - 'OPENSEARCH_HOSTS=["https://dev.cphlabs.net:9200"]'
      #- "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true" # disables security dashboards plugin in OpenSearch Dashboards
    volumes:
      #- opensearch-data1:/usr/share/opensearch/data
      - ./root-ca.pem:/usr/share/opensearch-dashboards/config/root-ca.pem
      - ./client.pem:/usr/share/opensearch-dashboards/config/client.pem
      - ./client-key.pem:/usr/share/opensearch-dashboards/config/client-key.pem
      - ./custom-opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml
    networks:
      - opensearch-net

volumes:
  opensearch-data1:

networks:
  opensearch-net:

Here is my custom-opensearch.yml file

cluster.name: docker-cluster

# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0

# # minimum_master_nodes need to be explicitly set when bound on a public IP
# # set to 1 to allow single node clusters
# discovery.zen.minimum_master_nodes: 1

# Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.  
# discovery.type: single-node

plugins.security.ssl.transport.pemcert_filepath: node1.pem
plugins.security.ssl.transport.pemkey_filepath: node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: node1.pem
plugins.security.ssl.http.pemkey_filepath: node1-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.http.clientauth_mode: OPTIONAL
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
          - 'CN=admin,OU=IT,O=*********,L=PARIS,C=FR'
plugins.security.nodes_dn:
          - 'CN=node1.dns.a-record,OU=IT,O=*********,L=PARIS,C=FR'
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
opendistro_security.audit.config.disabled_rest_categories: NONE
opendistro_security.audit.config.disabled_transport_categories: NONE

Here is my custom-security-config.yml file

_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
    # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
    # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
    #filtered_alias_mode: warn
    #do_not_fail_on_forbidden: false
    kibana:
      multitenancy_enabled: true
      server_username: kibanaserver
      #index: '.kibana'
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        #internalProxies: '.*' # trust all internal proxies, regex pattern
        #remoteIpHeader:  'x-forwarded-for'
        ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
        ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
        ###### and here https://tools.ietf.org/html/rfc7239
        ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
    authc:
      clientcert_auth_domain:
        description: "Authenticate via SSL client certificates"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn #optional, if omitted DN becomes username
          challenge: false
        authentication_backend:
          type: noop
      ldap:
        description: "Authenticate via LDAP or Active Directory"
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: ldap
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: false
            hosts:
            - dev.*********.net:389
            bind_dn: 'cn=admin,dc=*********,dc=vps,dc=ovh,dc=net'
            password: '*********'
            userbase: 'ou=Users,dc=*********,dc=vps,dc=ovh,dc=net'
            usersearch: '(uid={0})'
            username_attribute: 'uid'

Here is my complete roles_mapping.yml file

_meta:
  type: "rolesmapping"
  config_version: 2

# Define your roles mapping here

## Demo roles mapping

all_access:
  reserved: false
#  backend_roles:
#  - "admin"
  users:
  - "*"

logstash:
  reserved: false
  backend_roles:
  - "logstash"

kibana_read_only:
  reserved: false
  backend_roles:
  - "kibanareadonly"
  
kibana_user:
  reserved: false
  backend_roles:
  - "kibanauser"

readall:
  reserved: false
  backend_roles:
  - "readall"

manage_snapshots:
  reserved: false
  backend_roles:
  - "snapshotrestore"

And on the Dashboards side, here is the Dockerfile content:

FROM opensearchproject/opensearch-dashboards:2.3.0
MAINTAINER ********** <admin.group@********.net>

RUN /usr/share/opensearch-dashboards/bin/opensearch-dashboards-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.13.3/enhanced-table-1.13.3_osd-2.3.0.zip

For clarity's sake, here is my custom-opensearch_dashboards.yml file again:

opensearch.hosts: ["https://dev.*********.net:9200"]
opensearch.ssl.verificationMode: certificate
opensearch.username: "kibanaserver"
opensearch.requestHeadersAllowlist: [ authorization,securitytenant ]
server.ssl.enabled: true
server.ssl.certificate: /usr/share/opensearch-dashboards/config/client.pem
server.ssl.key: /usr/share/opensearch-dashboards/config/client-key.pem
opensearch.ssl.certificateAuthorities: [ "/usr/share/opensearch-dashboards/config/root-ca.pem"]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: ["Private", "Global"]
opensearch_security.readonly_mode.roles: ["kibana_read_only"]
opensearch_security.cookie.secure: true

@dnyamy With this OpenSearch Dashboards config you must enable basic authentication in config.yml.
You have only enabled certificate and LDAP authentication, so the kibanaserver login will not work unless you have a kibanaserver user in LDAP.
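
As a rough sketch (the domain name, order, and challenge values are only an example, and it assumes the kibanaserver user exists in internal_users.yml with a password), an internal basic auth domain under config.dynamic.authc would look something like this:

      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against the internal users database"
        http_enabled: true
        transport_enabled: true
        order: 0                     # tried before the clientcert and LDAP domains
        http_authenticator:
          type: basic
          challenge: false           # Dashboards sends the Authorization header itself
        authentication_backend:
          type: intern               # internal users database (internal_users.yml)

You would also need to set opensearch.password for the kibanaserver user in opensearch_dashboards.yml.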

Also, in a Docker environment you can use the container name instead of the Docker host FQDN.
Try configuring the following in docker-compose.yml:

- 'OPENSEARCH_HOSTS=["https://opensearch-node1:9200"]'

Hello @pablo,

I would like Dashboards to authenticate with a certificate rather than with user:password.
How can I update the config to get that behavior?

Thanks

@dnyamy Try using opensearch.ssl.certificate and opensearch.ssl.key. These settings are not documented; however, this has already been reported to the dev team.

Just be aware that the root CA of the OpenSearch Dashboards client certificate must be known to the OpenSearch node. Ideally, the OpenSearch node and OpenSearch Dashboards certificates should be created with the same root CA.
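
One quick way to sanity-check that (just a sketch, assuming openssl is available inside the Dashboards container and using the paths from your compose file) is to verify the client certificate against the root CA the node trusts:

# should print ".../client.pem: OK" if the Dashboards client cert chains to the same
# root CA the node trusts (plugins.security.ssl.http.pemtrustedcas_filepath)
openssl verify -CAfile /usr/share/opensearch-dashboards/config/root-ca.pem \
    /usr/share/opensearch-dashboards/config/client.pem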

Hello @pablo,
Thank you so much!
The following lines in the Dashboards configuration solved the issue:

server.ssl.enabled: true
server.ssl.certificate: /usr/share/opensearch-dashboards/config/node1.pem
server.ssl.key: /usr/share/opensearch-dashboards/config/node1-key.pem
opensearch.ssl.certificate: /usr/share/opensearch-dashboards/config/client.pem
opensearch.ssl.key: /usr/share/opensearch-dashboards/config/client-key.pem

Indeed, the documentation is not clear on this.
Thanks again
