Multitenancy not separating visualizations

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

OpenSearch: 2.11
OpenSearch Dashboards: 2.11

Describe the issue:

I am trying to make index patterns visible to only one tenant, but what I am seeing is every index pattern and visualization previously created, regardless of the tenant. I have the following tenants:

  • Private
  • Global
  • Admin tenant
  • Lab01
  • Lab02

If I select Lab01 and create a new dashboard called prueba1, then switch to another tenant, I can still see that same dashboard.

From what I have seen in other OpenSearch clusters (different versions), this is not the normal behavior, but in this cluster I can't figure out why it's happening.

Am I missing something? Is there a relation between a visualization and a tenant? When I export one as JSON, I can't see any link between the two.

Configuration:

opensearch-dashboards.yml (tenant part)

opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.enable_global: true
opensearch_security.multitenancy.tenants.enable_private: true
opensearch_security.multitenancy.tenants.preferred: ["Global"]
opensearch_security.multitenancy.enable_filter: true

Relevant Logs or Screenshots:

@esalas Could you share the output of GET _cat/indices?
Also, please share the config.yml file.

Hi @pablo, thanks for the fast response!

Output from GET _cat/indices:

green  open  netflow-archiver-20231114                          VrVZI6knT2qSHKGWRR3OgA 1 0 16723805      0    8.9gb    8.9gb
green  close netflow-archiver-20231111                          DYw_kCLsT52jGBjP9ONoXg 1 0                                  
green  open  netflow-archiver-20231112                          PBwQVPWOSeyp-mEEBX2t-g 1 0 19530450      0   10.1gb   10.1gb

I assume that by config.yml you mean opensearch.yml and opensearch-dashboards.yml, right?

opensearch.yml

cluster.name: os-cluster
network.host: 0.0.0.0
node.max_local_storage_nodes: 3
bootstrap.memory_lock: "true" # along with the memlock settings below, disables swapping

cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 93%
cluster.routing.allocation.disk.watermark.high: 95%

plugins.security.restapi.roles_enabled: ["all_access"]
plugins.security.allow_unsafe_democertificates: true
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemtrustedcas_filepath: certificates/ca/ca.pem
plugins.security.ssl.transport.enabled: true
plugins.security.ssl.transport.pemtrustedcas_filepath: certificates/ca/ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false

plugins.security.authcz.admin_dn:
  - 'CN=ADMIN,O=BBVA,L=MADRID,ST=MADRID,C=EU'
plugins.security.nodes_dn:
  - 'CN=os01,O=BBVA,L=MADRID,ST=MADRID,C=EU'
  - 'CN=os02,O=BBVA,L=MADRID,ST=MADRID,C=EU'

opensearch-dashboards.yml

server.name: os_dashboards
server.host: "0.0.0.0"

opensearch.username: "admin"
opensearch.password: "admin"

# Encrypt traffic between the browser and OpenSearch-Dashboards
server.ssl.enabled: true
server.ssl.certificate: "/usr/share/opensearch-dashboards/config/certificates/os-dashboards/os-dashboards.pem"
server.ssl.key: "/usr/share/opensearch-dashboards/config/certificates/os-dashboards/os-dashboards.key"

# Encrypt traffic between OpenSearch-Dashboards and Opensearch
opensearch.ssl.certificateAuthorities: ["/usr/share/opensearch-dashboards/config/certificates/ca/ca.pem"]
opensearch.ssl.verificationMode: full

opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.enable_global: true
opensearch_security.multitenancy.tenants.enable_private: true
opensearch_security.multitenancy.tenants.preferred: ["Global"]
opensearch_security.multitenancy.enable_filter: true

@esalas The config.yml is part of the security plugin. Please run the following API call in Dev Tools and share the output.

GET _plugins/_security/api/securityconfig

Understood, @pablo, thanks for the explanation.

securityconfig

{
  "config": {
    "dynamic": {
      "filtered_alias_mode": "warn",
      "disable_rest_auth": false,
      "disable_intertransport_auth": false,
      "respect_request_indices_options": false,
      "kibana": {
        "multitenancy_enabled": true,
        "private_tenant_enabled": true,
        "default_tenant": "Global",
        "server_username": "kibanaserver",
        "index": ".kibana"
      },
      "http": {
        "anonymous_auth_enabled": false,
        "xff": {
          "enabled": false,
          "internalProxies": "192\\.168\\.0\\.10|192\\.168\\.0\\.11",
          "remoteIpHeader": "X-Forwarded-For"
        }
      },
      "authc": {
        "jwt_auth_domain": {
          "http_enabled": false,
          "transport_enabled": false,
          "order": 0,
          "http_authenticator": {
            "challenge": false,
            "type": "jwt",
            "config": {
              "signing_key": "base64 encoded HMAC key or public RSA/ECDSA pem key",
              "jwt_header": "Authorization"
            }
          },
          "authentication_backend": {
            "type": "noop",
            "config": {}
          },
          "description": "Authenticate via Json Web Token"
        },
        "ldap": {
          "http_enabled": false,
          "transport_enabled": false,
          "order": 5,
          "http_authenticator": {
            "challenge": false,
            "type": "basic",
            "config": {}
          },
          "authentication_backend": {
            "type": "ldap",
            "config": {
              "enable_ssl": false,
              "enable_start_tls": false,
              "enable_ssl_client_auth": false,
              "verify_hostnames": true,
              "hosts": [
                "localhost:8389"
              ],
              "userbase": "ou=people,dc=example,dc=com",
              "usersearch": "(sAMAccountName={0})"
            }
          },
          "description": "Authenticate via LDAP or Active Directory"
        },
        "basic_internal_auth_domain": {
          "http_enabled": true,
          "transport_enabled": true,
          "order": 4,
          "http_authenticator": {
            "challenge": true,
            "type": "basic",
            "config": {}
          },
          "authentication_backend": {
            "type": "intern",
            "config": {}
          },
          "description": "Authenticate via HTTP Basic against internal users database"
        },
        "proxy_auth_domain": {
          "http_enabled": false,
          "transport_enabled": false,
          "order": 3,
          "http_authenticator": {
            "challenge": false,
            "type": "proxy",
            "config": {
              "user_header": "x-proxy-user",
              "roles_header": "x-proxy-roles"
            }
          },
          "authentication_backend": {
            "type": "noop",
            "config": {}
          },
          "description": "Authenticate via proxy"
        },
        "clientcert_auth_domain": {
          "http_enabled": false,
          "transport_enabled": false,
          "order": 2,
          "http_authenticator": {
            "challenge": false,
            "type": "clientcert",
            "config": {
              "username_attribute": "cn"
            }
          },
          "authentication_backend": {
            "type": "noop",
            "config": {}
          },
          "description": "Authenticate via SSL client certificates"
        },
        "kerberos_auth_domain": {
          "http_enabled": false,
          "transport_enabled": false,
          "order": 6,
          "http_authenticator": {
            "challenge": true,
            "type": "kerberos",
            "config": {
              "krb_debug": false,
              "strip_realm_from_principal": true
            }
          },
          "authentication_backend": {
            "type": "noop",
            "config": {}
          }
        }
      },
      "authz": {
        "roles_from_another_ldap": {
          "http_enabled": false,
          "transport_enabled": false,
          "authorization_backend": {
            "type": "ldap",
            "config": {}
          },
          "description": "Authorize via another Active Directory"
        },
        "roles_from_myldap": {
          "http_enabled": false,
          "transport_enabled": false,
          "authorization_backend": {
            "type": "ldap",
            "config": {
              "enable_ssl": false,
              "enable_start_tls": false,
              "enable_ssl_client_auth": false,
              "verify_hostnames": true,
              "hosts": [
                "localhost:8389"
              ],
              "rolebase": "ou=groups,dc=example,dc=com",
              "rolesearch": "(member={0})",
              "userrolename": "disabled",
              "rolename": "cn",
              "resolve_nested_roles": true,
              "userbase": "ou=people,dc=example,dc=com",
              "usersearch": "(uid={0})"
            }
          },
          "description": "Authorize via LDAP or Active Directory"
        }
      },
      "auth_failure_listeners": {},
      "do_not_fail_on_forbidden": false,
      "multi_rolespan_enabled": true,
      "hosts_resolver_mode": "ip-only",
      "do_not_fail_on_forbidden_empty": false
    }
  }
}

What permissions do the users/roles have for the tenants? You need to be granular if you want to limit which users can access which tenants. See the sketch below.
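
For example, a role that should only see the Lab01 tenant needs an explicit tenant_permissions entry in its definition. A minimal sketch in roles.yml form (the role name, index pattern, and action groups are illustrative, not taken from this cluster):

lab01_read_role:
  cluster_permissions:
    - "cluster_composite_ops_ro"
  index_permissions:
    - index_patterns:
        - "netflow-archiver-*"
      allowed_actions:
        - "read"
  tenant_permissions:
    - tenant_patterns:
        - "Lab01"
      allowed_actions:
        - "kibana_all_read" # use kibana_all_write for read-write access to the tenant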

@esalas Your configuration has tenancy enabled. I've tested 2.11.0 on my side and dashboards don't leak between tenants.
When multitenancy is enabled, the security plugin creates a separate .kibana index per tenant, and the OpenSearch Dashboards objects created in a tenant are placed in the corresponding index.

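A quick way to see this from Dev Tools (the index name below is illustrative; real per-tenant index names embed a hash of the tenant name):

GET .kibana_110251487_test1_1/_search
{
  "query": { "term": { "type": "dashboard" } }
}
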
Could you run the below command again and share the output?

GET _cat/indices?expand_wildcards=all

@diskoteket the role is prueba_readonly and it has cluster_composite_ops_ro for cluster permissions. For index permissions, I have specified 2 patterns with the read, search, and get index permissions. The user is mapped to the role, with no attributes or backend roles.
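
For reference, the full role definition can be dumped from Dev Tools with the security REST API:

GET _plugins/_security/api/roles/prueba_readonly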

@pablo I have 4 .kibana indices:

yellow open  .kibana_1                                               XNl3fc1RTnOwP4pU23ERdA 1 1      36      0   29.8kb   29.8kb
green  open  .kibana_2                                               3mJkrMAsQ1WZDEyomvFUYA 1 1      15      0    142kb     71kb
yellow open  .kibana_3                                               5vyxw0ChQWmM18hqdSSjNw 1 1     273     12  200.3kb  200.3kb
yellow open  .kibana_4                                               GQmOLLnlSISThluO_c0cig 1 1     278     18  222.5kb  222.5kb
green  open  netflow-archiver-20231122                               mgniYnM9TM6HEPVJBoTcDg 1 0       0      0     208b     208b
green  open  netflow-archiver-20231123                               kaBaagPvSf6Tz4GkjJUDpg 1 0  252343      0  167.2mb  167.2mb
green  open  netflow-archiver-20231121                               Z06cvnBuQYuUjZ3hB79PLg 1 0  107890      0   68.4mb   68.4mb

I guess each one is linked to one tenant.

I don't have this issue in my other OSD installations; the difference is that this one runs in Docker with docker-compose, while the rest are installed via the deb package.

Thanks to both of you for the input.

@esalas These indices all belong to the Global tenant; the numeric suffixes (.kibana_1 through .kibana_4) come from saved-objects migrations, not from tenants.

Did you switch the tenant when creating Dashboards? According to this output, the indices for custom tenants haven’t been created yet.

Below is an example of my indices.

green open .kibana_1                        3oe_AcEyRjuIChWqB7TbLw 1 1  1 0  10.3kb   5.1kb
green open .kibana_106421716_pablo_1        RoA_tm_eQFecePUOSlZlfw 1 1  1 0  10.3kb   5.1kb
green open .kibana_110251487_test1_1        83oXKt80SS2kOw9zqInwLQ 1 1  1 0  10.3kb   5.1kb
green open .kibana_110251488_test2_1        LkHRD1p_QR63ApBetq1qRg 1 1  1 0  10.3kb   5.1kb

The first index belongs to the Global tenant, the next is the private tenant of the pablo user, and the last two belong to the test1 and test2 tenants.
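
In case it helps with verification: per-tenant index names follow the pattern .kibana_<hash>_<tenant>_<N>, where <hash> is derived from the tenant name, <tenant> is the tenant name lowercased with non-alphanumeric characters removed, and <N> is bumped by saved-objects migrations. So after saving a dashboard while the Lab01 tenant is selected, the command below should show a new lab01 index appear (the hash shown is illustrative):

GET _cat/indices/.kibana*?v&expand_wildcards=all

green open .kibana_1                   ... (Global tenant)
green open .kibana_-1234567890_lab01_1 ... (Lab01 tenant, illustrative name)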