OpenSearch/OpenSearch Dashboards: Failed to get saml header: Error: Error: failed parsing SAML config

Hello, I’m trying to use OpenSearch and OpenSearch Dashboards with SAML.

I made a full installation on AWS EKS via the Helm charts.

I get the following error when I try to connect:

Error: failed parsing SAML config
opensearch-dashboards-79b549c84b-mfqjk dashboards     at SecurityClient.getSamlHeader (/usr/share/opensearch-dashboards/plugins/securityDashboards/server/backend/opensearch_security_client.ts:176:15)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at runMicrotasks (<anonymous>)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at processTicksAndRejections (internal/process/task_queues.js:95:5)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at /usr/share/opensearch-dashboards/plugins/securityDashboards/server/auth/types/saml/routes.ts:65:30
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:163:44)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"log","@timestamp":"2022-07-08T14:23:12Z","tags":["error","plugins","securityDashboards"],"pid":1,"message":"Failed to get saml header: Error: Error: failed parsing SAML config"}
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"error","@timestamp":"2022-07-08T14:23:12Z","tags":[],"pid":1,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:143:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:97:19)\n    at HapiResponseAdapter.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:92:17)\n    at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:164:34)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)\n    at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)\n    at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)"},"url":"http://opensearch-dashboards.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","message":"Internal Server Error"}
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"response","@timestamp":"2022-07-08T14:23:12Z","tags":[],"pid":1,"method":"get","statusCode":500,"req":{"url":"/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","method":"get","headers":{"host":"opensearch-dashboards.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","cache-control":"max-age=0","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"X.X.X.X","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"X.X.X.X","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":500,"responseTime":90,"contentLength":9},"message":"GET /auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards 500 90ms - 9.0B"}
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"response","@timestamp":"2022-07-08T14:23:12Z","tags":[],"pid":1,"method":"get","statusCode":401,"req":{"url":"/favicon.ico","method":"get","headers":{"host":"opensearch-dashboards.mydomain.com","accept":"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","referer":"https://opensearch-dashboards.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"image","sec-fetch-mode":"no-cors","sec-fetch-site":"same-origin","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"X.X.X.X","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"X.X.X.X","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","referer":"https://opensearch-dashboards.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards"},"res":{"statusCode":401,"responseTime":1,"contentLength":9},"message":"GET /favicon.ico 401 1ms - 9.0B"}

You can see my configuration for the OpenSearch Helm chart below (skipping the parts that aren’t relevant):

      config.yml: |-
        _meta:
          type: "config"
          config_version: "2"
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
            authc:
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                order: 0
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: intern
              saml_auth_domain:
                order: 1
                description: "SAML provider"
                http_enabled: true
                transport_enabled: false
                http_authenticator:
                  type: saml
                  challenge: true
                  config:
                    idp:
                      metadata_file: "/usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml"
                      entity_id: "https://accounts.google.com/o/saml2?idpid=XXXXXXXX"
                    sp:
                      entity_id: "https://opensearch-dashboards.mydomain.com"
                    kibana_url: "https://opensearch-dashboards.mydomain.com"
                    subject_key: NameID
                    roles_key: Role
                authentication_backend:
                  type: noop
      gsuite.xml: |-
        <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://accounts.google.com/o/saml2?idpid=XXXXXXXX" validUntil="XXXXXXXXX">
          <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
            <md:KeyDescriptor use="signing">
              <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                  <ds:X509Certificate>XXX</ds:X509Certificate>
                </ds:X509Data>
              </ds:KeyInfo>
            </md:KeyDescriptor>
            <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://accounts.google.com/o/saml2/idp?idpid=XXXXXXXX"/>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://accounts.google.com/o/saml2/idp?idpid=XXXXXXXX"/>
          </md:IDPSSODescriptor>
        </md:EntityDescriptor>

And here for the opensearch-dashboards config:

config:
  # Default OpenSearch Dashboards configuration from docker image of Dashboards
  opensearch_dashboards.yml: |
    server:
      ssl:
        enabled: "false"
      xsrf:
        allowlist: ["/_plugins/_security/saml/acs","/_plugins/_security/saml/acs/idpinitiated","/_plugins/_security/saml/logout","/_opendistro/_security/saml/acs/idpinitiated", "/_opendistro/_security/saml/acs", "/_opendistro/_security/saml/logout"]
    opensearch_security:
      auth:
        type: "saml"
    opensearch:
      ssl:
        verificationMode: "none"
      hosts: ["${var.elasticsearch-host}:9200"]
      username: "username"
      password: "password"

Do you have any idea why I’m getting this SAML parsing error?

Thanks a lot for your help!

@nomopo Could you try moving gsuite.xml from /usr/share/opensearch/plugins/opensearch-security/securityconfig/ to /usr/share/opensearch/config?

Have you tried metadata_url instead?
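For reference, a minimal sketch of the idp block using metadata_url instead of metadata_file (the URL is a placeholder; point it at wherever your IdP actually serves its metadata XML):

http_authenticator:
  type: saml
  challenge: true
  config:
    idp:
      # Placeholder URL: the plugin fetches the metadata over HTTP(S) at startup
      # instead of reading a file mounted into the pod.
      metadata_url: "https://idp.example.com/saml/metadata"
      entity_id: "https://accounts.google.com/o/saml2?idpid=XXXXXXXX"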

Hello,

I tried your solution, but it doesn’t work; metadata_url doesn’t either.

What I tried is changing to an older version, and now I don’t get this error anymore, but I get many other errors instead.

@nomopo Do you see any SAML connection/init errors during the OpenSearch startup?

What version are you referring to? Could you share the errors?

OK, I deleted everything.

I use the 2.1.0 tag for the image; below are my Helm values (I have a new error now):

For opensearch-dashboards:

# Copyright OpenSearch Contributors
# SPDX-License-Identifier: Apache-2.0

# Default values for opensearch-dashboards.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

opensearchHosts: "${var.elasticsearch-host}:9200"
replicaCount: 1
image:
  repository: "opensearchproject/opensearch-dashboards"
  # override image tag, which is .Chart.AppVersion by default
  tag: "${var.imagetag}"
  pullPolicy: "IfNotPresent"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

rbac:
  create: true

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: certs
#    secretName: dashboard-certs
#    path: /usr/share/dashboards/certs

podAnnotations: {}

extraEnvs: []
#  - name: "NODE_OPTIONS"
#    value: "--max-old-space-size=1800"

envFrom: []

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraInitContainers: ""

extraContainers: ""

podSecurityContext: {}

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

config:
  # Default OpenSearch Dashboards configuration from docker image of Dashboards
  opensearch_dashboards.yml: |
    timelion:
      ui:
        enabled: "true"
    server:
      host: "0"
      ssl: 
        enabled: "false"
      xsrf:
        allowlist: ["/_plugins/_security/api/authtoken", "/_opendistro/_security/api/authtoken", "/_opendistro/_security/saml/acs/idpinitiated", "/_opendistro/_security/saml/acs", "/_opendistro/_security/saml/logout", "/_plugins/_security/saml/acs/idpinitiated", "/_plugins/_security/saml/acs", "/_plugins/_security/saml/logout"]
    opensearch_security:
      auth:
        type: "saml"
      multitenancy:
        enabled: "true"
        tenants:
          preferred: ["Private", "Global"]
    opensearch:
      ssl:
        verificationMode: "none"
      hosts: ["${var.elasticsearch-host}:9200"]
      username: "kibanaserver"
      password: "mypassword"
      requestHeadersAllowlist: ["securitytenant", "security_tenant", "Authorization"]




  # Dashboards TLS Config (Ensure the cert files are present before enabling SSL
      # ssl:
      #   enabled: true
      #   key: /usr/share/opensearch-dashboards/certs/dashboards-key.pem
      #   certificate: /usr/share/opensearch-dashboards/certs/dashboards-crt.pem

    # determines how dashboards will verify certificates (needs to be none for default opensearch certificates to work)
    # opensearch:
    #   ssl:
    #     certificateAuthorities: /usr/share/opensearch-dashboards/certs/dashboards-root-ca.pem
    #     if utilizing custom CA certs for connection to opensearch, provide the CA here

priorityClassName: ""

opensearchAccount:
  secret: ""
  keyPassphrase:
    enabled: false

labels: {}

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

serverHost: "0.0.0.0"

service:
  type: LoadBalancer
  port: 443
  loadBalancerIP: ""
  nodePort: ""
  labels: {}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${data.aws_acm_certificate.cert.arn}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: "HTTP2Preferred"
    ${var.global_domain}/dns-type: private
    external-dns.alpha.kubernetes.io/access: private
    external-dns.alpha.kubernetes.io/hostname: example.${var.domain}
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=${var.environment},Projet=${var.projet},TimestampLastUpdate=${var.timestamp}
  loadBalancerSourceRanges: []
  # 0.0.0.0/0
  httpPortName: http

ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          backend:
            serviceName: chart-example.local
            servicePort: 80
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "100m"
    memory: "512M"

autoscaling:
  # This requires metrics server to be installed, to install use kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  # See https://github.com/kubernetes-sigs/metrics-server
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

updateStrategy:
  type: "Recreate"

nodeSelector:
  ${var.nodeSelector}

tolerations: []

affinity: {}

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client_id"
  #               - path: "client_secret"
  #                 objectAlias: "client_secret"
  #     secretObjects:
  #     - data:
  #       - key: client_id
  #         objectName: client_id
  #       - key: client_secret
  #         objectName: client_secret
  #       secretName: argocd-secrets-store
  #       type: Opaque
  #       labels:
  #         app.kubernetes.io/part-of: argocd

And here for opensearch:

clusterName: "opensearch-cluster"
nodeGroup: "master"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest
  - data
  - remote_cluster_client

replicas: 3
minimumMasterNodes: 1

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: ""

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch
# such as opensearch.yml and log4j2.properties
config:
  # Values must be YAML literal style scalar / YAML multiline string.
  # <filename>: |
  #   <formatted-value(s)>
  # log4j2.properties: |
  #   status = error
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #   rootLogger.level = debug
  #   rootLogger.appenderRef.console.ref = console
  #   logger.securityjwt.name = com.amazon.dlic.auth.http.jwt
  #   logger.securityjwt.level = trace

  #   status = error
  #
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #
  #   rootLogger.level = info
  #   rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster
    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0
    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # discovery.type: single-node
    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########
  # log4j2.properties:

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

image:
  repository: "opensearchproject/opensearch"
  # override image tag, which is .Chart.AppVersion by default
  tag: "${var.imagetag}"
  pullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additionals labels
labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  requests:
    cpu: "1000m"
    memory: "100Mi"

initResources: {}
#  limits:
#     cpu: "25m"
#     memory: "128Mi"
#  requests:
#     cpu: "25m"
#     memory: "128Mi"

sidecarResources: {}
#   limits:
#     cpu: "25m"
#     memory: "128Mi"
#   requests:
#     cpu: "25m"
#     memory: "128Mi"

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  # Set to false to disable the `fsgroup-volume` initContainer that will update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner.  (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  #
  storageClass: "${var.storageClass}"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers: []
  # - name: do-somethings
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "soft"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# This is the pod topology spread constraints
# https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true

protocol: https
httpPort: 9200
transportPort: 9300

service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    dataComplete: true
    data:
      config.yml: |-
        _meta:
          type: "config"
          config_version: "2"
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
            authc:
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                order: 0
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: intern
              saml_auth_domain:
                order: 1
                description: "SAML provider"
                http_enabled: true
                transport_enabled: false
                http_authenticator:
                  type: saml
                  challenge: true
                  config:
                    idp:
                      metadata_file: "/usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml"
                      entity_id: "https://accounts.google.com/o/saml2?idpid=XXXXXX"
                    sp:
                      entity_id: "kibana-saml"
                    kibana_url: "https://example.com"
                    exchange_key: "XXXX"
                    subject_key: NameID
                    roles_key: Role
                authentication_backend:
                  type: noop
      gsuite.xml: |-
        <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://accounts.google.com/o/saml2?idpid=XXXXXXX" validUntil="2027-01-26T23:13:54.000Z">
          <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
            <md:KeyDescriptor use="signing">
              <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                  <ds:X509Certificate>XXXX</ds:X509Certificate>
                </ds:X509Data>
              </ds:KeyInfo>
            </md:KeyDescriptor>
            <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://accounts.google.com/o/saml2/idp?idpid=C018ua8xi"/>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://accounts.google.com/o/saml2/idp?idpid=XXXXXXXX"/>
          </md:IDPSSODescriptor>
        </md:EntityDescriptor>
        
      internal_users.yml: |-
        _meta:
          type: "internalusers"
          config_version: 2
        admin:
          hash: "myhash"
          reserved: true
          backend_roles:
          - "admin"
          description: "Test admin user"

        kibanaserver:
          hash: "myhash"
          reserved: true
          backend_roles:
          - "kibanauser"
          description: "Test OpenSearch Dashboards user"
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-


# How long to wait for opensearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30
readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: 
  ${var.nodeSelector}
tolerations: []

# Enabling this will publically expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx

  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash 
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas1
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'

keystore: []
# To add secrets to the keystore:
#  - secretName: opensearch-encryption-key

networkPolicy:
  create: false
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access OpenSearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## opensearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## opensearch-master-transport-client: "true"

  http:
    enabled: false

# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

## Set optimal sysctl's. This requires privilege. Can be disabled if
## the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
## Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false

## Enable to add 3rd Party / Custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: []
  # - example-fake-plugin

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client_id"
  #               - path: "client_secret"
  #                 objectAlias: "client_secret"
  #     secretObjects:
  #     - data:
  #       - key: client_id
  #         objectName: client_id
  #       - key: client_secret
  #         objectName: client_secret
  #       secretName: argocd-secrets-store
  #       type: Opaque
  #       labels:
  #         app.kubernetes.io/part-of: argocd

Here is the error I have:

opensearch-dashboards-7784f9d847-jtrvm dashboards {"type":"log","@timestamp":"2022-07-18T12:21:23Z","tags":["error","savedobjects-service"],"pid":1,"message":"Unable to retrieve version information from OpenSearch nodes."}
opensearch-dashboards-7784f9d847-jtrvm dashboards {"type":"log","@timestamp":"2022-07-18T12:21:22Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
opensearch-cluster-master-0 opensearch [2022-07-18T12:21:23,534][INFO ][o.o.s.p.PrivilegesEvaluator] [opensearch-cluster-master-0] No cluster-level perm match for User [name=kibanaserver, backend_roles=[kibanauser], requestedTenant=null] Resolved [aliases=[*], allIndices=[*], types=[*], originalRequested=[*], remoteIndices=[]] [Action [cluster:monitor/nodes/info]] [RolesChecked []]. No permissions for [cluster:monitor/nodes/info]
opensearch-dashboards-7784f9d847-jtrvm dashboards {"type":"log","@timestamp":"2022-07-18T12:21:23Z","tags":["error","opensearch","data"],"pid":1,"message":"[security_exception]: no permissions for [cluster:monitor/nodes/info] and User [name=kibanaserver, backend_roles=[kibanauser], requestedTenant=null]"}

@nomopo These errors are expected.
The first appears when OpenSearch is not fully up and OpenSearch Dashboards can’t pull node information from it.
The second is a continuation of the first.
The third and fourth concern the kibanaserver user, which is verified against both basic auth and SAML authentication; that failure is on the SAML side and won’t affect cluster functionality.

The first two should disappear once OpenSearch is initialized; the kibanaserver user errors will remain.

Do you have any other errors in the logs? Do you get a message in OpenSearch logs that the node has been initialized?
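A quick way to check that from outside the pod (sketch; the pod name is taken from the logs above, and the exact wording of the init message can vary between versions):

kubectl logs opensearch-cluster-master-0 | grep -i initialized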

What do you get when you run the command below inside the OpenSearch pod?

curl --insecure -u admin:admin -XGET https://localhost:9200

I managed to make it work with version 1.3.4.

I used the same configuration with 2.1.0, and I get the error from the beginning:

Error: failed parsing SAML config

@nomopo As per the OpenSearch documentation, in v2.x the location of the security config folder has changed to config/opensearch-security.
That’s why the 1.3.x deployment works for you.

I suspect that your custom security config is not applied to the cluster during the deployment.
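One way to verify what was actually loaded (sketch, assuming the demo admin credentials are still active) is to ask the security REST API for the current configuration and look for your saml_auth_domain in the response:

curl --insecure -u admin:admin -XGET https://localhost:9200/_plugins/_security/api/securityconfig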

OK, I’m trying the same config as 1.3.4 again, but in 2.1.0, except I’m changing the following.

From:

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"

and

config:
  idp:
    metadata_file: "/usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml"

To:

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security/securityconfig"

and

config:
  idp:
    metadata_file: "/usr/share/opensearch/config/opensearch-security/securityconfig/gsuite.xml"

So I redeployed with version 2.1.0, and the SAML parse error is happening again.

The config files are in:

[opensearch@opensearch-cluster-master-0 securityconfig]$ ls
action_groups.yml  config.yml  gsuite.xml  internal_users.yml  nodes_dn.yml  roles.yml  roles_mapping.yml  tenants.yml  whitelist.yml
[opensearch@opensearch-cluster-master-0 securityconfig]$ pwd
/usr/share/opensearch/config/opensearch-security/securityconfig

Below are all the logs:

Deploying opensearch-dashboards:

+ opensearch-dashboards-67d9db957c-w7lwf › dashboards
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:36Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"visTypeXy\" is disabled."}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:37Z","tags":["warning","config","deprecation"],"pid":1,"message":"\"cpu.cgroup.path.override\" is deprecated and has been replaced by \"ops.cGroupOverrides.cpuPath\""}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:37Z","tags":["warning","config","deprecation"],"pid":1,"message":"\"cpuacct.cgroup.path.override\" is deprecated and has been replaced by \"ops.cGroupOverrides.cpuAcctPath\""}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:37Z","tags":["warning","config","deprecation"],"pid":1,"message":"It is not recommended to disable xsrf protections for API endpoints via [server.xsrf.whitelist]. Instead, supply the \"osd-xsrf\" header."}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:39Z","tags":["info","plugins-system"],"pid":1,"message":"Setting up [45] plugins: [alertingDashboards,usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,embeddable,legacyExport,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,securityDashboards,reportsDashboards,indexManagementDashboards,anomalyDetectionDashboards,dashboard,visualizations,visTypeVega,visTypeTimeline,timeline,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,ganttChartDashboards,visualize,queryWorkbenchDashboards,notificationsDashboards,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,observabilityDashboards,discover,savedObjectsManagement,bfetch]"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:48Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:49Z","tags":["info","savedobjects-service"],"pid":1,"message":"Starting saved objects migrations"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:49Z","tags":["info","plugins-system"],"pid":1,"message":"Starting [45] plugins: [alertingDashboards,usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,embeddable,legacyExport,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,securityDashboards,reportsDashboards,indexManagementDashboards,anomalyDetectionDashboards,dashboard,visualizations,visTypeVega,visTypeTimeline,timeline,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,ganttChartDashboards,visualize,queryWorkbenchDashboards,notificationsDashboards,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,observabilityDashboards,discover,savedObjectsManagement,bfetch]"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:54Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:12:57Z","tags":["info","http","server","OpenSearchDashboards"],"pid":1,"message":"http server running at http://0.0.0.0:5601"}

Making a request from the browser:

opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"response","@timestamp":"2022-07-25T10:14:31Z","tags":[],"pid":1,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"host":"kibana.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.103.151","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":302,"responseTime":18,"contentLength":9},"message":"GET / 302 18ms - 9.0B"}
opensearch-dashboards-67d9db957c-w7lwf dashboards Error: failed parsing SAML config
opensearch-dashboards-67d9db957c-w7lwf dashboards     at SecurityClient.getSamlHeader (/usr/share/opensearch-dashboards/plugins/securityDashboards/server/backend/opensearch_security_client.ts:176:15)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at processTicksAndRejections (internal/process/task_queues.js:95:5)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at /usr/share/opensearch-dashboards/plugins/securityDashboards/server/auth/types/saml/routes.ts:62:30
opensearch-dashboards-67d9db957c-w7lwf dashboards     at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:163:44)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)
opensearch-dashboards-67d9db957c-w7lwf dashboards     at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"log","@timestamp":"2022-07-25T10:14:32Z","tags":["error","plugins","securityDashboards"],"pid":1,"message":"Failed to get saml header: Error: Error: failed parsing SAML config"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"error","@timestamp":"2022-07-25T10:14:31Z","tags":[],"pid":1,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:143:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:97:19)\n    at HapiResponseAdapter.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:92:17)\n    at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:164:34)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)\n    at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)\n    at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)"},"url":"http://kibana.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","message":"Internal Server Error"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"response","@timestamp":"2022-07-25T10:14:31Z","tags":[],"pid":1,"method":"get","statusCode":500,"req":{"url":"/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","method":"get","headers":{"host":"kibana.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.103.28","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":500,"responseTime":501,"contentLength":9},"message":"GET /auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards 500 501ms - 9.0B"}
opensearch-dashboards-67d9db957c-w7lwf dashboards {"type":"response","@timestamp":"2022-07-25T10:14:32Z","tags":[],"pid":1,"method":"get","statusCode":401,"req":{"url":"/favicon.ico","method":"get","headers":{"host":"kibana.mydomain.com","accept":"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","referer":"https://kibana.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"image","sec-fetch-mode":"no-cors","sec-fetch-site":"same-origin","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.103.178","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","referer":"https://kibana.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards"},"res":{"statusCode":401,"responseTime":2,"contentLength":9},"message":"GET /favicon.ico 401 2ms - 9.0B"}

By the way, I’m also trying OpenID on version 2.1.0, and OpenID is almost working; I can see it trying to reach my Google OAuth. So I don’t think the problem is the file location, because for OpenID it is reading my config files.

@nomopo The docs state config/opensearch-security, not config/opensearch-security/securityconfig.

The files go directly into the opensearch-security folder.
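In chart values terms, that would look something like this sketch (same placeholder filename as above, with the securityconfig suffix dropped from both paths):

securityConfig:
  enabled: true
  # On 2.x the files from securityConfig.config.data land directly in this folder.
  path: "/usr/share/opensearch/config/opensearch-security"

config:
  idp:
    metadata_file: "/usr/share/opensearch/config/opensearch-security/gsuite.xml"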

My bad, I’m trying again.

Hi Pablo,

Guess what? No more "failed parsing SAML config" error!

BUT I have another error now, coming from the SAML config I guess; I don’t get this error with version 1.3.4.

When I go to my OpenSearch Dashboards domain, it pops up Google authentication; then I choose my Gmail account, and it ends in:

{"statusCode":404,"error":"Not Found","message":"Not Found"}

Here are the logs:

opensearch-dashboards-67d9db957c-rdrgx dashboards {"type":"response","@timestamp":"2022-07-26T07:50:03Z","tags":[],"pid":1,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"host":"kibana.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.104.226","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":302,"responseTime":4,"contentLength":9},"message":"GET / 302 4ms - 9.0B"}
opensearch-dashboards-67d9db957c-rdrgx dashboards {"type":"response","@timestamp":"2022-07-26T07:50:03Z","tags":[],"pid":1,"method":"get","statusCode":302,"req":{"url":"/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","method":"get","headers":{"host":"kibana.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.104.69","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":302,"responseTime":25,"contentLength":9},"message":"GET /auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards 302 25ms - 9.0B"}
opensearch-dashboards-67d9db957c-rdrgx dashboards {"type":"response","@timestamp":"2022-07-26T07:50:07Z","tags":[],"pid":1,"method":"post","statusCode":404,"req":{"url":"/_opendistro/_security/saml/acs","method":"post","headers":{"host":"kibana.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","cache-control":"max-age=0","content-type":"application/x-www-form-urlencoded","origin":"https://accounts.google.com","referer":"https://accounts.google.com/","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"cross-site","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","content-length":"5759","connection":"keep-alive"},"remoteAddress":"192.169.104.150","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","referer":"https://accounts.google.com/"},"res":{"statusCode":404,"responseTime":2,"contentLength":9},"message":"POST /_opendistro/_security/saml/acs 404 2ms - 9.0B"}
opensearch-dashboards-67d9db957c-rdrgx dashboards {"type":"response","@timestamp":"2022-07-26T07:50:07Z","tags":[],"pid":1,"method":"get","statusCode":401,"req":{"url":"/favicon.ico","method":"get","headers":{"host":"kibana.mydomain.com","accept":"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","referer":"https://kibana.mydomain.com/_opendistro/_security/saml/acs","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"image","sec-fetch-mode":"no-cors","sec-fetch-site":"same-origin","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.104.246","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","referer":"https://kibana.mydomain.com/_opendistro/_security/saml/acs"},"res":{"statusCode":401,"responseTime":2,"contentLength":9},"message":"GET /favicon.ico 401 2ms - 9.0B"}

This error is because of version 2.1.0; it’s now working with version 2.0.1.

Thanks a lot !

@nomopo I see https://kibana.mydomain.com/_opendistro/_security/saml/acs in your logs.
I haven’t tested yet, but maybe that has changed to https://kibana.mydomain.com/_plugins/_security/saml/acs in 2.1.0.
Some parts of OpenSearch are still in transition from OpenDistro. SAML was one of them.
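A quick way to see which route family a given version actually serves (sketch; the hostname is the placeholder from your logs, and a 404 means the route isn’t registered, while any other status means the handler exists):

curl -s -o /dev/null -w "%{http_code}\n" -X POST https://kibana.mydomain.com/_opendistro/_security/saml/acs
curl -s -o /dev/null -w "%{http_code}\n" -X POST https://kibana.mydomain.com/_plugins/_security/saml/acs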

@nomopo I’ve done some testing, and SAML authentication seems to be broken in 2.1.0.
In 2.1.0, /_opendistro/_security/saml/acs is no longer valid and /_plugins/_security/saml/acs is used instead. That’s why you get HTTP 404 in 2.1.0.
This pull request introduced the change: Replace _opendistro route with _plugins by pc-jedi · Pull Request #895 · opensearch-project/security-dashboards-plugin · GitHub

I’ve found bug 1941 describing this issue.
It looks like the dev team is working on reverting that pull request.
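Until that lands, a possible workaround based on your report above is to pin the image tag to a version where the old route still works, e.g. in the dashboards values (sketch):

image:
  repository: "opensearchproject/opensearch-dashboards"
  tag: "2.0.1"
  pullPolicy: "IfNotPresent"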