Vanilla Security configurations

Greetings! I’ve been tasked with learning a bit about Open Distro and integrating it with Keycloak (OpenID).

I’ve been able to do this with the vanilla certs using the ‘standalone’ version of OSS, and the redirects work, so that’s a plus. The fun begins when I try to do the same thing on Kubernetes using the Helm chart. To rule out any differences, I reused the same configs from the standalone setup on the cluster. I decided not to load the configs from secrets; I opted to put them straight into the Helm chart values so that everything works on deploy with nothing else left to do.
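For reference, the deploy itself is nothing special; something like this, with the chart checked out locally and a release name of my choosing:

# chart path and release name are mine; values.yaml is the file pasted below
helm upgrade --install opendistro-es ./opendistro-es -f values.yaml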

Now I’m at the point where I’m not sure how to get the certs loaded. With Filebeat I was previously able to set a secretMount, so I tried that here; the chart seemed okay with it, but I run into an error saying it can’t find the certs:

java.lang.IllegalStateException: failed to load plugin class [com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin]
Likely root cause: ElasticsearchException[Unable to read /usr/share/elasticsearch/config/esnode.pem (/usr/share/elasticsearch/config/esnode.pem).
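For reference, this is the pairing I’m attempting (pulled from the values below): the secret mounted under config/certs, with the security settings pointing into that directory. Note the path in the stack trace has no certs/ component, which makes me suspect my config isn’t being picked up:

secretMounts:
  - name: fbeat-secrets
    secretName: fbeat-secrets
    path: /usr/share/elasticsearch/config/certs

config:
  opendistro_security.ssl.transport.pemcert_filepath: /usr/share/elasticsearch/config/certs/esnode.pem
  opendistro_security.ssl.transport.pemkey_filepath: /usr/share/elasticsearch/config/certs/esnode-key.pem
  opendistro_security.ssl.transport.pemtrustedcas_filepath: /usr/share/elasticsearch/config/certs/root-ca.pem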

values.yaml:
# Copyright 2019 Viasat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

kibana:
  enabled: true
  image: amazon/opendistro-for-elasticsearch-kibana
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""
  replicas: 1
  port: 5601
  externalPort: 443
  resources: {}
  #  limits:
  #    cpu: 2500m
  #    memory: 2Gi
  #  requests:
  #    cpu: 500m
  #    memory: 512Mi
  readinessProbe: []
  livenessProbe: []
  startupProbe: []

  elasticsearchAccount:
    secret: ""
    keyPassphrase:
      enabled: false

  extraEnvs: []

  extraVolumes: []
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  ssl:
    kibana:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: kibana-crt.pem
      existingCertSecretKeySubPath: kibana-key.pem
      existingCertSecretRootCASubPath: kibana-root-ca.pem
    elasticsearch:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem



  configDirectory: "/usr/share/kibana/config"
  certsDirectory: "/usr/share/kibana/certs"

  ingress:
    ## Set to true to enable ingress record generation
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    #  kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    hosts:
      - kibana.blaklabz.io
    tls:
      - secretName: blaklabz-io-tls
        hosts:
          - kibana.blaklabz.io

  service:
    type: ClusterIP
    annotations: {}
    config: |-
      elasticsearch.hosts: "https://opendistro-es-client-service:9200"
      elasticsearch.ssl.verificationMode: none
      elasticsearch.username: kibanaserver
      elasticsearch.password: kibanaserver
      elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

      opendistro_security.multitenancy.enabled: true
      opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
      opendistro_security.readonly_mode.roles: ["kibana_read_only"]

      opendistro_security.auth.type: "openid"
      opendistro_security.openid.connect_url: "https://keycloak.blaklabz.io/auth/realms/master/.well-known/openid-configuration"
      opendistro_security.openid.client_id: "kibana-sso"
      opendistro_security.openid.client_secret: "5cef8fb0-c160-44ca-a12a-69537468fe63"


    ## Kibana TLS Config
    #server.ssl.enabled: true
    #server.ssl.key: /usr/share/kibana/certs/kibana-key.pem
    #server.ssl.certificate: /usr/share/kibana/certs/kibana-crt.pem
    #elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/kibana-root-ca.pem

    # opendistro_security.cookie.secure: true
    # opendistro_security.cookie.password: ${COOKIE_PASS}



  ## Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  nodeSelector: {}

  ## Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []

  affinity: {}

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

  podAnnotations: {}


global:
  clusterName: elasticsearch

  psp:
    create: true

  rbac:
    enabled: true

  # Optionally override the docker registry to use for images
  imageRegistry: docker.io

  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  # imagePullSecrets:
  #   - myRegistryKeySecretName


elasticsearch:
  ## Used when deploying hot/warm architecture. Allows second aliased deployment to find cluster.
  ## Default {{ template opendistro-es.fullname }}-discovery.
  discoveryOverride: ""
  securityConfig:
    enabled: true
    path: "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig"
    #actionGroupsSecret:
    #configSecret:
    #internalUsersSecret:
    #rolesSecret:
    #rolesMappingSecret:
    #tenantsSecret:
    #The following option simplifies securityConfig by using a single secret and specifying the respective secrets in the corresponding files instead of creating different secrets for config,internal users, roles, roles mapping and tenants
    #Note that this is an alternative to the above secrets and shouldn't be used if the above secrets are used
    config:
      securityConfigSecret:
      data:
        config.yml: |-
          _meta:
              type: "config"
              config_version: 2
          config:
            dynamic:
              # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
              # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
              #filtered_alias_mode: warn
              #do_not_fail_on_forbidden: false
              #kibana:
              # Kibana multitenancy
              #multitenancy_enabled: true
              #server_username: kibanaserver
              #index: '.kibana'
              http:
                anonymous_auth_enabled: false
                xff:
                  enabled: false
                  internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
                  #internalProxies: '.*' # trust all internal proxies, regex pattern
                  #remoteIpHeader:  'x-forwarded-for'
                  ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
                  ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
                  ###### and here https://tools.ietf.org/html/rfc7239
                  ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
              authc:
                openid_auth_domain:
                  http_enabled: true
                  transport_enabled: true
                  order: 1
                  http_authenticator:
                    type: openid
                    challenge: false
                    config:
                      subject_key: preferred_username
                      roles_key: roles
                      openid_connect_url: http://keycloak.blaklabz.io/auth/realms/master/.well-known/openid-configuration
                  authentication_backend:
                    type: noop
                kerberos_auth_domain:
                  http_enabled: false
                  transport_enabled: false
                  order: 6
                  http_authenticator:
                    type: kerberos
                    challenge: true
                    config:
                      # If true a lot of kerberos/security related debugging output will be logged to standard out
                      krb_debug: false
                      # If true then the realm will be stripped from the user name
                      strip_realm_from_principal: true
                  authentication_backend:
                    type: noop
                basic_internal_auth_domain:
                  description: "Authenticate via HTTP Basic against internal users database"
                  http_enabled: true
                  transport_enabled: true
                  order: 4
                  http_authenticator:
                    type: basic
                    challenge: true
                  authentication_backend:
                    type: intern
                proxy_auth_domain:
                  description: "Authenticate via proxy"
                  http_enabled: false
                  transport_enabled: false
                  order: 3
                  http_authenticator:
                    type: proxy
                    challenge: false
                    config:
                      user_header: "x-proxy-user"
                      roles_header: "x-proxy-roles"
                  authentication_backend:
                    type: noop
                jwt_auth_domain:
                  description: "Authenticate via Json Web Token"
                  http_enabled: false
                  transport_enabled: false
                  order: 0
                  http_authenticator:
                    type: jwt
                    challenge: false
                    config:
                      signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
                      jwt_header: "Authorization"
                      jwt_url_parameter: null
                      roles_key: null
                      subject_key: null
                  authentication_backend:
                    type: noop
                clientcert_auth_domain:
                  description: "Authenticate via SSL client certificates"
                  http_enabled: false
                  transport_enabled: false
                  order: 2
                  http_authenticator:
                    type: clientcert
                    config:
                      username_attribute: cn #optional, if omitted DN becomes username
                    challenge: false
                  authentication_backend:
                    type: noop
                ldap:
                  description: "Authenticate via LDAP or Active Directory"
                  http_enabled: false
                  transport_enabled: false
                  order: 5
                  http_authenticator:
                    type: basic
                    challenge: false
                  authentication_backend:
                    # LDAP authentication backend (authenticate users against a LDAP or Active Directory)
                    type: ldap
                    config:
                      # enable ldaps
                      enable_ssl: false
                      # enable start tls, enable_ssl should be false
                      enable_start_tls: false
                      # send client certificate
                      # verify ldap hostname
                      verify_hostnames: true
                      hosts:
                      - localhost:8389
                      bind_dn: null
                      password: null
                      userbase: 'ou=people,dc=example,dc=com'
                      # Filter to search for users (currently in the whole subtree beneath userbase)
                      # {0} is substituted with the username
                      usersearch: '(sAMAccountName={0})'
                      # Use this attribute from the user as username (if not set then DN is used)
                      username_attribute: null
              authz:
                roles_from_myldap:
                  description: "Authorize via LDAP or Active Directory"
                  http_enabled: false
                  transport_enabled: false
                  authorization_backend:
                    # LDAP authorization backend (gather roles from a LDAP or Active Directory, you have to configure the above LDAP authentication backend settings too)
                    type: ldap
                    config:
                      # enable ldaps
                      enable_ssl: false
                      # enable start tls, enable_ssl should be false
                      enable_start_tls: false
                      # send client certificate
                      enable_ssl_client_auth: false
                      # verify ldap hostname
                      verify_hostnames: true
                      hosts:
                      - localhost:8389
                      bind_dn: null
                      password: null
                      rolebase: 'ou=groups,dc=example,dc=com'
                      # Filter to search for roles (currently in the whole subtree beneath rolebase)
                      # {0} is substituted with the DN of the user
                      # {1} is substituted with the username
                      # {2} is substituted with an attribute value from user's directory entry, of the authenticated user. Use userroleattribute to specify the name of the attribute
                      rolesearch: '(member={0})'
                      # Specify the name of the attribute which value should be substituted with {2} above
                      userroleattribute: null
                      # Roles as an attribute of the user entry
                      userrolename: disabled
                      #userrolename: memberOf
                      # The attribute in a role entry containing the name of that role, Default is "name".
                      # Can also be "dn" to use the full DN as rolename.
                      rolename: cn
                      # Resolve nested roles transitive (roles which are members of other roles and so on ...)
                      resolve_nested_roles: true
                      userbase: 'ou=people,dc=example,dc=com'
                      # Filter to search for users (currently in the whole subtree beneath userbase)
                      # {0} is substituted with the username
                      usersearch: '(uid={0})'
                      # Skip users matching a user name, a wildcard or a regex pattern
                      #skip_users:
                      #  - 'cn=Michael Jackson,ou*people,o=TEST'
                      #  - '/\S*/'

        internal_users.yml: |-
          admin:
            hash: "$2a$12$RUFq1R6u7OXvb16PyCiomO7bwQpQq2cebjwimbLTKNimngZSuYXR2"
            reserved: true
            backend_roles:
            - "admin"
            description: "admin user"

        rolesMapping.yml: |-
          all_access:
            reserved: true
            hidden: false
            backend_roles:
            - "admin"
            - "ou=groups,dc=blaklabz,dc=io"
            hosts: []
            users: []
            and_backend_roles: []
            description: "this gonna work?"

  ## securityContext to apply to the pod. Allows for running as non-root
  securityContextCustom: {}
  #  fsGroup: 1000
  #  runAsUser: 1000
  #  runAsGroup: 1000

  extraEnvs: []

  extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']


  initContainer:
    image: busybox
    imageTag: 1.27.2

  ## Set optimal sysctl's. This requires privilege. Can be disabled if
  ## the system has already been preconfigured.
  sysctl:
    enabled: true

  ## Give SYS_CHROOT cap to ES pods. This might not be neccesary
  sys_chroot:
    enabled: true

  ## init container to chown the mount volume. not neccesary if setting a
  ## fsGroup in the securityContext
  fixmount:
    enabled: true

  ssl:
    ## TLS is mandatory for the transport layer and can not be disabled
    transport:
      existingCertSecret:
      existingCertSecretCertSubPath: elk-transport-crt.pem
      existingCertSecretKeySubPath: elk-transport-key.pem
      existingCertSecretRootCASubPath: elk-transport-root-ca.pem
    rest:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem
    admin:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: admin-crt.pem
      existingCertSecretKeySubPath: admin-key.pem
      existingCertSecretRootCASubPath: admin-root-ca.pem

  master:
    enabled: true
    replicas: 1
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      - topologyKey: "kubernetes.io/hostname"
    #        labelSelector:
    #          matchLabels:
    #            role: master
    podAnnotations: {}

    extraInitContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

    extraContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

  data:
    enabled: true
    ## Enables dedicated statefulset for data. Otherwise master nodes as data storage
    dedicatedPod:
      enabled: true
    replicas: 1
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: data
    podAnnotations: {}

  client:
    enabled: true
    ## Enables dedicated deployment for client/ingest. Otherwise master nodes as client/ingest
    dedicatedPod:
      enabled: true
    service:
      type: ClusterIP
      annotations: {}
        # # Defined ELB backend protocol as HTTPS to allow connection to Elasticsearch API
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

        # # ARN of ACM certificate registered to the deployed ELB for handling connections over TLS
        # # ACM certificate should be issued to the DNS hostname defined earlier (elk.sec.example.com)
        # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111222333444:certificate/c69f6022-b24f-43d9-b9c8-dfe288d9443d"
        # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
        # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

        # # Annotation to create internal only ELB
        # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    replicas: 1
    javaOpts: "-Xms512m -Xmx512m"
    ingress:
      ## Set to true to enable ingress record generation
      enabled: false
      annotations: {}
      #  kubernetes.io/ingress.class: nginx
      #  kubernetes.io/tls-acme: "true"
      #  # Depending on your Ingress Controller you may need to set one of the two below annotations to have NGINX call the backend using HTTPS
      #  nginx.org/ssl-services:"{{ template "opendistro-es.fullname" . }}-client-service"
      #  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      labels: {}
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local
    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Weighted anti-affinity to disallow deploying client node to the same worker node as master node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: client
    podAnnotations: {}

  secretMounts:
    - name: fbeat-secrets
      secretName: fbeat-secrets
      path: /usr/share/elasticsearch/config/certs

  config:

    ######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
    # WARNING: revise all the lines below before you go into production
    opendistro_security.ssl.transport.pemcert_filepath: /usr/share/elasticsearch/config/certs/esnode.pem
    opendistro_security.ssl.transport.pemkey_filepath: /usr/share/elasticsearch/config/certs/esnode-key.pem
    opendistro_security.ssl.transport.pemtrustedcas_filepath: /usr/share/elasticsearch/config/certs/root-ca.pem
    opendistro_security.ssl.transport.enforce_hostname_verification: false
    opendistro_security.ssl.http.enabled: false
    opendistro_security.ssl.http.pemcert_filepath: /usr/share/elasticsearch/config/certs/esnode.pem
    opendistro_security.ssl.http.pemkey_filepath: /usr/share/elasticsearch/config/certs/esnode-key.pem
    opendistro_security.ssl.http.pemtrustedcas_filepath: /usr/share/elasticsearch/config/certs/root-ca.pem
    opendistro_security.allow_unsafe_democertificates: true
    opendistro_security.allow_default_init_securityindex: true
    opendistro_security.authcz.admin_dn:
      - CN=kirk,OU=client,O=client,L=test, C=de

    opendistro_security.audit.type: internal_elasticsearch
    opendistro_security.enable_snapshot_restore_privilege: true
    opendistro_security.check_snapshot_restore_write_privileges: true
    opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
    opendistro_security.system_indices.enabled: true
    opendistro_security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"]
    cluster.routing.allocation.disk.threshold_enabled: false
    node.max_local_storage_nodes: 3
    ######## End OpenDistro for Elasticsearch Security Demo Configuration ########
    ## Example Config
    # opendistro_security.audit.type: internal_elasticsearch
    # opendistro_security.enable_snapshot_restore_privilege: true
    # opendistro_security.check_snapshot_restore_write_privileges: true
    # cluster.routing.allocation.disk.threshold_enabled: false
    # opendistro_security.audit.config.disabled_rest_categories: NONE
    # opendistro_security.audit.config.disabled_transport_categories: NONE
    # cluster:
    #   name: ${CLUSTER_NAME}
    # node:
    #   master: ${NODE_MASTER}
    #   data: ${NODE_DATA}
    #   name: ${NODE_NAME}
    #   ingest: ${NODE_INGEST}
    #   max_local_storage_nodes: 1
    #   attr.box_type: hot

    # processors: ${PROCESSORS:1}

    # network.host: ${NETWORK_HOST}

    # thread_pool.bulk.queue_size: 800

    # path:
    #   data: /usr/share/elasticsearch/data
    #   logs: /usr/share/elasticsearch/logs

    # http:
    #   enabled: ${HTTP_ENABLE}
    #   compression: true

    # discovery:
    #   zen:
    #     ping.unicast.hosts: ${DISCOVERY_SERVICE}
    #     minimum_master_nodes: ${NUMBER_OF_MASTERS}

    # # TLS Configuration Transport Layer
    #opendistro_security.ssl.transport.pemcert_filepath: elk-transport-crt.pem
    #opendistro_security.ssl.transport.pemkey_filepath: elk-transport-key.pem
    #opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    #opendistro_security.ssl.transport.enforce_hostname_verification: false

    # # TLS Configuration REST Layer
    # opendistro_security.ssl.http.enabled: true
    # opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    # opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    # opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

  log4jConfig: ""

  loggingConfig:
    ## Default config
    ## you can override this using by setting a system property, for example -Des.logger.level=DEBUG
    es.logger.level: DEBUG
    rootLogger: ${es.logger.level}, console
    logger:
      ## log action execution errors for easier debugging
      action: DEBUG
      ## reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN
    appender:
      console:
        type: console
        layout:
          type: consolePattern
          conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  transportKeyPassphrase:
    enabled: false
    passPhrase:

  sslKeyPassphrase:
    enabled: false
    passPhrase:

  maxMapCount: 262144

  image: amazon/opendistro-for-elasticsearch
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""

  configDirectory: /usr/share/elasticsearch/config

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:


nameOverride: ""
fullnameOverride: ""

Okay, so I figured this bit out and I’m now past this error. I have no actual fix, but after a fresh install and moving over the needed lines, I got to the point where it’s now being rejected by the IDP. So… progress…

@blaklabz Are you able to verify that all the configs have been loaded into the index, as before?

You can exec into any of the containers and retrieve the configuration using the command below:

/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/admin.pem -key /usr/share/elasticsearch/config/admin-key.pem -h localhost -r
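For example, wrapped in kubectl exec from outside (the pod name is a placeholder; use any Elasticsearch pod that has the admin certs mounted):

kubectl exec -it opendistro-es-master-0 -- \
  /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
  -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ \
  -icl -nhnv \
  -cacert /usr/share/elasticsearch/config/root-ca.pem \
  -cert /usr/share/elasticsearch/config/admin.pem \
  -key /usr/share/elasticsearch/config/admin-key.pem \
  -h localhost -r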

What error are you getting from the IDP?

Hey, yeah, I’m able to drop into the container and see the default keys. This is the error I’m getting currently:

["error","plugins","opendistroSecurityKibana"],"pid":1,"message":"{ Error: getaddrinfo
ENOTFOUND keycloak.blaklabz.io keycloak.blaklabz.io:443\n    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)\n  errno: 'ENOTFOUND'
,\n  code: 'ENOTFOUND',\n  syscall: 'getaddrinfo',\n  hostname: 'keycloak.blaklabz.io',\n  host: 'keycloak.blaklabz.io',\n  port: 443,\n  trace:
\n   [ { method: 'GET',\n       url:\n        'https://keycloak.blaklabz.io/auth/realms/master/.well-known/openid-configuration' } ],\n  isBoom:
 true,\n  isServer: true,\n  data: null,\n  output:\n   { statusCode: 502,\n     payload:\n      { message:\n         'Client request error: get
addrinfo ENOTFOUND keycloak.blaklabz.io keycloak.blaklabz.io:443',\n        statusCode: 502,\n        error: 'Bad Gateway' },\n     headers: {}

I’m also able to get to the endpoint from my machine without a problem and see the metadata. Seems like a cert issue…?
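(For what it’s worth, getaddrinfo ENOTFOUND means the hostname lookup itself failed inside the pod; one way to separate DNS from TLS is to test from inside the Kibana pod. Something like this, assuming the image ships nslookup/curl, with the pod name being whatever kubectl get pods shows:)

kubectl exec -it <kibana-pod> -- sh -c \
  'nslookup keycloak.blaklabz.io; curl -vk https://keycloak.blaklabz.io/auth/realms/master/.well-known/openid-configuration'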

A couple of questions: did you manage to verify that the config.yml (with the IDP information) was loaded into the security index?
Are you using a self-signed cert for Keycloak? If so, it will need to be “whitelisted” in both kibana.yml and config.yml.

kibana.yml:
opendistro_security.openid.root_ca:

config.yml:
openid_connect_idp.pemtrustedcas_filepath:
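For example, assuming the Keycloak root CA gets mounted into each container, the filled-in entries would look roughly like this (the mount paths are placeholders for wherever you put the CA):

kibana.yml:
opendistro_security.openid.root_ca: /usr/share/kibana/certs/keycloak-root-ca.pem

config.yml (inside the openid http_authenticator’s config block):
openid_connect_idp.enable_ssl: true
openid_connect_idp.pemtrustedcas_filepath: /usr/share/elasticsearch/config/certs/keycloak-root-ca.pem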

But you mentioned you had this working outside k8s?

I want to say yes to the first question, but how can I confirm? Yes, I’m using a self-signed cert for Keycloak. And yeah, outside of my k3s cluster, on my Ubuntu servers, this worked without a hitch with vanilla settings, using the same configs.
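(For the whitelisting, I can at least extract the PEM that Keycloak presents with openssl; the last certificate in the chain should be the CA to trust:)

openssl s_client -connect keycloak.blaklabz.io:443 -showcerts </dev/null 2>/dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > keycloak-ca.pem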

Here is my current values.yaml:

# Copyright 2019 Viasat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

kibana:
  enabled: true
  image: amazon/opendistro-for-elasticsearch-kibana
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""
  replicas: 1
  port: 5601
  externalPort: 443
  resources: {}
  #  limits:
  #    cpu: 2500m
  #    memory: 2Gi
  #  requests:
  #    cpu: 500m
  #    memory: 512Mi
  readinessProbe: []
  livenessProbe: []
  startupProbe: []

  elasticsearchAccount:
    secret: ""
    keyPassphrase:
      enabled: false

  extraEnvs: []

  extraVolumes: []
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  ssl:
    kibana:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: kibana-crt.pem
      existingCertSecretKeySubPath: kibana-key.pem
      existingCertSecretRootCASubPath: kibana-root-ca.pem
    elasticsearch:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem



  configDirectory: "/usr/share/kibana/config"
  certsDirectory: "/usr/share/kibana/certs"

  ingress:
    ## Set to true to enable ingress record generation
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    #  kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    hosts:
      - kibana.blaklabz.io
    tls:
      - secretName: blaklabz-io-tls
        hosts:
          - kibana.blaklabz.io

  service:
    type: ClusterIP
    annotations: {}

  config:
    elasticsearch.hosts: "https://opendistro-es-client-service:9200"
    elasticsearch.ssl.verificationMode: none
    elasticsearch.username: kibanaserver
    elasticsearch.password: kibanaserver
    elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

    opendistro_security.multitenancy.enabled: true
    opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
    opendistro_security.readonly_mode.roles: ["kibana_read_only"]

    opendistro_security.auth.type: "openid"
    opendistro_security.openid.connect_url: "https://keycloak.blaklabz.io/auth/realms/master/.well-known/openid-configuration"
    opendistro_security.openid.client_id: "kibana-sso"
    opendistro_security.openid.client_secret: "5cef8fb0-c160-44ca-a12a-69537468fe63"
    opendistro_security.openid.verify_hostnames: false

    ## Default Kibana configuration from kibana-docker.
    # server.name: kibana
    # server.host: "0"

    ## Replace with Elasticsearch DNS name picked during Service deployment
    # elasticsearch.hosts: ${ELASTIC_URL}
    # elasticsearch.requestTimeout: 360000

    ## Kibana TLS Config
    #server.ssl.enabled: true
    #server.ssl.key: /usr/share/kibana/certs/kibana-key.pem
    #server.ssl.certificate: /usr/share/kibana/certs/kibana-crt.pem
    #elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/kibana-root-ca.pem

    # opendistro_security.cookie.secure: true
    # opendistro_security.cookie.password: ${COOKIE_PASS}



  ## Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  nodeSelector: {}

  ## Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []

  affinity: {}

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

  podAnnotations: {}


global:
  clusterName: elasticsearch

  psp:
    create: true

  rbac:
    enabled: true

  # Optionally override the docker registry to use for images
  imageRegistry: docker.io

  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  # imagePullSecrets:
  #   - myRegistryKeySecretName


elasticsearch:
  ## Used when deploying hot/warm architecture. Allows second aliased deployment to find cluster.
  ## Default {{ template opendistro-es.fullname }}-discovery.
  discoveryOverride: ""
  securityConfig:
    enabled: true
    path: "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig"
    actionGroupsSecret:
    configSecret:
    internalUsersSecret:
    rolesSecret:
    rolesMappingSecret:
    tenantsSecret:
    #The following option simplifies securityConfig by using a single secret and specifying the respective secrets in the corresponding files instead of creating different secrets for config,internal users, roles, roles mapping and tenants
    #Note that this is an alternative to the above secrets and shouldn't be used if the above secrets are used
    config:
      securityConfigSecret:
      data:
        config.yml: |-
          _meta:
              type: "config"
              config_version: 2
          config:
            dynamic:
              # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
              # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
              #filtered_alias_mode: warn
              #do_not_fail_on_forbidden: false
              #kibana:
              # Kibana multitenancy
              #multitenancy_enabled: true
              #server_username: kibanaserver
              #index: '.kibana'
              http:
                anonymous_auth_enabled: false
                xff:
                  enabled: false
                  internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
                  #internalProxies: '.*' # trust all internal proxies, regex pattern
                  #remoteIpHeader:  'x-forwarded-for'
                  ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
                  ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
                  ###### and here https://tools.ietf.org/html/rfc7239
                  ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
              authc:
                openid_auth_domain:
                  http_enabled: true
                  transport_enabled: true
                  order: 1
                  http_authenticator:
                    type: openid
                    challenge: false
                    config:
                      subject_key: preferred_username
                      roles_key: roles
                      openid_connect_url: https://keycloak.blaklabz.io/auth/realms/master/.well-known/openid-configuration
                  authentication_backend:
                    type: noop
                kerberos_auth_domain:
                  http_enabled: false
                  transport_enabled: false
                  order: 6
                  http_authenticator:
                    type: kerberos
                    challenge: true
                    config:
                      # If true a lot of kerberos/security related debugging output will be logged to standard out
                      krb_debug: false
                      # If true then the realm will be stripped from the user name
                      strip_realm_from_principal: true
                  authentication_backend:
                    type: noop
                basic_internal_auth_domain:
                  description: "Authenticate via HTTP Basic against internal users database"
                  http_enabled: true
                  transport_enabled: true
                  order: 4
                  http_authenticator:
                    type: basic
                    challenge: true
                  authentication_backend:
                    type: intern
                proxy_auth_domain:
                  description: "Authenticate via proxy"
                  http_enabled: false
                  transport_enabled: false
                  order: 3
                  http_authenticator:
                    type: proxy
                    challenge: false
                    config:
                      user_header: "x-proxy-user"
                      roles_header: "x-proxy-roles"
                  authentication_backend:
                    type: noop
                jwt_auth_domain:
                  description: "Authenticate via Json Web Token"
                  http_enabled: false
                  transport_enabled: false
                  order: 0
                  http_authenticator:
                    type: jwt
                    challenge: false
                    config:
                      signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
                      jwt_header: "Authorization"
                      jwt_url_parameter: null
                      roles_key: null
                      subject_key: null
                  authentication_backend:
                    type: noop
                clientcert_auth_domain:
                  description: "Authenticate via SSL client certificates"
                  http_enabled: false
                  transport_enabled: false
                  order: 2
                  http_authenticator:
                    type: clientcert
                    config:
                      username_attribute: cn #optional, if omitted DN becomes username
                    challenge: false
                  authentication_backend:
                    type: noop
                ldap:
                  description: "Authenticate via LDAP or Active Directory"
                  http_enabled: false
                  transport_enabled: false
                  order: 5
                  http_authenticator:
                    type: basic
                    challenge: false
                  authentication_backend:
                    # LDAP authentication backend (authenticate users against a LDAP or Active Directory)
                    type: ldap
                    config:
                      # enable ldaps
                      enable_ssl: false
                      # enable start tls, enable_ssl should be false
                      enable_start_tls: false
                      # send client certificate
                      # verify ldap hostname
                      verify_hostnames: true
                      hosts:
                      - localhost:8389
                      bind_dn: null
                      password: null
                      userbase: 'ou=people,dc=example,dc=com'
                      # Filter to search for users (currently in the whole subtree beneath userbase)
                      # {0} is substituted with the username
                      usersearch: '(sAMAccountName={0})'
                      # Use this attribute from the user as username (if not set then DN is used)
                      username_attribute: null
              authz:
                roles_from_myldap:
                  description: "Authorize via LDAP or Active Directory"
                  http_enabled: false
                  transport_enabled: false
                  authorization_backend:
                    # LDAP authorization backend (gather roles from a LDAP or Active Directory, you have to configure the above LDAP authentication backend settings too)
                    type: ldap
                    config:
                      # enable ldaps
                      enable_ssl: false
                      # enable start tls, enable_ssl should be false
                      enable_start_tls: false
                      # send client certificate
                      enable_ssl_client_auth: false
                      # verify ldap hostname
                      verify_hostnames: true
                      hosts:
                      - localhost:8389
                      bind_dn: null
                      password: null
                      rolebase: 'ou=groups,dc=example,dc=com'
                      # Filter to search for roles (currently in the whole subtree beneath rolebase)
                      # {0} is substituted with the DN of the user
                      # {1} is substituted with the username
                      # {2} is substituted with an attribute value from user's directory entry, of the authenticated user. Use userroleattribute to specify the name of the attribute
                      rolesearch: '(member={0})'
                      # Specify the name of the attribute which value should be substituted with {2} above
                      userroleattribute: null
                      # Roles as an attribute of the user entry
                      userrolename: disabled
                      #userrolename: memberOf
                      # The attribute in a role entry containing the name of that role, Default is "name".
                      # Can also be "dn" to use the full DN as rolename.
                      rolename: cn
                      # Resolve nested roles transitive (roles which are members of other roles and so on ...)
                      resolve_nested_roles: true
                      userbase: 'ou=people,dc=example,dc=com'
                      # Filter to search for users (currently in the whole subtree beneath userbase)
                      # {0} is substituted with the username
                      usersearch: '(uid={0})'
                      # Skip users matching a user name, a wildcard or a regex pattern
                      #skip_users:
                      #  - 'cn=Michael Jackson,ou*people,o=TEST'
                      #  - '/\S*/'
        # config.yml: |-
        # internal_users.yml: |-
        # roles.yml: |-
        # rolesMapping.yml: |-
        # tenants.yml: |-


  ## securityContext to apply to the pod. Allows for running as non-root
  securityContextCustom: {}
  #  fsGroup: 1000
  #  runAsUser: 1000
  #  runAsGroup: 1000

  extraEnvs: []

  extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraVolumes: []
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  initContainer:
    image: busybox
    imageTag: 1.27.2

  ## Set optimal sysctl's. This requires privilege. Can be disabled if
  ## the system has already been preconfigured.
  sysctl:
    enabled: true

  ## Give SYS_CHROOT cap to ES pods. This might not be neccesary
  sys_chroot:
    enabled: true

  ## init container to chown the mount volume. not neccesary if setting a
  ## fsGroup in the securityContext
  fixmount:
    enabled: true

  ssl:
    ## TLS is mandatory for the transport layer and can not be disabled
    transport:
      existingCertSecret:
      existingCertSecretCertSubPath: elk-transport-crt.pem
      existingCertSecretKeySubPath: elk-transport-key.pem
      existingCertSecretRootCASubPath: elk-transport-root-ca.pem
    rest:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem
    admin:
      enabled: false
      existingCertSecret:
      existingCertSecretCertSubPath: admin-crt.pem
      existingCertSecretKeySubPath: admin-key.pem
      existingCertSecretRootCASubPath: admin-root-ca.pem

  master:
    enabled: true
    replicas: 1
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      - topologyKey: "kubernetes.io/hostname"
    #        labelSelector:
    #          matchLabels:
    #            role: master
    podAnnotations: {}

    extraInitContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

    extraContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

  data:
    enabled: true
    ## Enables dedicated statefulset for data. Otherwise master nodes as data storage
    dedicatedPod:
      enabled: true
    replicas: 1
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: data
    podAnnotations: {}

  client:
    enabled: true
    ## Enables dedicated deployment for client/ingest. Otherwise master nodes as client/ingest
    dedicatedPod:
      enabled: true
    service:
      type: ClusterIP
      annotations: {}
        # # Defined ELB backend protocol as HTTPS to allow connection to Elasticsearch API
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

        # # ARN of ACM certificate registered to the deployed ELB for handling connections over TLS
        # # ACM certificate should be issued to the DNS hostname defined earlier (elk.sec.example.com)
        # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111222333444:certificate/c69f6022-b24f-43d9-b9c8-dfe288d9443d"
        # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
        # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

        # # Annotation to create internal only ELB
        # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    replicas: 1
    javaOpts: "-Xms512m -Xmx512m"
    ingress:
      ## Set to true to enable ingress record generation
      enabled: false
      annotations: {}
      #  kubernetes.io/ingress.class: nginx
      #  kubernetes.io/tls-acme: "true"
      #  # Depending on your Ingress Controller you may need to set one of the two below annotations to have NGINX call the backend using HTTPS
      #  nginx.org/ssl-services:"{{ template "opendistro-es.fullname" . }}-client-service"
      #  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      labels: {}
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local
    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Weighted anti-affinity to disallow deploying client node to the same worker node as master node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: client
    podAnnotations: {}

  config: {}

    ## Example Config
    # opendistro_security.allow_unsafe_democertificates: false
    # opendistro_security.allow_default_init_securityindex: true
    # opendistro_security.audit.type: internal_elasticsearch
    # opendistro_security.enable_snapshot_restore_privilege: true
    # opendistro_security.check_snapshot_restore_write_privileges: true
    # cluster.routing.allocation.disk.threshold_enabled: false
    # opendistro_security.audit.config.disabled_rest_categories: NONE
    # opendistro_security.audit.config.disabled_transport_categories: NONE
    # cluster:
    #   name: ${CLUSTER_NAME}
    # node:
    #   master: ${NODE_MASTER}
    #   data: ${NODE_DATA}
    #   name: ${NODE_NAME}
    #   ingest: ${NODE_INGEST}
    #   max_local_storage_nodes: 1
    #   attr.box_type: hot

    # processors: ${PROCESSORS:1}

    # network.host: ${NETWORK_HOST}

    # thread_pool.bulk.queue_size: 800

    # path:
    #   data: /usr/share/elasticsearch/data
    #   logs: /usr/share/elasticsearch/logs

    # http:
    #   enabled: ${HTTP_ENABLE}
    #   compression: true

    # discovery:
    #   zen:
    #     ping.unicast.hosts: ${DISCOVERY_SERVICE}
    #     minimum_master_nodes: ${NUMBER_OF_MASTERS}

    # # TLS Configuration Transport Layer
    # opendistro_security.ssl.transport.pemcert_filepath: elk-transport-crt.pem
    # opendistro_security.ssl.transport.pemkey_filepath: elk-transport-key.pem
    # opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    # opendistro_security.ssl.transport.enforce_hostname_verification: false

    # # TLS Configuration REST Layer
    # opendistro_security.ssl.http.enabled: true
    # opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    # opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    # opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

  log4jConfig: ""

  loggingConfig:
    ## Default config
    ## you can override this using by setting a system property, for example -Des.logger.level=DEBUG
    es.logger.level: INFO
    rootLogger: ${es.logger.level}, console
    logger:
      ## log action execution errors for easier debugging
      action: DEBUG
      ## reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN
    appender:
      console:
        type: console
        layout:
          type: consolePattern
          conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  transportKeyPassphrase:
    enabled: false
    passPhrase:

  sslKeyPassphrase:
    enabled: false
    passPhrase:

  maxMapCount: 262144

  image: amazon/opendistro-for-elasticsearch
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""

  configDirectory: /usr/share/elasticsearch/config

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:


nameOverride: ""
fullnameOverride: ""

You can verify by connecting to a node and retrieving the config via the command I sent; if the retrieved config.yml matches the relevant section in your values.yaml, then all is good on that end.

It’s strange that it worked previously without whitelisting the self-signed certificate.
If the above config does match, you can try adding the whitelisting and see if that works.

The config I have in config.yml for Keycloak is:

config:
  subject_key: "preferred_name"
  roles_key: "roles"
  openid_connect_url: "https://..."
  openid_connect_idp.pemtrustedcas_filepath: ...
  openid_connect_idp.enable_ssl: true

Don’t forget the entry in kibana.yml.
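Also keep in mind that edits to config.yml only take effect once they’re loaded into the security index; re-running securityadmin against just that file does it, roughly like this (same certs as the earlier command):

/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
  -f /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml \
  -t config -icl -nhnv \
  -cacert /usr/share/elasticsearch/config/root-ca.pem \
  -cert /usr/share/elasticsearch/config/admin.pem \
  -key /usr/share/elasticsearch/config/admin-key.pem \
  -h localhost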

Hey! Thanks for your help, BTW! So with the command above, yep, I was able to confirm that my config is in there. As for your question about the standalones: no certs were used there, so I can see that being why it didn’t matter.

Two questions:

  1. How/where would I get the certs from Keycloak into the container without copying them in after the fact (assuming I can)? See the sketch after this list.

  2. How would I set the whitelist? Would I just put in the domain of my Keycloak server, or the TLS cert that I’m using?
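For question 1, the best I can come up with is reusing the chart’s own mechanisms from the values above (the secret name and mount paths here are mine): create a secret from the Keycloak CA, mount it on the Elasticsearch side via secretMounts, and on the Kibana side via extraVolumes/extraVolumeMounts.

kubectl create secret generic keycloak-ca --from-file=keycloak-root-ca.pem

elasticsearch:
  secretMounts:
    - name: keycloak-ca
      secretName: keycloak-ca
      path: /usr/share/elasticsearch/config/keycloak

kibana:
  extraVolumes:
    - name: keycloak-ca
      secret:
        secretName: keycloak-ca
  extraVolumeMounts:
    - name: keycloak-ca
      mountPath: /usr/share/kibana/certs/keycloak
      readOnly: true

And since both whitelist settings are file paths (root_ca and pemtrustedcas_filepath), I’m guessing they would then point at the mounted PEM rather than at the domain.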