Troubleshooting Security Plugin Issues

Hi, I’m trying to get set up with OIDC; however, something is apparently misconfigured, and I am getting no information from either the Kibana or the Elasticsearch logs. I’m running Open Distro 1.1.0.0 on Amazon Linux 2.

I’m able to start ES and Kibana just fine with basic auth settings on, but as soon as I try to enable OIDC, it breaks.

What I’m seeing on the Kibana page is:
Red plugin:opendistro_security@7.1.1 An error occurred during initialisation, please check the logs.

The Kibana logs look like this:
{"type":"log","@timestamp":"2019-07-30T13:18:53Z","tags":["status","plugin:opendistro_security@7.1.1","error"],"pid":7039,"state":"red","message":"Status changed from yellow to red - An error occurred during initialisation, please check the logs.","prevState":"yellow","prevMsg":"'opendistro_security.cookie.secure' is set to false, cookies are transmitted over unsecure HTTP connection. Consider using HTTPS and set this key to 'true'"}

I’ve tried increasing the Log4j log levels for the security plugin in Elasticsearch, but it’s not giving me any details:
logger.opendistro_security.name = com.amazon.opendistroforelasticsearch.security
logger.opendistro_security.level = trace
logger.opendistro_security.appenderRef.rolling.ref = rolling
logger.opendistro_security.rolling_old.ref = rolling_old
logger.opendistro_security.additivity = false

The Elasticsearch logs:
[2019-07-30T13:32:18,716][INFO ][o.e.c.r.a.AllocationService] [i-0c13e256e02e7f817] updating number_of_replicas to [1] for indices [.opendistro_security, .kibana_1]
[2019-07-30T13:32:18,749][INFO ][o.e.c.s.MasterService ] [i-0c13e256e02e7f817] node-join[{i-06862dd041ed4261f}{Drx_0QEjSpa0D7f-kCt22Q}{SBa4JhzZRviEuiuSkq3dpQ}{10.3.12.77}{10.3.12.77:9300}{rack=eu-central-1b} join existing leader], term: 12, version: 140, reason: added {{i-06862dd041ed4261f}{Drx_0QEjSpa0D7f-kCt22Q}{SBa4JhzZRviEuiuSkq3dpQ}{10.3.12.77}{10.3.12.77:9300}{rack=eu-central-1b},}
[2019-07-30T13:32:18,909][INFO ][o.e.c.s.ClusterApplierService] [i-0c13e256e02e7f817] added {{i-06862dd041ed4261f}{Drx_0QEjSpa0D7f-kCt22Q}{SBa4JhzZRviEuiuSkq3dpQ}{10.3.12.77}{10.3.12.77:9300}{rack=eu-central-1b},}, term: 12, version: 140, reason: Publication{term=12, version=140}
[2019-07-30T13:32:20,322][INFO ][o.e.c.r.a.AllocationService] [i-0c13e256e02e7f817] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).

I suspect that something is wrong in my config, but I’ve made sure to double- and triple-check against what others have configured in both kibana.yml and the security configuration. Also, since I’m not even seeing any logs regarding the failure, I won’t bother you with config details.

My question is this: where are the logs I am asked to look into by Kibana?

Best,
Andreas

I encountered the same issue. Looking at the source code of the Kibana plugin, the error “Red plugin:opendistro_security@7.1.1 An error occurred during initialisation, please check the logs.” is related to having a wrong or unreachable URL for the OIDC discovery endpoint, or to trust issues with the certificate used by your OIDC provider.

Thanks, yes, I suspected as much, but wasn’t able to verify it without any logs. I now have (a lot of) logs, since I fixed my log4j2.properties:

logger.opendistro_security.name = com.amazon.opendistroforelasticsearch.security
logger.opendistro_security.level = trace
logger.opendistro_security.appenderRef.rolling.ref = rolling
logger.opendistro_security.appenderRef.rolling_old.ref = rolling_old
logger.opendistro_security.additivity = false

By the way, there is not much logging on the Kibana side (Node.js) when the setup of the SSO plugin occurs and fails at the stage you mentioned.

I figured as much, sadly. I’ve now tried several configuration changes, all to no avail. Here is the config I’m trying. The kibana.yml is:

server.host: 0.0.0.0
elasticsearch.hosts: https://localhost:9200
elasticsearch.ssl.verificationMode: none # to exclude hostname verification issues for now.
elasticsearch.username: ...
elasticsearch.password: ...
elasticsearch.requestHeadersWhitelist: ["Authorization", "security_tenant", "securitytenant"] # having both whitelisted for now.

opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]

opendistro_security.auth.type: "openid"
# Kibana is running in a private VPC that can access that endpoint and
# would also accept the certificate - it's properly signed.
opendistro_security.openid.connect_url: "https://properly-signed-domain.net/the-realm/.well-known/openid-configuration"
opendistro_security.openid.client_id: "..."
opendistro_security.openid.client_secret: "..."
opendistro_security.openid.scope: "openid profile email"
opendistro_security.openid.base_redirect_url: "..."
opendistro_security.cookie.password: "..."

The security configuration is:

---

_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # left as-is for now.
        remoteIpHeader:  'x-forwarded-for'
    authc:
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        description: "Authenticate via OpenID"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
          config:
            openid_connect_url: https://properly-signed-domain.net/the-realm/.well-known/openid-configuration
            enable_ssl: true
            verify_hostnames: false # to exclude hostname verification issues for now.
        authentication_backend:
          type: noop

I’m using the demo security setup from Open Distro, so the Open Distro-relevant section of my elasticsearch.yml looks like this:

opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.nodes_dn:
  - "CN=node-0.example.com,OU=node,O=node,L=test,DC=de"
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]

cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

Hi, the way to check the connectivity between Kibana and your IdP is to run a curl command from the Kibana host / container:

curl https://properly-signed-domain.net/the-realm/.well-known/openid-configuration

If the CA is not trusted by default on your Kibana host / container, you can set the property opendistro_security.openid.root_ca: to point to your CA PEM file, or add the CA to /etc/pki/ca-trust/source/anchors/ and run update-ca-trust so that Node.js trusts it.


How can you verify that it is indeed a certificate error? I have the same log4j2.properties configuration as @andreas-schroeder and I still can’t see the logs on the console. Even when starting the Elasticsearch container, this error is shown:
2019-10-24 15:46:31,854 main ERROR Unable to locate appender "rolling " for logger config "com.amazon.opendistroforelasticsearch.security"
2019-10-24 15:46:31,856 main ERROR Unable to locate appender "rolling_old " for logger config

I really need to see the logs to find out what’s wrong with my OpenID integration. The only thing I can see is the message “plugin:opendistro_security@7.2.0 An error occurred during initialisation, please check the logs.” when I start Kibana, but nothing appears on either the Elasticsearch or the Kibana console. I’d appreciate your help a lot. Thanks.

I was using the curl command to check whether the TLS connection could be established properly.

Regards.


Hello brvalenc, did you get an answer on how to raise the Elasticsearch debugging output in the logs?

I too am seeing nothing in the logs while troubleshooting my OpenID Connect setup, even after following the documentation to update /etc/elasticsearch/log4j2.properties.

I’m also facing the same issue. Could anyone find a solution for it?

Here is what I used in /etc/elasticsearch/log4j2.properties to get SOME insight into what was going on. It was enough for me to figure out the issues with integrating Elasticsearch with OpenID Connect.

#
# Debug Kibana single sign-on OpenID Connect authentication with a Keycloak Identity Provider
# Log SAML responses to Elasticsearch log file /var/log/elasticsearch/spartanlync-cluster.log for inspection
#
logger.token.name = com.amazon.dlic.auth.http.saml.Token
logger.token.level = debug

logger.opendistro_security.name = com.amazon.opendistroforelasticsearch.security
logger.opendistro_security.level = trace
logger.opendistro_security.appenderRef.rolling.ref = rolling
logger.opendistro_security.appenderRef.rolling_old.ref = rolling_old
logger.opendistro_security.additivity = false

logger.opendistro_jwt.name = com.amazon.dlic.auth.http.jwt
logger.opendistro_jwt.level = trace

My root problem turned out to be that the CA certs were NOT included / appended to the node certificate:

Node Certificate + Intermediate + Root CA Certificate = New Node Certificate

ON TOP OF specifying:

  1. opendistro_security.ssl.transport.pemtrustedcas_filepath and opendistro_security.ssl.http.pemtrustedcas_filepath in /etc/elasticsearch/elasticsearch.yml

  2. pemtrustedcas_filepath in /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
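The chain-building step above is a plain concatenation. Here is a self-contained sketch using placeholder files with dummy contents; in a real deployment you would use your actual PEM files:

```shell
# Placeholder PEM files standing in for the real certificates
printf -- '-----NODE CERT-----\n'       > esnode.pem
printf -- '-----INTERMEDIATE CA-----\n' > intermediate-ca.pem
printf -- '-----ROOT CA-----\n'         > root-ca.pem

# Order matters: leaf certificate first, then intermediates, then the root CA
cat esnode.pem intermediate-ca.pem root-ca.pem > esnode-full-chain.pem

wc -l < esnode-full-chain.pem   # 3 lines here, one per certificate in the chain
```

The resulting full-chain file is what you would point the pemcert settings at.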

Hope it helps someone!


Thanks for the response @lmit.
I have configured the settings you mentioned above, but still no luck.

I’m still ending up with the error “An error occurred during initialisation, please check the logs” for the opendistro_security plugin.

Any help on this would be appreciated!

Hmm, my thoughts:

  • On the Kibana side, did you enable verbose logging with logging.verbose: true?
  • Can you include a fuller snapshot of these logs?
    Clear both the Kibana and Elasticsearch logs before attempting your login, so we get a snapshot of ONLY the logs relevant to that auth attempt on the client and server side.
  • Can you include a copy of your /etc/kibana/kibana.yml, /etc/elasticsearch/elasticsearch.yml, and /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml? (MASKING ANY SENSITIVE PERSONAL INFO, OF COURSE)
  • Forgive my ignorance, but the latest distribution of Open Distro is 1.3.0 (Elasticsearch version 7.3.2) (Version History - Open Distro Documentation). Why are you using a version as old as Open Distro 1.1.0.0? I’m not familiar with Amazon Linux 2, but the latest version supports RPM, Debian, tarballs, and more, so are we talking about the same product?
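The log-clearing step suggested above can be done by truncating the files in place. This sketch uses placeholder files in a temp directory; point the paths at your real Elasticsearch and Kibana log files (typically under /var/log/):

```shell
# Work on placeholder files so the sketch is self-contained
LOGDIR=$(mktemp -d)
echo "old, irrelevant entries" > "$LOGDIR/elasticsearch.log"
echo "old, irrelevant entries" > "$LOGDIR/kibana.log"

# Truncate rather than delete: the running process keeps writing to the same inode
: > "$LOGDIR/elasticsearch.log"
: > "$LOGDIR/kibana.log"

wc -c < "$LOGDIR/elasticsearch.log"   # 0 bytes: the next entries are only the auth attempt
```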

Yes Mitchell, I have set logging.verbose: true in the Kibana YML.
I’m also using the same versions: Elasticsearch and Kibana 7.3.2, and security plugin 1.3.0.1.

Elasticsearch doesn’t have any error logs related to this, and on the Kibana side I get the error “An error occurred during initialisation, please check the logs” for the opendistro_security plugin.

Moreover, I am trying to enable OpenID Connect SSO with Open Distro for my company, for both Elasticsearch and Kibana. I have also set order zero for OpenID so it authenticates first, but it still takes me to the basic auth provider.

Here is my Kibana configuration (running with HTTPS):

    server.port: 5601
    server.host: "localhost"
    elasticsearch.hosts: ["https://localhost:9200"]
    elasticsearch.ssl.verificationMode: none
    elasticsearch.username: kibanaserver
    elasticsearch.password: kibanaserver
    elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization","security_tenant"]

    opendistro_security.multitenancy.enabled: true
    opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
    opendistro_security.readonly_mode.roles: ["kibana_read_only"]

    server.ssl.enabled: true
    server.ssl.key: /etc/kibana/oid-key.pem
    server.ssl.certificate: /etc/kibana/oid.pem
    opendistro_security.cookie.secure: true
    opendistro_security.cookie.password: "XXXXXXXX"
    opendistro_security.auth.type: "openid"      
    opendistro_security.openid.connect_url: "https://xxxxxxx/.well-known/openid-configuration"      
    opendistro_security.openid.client_id: "xxxxxxx"    
    opendistro_security.openid.client_secret: "xxxxxxx"  
    opendistro_security.openid.base_redirect_url: "https://www.yahoo.com"

My security config.yml file:

    _meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern     
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        description: "openid conncect"
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: "openid"
          challenge: false
          config:
            openid_connect_url: "https://XXXXXXXX/.well-known/openid-configuration"
            pemtrustedcas_filepath: "/etc/elasticsearch/oid.pem"
        authentication_backend:
          type: internal

My elasticsearch.yml contains:

opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true

opendistro_security.ssl.transport.pemcert_filepath: /etc/elasticsearch/oid.pem
opendistro_security.ssl.transport.pemkey_filepath: /etc/elasticsearch/oid-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: /etc/elasticsearch/oid-rca.pem
opendistro_security.ssl.http.pemcert_filepath: /etc/elasticsearch/oid.pem
opendistro_security.ssl.http.pemkey_filepath: /etc/elasticsearch/oid-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: /etc/elasticsearch/oid-rca.pem

opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

opendistro_security.disabled: false
network.host: localhost
network.bind_host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "[::1]"]
discovery.zen.master_election.ignore_non_master_pings: true

Based on your symptoms, I think the initialization failure is caused by Elasticsearch being unable to properly parse the YAML in your security config file.

YAML is VERY space sensitive.
Make sure you have NO TABS in your config file.
Make sure its properly indented.
Right away I can see that your _meta is not indented properly
Use the original config file that came as a reference for format
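A quick way to catch the stray tabs mentioned above before feeding the file to securityadmin.sh; this sketch uses a deliberately broken sample file rather than a real config:

```shell
# Sample config with a hidden TAB on the second line (YAML forbids tabs in indentation)
printf 'config:\n\tdynamic: {}\n' > config-sample.yml

# A literal-tab search flags each offending line with its line number
grep -n "$(printf '\t')" config-sample.yml
```

Any output from the grep means the file has tabs that will break YAML parsing.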
Here is my working config.yml file:

---
_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    do_not_fail_on_forbidden: true
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
    authc:
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: intern
      openid_auth_domain:
        description: "Authenticate via Keycloak Identity Provider"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
          config:
            subject_key: preferred_username
            roles_key: roles
            openid_connect_url: "https://my.IDP.com:8443/auth/realms/myrealm/.well-known/openid-configuration"
            jwks_uri: "https://my.IDP.com:8443/auth/realms/myrealm/protocol/openid-connect/certs"
            enable_ssl_client_auth: true
            pemkey_filepath: "/etc/elasticsearch/ssl/my.domain.com.key"
            pemcert_filepath: "/etc/elasticsearch/ssl/my.domain.com.crt"
            pemtrustedcas_filepath: "/etc/elasticsearch/ssl/my.domain.com.ca.crt"
            enable_ssl: true
            verify_hostnames: true
        authentication_backend:
          type: noop
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          type: kerberos
          challenge: true
          config:
            krb_debug: false
            strip_realm_from_principal: true
        authentication_backend:
          type: noop
      proxy_auth_domain:
        description: "Authenticate via proxy"
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          type: proxy
          challenge: false
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: noop
      jwt_auth_domain:
        description: "Authenticate via Json Web Token"
        http_enabled: false
        transport_enabled: false
        order: 4
        http_authenticator:
          type: jwt
          challenge: false
          config:
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            jwt_header: "Authorization"
            jwt_url_parameter: null
            roles_key: null
            subject_key: null
        authentication_backend:
          type: noop
      clientcert_auth_domain:
        description: "Authenticate via SSL client certificates"
        http_enabled: false
        transport_enabled: false
        order: 5
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn #optional, if omitted DN becomes username
          challenge: false
        authentication_backend:
          type: noop
      ldap:
        description: "Authenticate via LDAP or Active Directory"
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: ldap
          config:
            # enable ldaps
            enable_ssl: false
            # enable start tls, enable_ssl should be false
            enable_start_tls: false
            # send client certificate
            enable_ssl_client_auth: false
            # verify ldap hostname
            verify_hostnames: true
            hosts:
            - localhost:8389
            bind_dn: null
            password: null
            userbase: 'ou=people,dc=example,dc=com'
            usersearch: '(sAMAccountName={0})'
            username_attribute: null
    authz:
      roles_from_myldap:
        description: "Authorize via LDAP or Active Directory"
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: ldap
          config:
            # enable ldaps
            enable_ssl: false
            # enable start tls, enable_ssl should be false
            enable_start_tls: false
            # send client certificate
            enable_ssl_client_auth: false
            # verify ldap hostname
            verify_hostnames: true
            hosts:
            - localhost:8389
            bind_dn: null
            password: null
            rolebase: 'ou=groups,dc=example,dc=com'
            rolesearch: '(member={0})'
            userroleattribute: null
            userrolename: disabled
            rolename: cn
            resolve_nested_roles: true
            userbase: 'ou=people,dc=example,dc=com'
            usersearch: '(uid={0})'
      roles_from_another_ldap:
        description: "Authorize via another Active Directory"
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: ldap

Thanks, Mitchell, for providing your complete config file.

I have now applied those settings to my security config file, but still no luck.
After that, I ran securityadmin.sh to check my config; it reports no errors:

Open Distro Security Admin v7
Will connect to localhost:9300 ... done
Connected as CN=XXXXX.com,OU=UNIT,O=XXXXX,L=TORONTO,ST=ONTARIO,C=CA
Elasticsearch Version: 7.3.2
Open Distro Security Version: 1.3.0.0
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: elasticsearch
Clusterstate: YELLOW
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
Will update '_doc/config' with ../securityconfig/config.yml
   SUCC: Configuration for 'config' created or updated
Will update '_doc/roles' with ../securityconfig/roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update '_doc/rolesmapping' with ../securityconfig/roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update '_doc/internalusers' with ../securityconfig/internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update '_doc/actiongroups' with ../securityconfig/action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Will update '_doc/tenants' with ../securityconfig/tenants.yml
   SUCC: Configuration for 'tenants' created or updated
Done with success

And now Kibana throws some error like the one below, even though I’m using Elasticsearch and Kibana 7.3.2 and security plugin 1.3.0.0:

kibana[123]: {"type":"log","@timestamp":"2020-01-23T12:46:37Z","tags":["plugin","debug"],"pid":1234,"message":"Checking Elasticsearch version"}

And the Elasticsearch cluster now says “Authentication finally failed” in the browser.

I don’t know what I’m missing here.

I think you’re making progress, because it’s no longer giving you the “An error occurred during initialisation, please check the logs” error. Additionally, it’s no longer failing on basic auth but attempting to perform the OpenID Connect flow.

So this is good. Now you should be seeing more details in /var/log/elasticsearch/spartanlync-cluster.log, because the Elasticsearch backend is now communicating (and failing) with your IdP. Make sure the debug settings are enabled in /etc/elasticsearch/log4j2.properties. Like I mentioned before, clear your logs before attempting a login, then examine them after you get the “Authentication finally failed”. You should see the reason for the failure in the Elasticsearch log.

I certainly did when I got that error message. For me, the reason was that Elasticsearch was failing while trying to fetch the auth token from the IdP at
https://my.IDP.com:8443/auth/realms/myrealm/.well-known/openid-configuration, because my cert file did not include the root/intermediate certs in the chain, as explained earlier. Your reason might be different. Send a full dump of that log, as I mentioned, including everything during the Kibana auth.
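Once the log has content, a filter like this narrows it to auth-related lines. The excerpt below is a fabricated sample just to make the sketch self-contained; run the grep against your real log file instead:

```shell
# Fabricated sample standing in for the real Elasticsearch log file
cat > es-sample.log <<'EOF'
[2020-01-24T09:08:28][ERROR] Exception during establishing a SSL connection
[2020-01-24T09:08:30][INFO ] Program=PerformanceAnalyzerPlugin
[2020-01-24T09:08:31][DEBUG] openid_connect_url fetch failed
EOF

# Case-insensitive filter for terms that usually mark OIDC/TLS failures
grep -iE 'openid|ssl|authentication' es-sample.log
```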

At this point you are also verifying your IdP settings. Make sure your IdP configuration has Valid Redirect URIs set to the following URLs:
https://my.domain.com:5601/auth/openid/login
https://my.domain.com:5601
https://my.domain.com:5601/app/kibana

Thanks, glad to hear something is working for me.

As you suggested, I have added your log4j properties, and my IdP configuration has the proper redirect URLs listed above. It seems like Elasticsearch is working fine without errors (the errors mentioned above are gone now), but it still takes me to the basic auth login page only.

Here are the Elasticsearch log insights (I hope the bad_certificate error below is not a blocker for Elasticsearch, and that it gives you some other clues):

[2020-01-24T09:08:28,237][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [myhost] Exception during establishing a SSL connection: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]     
        at java.lang.Thread.run(Thread.java:835) [?:?]
[2020-01-24T09:08:28,241][WARN ][o.e.h.AbstractHttpServerTransport] [myhost] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:9200, remoteAddress=null}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.36.Final.jar:4.1.36.Final]       
        at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]        
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-codec-4.1.36.Final.jar:4.1.36.Final]
        ... 16 more
[2020-01-24T09:08:28,477][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [myhost] Exception during establishing a SSL connection: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]        
        at java.lang.Thread.run(Thread.java:835) [?:?]
[2020-01-24T09:08:28,479][WARN ][o.e.h.AbstractHttpServerTransport] [myhost] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:9200, remoteAddress=null}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.36.Final.jar:4.1.36.Final]      
        at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]      
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-codec-4.1.36.Final.jar:4.1.36.Final]
        ... 16 more
[2020-01-24T09:08:30,752][INFO ][stats_log                ] [myhost] ------------------------------------------------------------------------
Program=PerformanceAnalyzerPlugin
StartTime=1579856850.737
EndTime=Fri, 24 Jan 2020 09:08:30 UTC
Time=60012 msecs
Timing=total-time:60012.0/1
Counters=TotalError=0
EOE

Whereas the Kibana log says:

Jan 24 09:15:03 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:03Z","tags":["plugin","debug"],"pid":12345,"message":"Checking Elasticsearch version"}
Jan 24 09:15:03 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:03Z","tags":["debug","http"],"pid":12345,"message":"Kibana server is not ready yet get:[object Object]."}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["status","plugin:opendistro_security@7.3.2","error"],"pid":12345,"state":"red","message":"Status changed from yellow to red - An error occurred during initialisation, please check the logs.","prevState":"yellow","prevMsg":"Initialising Security authentication plugin."}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["plugins","debug"],"pid":12345,"plugin":{"name":"apm_oss","version":"kibana"},"message":"Initializing plugin apm_oss@kibana"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["status","plugin:apm_oss@7.3.2","info"],"pid":12345,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["listening","info"],"pid":12345,"message":"Server running at https://localhost:5601"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["debug","http"],"pid":12345,"message":"stopping NotReady server"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["plugin","debug"],"pid":12345,"message":"Checking Elasticsearch version"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["debug","http","server","Kibana"],"pid":12345,"message":"starting http server"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["info","http","server","Kibana"],"pid":12345,"message":"http server running"}
Jan 24 09:15:30 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:30Z","tags":["debug","http","server","Kibana"],"pid":12345,"message":"http server listening on https://localhost:5601"}
Jan 24 09:15:33 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:33Z","tags":["plugin","debug"],"pid":12345,"message":"Checking Elasticsearch version"}
Jan 24 09:15:35 myhost kibana[12345]: {"type":"ops","@timestamp":"2020-01-24T09:15:35Z","tags":[],"pid":12345,"os":{"load":[0.06005859375,0.07958984375,0.07080078125],"mem":{"total":270258651136,"free":248339546112},"uptime":7598501},"proc":{"uptime":137.344,"mem":{"rss":133566464,"heapTotal":101199872,"heapUsed":86302056,"external":320401},"delay":0.07013511657714844},"load":{"requests":{},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 82.3MB uptime: 0:02:17 load: [0.06 0.08 0.07] delay: 0.070"}
Jan 24 09:15:38 myhost kibana[12345]: {"type":"log","@timestamp":"2020-01-24T09:15:38Z","tags":["plugin","debug"],"pid":12345,"message":"Checking Elasticsearch version"}

And on the Kibana browser side I get the error for the security plugin.

I have set up verbose logging for Kibana to check what’s going on, but it just keeps logging “Checking Elasticsearch version” in the Kibana log.

Can someone point me to the path below in Kibana 7.3.2?
lib/auth/types/openid/OpenId.js

I found it via
https://github.com/opendistro-for-elasticsearch/security-kibana-plugin/issues/114