Kibana dashboard issue

Hi Team,

I have installed Kibana and Elasticsearch, version 6.7.1, on a standalone instance.

After restarting the Elasticsearch service I can see the status, but whenever I refresh or navigate to other pages, the Kibana status no longer shows up. Please help me.

Kibana log file:

Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: GET https://127.0.0.1:9200/_opendistro/_security/authinfo => connect ECONNREFUSED 127.0.0.1:9200
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at Log.error (/usr/share/kibana/node_modules/elasticsearch/src/lib/log.js:226:56)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:259:18)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:164:7)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at ClientRequest.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at ClientRequest.emit (events.js:189:13)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at TLSSocket.socketErrorListener (_http_client.js:392:9)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at TLSSocket.emit (events.js:189:13)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at emitErrorNT (internal/streams/destroy.js:82:8)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: at process._tickCallback (internal/process/next_tick.js:63:19)
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: Elasticsearch WARNING: 2020-08-09T16:46:49Z
Aug 9 16:46:49 ip-172-31-0-234 kibana[10473]: Unable to revive connection: https://127.0.0.1:9200/

What do the Elasticsearch logs say when the issue happens? I'm assuming the ES cluster is going down soon after the restart.
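You can also confirm whether anything is actually listening on port 9200 right after the restart. Assuming a systemd-based install, something like:

systemctl status elasticsearch
ss -tlnp | grep 9200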

Aug 15 16:14:22 ip-172-31-16-161.us-east-2.compute.internal systemd[1]: Started Elasticsearch.
Aug 15 16:14:23 ip-172-31-16-161.us-east-2.compute.internal elasticsearch[1538]: java.security.policy: error adding Entry:
Aug 15 16:14:23 ip-172-31-16-161.us-east-2.compute.internal elasticsearch[1538]: java.net.MalformedURLException: unknown protocol: jrt
Aug 15 16:14:23 ip-172-31-16-161.us-east-2.compute.internal elasticsearch[1538]: java.security.policy: error adding Entry:
Aug 15 16:14:23 ip-172-31-16-161.us-east-2.compute.internal elasticsearch[1538]: java.net.MalformedURLException: unknown protocol: jrt
[root@ip-172-31-16-161 elasticsearch]# tail -f elasticsearch.log
[2020-08-15T16:14:35,740][INFO ][o.e.c.s.MasterService ] [uXE3SEd] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {uXE3SEd}{uXE3SEdIRnSWxwXPI4Nv4A}{vjXqopyqSB6jrRHCrRWshA}{172.31.16.161}{172.31.16.161:9300}
[2020-08-15T16:14:35,746][INFO ][o.e.c.s.ClusterApplierService] [uXE3SEd] new_master {uXE3SEd}{uXE3SEdIRnSWxwXPI4Nv4A}{vjXqopyqSB6jrRHCrRWshA}{172.31.16.161}{172.31.16.161:9300}, reason: apply cluster state (from master [master {uXE3SEd}{uXE3SEdIRnSWxwXPI4Nv4A}{vjXqopyqSB6jrRHCrRWshA}{172.31.16.161}{172.31.16.161:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2020-08-15T16:14:35,837][INFO ][o.e.h.n.Netty4HttpServerTransport] [uXE3SEd] publish_address {172.31.16.161:9200}, bound_addresses {[::]:9200}
[2020-08-15T16:14:35,837][INFO ][o.e.n.Node ] [uXE3SEd] started
[2020-08-15T16:14:35,838][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [uXE3SEd] 4 Open Distro Security modules loaded so far: [Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper], Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl], Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions]]
[2020-08-15T16:14:36,267][INFO ][o.e.g.GatewayService ] [uXE3SEd] recovered [4] indices into cluster_state
[2020-08-15T16:14:36,923][ERROR][c.a.o.s.a.BackendRegistry] [uXE3SEd] Not yet initialized (you may need to run securityadmin)
[2020-08-15T16:14:36,952][ERROR][c.a.o.s.a.BackendRegistry] [uXE3SEd] Not yet initialized (you may need to run securityadmin)
[2020-08-15T16:14:37,113][INFO ][o.e.c.r.a.AllocationService] [uXE3SEd] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[security-auditlog-2020.08.12][0], [.opendistro_security][0]] …]).
[2020-08-15T16:14:37,206][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [uXE3SEd] Node 'uXE3SEd' initialized
[2020-08-15T16:14:40,005][INFO ][o.e.c.m.MetaDataIndexTemplateService] [uXE3SEd] adding template [kibana_index_template:.kibana_*] for index patterns [.kibana_*]
[2020-08-15T16:14:40,057][INFO ][o.e.c.m.MetaDataCreateIndexService] [uXE3SEd] [.kibana_92668751_admin_2] creating index, cause [api], templates [kibana_index_template:.kibana_*], shards [1]/[1], mappings [doc]
[2020-08-15T16:14:40,060][INFO ][o.e.c.r.a.AllocationService] [uXE3SEd] updating number_of_replicas to [0] for indices [.kibana_92668751_admin_2]
[2020-08-15T16:14:40,150][INFO ][o.e.c.m.MetaDataCreateIndexService] [uXE3SEd] [.kibana_92668751_admin_1] creating index, cause [api], templates [kibana_index_template:.kibana_*], shards [1]/[1], mappings [doc]
[2020-08-15T16:14:40,151][INFO ][o.e.c.r.a.AllocationService] [uXE3SEd] updating number_of_replicas to [0] for indices [.kibana_92668751_admin_1]
[2020-08-15T16:14:40,546][INFO ][o.e.c.m.MetaDataCreateIndexService] [uXE3SEd] [.tasks] creating index, cause [auto(task api)], templates [], shards [1]/[1], mappings [task]
[2020-08-15T16:14:40,550][INFO ][o.e.c.r.a.AllocationService] [uXE3SEd] updating number_of_replicas to [0] for indices [.tasks]
[2020-08-15T16:14:40,630][INFO ][o.e.t.LoggingTaskListener] [uXE3SEd] 62 finished with response BulkByScrollResponse[took=282.9ms,timed_out=false,sliceId=null,updated=0,created=1,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[2020-08-15T16:14:40,840][INFO ][o.e.c.m.MetaDataDeleteIndexService] [uXE3SEd] [.kibana_92668751_admin/3smiGNt_Rl2_33LnE4C07w] deleting index
[2020-08-15T16:15:03,080][INFO ][o.e.c.m.MetaDataCreateIndexService] [uXE3SEd] [security-auditlog-2020.08.15] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2020-08-15T16:15:03,237][INFO ][o.e.c.m.MetaDataMappingService] [uXE3SEd] [security-auditlog-2020.08.15/0jgFXADUTT2BRl52u34U_g] create_mapping [auditlog]

@sreekanth: Above are the logs. I am new to this and am trying to install on only one node.

Here is the elasticsearch.yml file.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: "0.0.0.0"
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
discovery.zen.minimum_master_nodes: 1
node.max_local_storage_nodes: 3
######## End OpenDistro for Elasticsearch Security Demo Configuration ########

And here is my kibana.yml file.

# Copyright <2019> Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

# Release Note:
# Default Kibana configuration for Open Distro.

elasticsearch.url: https://localhost:9200
server.host: "0.0.0.0"
server.port: 5601
logging.dest: /var/log/kibana/kibana.log
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]

Please let me know as soon as possible.

Logs look good. Are you able to query the cluster?
curl -kv https://localhost:9200/_cluster/health
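With the security plugin enabled you will likely get a 401 without credentials. Assuming the demo defaults (admin/admin) are still in place, you can pass them like this:

curl -ku admin:admin https://localhost:9200/_cluster/health?pretty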

What is the output of the following query?
https://127.0.0.1:9200/_opendistro/_security/authinfo

curl -kv https://127.0.0.1:9200/_opendistro/_security/authinfo

* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: DC=de; L=test; O=node; OU=node; CN=node-0.example.com
*  start date: Apr 22 03:43:47 2018 GMT
*  expire date: Apr 19 03:43:47 2028 GMT
*  issuer: DC=com; DC=example; O=Example Com Inc.; OU=Example Com Inc. Root CA; CN=Example Com Inc. Root CA
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.

> GET /_opendistro/_security/authinfo HTTP/1.1
> Host: 127.0.0.1:9200
> User-Agent: curl/7.61.1
> Accept: */*
>

< HTTP/1.1 401 Unauthorized
< WWW-Authenticate: Basic realm="Open Distro Security"
< content-type: text/plain; charset=UTF-8
< content-length: 12
<
* Connection #0 to host 127.0.0.1 left intact
Unauthorized
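The 401 above is actually a good sign: Elasticsearch is up and the security plugin is answering; the request was rejected only because no credentials were sent (note the WWW-Authenticate: Basic header). Assuming the demo defaults are still in place, retry with the admin user, e.g.:

curl -ku admin:admin https://127.0.0.1:9200/_opendistro/_security/authinfo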