Open Distro for Elasticsearch single-node cluster not working

Dear All,
I am new to Open Distro for Elasticsearch. I was excited to try this new open-source distribution, but I am unable to set up a single-node cluster. I am using all default settings after following About - Open Distro Documentation, yet I cannot get a single node working.

My elasticsearch.yml is as follows:

cluster.name: my-application
node.name: elk1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.1.1
http.port: 9200
discovery.zen.minimum_master_nodes: 1
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: true
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test,C=DE
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
discovery.type: single-node
node.max_local_storage_nodes: 1
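
(For reference, a minimal probe of the HTTPS endpoint, assuming the bundled demo admin:admin internal user is still in place, would be something like the following; -k skips certificate verification, which is only acceptable with the demo certificates.)

curl -k -u admin:admin https://192.168.1.1:9200/_cluster/health?pretty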

My kibana.yml is as follows. I copied the *.pem files from /etc/elasticsearch/ to /etc/kibana/.

server.port: 5601
server.host: delk1
elasticsearch.hosts: ["https://elk1:9200"]
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]

server.ssl.enabled: true
server.ssl.key: /etc/kibana/esnode-key.pem
server.ssl.certificate: /etc/kibana/esnode.pem
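
(A quick way to check from the Kibana host that the HTTPS endpoint and the kibanaserver credentials are reachable, assuming the demo kibanaserver:kibanaserver internal user, would be something like the command below; -k skips certificate verification, matching elasticsearch.ssl.verificationMode: none above.)

curl -k -u kibanaserver:kibanaserver https://elk1:9200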

The error in the Elasticsearch log is as follows:

[2020-03-09T18:57:08,052][INFO ][c.a.o.s.c.ConfigurationRepository] [elk1] Background init thread started. Install default config?: true
[2020-03-09T18:57:08,074][INFO ][c.a.o.s.c.ConfigurationRepository] [elk1] Will create .opendistro_security index so we can apply default config
[2020-03-09T18:57:08,208][INFO ][o.e.g.GatewayService     ] [elk1] recovered [2] indices into cluster_state
[2020-03-09T18:57:09,416][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:09,460][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for config while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,461][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,462][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:09,483][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:09,595][INFO ][o.e.c.r.a.AllocationService] [elk1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[security-auditlog-2020.03.09][0]]]).
[2020-03-09T18:57:11,990][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:14,499][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:17,006][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:17,411][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for config while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:17,412][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:19,512][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:22,021][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:24,533][ERROR][c.a.o.s.a.BackendRegistry] [elk1] Not yet initialized (you may need to run securityadmin)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for config while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,413][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for roles while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,414][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)
[2020-03-09T18:57:25,414][WARN ][c.a.o.s.c.ConfigurationLoaderSecurity7] [elk1] No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS]  (index=.opendistro_security and type=null)

The error in the Kibana log is as follows:

{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"state_session_storage_redirect","version":"kibana","description":"When using the state:storeInSessionStorage setting with the short-urls, we need some way to get the full URL's hashed states into sessionStorage, this app will grab the URL from the injected state and and put the URL hashed states into sessionStorage before redirecting the user."},"message":"Initializing plugin state_session_storage_redirect@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"status_page","version":"kibana"},"message":"Initializing plugin status_page@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"tile_map","version":"kibana"},"message":"Initializing plugin tile_map@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:tile_map@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"author":"Rashid Khan <rashid@elastic.co>","name":"timelion","version":"kibana"},"message":"Initializing plugin timelion@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:timelion@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"ui_metric","version":"kibana"},"message":"Initializing plugin ui_metric@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:ui_metric@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"markdown_vis","version":"kibana"},"message":"Initializing plugin markdown_vis@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:markdown_vis@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"metric_vis","version":"kibana"},"message":"Initializing plugin metric_vis@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:metric_vis@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"table_vis","version":"kibana"},"message":"Initializing plugin table_vis@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:table_vis@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"name":"tagcloud","version":"kibana"},"message":"Initializing plugin tagcloud@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:tagcloud@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugins","debug"],"pid":16997,"plugin":{"author":"Yuri Astrakhan<yuri@elastic.co>","name":"vega","version":"kibana"},"message":"Initializing plugin vega@kibana"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:vega@7.4.2","info"],"pid":16997,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["error","elasticsearch","admin"],"pid":16997,"message":"Request error, retrying\nGET https://elk1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => unable to verify the first certificate"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["server","uuid","uuid"],"pid":16997,"message":"Resuming persistent Kibana instance UUID: 329fdcc3-8105-489d-be69-c4e6397cb9a6"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["status","plugin:elasticsearch@7.4.2","error"],"pid":16997,"state":"red","message":"Status changed from yellow to red - No Living connections","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-03-09T19:09:47Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
{"type":"log","@timestamp":"2020-03-09T19:09:50Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:09:50Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:09:50Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
{"type":"log","@timestamp":"2020-03-09T19:09:52Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}

....

{"type":"log","@timestamp":"2020-03-09T19:12:03Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:12:03Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:12:03Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}
{"type":"log","@timestamp":"2020-03-09T19:12:06Z","tags":["plugin","debug"],"pid":16997,"message":"Checking Elasticsearch version"}
{"type":"log","@timestamp":"2020-03-09T19:12:06Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"Unable to revive connection: https://elk1:9200/"}
{"type":"log","@timestamp":"2020-03-09T19:12:06Z","tags":["warning","elasticsearch","admin"],"pid":16997,"message":"No living connections"}

I am not sure where I am going wrong. Any pointers with an example would help. Thanks in advance.

Hello!
Have you tried running the securityadmin.sh script? It looks like the .opendistro_security index was never initialized: the warnings show no data for any of the configuration files (internalusers, actiongroups, config, roles, rolesmapping, tenants). An example run is sketched below.
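
With the demo certificates, a run along these lines should initialize the index. This is only a sketch: it assumes the default RPM/DEB layout under /usr/share/elasticsearch and that the demo admin certificate kirk.pem (matching the CN=kirk admin_dn in your elasticsearch.yml) is in /etc/elasticsearch; adjust the paths and the -h address to your install.

# -cd: directory containing the security YAML files to upload
# -icl: ignore cluster name; -nhnv: skip hostname verification
cd /usr/share/elasticsearch/plugins/opendistro_security/tools
./securityadmin.sh \
  -cd ../securityconfig/ \
  -icl -nhnv \
  -cacert /etc/elasticsearch/root-ca.pem \
  -cert /etc/elasticsearch/kirk.pem \
  -key /etc/elasticsearch/kirk-key.pem \
  -h 192.168.1.1

Once it completes, the "Not yet initialized (you may need to run securityadmin)" errors should stop.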

In /etc/elasticsearch/elasticsearch.yml, to avoid connection problems, network.host: 192.168.1.1 should be changed to:

network.host: 0.0.0.0

As a last resort, re-install everything; you may have some corrupt data in /var/lib/elasticsearch.

Hope it helps.
Thi


Thanks for your response. I did a fresh re-install and everything looks good now.
This open-source project is exciting, but I feel there should also be clear-cut instructions, documentation, and tips, along with a conceptual write-up of how it differs from the traditional licensed version. I hope my suggestions will be taken into account.