Hi.

I am attempting to stand up a cluster of ES nodes on Docker. The nodes are all built from the same AMI: Ubuntu on Intel in AWS. Every node spins up and the cluster assembles properly. The amazing cerebro tool can talk to a master using the default `admin/admin` credentials and works as I expect. The same is true of Kibana: `admin/admin` gets me in!
My problem comes when I try to configure anything security related. I am able to log in to Kibana, but as soon as I try to manage anything in the security section, I am immediately logged out and redirected to the URL `/login?type=basicauthLogout#?_g=()`.

The only change that I am attempting to make right now is to the `admin/admin` credentials, but when doing so, I get stuck with this error:

```
FAIL: Configuration for 'config' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
```
I am using the demo certs that ship with ODES. Here is what happens when I run `securityadmin.sh` from within one of the master node containers:

```shell
[root@com config]# pwd
/usr/share/elasticsearch/config
[root@com config]# ls -lah
total 60K
drwxrwxr-x 1 elasticsearch root 4.0K May 29 23:38 .
drwxrwxr-x 1 root          root 4.0K May 29 22:41 ..
drwxr-x--- 2 elasticsearch root 4.0K May 29 22:41 discovery-ec2
-rw-rw---- 1 elasticsearch root  207 May 29 22:41 elasticsearch.keystore
-r--r----- 1 root          root 5.0K May 29 22:26 elasticsearch.yml
-r--r----- 1 root          root 1.7K May 29 22:26 esnode-key.pem
-r--r----- 1 root          root 1.7K May 29 22:26 esnode.pem
-rw-rw---- 1 elasticsearch root 3.6K Apr  2 15:56 jvm.options
-r--r----- 1 root          root 1.7K May 29 22:26 kirk-key.pem
-r--r----- 1 root          root 1.6K May 29 22:26 kirk.pem
-rw-rw-r-- 1 elasticsearch root  285 Apr 15 21:30 log4j2.properties
drwxr-x--- 2 elasticsearch root 4.0K May 29 22:41 repository-s3
-r--r----- 1 root          root 1.5K May 29 22:26 root-ca.pem
[root@com config]# /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd ../plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert ./root-ca.pem -cert ./kirk.pem -key ./kirk-key.pem
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.7.1
Open Distro Security Version: 0.9.0.0
Connected as CN=kirk,OU=client,O=client,L=test,C=de
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: MyClusterHere
Clusterstate: GREEN
Number of nodes: 7
Number of data nodes: 4
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
Will update 'security/config' with ../plugins/opendistro_security/securityconfig/config.yml
FAIL: Configuration for 'config' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/roles' with ../plugins/opendistro_security/securityconfig/roles.yml
FAIL: Configuration for 'roles' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/rolesmapping' with ../plugins/opendistro_security/securityconfig/roles_mapping.yml
FAIL: Configuration for 'rolesmapping' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/internalusers' with ../plugins/opendistro_security/securityconfig/internal_users.yml
FAIL: Configuration for 'internalusers' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/actiongroups' with ../plugins/opendistro_security/securityconfig/action_groups.yml
FAIL: Configuration for 'actiongroups' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
FAIL: Expected 7 nodes to return response, but got only 0
Done with failures
```
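For context on what is failing: `securityadmin.sh` simply uploads five YAML documents into the protected `.opendistro_security` index, and every one of those writes is being rejected. A rough sketch (not the actual tool's code) of the loop behind the `Will update 'security/...'` lines above:

```python
from pathlib import Path

# Directory passed to securityadmin.sh via -cd
CONFIG_DIR = Path("../plugins/opendistro_security/securityconfig")

# config-type -> source file, matching the "Will update 'security/<type>'" lines
SECURITY_DOCS = {
    "config": "config.yml",
    "roles": "roles.yml",
    "rolesmapping": "roles_mapping.yml",
    "internalusers": "internal_users.yml",
    "actiongroups": "action_groups.yml",
}

for doc_type, filename in SECURITY_DOCS.items():
    # The real tool reads each file and writes it as one document into
    # the .opendistro_security index; "no permissions" means the cluster
    # refused the write even though the TLS handshake succeeded.
    print(f"Will update 'security/{doc_type}' with {CONFIG_DIR / filename}")
```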
Whoami:

```shell
[root@com config]# /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd ../plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert ./root-ca.pem -cert ./kirk.pem -key ./kirk-key.pem -w
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.7.1
Open Distro Security Version: 0.9.0.0
Connected as CN=kirk,OU=client,O=client,L=test,C=de
{
  "whoami" : {
    "dn" : "CN=kirk,OU=client,O=client,L=test,C=de",
    "is_admin" : true,
    "is_authenticated" : true,
    "is_node_certificate_request" : false
  }
}
```
This would seem to indicate that `CN=kirk,OU=client,O=client,L=test,C=de` is an admin user.
Here is the docker-compose.yaml file that configures Docker on each node in the cluster:

```yaml
version: "3.5"
services:
  elasticsearch:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        ODES_VERSION: "0.9.0"
    ####
    ## Configuration for Elastic is split up into several places depending on what needs to change / when.
    ## Settings that are rarely changed are defined in elasticsearch.yml.
    ## The `environment` directive is for settings that are handy to change without baking a new Docker image
    ## (though a new AMI would be a good idea), and the `env_file` is for settings that change per-cluster at run time.
    ## A cloud-init shell script will update these env files as directed by Terraform.
    ####
    env_file:
      - ./cluster-id.env
      - ./node-role.env
      - ./jvm-opts.env
      - ./network.env
    ulimits:
      # See: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      # We have a few TLS things to take care of...
      # See: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker-security/#sample-docker-compose-file
      ##
      - ./tls/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
      - ./tls/esnode.pem:/usr/share/elasticsearch/config/esnode.pem
      - ./tls/esnode-key.pem:/usr/share/elasticsearch/config/esnode-key.pem
      - ./tls/kirk.pem:/usr/share/elasticsearch/config/kirk.pem
      - ./tls/kirk-key.pem:/usr/share/elasticsearch/config/kirk-key.pem
      # And don't forget the config file
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      # And the security config files
      ##
      # Users / password hashes
      - ./security/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
      # Where we validate user creds against
      - ./security/config.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
      - type: volume
        source: odes-data
        target: /usr/share/elasticsearch/data
        volume:
          nocopy: true
      - type: volume
        source: odes-logs
        target: /usr/share/elasticsearch/logs
        volume:
          nocopy: true
    ports:
      # ES API is over 9200
      - target: 9200
        published: 9200
        protocol: tcp
        mode: host
      # Because we are not in an overlay swarm network, 9300 must also be exposed for inter-node comms
      - target: 9300
        published: 9300
        protocol: tcp
        mode: host
      # ODES Performance Analyzer is 9600
      - target: 9600
        published: 9600
        protocol: tcp
        mode: host
volumes:
  # Useful way to get bind behavior with volume features
  # See: https://stackoverflow.com/questions/39496564/docker-volume-custom-mount-point
  odes-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/odes/data
  odes-logs:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/log/elasticsearch
```
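One gotcha with the bind mounts in this compose file: if a host-side *file* (e.g. a `.pem` or a `.yml`) is missing when the container starts, Docker silently creates an empty *directory* at that path instead, which would quietly break the TLS and security-config mounts. A small pre-flight sketch (paths copied from the compose file above; this is my own helper, not part of ODES):

```python
from pathlib import Path

# Host-side sources of the file bind mounts in docker-compose.yaml
BIND_MOUNTS = [
    "./tls/root-ca.pem",
    "./tls/esnode.pem",
    "./tls/esnode-key.pem",
    "./tls/kirk.pem",
    "./tls/kirk-key.pem",
    "./elasticsearch.yml",
    "./security/internal_users.yml",
    "./security/config.yml",
]

# Report anything that is absent (or exists but is not a regular file)
missing = [p for p in BIND_MOUNTS if not Path(p).is_file()]
for p in missing:
    print(f"missing bind-mount source: {p}")
```

Running this from `/opt/odes/docker` before `docker-compose up` catches a missing cert before Docker turns it into an empty directory inside the container.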
The Dockerfile:

```dockerfile
# See: https://docs.docker.com/engine/reference/builder/
##
# We need to inject a few ES plugins into the ODES image:
# discovery-ec2, repository-s3
#
# We do this here.
# See: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker/#run-with-custom-plugins
##
ARG ODES_VERSION=latest
FROM amazon/opendistro-for-elasticsearch:${ODES_VERSION:-latest} AS base

# Plugins
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3
```
The `*.env` files mentioned in docker-compose.yaml:

```shell
root@com:/opt/odes/docker# cat *.env
cluster.name=<myClusterNameHere>
discovery.ec2.tag.es-cluster=<myClusterNameHere>
discovery.ec2.availability_zones=us-west-1b,us-west-1c
discovery.ec2.groups=<some-sg-id-here>
discovery.zen.minimum_master_nodes=3
# Due to gaps in the documentation / ES code, this must be set for all regions other than us-east-1
# See: https://discuss.elastic.co/t/discovery-ec2-plugin-always-tries-to-ping-localhost-never-finds-the-nodes-that-it-should/160433/9
# This will be necessary until this is merged: https://github.com/elastic/elasticsearch/pull/27925
discovery.ec2.endpoint=ec2.us-west-1.amazonaws.com
ES_JAVA_OPTS=-Xms3988m -Xmx3988m
network.publish_host=<internalIpHere>
network.bind_host=0.0.0.0
node.master=true
node.data=false
node.ingest=false
```
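For anyone unfamiliar with this pattern: these env files are plain `KEY=value` lines, and (as I understand the ES Docker image's entrypoint) dotted keys such as `cluster.name` are passed straight through to Elasticsearch as settings, while `ES_JAVA_OPTS` stays an ordinary environment variable. A minimal sketch of that parsing, assuming simple line-by-line `KEY=value` semantics:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=value lines, skipping blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

example = """
# master node role
node.master=true
node.data=false
"""
print(parse_env(example))  # {'node.master': 'true', 'node.data': 'false'}
```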
And the elasticsearch.yml (this file is identical on all nodes in the cluster):

```yaml
action.destructive_requires_name: true
indices.fielddata.cache.size: 1%  # default is unbounded
discovery.ec2.host_type: private_dns
plugin.mandatory: discovery-ec2
discovery.zen.hosts_provider: ec2
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
discovery.ec2.protocol: https
bootstrap.memory_lock: true
opendistro_security.ssl.http.enabled_ciphers:
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"
  - "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384"
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"
  - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_DHE_RSA_WITH_AES_128_CBC_SHA256"
  - "TLS_DHE_RSA_WITH_AES_256_CBC_SHA256"
opendistro_security.ssl.http.enabled_protocols:
  # The gold standard, aim for this, first!
  - "TLSv1.3"
  # 1.2 is needed for the consul-agent doing the health checks
  - "TLSv1.2"
# For now, we use the bundled certs that ship in the ODES images
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
# Currently using the same ODES demo cert for every node, so this needs to stay off
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
# For now, we use the bundled certs that ship in the ODES images
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
# For the PoC, we allow the demo certs
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
# The securityadmin.sh script is how we interface with the cluster to change security-related things.
# We will need to present a certificate with this set of properties...
##
opendistro_security.authcz.admin_dn:
  # root@com:/opt/odes/docker/tls# openssl x509 -subject -nameopt RFC2253 -noout -in kirk.pem
  # subject=CN=kirk,OU=client,O=client,L=test,C=de
  - "CN=kirk,OU=client,O=client,L=test,C=de"
opendistro_security.nodes_dn:
  - 'CN=node-0.example.com,OU=node,O=node,L=test,DC=de'
```
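Since both `admin_dn` and `nodes_dn` boil down to comparing the presented certificate's subject against the DN strings above, small formatting differences (e.g. spaces after the commas, as some `openssl` output styles print) are a classic source of silent mismatches. A toy illustration of DN comparison with whitespace normalization; this is my own hypothetical helper, not the plugin's actual (LdapName-based) logic, and the naive comma split would break on escaped commas inside an RDN:

```python
def normalize_dn(dn: str) -> str:
    """Trim whitespace around each RDN and rejoin with bare commas."""
    return ",".join(part.strip() for part in dn.split(","))

admin_dns = {normalize_dn("CN=kirk,OU=client,O=client,L=test,C=de")}

# Same DN as openssl might print it with spaces between RDNs
subject = "CN=kirk, OU=client, O=client, L=test, C=de"
print(normalize_dn(subject) in admin_dns)  # True
```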
I have made no changes to `securityconfig/config.yml` and have only updated the hashes in `securityconfig/internal_users.yml`. I was following the documentation here:
Docker info:

```shell
root@com:/opt/odes/docker/security# docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:35:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false
```
Environment info:

```shell
root@com:/opt/odes/docker/security# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
```