Custom Internal Users

I am using OpenSearch with docker-compose.yml. I have created an internal_users.yml file and mounted it via the node volumes sections. I have verified that the file, with my custom users, is present inside the containers at the correct path. However, when I log in and check the internal users via the GUI, the users I created are not present. Is there something else I need to do for my custom users to be visible?
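
For reference, the relevant part of my docker-compose.yml looks roughly like this (the host path and service name are just how I have it locally; the container path is the default security config location):

    services:
      opensearch-node1:
        volumes:
          - opensearch-data1:/usr/share/opensearch/data
          - ./internal_users.yml:/usr/share/opensearch/config/opensearch-security/internal_users.yml
          # same bind mount repeated on the other node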

@michael.anderton Could you share an example of your internal users from the internal_users.yml file?

@pablo
Sure thing! I copied the default file and added just one user (myself). I have already checked that this version is mounted correctly in the volumes block of the nodes and is in the correct place, and I can see my entry when I docker exec into the nodes and check the file. I am using the default admin password as my temp password until I get this file to work, then I will change them all.

internal_users.yml

---
# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh

_meta:
  type: "internalusers"
  config_version: 2

# Define your internal users here

manderton:
  hash: "$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG"
  reserved: false
  hidden: false
  backend_roles:
  - "admin"
  - "all_access"
  description: "Me"

## Demo users

admin:
  hash: "$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG"
  reserved: true
  backend_roles:
  - "admin"
  description: "Demo admin user"

anomalyadmin:
  hash: "$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3."
  reserved: false
  opendistro_security_roles:
  - "anomaly_full_access"
  description: "Demo anomaly admin user, using internal role"

kibanaserver:
  hash: "$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H."
  reserved: true
  description: "Demo OpenSearch Dashboards user"

kibanaro:
  hash: "$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC"
  reserved: false
  backend_roles:
  - "kibanauser"
  - "readall"
  attributes:
    attribute1: "value1"
    attribute2: "value2"
    attribute3: "value3"
  description: "Demo OpenSearch Dashboards read only user, using external role mapping"

logstash:
  hash: "$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2"
  reserved: false
  backend_roles:
  - "logstash"
  description: "Demo logstash user, using external role mapping"

readall:
  hash: "$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2"
  reserved: false
  backend_roles:
  - "readall"
  description: "Demo readall user, using external role mapping"

snapshotrestore:
  hash: "$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W"
  reserved: false
  backend_roles:
  - "snapshotrestore"
  description: "Demo snapshotrestore user, using external role mapping"

Did you run securityadmin.sh at all?

@michael.anderton I’ve tested your internal_users.yml file and it worked with no issues; the new user was listed in the OpenSearch Dashboards UI.

Just to be clear on your procedure:

  1. You’ve made the change in the internal_users.yml file
  2. Mapped the file in docker-compose.yml
  3. Started the containers with docker-compose up -d using that docker-compose.yml file

Could you run the command below inside the OpenSearch container? This should tell you whether the new user has been added.

curl --insecure -u admin:admin -XGET https://localhost:9200/_plugins/_security/api/internalusers?pretty
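
If the file has been picked up, the new user should appear in the response, roughly like this (abridged; the API never returns the actual hashes):

    {
      "manderton" : {
        "hash" : "",
        "reserved" : false,
        "hidden" : false,
        "backend_roles" : [ "admin", "all_access" ],
        "attributes" : { },
        "description" : "Me",
        "static" : false
      },
      ...
    }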

@pablo @jasonrojas

I found this in the logs. The startup scripts are skipping the update. Can I remove that index and restart the containers with the docker-compose.yml file?

[INFO ][o.o.s.s.ConfigHelper     ] [opensearch-node1] Will update 'internalusers' with /usr/share/opensearch/config/opensearch-security/internal_users.yml and populate it with empty doc if file missing and populateEmptyIfFileMissing=false
[INFO ][o.o.s.s.ConfigHelper     ] [opensearch-node1] Index .opendistro_security already contains doc with id internalusers, skipping update.

@michael.anderton Could you share the docker-compose commands that you use to start and stop the containers?

@pablo

sudo docker-compose up -d
sudo docker-compose down

@michael.anderton When you run docker-compose (docker-compose up -d) for the first time, it will create data volumes for the OpenSearch containers.

    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    volumes:
      - opensearch-data2:/usr/share/opensearch/data

These volumes will contain all the indices, including .opendistro_security.
You can list existing volumes with docker volume ls.
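
The output will look something like the below; the exact prefix depends on your compose project (directory) name:

    DRIVER    VOLUME NAME
    local     opensearch_opensearch-data1
    local     opensearch_opensearch-data2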

Running docker-compose down stops and removes the containers, but the named volumes and the data in them remain intact.
Running docker-compose up -d only starts the containers again. If the index .opendistro_security already exists, the security plugin skips loading the YAML config files (that is the "skipping update" message in your logs) and the new config won’t be applied.

I see two solutions here.

  1. Use the suggestion from @jasonrojas and run securityadmin.sh inside the OpenSearch container.
    i.e.
"/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh" -cd "/usr/share/opensearch/config/opensearch-security" -icl -key "/usr/share/opensearch/config/kirk-key.pem" -cert "/usr/share/opensearch/config/kirk.pem" -cacert "/usr/share/opensearch/config/root-ca.pem" -nhnv

kirk.pem is the demo admin cert. You’ll need to use the one you’ve defined in plugins.security.authcz.admin_dn in the opensearch.yml file.
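
For example, the opensearch.yml entry looks something like this (the DN below is only a placeholder; it has to match the subject of your own admin certificate):

    plugins.security.authcz.admin_dn:
      - "CN=my-admin,OU=my-unit,O=my-org,L=my-city,C=US"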

  2. Run docker-compose down -v and then docker-compose up
    The first command will delete the volumes after stopping the containers. The second command will start the containers and create fresh volumes and a new .opendistro_security index.
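
    i.e. in your case (the same commands you already use, just with -v added to the down):
sudo docker-compose down -v
sudo docker-compose up -d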

Please be aware that volume removal destroys all of the data, so it should only be used in dev/test scenarios.

@pablo @jasonrojas

That is what I have been reading. I can destroy the volumes in dev/test, but that won't do any good if I then have to do the same thing in production. I will attempt the admin cert approach in the meantime. I am currently not using an admin cert but will get that going.