How to set up certificates for OpenSearch on Kubernetes?

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

Describe the issue:
Note: this is a repost from here. Tagging @pablo for visibility. :slight_smile:

I’m trying to get security to work using .pem files. I’m currently having some issues that I don’t know how to resolve.

First I generated the certs as this page mentions.
Then I created a generic k8s secret called opensearch-certs, passing in the following five files: admin.pem, admin-key.pem, node.pem, node-key.pem, and root-ca.pem (although I’m not sure if I need the admin* files?).
I then added that secret to the secretMounts section of the Helm chart’s values.yaml:

secretMounts:
  - name: opensearch-certs
    secretName: opensearch-certs
    path: /usr/share/opensearch/config/ssl
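For completeness, the cert generation in the first step (following the docs page) condenses to roughly the commands below. This is a sketch: the subject fields are placeholders mirroring the DN used later in this post, and the key sizes and validity periods are just example values.

```shell
# Root CA
openssl genrsa -out root-ca-key.pem 2048
openssl req -new -x509 -sha256 -key root-ca-key.pem \
  -subj "/C=GF/L=Somewhere/O=Something/OU=CDE/CN=root" -out root-ca.pem -days 730

# Admin cert (key converted to PKCS#8, which OpenSearch expects)
openssl genrsa -out admin-key-temp.pem 2048
openssl pkcs8 -inform PEM -outform PEM -in admin-key-temp.pem -topk8 -nocrypt -out admin-key.pem
openssl req -new -key admin-key.pem \
  -subj "/C=GF/L=Somewhere/O=Something/OU=CDE/CN=AB" -out admin.csr
openssl x509 -req -in admin.csr -CA root-ca.pem -CAkey root-ca-key.pem \
  -CAcreateserial -sha256 -out admin.pem -days 730

# Node cert (same pattern; repeat per node, or once if the cert is shared)
openssl genrsa -out node1-key-temp.pem 2048
openssl pkcs8 -inform PEM -outform PEM -in node1-key-temp.pem -topk8 -nocrypt -out node1-key.pem
openssl req -new -key node1-key.pem \
  -subj "/C=GF/L=Somewhere/O=Something/OU=CDE/CN=node1" -out node1.csr
openssl x509 -req -in node1.csr -CA root-ca.pem -CAkey root-ca-key.pem \
  -CAcreateserial -sha256 -out node1.pem -days 730
```

The resulting .pem files are what would then go into the opensearch-certs secret.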

Finally, here’s the SSL section of opensearch.yml:

ssl:
  transport:
    pemcert_filepath: ssl/node1.pem
    pemkey_filepath: ssl/node1-key.pem
    pemtrustedcas_filepath: ssl/root-ca.pem
    enforce_hostname_verification: false
  http:
    enabled: true
    pemcert_filepath: ssl/node1.pem
    pemkey_filepath: ssl/node1-key.pem
    pemtrustedcas_filepath: ssl/root-ca.pem
allow_unsafe_democertificates: true
allow_default_init_securityindex: true
authcz:
  admin_dn:
    - "CN=AB,OU=CDE,O=Something,L=Somewhere,C=GF"
nodes_dn:
  - "CN=AB,OU=CDE,O=Something,L=Somewhere,C=GF"

When I start up the OpenSearch pods, I get the following:

[opensearch-cluster-master-0] OpenSearchException[Transport client 
authentication no longer supported.]

And I also get

[2022-12-12T15:35:28,551][WARN ][o.o.d.HandshakingTransportAddressConnector] 
[opensearch-cluster-master-0] handshake failed for 
Caused by: org.opensearch.OpenSearchException: Transport client 
authentication no longer supported.

If I remove the security.ssl.transport section, I get the following error on startup:

java.lang.IllegalStateException: failed to load plugin class 
Likely root cause: OpenSearchException[ or 
and must be 
set if transport ssl is requested.]

I don’t know if this is important, but the pods are part of a k8s stateful set that’s running on a local kind cluster.

Let me know if you need more info. Any help is appreciated. :slight_smile:


Caused by: org.opensearch.OpenSearchException: Transport client 
authentication no longer supported.

This error is expected when you try to authenticate as a client over the transport layer (ports 9300–9400), e.g. with curl. This is no longer supported; the only supported communication on the transport layer is between the nodes.
To fix this error, you need to add all the nodes of the cluster to nodes_dn in opensearch.yml. The entries have to match the CN or a SAN of each node’s certificate.
If a cluster node is not listed in nodes_dn, the security plugin treats its connection as client communication and produces the above error.
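For example, with one cert shared by all nodes (which enforce_hostname_verification: false permits), a single entry matching that cert’s subject should be enough; with per-node certs you would list each DN, and as far as I know wildcard patterns are also accepted:

```yaml
nodes_dn:
  # One shared node cert -> one entry matching its subject DN
  - "CN=AB,OU=CDE,O=Something,L=Somewhere,C=GF"
  # With per-node certs, list each DN, or use a wildcard pattern:
  # - "CN=node*,OU=CDE,O=Something,L=Somewhere,C=GF"
```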

java.lang.IllegalStateException: failed to load plugin class 
Likely root cause: OpenSearchException[ or 
and must be 
set if transport ssl is requested.]

This is also expected when you remove the transport configuration from opensearch.yml. Transport configuration is mandatory when you use the security plugin.

Also, try placing your certificates in the config folder directly.


I’m using enforce_hostname_verification: false in the transport section, so I should be able to use the same node cert for all the nodes.
Am I supposed to scp the node’s certificate from the host’s filesystem onto the node’s filesystem? If yes, where do I place the certificate on each node?
Since I want to use the same cert for all the nodes, do I need 1 or N entries in the nodes_dn section, where N is the number of nodes in the cluster?

Also, I tried placing the certs in the config folder, but that gave me an error. A Google search helped me figure out that it was a permissions error, so I added a subfolder called ssl and placed the certs there. Once I did that, that specific error was gone.

@powerful_clouds The certificates are accessed by the opensearch user (UID 1000). See below.

You don’t need to copy the files onto the filesystem, as they’re mounted through the secret. Once you fix the permissions, change the mapping to /usr/share/opensearch/config/ in the secret mount and leave just the certificate file names in opensearch.yml.
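Concretely, following that suggestion the mount and the file references would end up something like this (a sketch based on the secretMounts block and cert names from the original post):

```yaml
secretMounts:
  - name: opensearch-certs
    secretName: opensearch-certs
    path: /usr/share/opensearch/config    # mount directly into config/

# ...and in the ssl section of opensearch.yml, plain file names instead of ssl/ paths:
#   pemcert_filepath: node1.pem
#   pemkey_filepath: node1-key.pem
#   pemtrustedcas_filepath: root-ca.pem
```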


Thanks for the tip. :slight_smile:
I had set runAsUser: 0 in several places in the OpenSearch config. Once I replaced all of those with runAsUser: 1000, I got

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, 
increase to at least [262144]

So, I set the following for the pod:

    - name: vm.max_map_count
      value: 262144
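In a pod spec, that fragment lives under securityContext.sysctls, and the value has to be a quoted string; a minimal sketch (the pod name and image tag are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: opensearch-sysctl-test
spec:
  securityContext:
    sysctls:
      - name: vm.max_map_count
        value: "262144"    # Kubernetes expects the sysctl value as a string
  containers:
    - name: opensearch
      image: opensearchproject/opensearch:2.4.0
```

Note that vm.max_map_count is a node-level (non-namespaced) sysctl, so by default the kubelet refuses to set it from a pod spec, which is consistent with the forbidden sysctl error that follows.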

Then I got the following error:
forbidden sysctl: "vm.max_map_count" not allowlisted
To resolve that, I tried to follow this Stack Overflow answer by adding an initContainer, but that gave me a CrashLoopBackOff on the pod. kubectl logs <OPENSEARCH_POD_NAME> -c init-sysctl shows sysctl: error setting key 'vm.max_map_count': Permission denied. I also tried to follow this, which is essentially the same proposed solution, and it gave me the same error.
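For what it’s worth, a Permission denied from such an init container usually means it isn’t running as privileged; the typical shape of that initContainer (a sketch, not necessarily the exact answer linked above) is:

```yaml
initContainers:
  - name: init-sysctl
    image: busybox:1.36
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      runAsUser: 0
      privileged: true    # required to write a node-level sysctl from a container
```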

I tried to follow this tutorial by @TheAlgo, but that gave me a CrashLoopBackOff with the following text in kubectl describe pod <OPENSEARCH_POD_NAME>:

Warning  Unhealthy  101s (x10 over 3m41s)  kubelet            
Startup probe failed: dial tcp <IP>:9200: connect: connection refused

I solved that error by increasing the initialDelaySeconds value, but that took me back to the forbidden sysctl: "vm.max_map_count" not allowlisted error…

@powerful_clouds Since this issue no longer concerns the security plugin, I’d suggest having a look at the Helm installation procedure and its files.
You should find the answers to all the Kubernetes questions in the charts.

@powerful_clouds Just a quick note: in my case I set vm.max_map_count on my Kubernetes host, and all containers then picked up that setting from the host configuration.

sysctl -w vm.max_map_count=262144