New Cluster with New Admin: where am I wrong?

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser): 3.0.0

Describe the issue: New cluster build.
There are no issues when each node is brought up single-node with certs and security.
When the system is brought up with the cluster configuration, I am unable to run securityadmin.sh.

Configuration:
cluster.name: mccoy
node.name: mccoy01
node.roles: [ data ]
path.data: /opensearch_data
path.logs: /var/log/opensearch
bootstrap.memory_lock: true
#network.host: mccoy01.bc.edu
#network.host: 136.167.4.136
#network.host: [ local, site ]
#network.host: 136.167.4.136
network.host: 0.0.0.0
http.port: 9200
#discovery.seed_hosts: ["136.167.4.136", "136.167.4.137", "136.167.4.151"]
discovery.seed_hosts: ["mccoy01.bc.edu", "mccoy02.bc.edu", "mccoy04.bc.edu"]
#discovery.type: single-node
#cluster.initial_cluster_manager_nodes: ["136.167.4.136", "136.167.4.137", "136.167.4.151"]
cluster.initial_cluster_manager_nodes: ["mccoy01.bc.edu", "mccoy02.bc.edu", "mccoy04.bc.edu"]
gateway.recover_after_data_nodes: 3
bootstrap.system_call_filter: false
#action.destructive_requires_name: true
plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/node1.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: /etc/opensearch/node1.pem
plugins.security.ssl.http.pemkey_filepath: /etc/opensearch/node1-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - 'CN=A,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
plugins.security.nodes_dn:
  - 'CN=mccoy*.bc.edu,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.disabled: false
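One thing worth checking with a config like this: `admin_dn` and `nodes_dn` must match the certificate subject string exactly, in RFC 2253 order. A quick way to see the exact string the Security plugin compares is `openssl x509 -subject -nameopt RFC2253` (shown here against a throwaway self-signed cert just to demonstrate the output format; run the same second command against your real `/etc/opensearch/admin.pem` and `node1.pem`):

```shell
# Throwaway self-signed cert, only to demonstrate the subject output format.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem -out demo.pem \
  -days 1 -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/OU=UNIT/CN=mccoy01.bc.edu"

# Print the subject in RFC 2253 order, which is the format admin_dn/nodes_dn use:
openssl x509 -in demo.pem -noout -subject -nameopt RFC2253
# subject=CN=mccoy01.bc.edu,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA
```

If the printed subject of your real certs differs from the configured DN by even one field or one comma, the admin connection or node-to-node trust fails.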

Relevant Logs or Screenshots:
securityadmin:
[root@mccoy01 tools]# OPENSEARCH_JAVA_HOME=/usr/share/opensearch/jdk ./securityadmin.sh -h mccoy01.bc.edu -cd /etc/opensearch/opensearch-security/ -cacert /etc/opensearch/root-ca.pem -cert /etc/opensearch/admin.pem -key /etc/opensearch/admin-key.pem -icl -nhnv --accept-red-cluster
Security Admin v7
Will connect to mccoy01.bc.edu:9200 … done
Connected as "CN=A,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA"
OpenSearch Version: 3.0.0
Contacting opensearch cluster 'opensearch' …
Cannot retrieve cluster state due to: Timeout due to inactivity (30000 MILLISECONDS). This is not an error, will keep on trying …
Root cause: org.apache.hc.core5.http2.H2StreamResetException: Timeout due to inactivity (30000 MILLISECONDS) (java.io.IOException/org.apache.hc.core5.http2.H2StreamResetException)

  • Try running securityadmin.sh with -icl (but no -cn) and -nhnv (If that works you need to check your clustername as well as hostnames in your TLS certificates)
  • Make sure that your keystore or PEM certificate is a client certificate (not a node certificate) and configured properly in opensearch.yml
  • If this is not working, try running securityadmin.sh with --diagnose and see diagnose trace log file)
  • Add --accept-red-cluster to allow securityadmin to operate on a red cluster.
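On the hostname point in the first hint: hostname verification checks the SANs in the node certificate against the name you connect to (the run above bypasses this with -nhnv, but it still needs to be right eventually). A way to inspect the SANs, sketched with a demo cert; point the second command at your real node1.pem:

```shell
# Demo cert carrying a SAN, just to show the inspection command.
openssl req -x509 -newkey rsa:2048 -nodes -keyout san-key.pem -out san.pem \
  -days 1 -subj "/CN=mccoy01.bc.edu" \
  -addext "subjectAltName=DNS:mccoy01.bc.edu,IP:136.167.4.136"

# If this prints nothing for your real node cert, hostname verification
# against mccoy01.bc.edu will fail once -nhnv is dropped.
openssl x509 -in san.pem -noout -ext subjectAltName
```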

/var/log/opensearch/mccoy.log snippet
[2025-06-18T09:10:06,965][WARN ][o.o.c.c.ClusterFormationFailureHelper] [mccoy01] cluster-manager not discovered yet: have discovered [{mccoy01}{mR9sRm6jQVmzOSPls3IFFA}{zZtNmP5bT8Cl0P7hTPvwlw}{136.167.4.136}{136.167.4.136:9300}{d}{shard_indexing_pressure_enabled=true}]; discovery will continue using [136.167.4.137:9300, 136.167.4.151:9300] from hosts providers and from last-known cluster state; node term 0, last-accepted version 0 in term 0

[2025-06-18T09:10:07,083][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:08,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:09,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:10,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:11,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:12,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:13,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:14,084][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:15,085][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:16,085][INFO ][o.o.s.c.ConfigurationRepository] [mccoy01] Wait for cluster to be available …
[2025-06-18T09:10:16,924][INFO ][o.o.j.s.JobSweeper ] [mccoy01] Running full sweep
[2025-06-18T09:10:16,965][WARN ][o.o.c.c.ClusterFormationFailureHelper] [mccoy01] cluster-manager n

@GuyS In your configuration you give this node the role "node.roles: [ data ]". Is this correct? Are you attempting to run a cluster with 3 cluster-manager-eligible nodes, or 3 data nodes and only one cluster manager?

Also, has OpenSearch on the mccoy01 node previously been started as a single-node cluster? Is it still using the same data directory? If so, is there data there that you need to keep, or can you remove the data directory entirely?

This is a brand-new build, new hardware and software, with no data ingested as of yet.
There are 6 physical nodes, 3 of them to be cluster managers.
My plan was to get the managers up first, but I think I see the issue now: I misunderstood step 2 of the cluster build documentation.

cluster.name: mccoy
node.name: mccoy01
node.roles: [ cluster_manager ]
path.data: /opensearch_man
path.logs: /var/log/opensearch

bootstrap.memory_lock: true

#network.host: mccoy01.bc.edu
#network.host: 136.167.4.136
#network.host: [ local, site ]
#network.host: 136.167.4.136
network.host: 0.0.0.0
http.port: 9200
#discovery.seed_hosts: ["136.167.4.136", "136.167.4.137", "136.167.4.151"]
discovery.seed_hosts: ["mccoy01.bc.edu", "mccoy02.bc.edu", "mccoy04.bc.edu"]
#discovery.type: single-node
#cluster.initial_cluster_manager_nodes: ["136.167.4.136", "136.167.4.137", "136.167.4.151"]
cluster.initial_cluster_manager_nodes: ["mccoy01.bc.edu", "mccoy02.bc.edu", "mccoy04.bc.edu"]
gateway.recover_after_data_nodes: 3
bootstrap.system_call_filter: false
#action.destructive_requires_name: true
plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/node1.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: /etc/opensearch/node1.pem
plugins.security.ssl.http.pemkey_filepath: /etc/opensearch/node1-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - 'CN=A,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
plugins.security.nodes_dn:
  - 'CN=mccoy*.bc.edu,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.disabled: false

No change in the securityadmin.sh output.

Found out how to set debug on. PendingClusterTasksRequest:
org.opensearch.client.ResponseException: method [GET], host [https://mccoy01.bc.edu:9200], URI [/_cluster/pending_tasks], status line [HTTP/2.0 503 Service Unavailable]
{"error":{"root_cause":[{"type":"cluster_manager_not_discovered_exception","reason":null}],"type":"cluster_manager_not_discovered_exception","reason":null},"status":503}
at org.opensearch.client.RestClient.convertResponse(RestClient.java:501)
at org.opensearch.client.RestClient.performRequest(RestClient.java:384)
at org.opensearch.client.RestClient.performRequest(RestClient.java:359)
at org.opensearch.security.tools.SecurityAdmin.generateDiagnoseTrace(SecurityAdmin.java:1171)
at org.opensearch.security.tools.SecurityAdmin.execute(SecurityAdmin.java:681)
at org.opensearch.security.tools.SecurityAdmin.main(SecurityAdmin.java:154)

IndicesStatsRequest:
org.opensearch.client.ResponseException: method [GET], host [https://mccoy01.bc.edu:9200], URI [/_stats], status line [HTTP/2.0 503 Service Unavailable]
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"}],"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"},"status":503}

Cluster is up… finally.
The root issue was that the root-ca cert and key were unique per node. This is wrong.

Generate one root CA and key, distribute it to all the nodes in the cluster, and build the node certs off that original root-CA pair.
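For reference, that single-root-CA flow can be sketched with openssl like this (file names match the opensearch.yml above; the DN fields are the placeholder values from this thread, swap in your own):

```shell
# 1. One root CA for the whole cluster, generated ONCE on one host:
openssl genrsa -out root-ca-key.pem 2048
openssl req -new -x509 -sha256 -key root-ca-key.pem -days 730 \
  -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/OU=UNIT/CN=ROOT" \
  -out root-ca.pem

# 2. Per-node key (converted to PKCS#8, as the security plugin expects) and CSR:
openssl genrsa -out node1-key-temp.pem 2048
openssl pkcs8 -topk8 -nocrypt -in node1-key-temp.pem -out node1-key.pem
openssl req -new -key node1-key.pem \
  -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/OU=UNIT/CN=mccoy01.bc.edu" \
  -out node1.csr

# 3. Sign EVERY node's CSR with the SAME root CA (repeat per node):
openssl x509 -req -in node1.csr -CA root-ca.pem -CAkey root-ca-key.pem \
  -CAcreateserial -sha256 -days 730 -out node1.pem

# Sanity check: each node cert must verify against the one root-ca.pem
# that gets copied to all nodes as pemtrustedcas_filepath.
openssl verify -CAfile root-ca.pem node1.pem
# node1.pem: OK
```

The verify step at the end is exactly what was failing before: with a different root CA per node, node2's cert would not verify against node1's root-ca.pem, so transport TLS between nodes failed and the cluster never formed.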