OpenSearch 1.3.20 on AlmaLinux 9.5: setting up the cluster and resetting the situation

Hello,
I just installed AlmaLinux 9.5 on 3 PCs and then followed the official OpenSearch 1.3.20 documentation to set up a cluster.
The installation was performed as root with these commands:

cd /tmp
curl -SL https://artifacts.opensearch.org/releases/bundle/opensearch/1.x/opensearch-1.x.repo -o /etc/yum.repos.d/opensearch-1.x.repo
yum clean all
yum repolist
yum list opensearch --showduplicates
yum install opensearch
systemctl start opensearch
systemctl status opensearch
systemctl restart opensearch.service
less /var/log/opensearch/dw4-mongo-test.log 
systemctl status firewalld
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
firewall-cmd --add-port=443/tcp --permanent
nano /etc/opensearch/opensearch.yml
systemctl restart opensearch.service
firewall-cmd --reload
sudo systemctl restart opensearch.service
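Before expecting the nodes to form a cluster, it can help to confirm that each host can actually reach its peers on the transport port (9300). A minimal sketch, assuming `nc` (the `nmap-ncat` package on AlmaLinux) is installed and using the seed-host IPs from my config:

```shell
#!/bin/sh
# Run from each node: check that the OpenSearch transport port (9300)
# is reachable on every seed host listed in discovery.seed_hosts.
for host in 10.17.60.102 10.17.60.103 10.17.60.104; do
  if nc -z -w 3 "$host" 9300; then
    echo "$host:9300 reachable"
  else
    echo "$host:9300 NOT reachable - check firewalld rules"
  fi
done
```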

The cluster should consist of 3 identical nodes; here is the /etc/opensearch/opensearch.yml of one of them:

cluster.name: dw4-mongo-test
node.name: dw4-test-node2
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.17.60.102", "10.17.60.103", "10.17.60.104"]
cluster.initial_master_nodes: ["dw4-test-node2", "dw4-test-node1", "dw4-test-node3"]
action.destructive_requires_name: true
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: false
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3

I am in a test environment, so I disabled HTTPS.
But when I check whether the cluster formed, I get 3 single-node clusters instead of one cluster with 3 nodes.

curl -XGET http://10.17.60.103:9200/_cat/nodes?v -u 'admin:admin' --insecure
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.17.60.103           34          74   0    0.54    0.20     0.07 dimr      *      dw4-test-node2
curl -XGET http://10.17.60.104:9200/_cat/nodes?v -u 'admin:admin' --insecure
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.17.60.104           35          70   6    0.30    0.12     0.04 dimr      *      dw4-test-node-3
curl -XGET http://10.17.60.102:9200/_cat/nodes?v -u 'admin:admin' --insecure
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.17.60.102           36          75   5    0.59    0.22     0.08 dimr      *      dw4-test-node1
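One way to confirm that these really are three independent clusters (and not just a display quirk) is to compare the `cluster_uuid` that each node reports in its root response: three different UUIDs mean three separately bootstrapped clusters. A sketch, assuming the same node IPs and demo credentials as above:

```shell
#!/bin/sh
# Print the cluster_uuid reported by each node. If the nodes had joined
# a single cluster, all three UUIDs would be identical.
for ip in 10.17.60.102 10.17.60.103 10.17.60.104; do
  uuid=$(curl -s -u 'admin:admin' "http://$ip:9200/" \
           | grep -o '"cluster_uuid" *: *"[^"]*"')
  echo "$ip -> $uuid"
done
```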

Maybe I should try stopping all nodes, deleting all the data created by OpenSearch, and restarting all nodes, but I don't know how to do that.
Which directory should I remove to reset the situation?

Strangely enough, it worked.
Each node had apparently already bootstrapped its own cluster state before the discovery settings took effect, so I had to delete the contents of the folder /var/lib/opensearch on all 3 PCs.
After restarting the nodes, I now get my cluster up and running:
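For anyone hitting the same problem, the reset procedure can be sketched as the commands below, run on each of the 3 nodes. The path matches the `path.data` setting from my opensearch.yml; treat this as a test-environment sketch only, since it destroys all index data:

```shell
#!/bin/sh
# Run on every node: stop OpenSearch, wipe the data directory so the
# stale single-node cluster state is gone, then restart the service.
systemctl stop opensearch.service
rm -rf /var/lib/opensearch/*     # path.data from opensearch.yml
systemctl start opensearch.service

# Afterwards, verify from any node that all 3 nodes joined:
curl -XGET 'http://10.17.60.103:9200/_cat/nodes?v' -u 'admin:admin'
```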

curl -XGET http://10.17.60.103:9200/_cat/nodes?v -u 'admin:admin' --insecure
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.17.60.102           30          75   2    0.06    0.05     0.02 dimr      -      dw4-test-node1
10.17.60.103           41          75   3    0.14    0.11     0.04 dimr      *      dw4-test-node2
10.17.60.104           21          71   2    0.08    0.07     0.02 dimr      -      dw4-test-node-3