OpenSearch cluster: node doesn't see the cluster

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch 2.13 on Alma Linux 9.2

Describe the issue:
I’ve started a first node (node-1) at 192.168.0.111 on cluster test-cluster, and it works perfectly. This node will be the master.

I configured and started a second node, 19.168.0.112, which starts without issue.

When I check the cluster status:

curl -H 'Content-Type: application/json' -X GET 'https://192.168.0.111:9200/_cluster/health?pretty' -ku admin:admin

{
  "cluster_name" : " test-cluster ",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 12,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 70.58823529411765
}

Only 1 node; the second node doesn’t join the cluster.
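
As a cross-check (just a sketch, using the same demo admin credentials as above), the _cat/nodes API lists which nodes have actually joined:

curl -X GET 'https://192.168.0.111:9200/_cat/nodes?v' -ku admin:admin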

Configuration:

opensearch.yml for node-1 (master):

cluster.name: test-cluster
node.name: node-1
path.data: /srv/opensearch/data
path.logs: /var/log/opensearch


# Set the bind address to a specific IP (IPv4 or IPv6):
transport.host: localhost
transport.tcp.port: 9300
network.host: 192.168.17.111
http.port: 9200

discovery.seed_hosts: ["192.168.0.111", "19.168.0.112"]

cluster.initial_cluster_manager_nodes: ["192.168.0.111"]

######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-config", ".plugins-ml-connector", ".plugins-ml-model-group", ".plugin>
node.max_local_storage_nodes: 3

opensearch.yml for node-2:

cluster.name: test-cluster
node.name: node-2
path.data: /srv/opensearch/data
path.logs: /var/log/opensearch

# Set the bind address to a specific IP (IPv4 or IPv6):
transport.host: localhost
transport.tcp.port: 9300
network.host: 192.168.0.112
http.port: 9200

discovery.seed_hosts: ["192.168.0.111", "192.168.0.112"]

cluster.initial_cluster_manager_nodes: ["192.168.0.111"]

######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-config", ".plugins-ml-connector", ".plugins-ml-model-group", ".plugin>
node.max_local_storage_nodes: 3

Relevant Logs or Screenshots:

No errors in the node logs.

Any help would be much appreciated.

It seems the IP addresses in the discovery.seed_hosts and cluster.initial_cluster_manager_nodes settings of the first node are not configured correctly: 19.168… should be 192.168….

Sorry, just a typo because I don’t want to show our internal IPs. Fixed.

Hey,

I noticed your configuration files are missing the node role attributes for each node in the cluster:

node.roles: [ cluster_manager ]

Or

node.roles: [ data, ingest ]

Thanks for the answer.

I’ve added the line to the .yml files and restarted the nodes:

node-1 (master):
node.roles: [ cluster_manager, data, ingest]

node-2:
node.roles: [ data, ingest ]

No change, still:
"number_of_nodes" : 1,


Hey,

I assume you restarted the OpenSearch services?
Did you open the correct ports (9200, 9300)?
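
For example, on Alma Linux 9 the default firewall is firewalld, so opening the HTTP and transport ports on each node would look roughly like this (a sketch; adjust if you use a non-default zone):

sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9300/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # confirm both ports are listed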

Thanks for your answers.

I wrote a working config .yml (based on the Elasticsearch docs).

Here is the config.

Node 1 (initial master):

cluster.name: test-cluster
node.name: node-1
path.data: /srv/opensearch/data
path.logs: /var/log/opensearch

network.host: 19.0.17.111
network.bind_host: 0.0.0.0
cluster.initial_master_nodes: ["node-1"]
discovery.zen.ping.unicast.hosts: ["192.168.0.111", "192.168.0.112", "192.168.0.113"]
node.master: true
node.data: true

Node 2:

cluster.name: test-cluster
node.name: node-2
path.data: /srv/opensearch/data
path.logs: /var/log/opensearch

network.host: 192.168.0.112
network.bind_host: 0.0.0.0
cluster.initial_master_nodes: ["node-1"]
discovery.zen.ping.unicast.hosts: ["192.168.0.111", "192.168.0.112", "192.168.0.113"]
node.master: true
node.data: true

Node 3:

cluster.name: test-cluster
node.name: node-3
path.data: /srv/opensearch/data
path.logs: /var/log/opensearch

network.host: 192.168.0.113
network.bind_host: 0.0.0.0
cluster.initial_master_nodes: ["node-1"]
discovery.zen.ping.unicast.hosts: ["192.168.0.111", "192.168.0.112", "192.168.0.113"]
node.master: true
node.data: true

Important:
You have to start the initial master node before the others. In my case, that’s why the nodes didn’t see each other.
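
In practice that means something like this (a sketch, assuming the stock systemd unit name opensearch from the RPM install):

# On node-1 (the initial master) first:
sudo systemctl restart opensearch

# Then on node-2 and node-3:
sudo systemctl restart opensearch

# Finally, verify all three joined:
curl -X GET 'https://192.168.0.111:9200/_cat/nodes?v' -ku admin:admin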

Hope that could help.


Thanks for posting your resolution :+1:

EDIT: I see you put this in, nice.

A final question:

Are these two config lines the same?

node.roles: [ cluster_manager ]
node.master: true

I really appreciate your help.

Hey @Nash

This is from the link I provided above.

You can also explicitly specify that this node is a cluster manager node, even though it is already set to true by default. Set the node role to cluster_manager to make it easier to identify the cluster manager node.

node.roles: [ cluster_manager ]

You can make them cluster-manager-eligible data nodes that will also be used for ingesting data:

node.roles: [ data, ingest ]

Got it.

Thanks a lot.

Have a great day
