Add new nodes to a cluster and change shard settings

I have a single-node cluster and I want to add new VMs to it as nodes.
Is there a specific procedure to follow?

I also want to change the shard settings.
Is there a way to do that without losing the existing data?
Please let me know the steps and documentation to follow.

Hey @rhtbansal

I haven't added a node to a single-node instance myself, but I would imagine you need to adjust the OpenSearch config file and the certificates (unless you're not using a production setup).

https://opensearch.org/docs/latest/tuning-your-cluster/cluster/

You can create a new index set and send the data there; it depends on what you're using to ship logs.
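
For the shard settings part of the question: the number of primary shards is fixed once an index is created, so the usual way is to set it for future indices with an index template and, if the old data also needs re-sharding, copy it into a new index with _reindex. A rough sketch (the template and index names are placeholders, and admin:admin is the demo credential):

# Apply new shard/replica defaults to indices created from now on
curl -sk -u admin:admin -X PUT "https://localhost:9200/_index_template/my-logs-template" \
  -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["my-logs-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 3,
      "index.number_of_replicas": 1
    }
  }
}'

# Optionally copy an existing index into a new one that picks up the new settings
curl -sk -u admin:admin -X POST "https://localhost:9200/_reindex" \
  -H 'Content-Type: application/json' -d '
{
  "source": { "index": "old-logs" },
  "dest":   { "index": "my-logs-000001" }
}'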

@pablo please help. I am trying to add data nodes to my cluster but they are not getting added. Can you provide a sample config? I am following the document on creating a cluster but still cannot form one; the cluster IDs come out different even though the cluster name is the same on all 3 VMs.

@rhtbansal Have you checked the documentation suggested by @Gsmitt?

Please share the opensearch.yml of your single node (the original one) and the nodes you are trying to add.
Do you use demo certificates or custom?

Do you get any errors in the OpenSearch nodes’ logs?


Yes @pablo, we followed the document.

We are using the demo certificates since we are trying this first in our development environment.

We have tried:
- the master role for all the nodes
- master, index, and data roles for all the nodes
- 1 master and 2 data nodes

**Below is the yml of the main node we have**

plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:

  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3

cluster.name: opensearch1
node.name: node-1
discovery.seed_hosts: ["10.29.2.4", "10.29.2.5", "10.29.2.6"]
network.host: 10.29.2.4
http.port: 9200
path.data: /disk1/data
path.logs: /var/log/opensearch
node.roles: [ cluster_manager ]

**Below are the yml files of the nodes we are trying to add**

Node 2

plugins.security.disabled: false
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:

  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
node.roles: [ data ]
cluster.name: opensearch1
node.name: node-2
path.data: /disk1/data
path.home: /usr/share/opensearch
path.logs: /var/log/opensearch
network.host: 10.29.2.5
http.port: 9200
discovery.seed_hosts: ["10.29.2.4"]

Node 3
plugins.security.disabled: false
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:

  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
node.roles: [ data ]
cluster.name: opensearch1
node.name: node-3
path.data: /disk1/data
path.home: /usr/share/opensearch
path.logs: /var/log/opensearch
network.host: 10.29.2.6
http.port: 9200
discovery.seed_hosts: ["10.29.2.4"]

@rhtbansal

Do you start the cluster from scratch or are you adding new hosts?
If you start from scratch, then you must set cluster.initial_cluster_manager_nodes on the master node to form the cluster.

If you just join new nodes to a running cluster, then discovery.seed_hosts pointing at a master-eligible node is enough on the new nodes.
However, on your master node you've listed the new data nodes in discovery.seed_hosts. This is incorrect, as the master node will treat the data nodes as master-eligible nodes.

If 10.29.2.4 is your only master node in the cluster, then all of the nodes (including the master) should have the following entry in opensearch.yml:

discovery.seed_hosts: ["10.29.2.4"]
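
To make the two cases concrete, here is a minimal sketch (node name and IP taken from your configs, adjust to your setup):

# Case 1: forming a brand-new cluster - only on the first cluster-manager node,
# and the bootstrap setting should be removed after the cluster has formed
cluster.initial_cluster_manager_nodes: ["node-1"]
discovery.seed_hosts: ["10.29.2.4"]

# Case 2: joining an existing cluster - on each new data node, point only at the master-eligible node
discovery.seed_hosts: ["10.29.2.4"]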

@rhtbansal I forgot to ask, how do you deploy your cluster?

I am adding new hosts to the existing cluster, which is a single-node cluster.
If I use a new cluster name, will all the previously stored data be lost?
I am adding the nodes as Azure VMs.

Also, during the default installation a cluster named opensearch was formed. Do I have to change the cluster name in the yml before installation?

Are there any specific errors I should look for in the OpenSearch logs? The log file is huge.

@rhtbansal As far as I’ve seen in my test lab, the cluster name change doesn’t corrupt the data.
However, I would avoid testing that in the production cluster.

Your yaml files have opensearch1 as the cluster name. Is that the original cluster name?

Look for any errors in the logs, especially the repeating ones.

opensearch is the original cluster name, which is created by default.
I previously tried it with that name but it didn't work, so I changed it for testing.

Do you think any other yml changes are needed?

@rhtbansal How did you configure your single node?

Did you use cluster.initial_cluster_manager_nodes to create the cluster, or did you set discovery.type to single-node?

Have a look at the startup logs. Look for any join or connectivity errors.
This issue might be related to cluster configuration or security plugin settings. That’s why logs would be very helpful here.
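
To narrow the huge log down, a rough filter like this usually surfaces the relevant lines (the log file is normally named after the cluster, so adjust the path, and the exact messages vary between versions):

# Path assumes path.logs: /var/log/opensearch and cluster.name: opensearch1 from your config
grep -iE "exception|error|failed|not discovered|join" /var/log/opensearch/opensearch1.log | tail -n 100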

@rhtbansal Also try to create a new VM with OpenSearch, as the existing data nodes have already created and joined their own cluster (as you stated before). A new OpenSearch VM will have a clean data folder.

Take a look at the startup logs of that new VM and check whether it can see the master node (IP address with port 9300-9400). You should also see whether it joins the master or creates its own cluster.
I know the logs are huge, but this is the only way to troubleshoot OpenSearch.
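
One quick way to check whether the nodes really formed one cluster is to compare the cluster_uuid each node reports and to list the members as seen by the master (admin:admin is the demo credential, replace it with yours):

# The root endpoint returns the cluster name and cluster_uuid; the UUID must be identical on every node
curl -sk -u admin:admin https://10.29.2.4:9200/
curl -sk -u admin:admin https://10.29.2.5:9200/

# Nodes that actually joined the cluster, as seen from the master
curl -sk -u admin:admin "https://10.29.2.4:9200/_cat/nodes?v"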

I created all the VMs separately and used the default script for the installation and configuration of the 2 new VMs.
Do I have to keep the CA and the other certificates the same on all the VMs and configure them manually?
I mean, I want to know whether the installation and initial configuration of a VM is different when it is meant to join a cluster.

@rhtbansal All the nodes in the cluster must have the same CA configured. If your new nodes have certificates signed by a different CA, then you must add that CA to the file configured in plugins.security.ssl.transport.pemtrustedcas_filepath.

Have you noticed any connectivity errors during the additional nodes’ startup?

Any transport layer errors?
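
Two quick checks that can rule out the certificate and connectivity side, assuming the demo file names from your configs and the default package config directory /etc/opensearch (adjust the paths if yours differ):

# Each node's certificate must chain to the same root CA
openssl verify -CAfile /etc/opensearch/root-ca.pem /etc/opensearch/esnode.pem

# The master's transport port (9300 by default) must be reachable from every new node
nc -vz 10.29.2.4 9300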

I am using the default settings since it's a dev environment.

I tried creating a new cluster from scratch. For network.host I have used both 0.0.0.0 and the relevant IPs, but still two separate single-node clusters are formed.

I have checked that both servers are reachable from each other. Let me know if I am making a mistake.

network.host: 0.0.0.0
http.port: 9200
search.max_buckets: 100000
discovery.seed_hosts: ["10.29.2.5", "10.29.2.6"]
cluster.name: acoecluster3
node.name: node-2
cluster.initial_cluster_manager_nodes: ["node-1", "node-2"]

and

network.host: 0.0.0.0
http.port: 9200
search.max_buckets: 100000
discovery.seed_hosts: ["10.29.2.5", "10.29.2.6"]
cluster.name: acoecluster3
node.name: node-2
cluster.initial_cluster_manager_nodes: ["node-1", "node-2"]

Is that a mistake in copying your configs or do you have exactly the same node name in both nodes?

I use node-1; that was a mistake while pasting here.

I am using different names.

@rhtbansal What is the OpenSearch version?

Have you set any roles in these 2 nodes?