Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch 1.2.4 and 2.15.0
Describe the issue:
I recently upgraded from OpenSearch 1.2.4 to 2.15.0. There was no clear documentation on how to switch from a tar file installation to an RPM, so I pieced my attempt together from a few forum posts.
The installation steps were:
- Took a snapshot of the Linux node and the OpenSearch data.
- Moved all of the OpenSearch 1.2.4 related directories to /backup.
- Installed the OpenSearch 2.15.0 RPM.
- Copied the 1.2.4 opensearch.yml file to OpenSearch 2.15.0. (I know this is probably not good practice, since the newer opensearch.yml offers more settings. This is for dev testing purposes and will be done differently in prod.)
- Pointed the data path at the OpenSearch 1.2.4 data directory that is now in /backup. (Again, bad practice, but I was running into silly permissions issues I don't want to troubleshoot at the moment and just wanted to get this running to prove the direct tar-to-RPM uplift could work.)
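For context, the data path change amounts to something like this in opensearch.yml (the /backup path is an example reflecting my layout, not a recommendation):

```yaml
# opensearch.yml: point the 2.15.0 RPM install at the old 1.2.4 data.
# Path is an example from my layout; adjust to yours.
path.data: /backup/opensearch/data
path.logs: /var/log/opensearch
```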
Started OpenSearch and OpenSearch Dashboards. There were no startup issues, but there are issues at runtime with OpenSearch Dashboards.
The first error was that the OpenSearch Dashboards index had already been created. Even after deleting the index I was still getting the error, so I went ahead and disabled index creation for OpenSearch Dashboards for the time being.
Now I am dealing with: Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open
I'm having a hard time understanding why the shard count is going through the roof for a single-node cluster that was running just fine on OpenSearch 1.2.4.
If anyone has information on how to properly handle the data path relocation I am attempting, it would help greatly!
Configuration:
Relevant Logs or Screenshots:
Could you show the results of the GET _cluster/stats and GET _cluster/settings APIs?
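If it's easier, here are equivalent curl commands (assuming the default https://localhost:9200 endpoint with the security plugin enabled; adjust host and credentials to your setup):

```shell
# Cluster-wide stats, including total shard counts.
curl -sk -u admin:admin "https://localhost:9200/_cluster/stats?pretty"

# Persistent/transient settings; include_defaults also shows defaulted values.
curl -sk -u admin:admin "https://localhost:9200/_cluster/settings?include_defaults=true&pretty"
```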
I went ahead and tried a fresh RPM installation, as I was afraid that by starting the cluster before changing the data directory path, I had set the cluster up for failure. With a fresh install, I am starting OpenSearch for the first time with the data path already pointed at the old cluster. OpenSearch is failing to start with no errors shown.
-- Unit opensearch.service has begun starting up.
WARNING: Using incubator modules: jdk.incubator.vector
WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by org.opensearch.bootstrap.OpenSearch (file:/usr/share/opensearch/lib/opensearch-2.15.0.jar)
WARNING: Please consider reporting this to the maintainers of org.opensearch.bootstrap.OpenSearch
WARNING: System::setSecurityManager will be removed in a future release
Jul 09, 2024 8:31:39 AM sun.util.locale.provider.LocaleProviderAdapter <clinit>
WARNING: COMPAT locale provider will be removed in a future release
WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by org.opensearch.bootstrap.Security (file:/usr/share/opensearch/lib/opensearch-2.15.0.jar)
WARNING: Please consider reporting this to the maintainers of org.opensearch.bootstrap.Security
WARNING: System::setSecurityManager will be removed in a future release
**start operation timed out. Terminating.**
**Failed with result 'timeout'.**
-- Subject: Unit failed
-- Defined-By: systemd
The only things in the OpenSearch logs are warnings about insecure file permissions.
[WARN ][o.o.s.OpenSearchSecurityPlugin] [xxxxxx] File /opensearch/data/nodes/0/indices/xxxxxxx/0/index/_gc_Lucene84_0.doc has insecure file permissions (should be 0600)
[WARN ][o.o.s.OpenSearchSecurityPlugin] [xxxxxx] File /opensearch/data/nodes/0/indices/xxxxxxx/0/index/_8s.kdi has insecure file permissions (should be 0600)
[WARN ][o.o.s.OpenSearchSecurityPlugin] [xxxxxx] File /opensearch/data/nodes/0/indices/xxxxxxx/0/index/_1q.kdi has insecure file permissions (should be 0600)
[WARN ][o.o.s.OpenSearchSecurityPlugin] [xxxxxx] File /opensearch/data/nodes/0/indices/xxxxxxx/0/index/_cz.fnm has insecure file permissions (should be 0600)
Could the cluster's failure to start be due to incompatible global system settings? Is what I am doing even possible, i.e. reusing a data directory from OpenSearch 1.2.4 in an OpenSearch 2.15.0 cluster?
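Regarding the insecure-permission warnings: one thing worth trying is resetting ownership and modes on the copied data directory. A rough sketch (the "opensearch" user/group and the /backup path are assumptions based on the setup described above; run as root and adjust to your environment):

```shell
# Assumes the RPM service runs as user/group "opensearch" and the data
# was copied to /backup/opensearch/data; adjust both to your environment.
DATA_DIR=/backup/opensearch/data
chown -R opensearch:opensearch "$DATA_DIR"
find "$DATA_DIR" -type d -exec chmod 700 {} +   # directories need the execute bit
find "$DATA_DIR" -type f -exec chmod 600 {} +   # matches the plugin's expected 0600
```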
The error

this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open

means the cluster has hit the shard limit, which defaults to 1,000 shards per data node. You can increase the limit with:

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
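For reference, the same settings change via curl, plus a quick way to see the current shard usage (endpoint and credentials are assumptions; adjust to your setup). Note that raising the limit is a workaround; closing or deleting stale indices carried over from the old 1.2.4 data may be the real fix.

```shell
# Raise the per-node shard limit (assumes https://localhost:9200 with basic auth).
curl -sk -u admin:admin -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 1500}}'

# Check how many shards are actually open on the cluster.
curl -sk -u admin:admin "https://localhost:9200/_cluster/health?pretty"
```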