OpenSearch Dashboards stops working as intended after an OpenSearch node stops in a cluster

What is interesting is that the time from the start of the OpenSearch container to the first timeout is always ~16 minutes.
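
That ~16 minutes is suspiciously close to the Linux TCP retransmission timeout: with the default net.ipv4.tcp_retries2 = 15, the kernel keeps retransmitting into a dead connection for about 924 seconds (~15.4 minutes) before failing it with "Connection timed out", which is exactly the error in the logs below. A back-of-the-envelope sketch of that arithmetic (assuming stock kernel defaults; I have not verified the sysctls on these hosts):

# Rough sketch of the Linux TCP retransmission timeout, assuming the stock
# defaults net.ipv4.tcp_retries2 = 15 and TCP_RTO_MAX = 120 s. The kernel
# doubles the retransmission timeout (RTO) after every failed attempt,
# capped at TCP_RTO_MAX, and gives up after tcp_retries2 retransmissions.
TCP_RTO_MIN = 0.2    # seconds; typical initial RTO on a low-latency link
TCP_RTO_MAX = 120.0  # seconds; kernel cap on the backoff

def tcp_dead_connection_timeout(tcp_retries2: int = 15) -> float:
    """Approximate seconds before writes to a dead peer error out."""
    total, rto = 0.0, TCP_RTO_MIN
    for _ in range(tcp_retries2 + 1):  # original attempt + retransmissions
        total += rto
        rto = min(rto * 2, TCP_RTO_MAX)
    return total

print(tcp_dead_connection_timeout() / 60)  # -> ~15.4 minutes

If that is what is happening, the connections of the stopped container are never closed cleanly (no FIN/RST reaches the peers), and the other nodes only notice once the kernel gives up on retransmitting.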

It happens on all nodes almost at once. Below are logs from two nodes, starting with the node that has the oldest timestamp. I tried to figure out what is going on there, but it is too much for me.
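
To pin down the "almost at once" part I plan to timestamp the first failure on each node with a minimal poller along these lines (hostnames, port, credentials and the TLS shortcut are placeholders for my setup, not the real values):

# Poll each node's local cluster-health endpoint and timestamp the first
# request that fails. Hostnames, port and credentials are placeholders.
import base64, datetime, ssl, time, urllib.request

NODES = ["plg-lms24-manager", "plg-lms24-idx", "plg-lms24-idx2"]
AUTH = base64.b64encode(b"admin:admin").decode()  # placeholder credentials

CTX = ssl.create_default_context()
CTX.check_hostname = False
CTX.verify_mode = ssl.CERT_NONE  # lab shortcut; load the cluster CA instead

while True:
    for node in NODES:
        req = urllib.request.Request(
            f"https://{node}:9200/_cluster/health?local=true",
            headers={"Authorization": f"Basic {AUTH}"},
        )
        try:
            with urllib.request.urlopen(req, timeout=5, context=CTX):
                pass
        except Exception as exc:
            print(f"{datetime.datetime.now().isoformat()} {node}: {exc}")
    time.sleep(1)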

plg-lms24-manager:

[2025-05-12T13:06:46,943][ERROR][o.o.t.n.s.SecureNetty4Transport] [plg-lms24-manager] Exception during establishing a SSL connection: java.io.IOException: Connection timed out
java.io.IOException: Connection timed out
	at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method) ~[?:?]
	at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:340) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:294) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:269) ~[?:?]
	at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:425) ~[?:?]
	...
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.118.Final.jar:4.1.118.Final]
	at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
[2025-05-12T13:06:46,961][INFO ][o.o.c.c.Coordinator      ] [plg-lms24-manager] cluster-manager node [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}] failed, restarting discovery
org.opensearch.transport.NodeDisconnectedException: [plg-lms24-idx][10.17.229.248:9300][disconnected] disconnected
[2025-05-12T13:06:46,970][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-manager] cluster-manager node changed {previous [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}], current []}, term: 3, version: 213, reason: becoming candidate: onLeaderFailure
[2025-05-12T13:06:46,978][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:06:47,204][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-manager] cluster-manager node changed {previous [], current [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}]}, term: 3, version: 214, reason: ApplyCommitRequest{term=3, version=214, sourceNode={plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}}
[2025-05-12T13:06:47,205][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:06:47,213][INFO ][o.o.d.PeerFinder         ] [plg-lms24-manager] setting findPeersInterval to [1s] as node commission status = [true] for local node [{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}]
[2025-05-12T13:07:06,225][INFO ][o.o.c.c.Coordinator      ] [plg-lms24-manager] cluster-manager node [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}] failed, restarting discovery
org.opensearch.OpenSearchException: node [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}] failed [3] consecutive checks

.....

Caused by: org.opensearch.transport.RemoteTransportException: [plg-lms24-idx][10.89.0.6:9300][internal:coordination/fault_detection/leader_check]
Caused by: org.opensearch.cluster.coordination.CoordinationStateRejectedException: rejecting leader check since [{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}] has been removed from the cluster
.....


[2025-05-12T13:07:06,235][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-manager] cluster-manager node changed {previous [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}], current []}, term: 3, version: 214, reason: becoming candidate: onLeaderFailure
[2025-05-12T13:07:06,238][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:06,794][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-manager] cluster-manager node changed {previous [], current [{plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}{shard_indexing_pressure_enabled=true}]}, term: 3, version: 217, reason: ApplyCommitRequest{term=3, version=217, sourceNode={plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}}
[2025-05-12T13:07:07,618][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:07,632][INFO ][o.o.d.PeerFinder         ] [plg-lms24-manager] setting findPeersInterval to [1s] as node commission status = [true] for local node [{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}]
[2025-05-12T13:07:07,737][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[frq-standard/MZE5_sHIRZKr6Imsq6G7xQ]
[2025-05-12T13:07:07,743][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[.plugins-ml-config/2Ldl8jr4RyWWX7OIl_9ANg]
[2025-05-12T13:07:07,882][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:08,305][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:08,398][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[.opendistro_security/24Qn4krlRH2bfnyEc_v9XQ]
[2025-05-12T13:07:08,475][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:09,117][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:09,248][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[.opensearch-observability/l1MJHOiWR6yN9oLPu097SQ]
[2025-05-12T13:07:09,374][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:09,772][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:10,362][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:10,559][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[syslog-standard-2025.05.12/EXV7SnnlSHGco3BsELVF1g]
[2025-05-12T13:07:10,565][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[cisco-2025.05.12/QYP9nLGaTIGrWIId3DeIHg]
[2025-05-12T13:07:10,815][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:11,247][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:11,391][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[any-notparsed/d1ODGETNQUSoet8VUXWQzA]
[2025-05-12T13:07:11,517][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:12,492][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:12,855][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:13,021][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[.opensearch-sap-log-types-config/JqekxvNoRJqF_zkBZYoK3g]
[2025-05-12T13:07:13,030][INFO ][o.o.p.PluginsService     ] [plg-lms24-manager] PluginService:onIndexModule index:[.kibana_1/-9s8EFF6TUedzyPgUbl62w]
[2025-05-12T13:07:13,258][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:14,028][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:15,365][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:16,323][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:19,711][ERROR][o.o.t.n.s.SecureNetty4Transport] [plg-lms24-manager] Exception during establishing a SSL connection: java.io.IOException: Connection timed out
java.io.IOException: Connection timed out
	at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method) ~[?:?]
	at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:340) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:294) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:269) ~[?:?]
	at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:425) ~[?:?]
	at org.opensearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:156) ~[transport-netty4-client-2.19.1.jar:2.19.1]

.......

[2025-05-12T13:07:19,722][ERROR][o.o.t.n.s.SecureNetty4Transport] [plg-lms24-manager] Exception during establishing a SSL connection: java.io.IOException: Connection timed out
java.io.IOException: Connection timed out
	at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method) ~[?:?]
	at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:340) ~[?:?]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:294) ~[?:?]

.....


	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998) [netty-common-4.1.118.Final.jar:4.1.118.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.118.Final.jar:4.1.118.Final]
	at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
[2025-05-12T13:07:24,162][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-manager] removed {{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}}, term: 3, version: 234, reason: ApplyCommitRequest{term=3, version=234, sourceNode={plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}}
[2025-05-12T13:07:24,300][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-manager] Cluster node changed, node removed: true, node added: false
[2025-05-12T13:07:24,301][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] Node removed: [l9nMKU7-QGKWx-I6q6agbQ]
[2025-05-12T13:07:24,303][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] Remove data node from version hash ring: l9nMKU7-QGKWx-I6q6agbQ
[2025-05-12T13:07:24,303][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-manager] Hash ring build result: true
[2025-05-12T13:07:24,303][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] Rebuild hash ring for realtime with cooldown, nodeChangeEvents size 2
[2025-05-12T13:07:24,304][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] Build version hash ring successfully
[2025-05-12T13:07:24,305][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:24,441][INFO ][o.o.i.s.IndexShard       ] [plg-lms24-manager] [any-notparsed][0] primary-replica resync completed with 0 operations
[2025-05-12T13:07:24,448][INFO ][o.o.i.s.IndexShard       ] [plg-lms24-manager] [frq-standard][0] primary-replica resync completed with 0 operations
[2025-05-12T13:07:25,107][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:25,361][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:27,233][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-manager] added {{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}}, term: 3, version: 237, reason: ApplyCommitRequest{term=3, version=237, sourceNode={plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}}
[2025-05-12T13:07:27,478][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-manager] Cluster node changed, node removed: false, node added: true
[2025-05-12T13:07:27,478][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] Node added: [l9nMKU7-QGKWx-I6q6agbQ]
[2025-05-12T13:07:27,479][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:27,479][INFO ][o.o.m.a.MLModelAutoReDeployer] [plg-lms24-manager] model auto reloading should be initialized by cluster manager node only, current node id is empty or current node not cluster manager!
[2025-05-12T13:07:27,482][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] Add data node to version hash ring: l9nMKU7-QGKWx-I6q6agbQ
[2025-05-12T13:07:27,483][INFO ][o.o.t.c.HashRing         ] [plg-lms24-manager] All nodes with known version: {l9nMKU7-QGKWx-I6q6agbQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}, JM-aB0PSQP-C-4MMoMgiMQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}, upXNm-a4Sb-V8gBxqGdkaQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}}
[2025-05-12T13:07:27,483][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-manager] Hash ring build result: true
[2025-05-12T13:07:29,852][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:30,107][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:30,616][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:30,776][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:31,111][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:31,220][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:31,408][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-manager] [frq-standard][0][recover to plg-lms24-idx2] finalizing recovery took [36.4ms]
[2025-05-12T13:07:31,653][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:31,730][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:32,869][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:33,597][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:33,635][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-manager] [any-notparsed][0][recover to plg-lms24-idx2] finalizing recovery took [1.3s]
[2025-05-12T13:07:34,103][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:34,738][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:34,917][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:36,419][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:07:36,775][INFO ][o.o.a.u.d.DestinationMigrationCoordinator] [plg-lms24-manager] Cancelling the migration process.
[2025-05-12T13:09:57,554][INFO ][o.o.j.s.JobSweeper       ] [plg-lms24-manager] Running full sweep
[2025-05-12T13:12:27,479][INFO ][o.o.i.i.PluginVersionSweepCoordinator] [plg-lms24-manager] Canceling sweep ism plugin version job
[2025-05-12T13:14:57,556][INFO ][o.o.j.s.JobSweeper       ] [plg-lms24-manager] Running full sweep

plg-lms24-idx:


......

[2025-05-12T13:07:02,169][ERROR][o.o.t.n.s.SecureNetty4Transport] [plg-lms24-idx] Exception during establishing a SSL connection: java.io.IOException: Connection timed out
java.io.IOException: Connection timed out
        at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method) ~[?:?]
        at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47) ~[?:?]
        at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:340) ~[?:?]
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:294) ~[?:?]
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:269) ~[?:?]
        at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:425) ~[?:?]
   
.......

        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998) [netty-common-4.1.118.Final.jar:4.1.118.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.118.Final.jar:4.1.118.Final]
        at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
[2025-05-12T13:07:02,197][INFO ][o.o.c.c.FollowersChecker ] [plg-lms24-idx] FollowerChecker{discoveryNode={plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}, failureCountSinceLastSuccess=0, [cluster.fault_detection.follower_check.retry_count]=3} disconnected
[2025-05-12T13:07:02,199][INFO ][o.o.c.c.FollowersChecker ] [plg-lms24-idx] FollowerChecker{discoveryNode={plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}, failureCountSinceLastSuccess=0, [cluster.fault_detection.follower_check.retry_count]=3} marking node as faulty
[2025-05-12T13:07:02,232][INFO ][o.o.c.r.a.AllocationService] [plg-lms24-idx] updating number_of_replicas to [1] for indices [.opensearch-observability, .plugins-ml-config, .opensearch-sap-log-types-config, .opendistro_security]
[2025-05-12T13:07:02,235][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:02,255][INFO ][o.o.c.s.MasterService    ] [plg-lms24-idx] Tasks batched with key: org.opensearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor@618859e6, count:1 and sample tasks: node-left[{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true} reason: disconnected], term: 3, version: 215, delta: removed {{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}}
[2025-05-12T13:07:02,746][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-idx] removed {{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}}, term: 3, version: 215, reason: Publication{term=3, version=215}
[2025-05-12T13:07:02,906][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Cluster node changed, node removed: true, node added: false
[2025-05-12T13:07:02,907][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Node removed: [upXNm-a4Sb-V8gBxqGdkaQ]
[2025-05-12T13:07:02,907][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Remove data node from version hash ring: upXNm-a4Sb-V8gBxqGdkaQ
[2025-05-12T13:07:02,907][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Hash ring build result: true
[2025-05-12T13:07:02,907][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Rebuild hash ring for realtime with cooldown, nodeChangeEvents size 2
[2025-05-12T13:07:02,907][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Build version hash ring successfully
[2025-05-12T13:07:02,912][INFO ][o.o.c.r.DelayedAllocationService] [plg-lms24-idx] scheduling reroute for delayed shards in [59.3s] (5 delayed shards)
[2025-05-12T13:07:02,926][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:03,032][INFO ][o.o.i.s.IndexShard       ] [plg-lms24-idx] [.kibana_1][0] primary-replica resync completed with 0 operations
[2025-05-12T13:07:03,072][INFO ][o.o.i.s.IndexShard       ] [plg-lms24-idx] [cisco-2025.05.12][0] primary-replica resync completed with 0 operations
[2025-05-12T13:07:03,127][INFO ][o.o.i.s.IndexShard       ] [plg-lms24-idx] [syslog-standard-2025.05.12][0] primary-replica resync completed with 0 operations
[2025-05-12T13:07:05,157][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] [.opensearch-sap-log-types-config][0] marking unavailable shards as stale: [Eiyu2S0hQMu4z37p4bcbzg]
[2025-05-12T13:07:05,158][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:05,432][INFO ][o.o.c.r.a.AllocationService] [plg-lms24-idx] updating number_of_replicas to [2] for indices [.opensearch-observability, .plugins-ml-config, .opensearch-sap-log-types-config, .opendistro_security]
[2025-05-12T13:07:05,433][INFO ][o.o.c.s.MasterService    ] [plg-lms24-idx] Tasks batched with key: org.opensearch.cluster.coordination.JoinHelper, count:1 and sample tasks: node-join[{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true} join existing leader], term: 3, version: 217, delta: added {{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}}
[2025-05-12T13:07:06,648][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-idx] added {{plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}}, term: 3, version: 217, reason: Publication{term=3, version=217}
[2025-05-12T13:07:06,654][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Cluster node changed, node removed: false, node added: true
[2025-05-12T13:07:06,655][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Node added: [upXNm-a4Sb-V8gBxqGdkaQ]
[2025-05-12T13:07:06,657][INFO ][o.o.m.a.MLModelAutoReDeployer] [plg-lms24-idx] Index not found, not performing auto reloading!
[2025-05-12T13:07:06,660][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Add data node to version hash ring: upXNm-a4Sb-V8gBxqGdkaQ
[2025-05-12T13:07:06,661][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] All nodes with known version: {l9nMKU7-QGKWx-I6q6agbQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}, JM-aB0PSQP-C-4MMoMgiMQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}, upXNm-a4Sb-V8gBxqGdkaQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}}
[2025-05-12T13:07:06,661][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Hash ring build result: true
[2025-05-12T13:07:06,661][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:06,686][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:06,858][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.plugins-ml-config][0][recover to plg-lms24-manager] delaying recovery of [.plugins-ml-config][0] as it is not listed as assigned to target node {plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}
[2025-05-12T13:07:06,917][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:07,345][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:07,666][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opendistro_security][0][recover to plg-lms24-manager] finalizing recovery took [41.7ms]
[2025-05-12T13:07:08,152][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:08,154][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.plugins-ml-config][0][recover to plg-lms24-manager] finalizing recovery took [86.3ms]
[2025-05-12T13:07:08,763][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-observability][0][recover to plg-lms24-manager] finalizing recovery took [104.1ms]
[2025-05-12T13:07:09,490][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:09,780][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [syslog-standard-2025.05.12][0][recover to plg-lms24-manager] delaying recovery of [syslog-standard-2025.05.12][0] as it is not listed as assigned to target node {plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}
[2025-05-12T13:07:10,016][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [cisco-2025.05.12][0][recover to plg-lms24-manager] finalizing recovery took [46.9ms]
[2025-05-12T13:07:10,281][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:11,202][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [syslog-standard-2025.05.12][0][recover to plg-lms24-manager] finalizing recovery took [152.6ms]
[2025-05-12T13:07:11,891][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:12,242][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-sap-log-types-config][0][recover to plg-lms24-manager] delaying recovery of [.opensearch-sap-log-types-config][0] as it is not listed as assigned to target node {plg-lms24-manager}{upXNm-a4Sb-V8gBxqGdkaQ}{vpidqLO3SMmbRqHBelu2Xg}{10.17.229.238}{10.17.229.238:9300}{dim}{shard_indexing_pressure_enabled=true}
[2025-05-12T13:07:12,475][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.kibana_1][0][recover to plg-lms24-manager] finalizing recovery took [46.4ms]
[2025-05-12T13:07:13,061][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:14,110][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-sap-log-types-config][0][recover to plg-lms24-manager] finalizing recovery took [95.4ms]
[2025-05-12T13:07:14,114][INFO ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.opensearch-sap-log-types-config][0]]]).
[2025-05-12T13:07:14,406][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:15,243][INFO ][o.o.c.c.FollowersChecker ] [plg-lms24-idx] FollowerChecker{discoveryNode={plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}, failureCountSinceLastSuccess=1, [cluster.fault_detection.follower_check.retry_count]=3} failed, retrying
org.opensearch.transport.RemoteTransportException: [plg-lms24-idx2][10.89.0.6:9300][internal:coordination/fault_detection/follower_check]
Caused by: org.opensearch.cluster.coordination.CoordinationStateRejectedException: onFollowerCheckRequest: received check from faulty cluster-manager, rejecting FollowerCheckRequest{term=3, sender={plg-lms24-idx}{JM-aB0PSQP-C-4MMoMgiMQ}{L8w0M9eQSzurDdBR2jUQSQ}{10.17.229.248}{10.17.229.248:9300}{dim}}
	at org.opensearch.cluster.coordination.Coordinator.onFollowerCheckRequest(Coordinator.java:390) ~[opensearch-2.19.1.jar:2.19.1]

.....


	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998) [netty-common-4.1.118.Final.jar:4.1.118.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.118.Final.jar:4.1.118.Final]
	at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
[2025-05-12T13:07:22,671][INFO ][o.o.c.c.FollowersChecker ] [plg-lms24-idx] FollowerChecker{discoveryNode={plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}, failureCountSinceLastSuccess=0, [cluster.fault_detection.follower_check.retry_count]=3} disconnected
[2025-05-12T13:07:22,671][INFO ][o.o.c.c.FollowersChecker ] [plg-lms24-idx] FollowerChecker{discoveryNode={plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}, failureCountSinceLastSuccess=0, [cluster.fault_detection.follower_check.retry_count]=3} marking node as faulty
[2025-05-12T13:07:22,675][INFO ][o.o.c.r.a.AllocationService] [plg-lms24-idx] updating number_of_replicas to [1] for indices [.opensearch-observability, .plugins-ml-config, .opensearch-sap-log-types-config, .opendistro_security]
[2025-05-12T13:07:22,676][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:22,684][INFO ][o.o.c.s.MasterService    ] [plg-lms24-idx] Tasks batched with key: org.opensearch.cluster.coordination.NodeRemovalClusterStateTaskExecutor@618859e6, count:1 and sample tasks: node-left[{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true} reason: disconnected], term: 3, version: 234, delta: removed {{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}}
[2025-05-12T13:07:23,343][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-idx] removed {{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}}, term: 3, version: 234, reason: Publication{term=3, version=234}
[2025-05-12T13:07:23,496][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Cluster node changed, node removed: true, node added: false
[2025-05-12T13:07:23,496][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Node removed: [l9nMKU7-QGKWx-I6q6agbQ]
[2025-05-12T13:07:23,497][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Remove data node from version hash ring: l9nMKU7-QGKWx-I6q6agbQ
[2025-05-12T13:07:23,498][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Hash ring build result: true
[2025-05-12T13:07:23,500][INFO ][o.o.c.r.DelayedAllocationService] [plg-lms24-idx] scheduling reroute for delayed shards in [59.1s] (4 delayed shards)
[2025-05-12T13:07:23,501][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:23,752][INFO ][o.o.i.s.IndexShard       ] [plg-lms24-idx] [top_queries-2025.05.12-81245][0] primary-replica resync completed with 0 operations
[2025-05-12T13:07:23,775][WARN ][o.o.a.b.TransportShardBulkAction] [plg-lms24-idx] [[security-auditlog-2025.05.12][0]] failed to perform indices:data/write/bulk[s] on replica [security-auditlog-2025.05.12][0], node[l9nMKU7-QGKWx-I6q6agbQ], [R], s[STARTED], a[id=5LTBOEr4SVG6N6OI-Hziqg]
org.opensearch.client.transport.NoNodeAvailableException: unknown node [l9nMKU7-QGKWx-I6q6agbQ]
	at org.opensearch.action.support.replication.TransportReplicationAction$ReplicasProxy.performOn(TransportReplicationAction.java:1455) [opensearch-2.19.1.jar:2.19.1]
	at org.opensearch.action.support.replication.ReplicationProxy.lambda$getReplicasProxyConsumer$0(ReplicationProxy.java:84) [opensearch-2.19.1.jar:2.19.1]
	at org.opensearch.action.support.replication.ReplicationOperation$3.tryAction(ReplicationOperation.java:317) [opensearch-2.19.1.jar:2.19.1]
	at org.opensearch.action.support.RetryableAction$1.doRun(RetryableAction.java:139) [opensearch-2.19.1.jar:2.19.1]


....

[2025-05-12T13:07:23,778][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] [security-auditlog-2025.05.12][0] marking unavailable shards as stale: [5LTBOEr4SVG6N6OI-Hziqg]
[2025-05-12T13:07:23,779][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:24,141][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:24,187][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] [any-notparsed][0] marking unavailable shards as stale: [3eEOs7BnRvy4YpRinFIzOw]
[2025-05-12T13:07:24,188][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:24,396][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:25,729][INFO ][o.o.c.r.a.AllocationService] [plg-lms24-idx] updating number_of_replicas to [2] for indices [.opensearch-observability, .plugins-ml-config, .opensearch-sap-log-types-config, .opendistro_security]
[2025-05-12T13:07:25,729][INFO ][o.o.c.s.MasterService    ] [plg-lms24-idx] Tasks batched with key: org.opensearch.cluster.coordination.JoinHelper, count:1 and sample tasks: node-join[{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true} join existing leader], term: 3, version: 237, delta: added {{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}}
[2025-05-12T13:07:28,751][INFO ][o.o.c.s.ClusterApplierService] [plg-lms24-idx] added {{plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}}, term: 3, version: 237, reason: Publication{term=3, version=237}
[2025-05-12T13:07:28,754][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Cluster node changed, node removed: false, node added: true
[2025-05-12T13:07:28,754][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Node added: [l9nMKU7-QGKWx-I6q6agbQ]
[2025-05-12T13:07:28,756][INFO ][o.o.m.a.MLModelAutoReDeployer] [plg-lms24-idx] Index not found, not performing auto reloading!
[2025-05-12T13:07:28,760][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] Add data node to version hash ring: l9nMKU7-QGKWx-I6q6agbQ
[2025-05-12T13:07:28,761][INFO ][o.o.t.c.HashRing         ] [plg-lms24-idx] All nodes with known version: {l9nMKU7-QGKWx-I6q6agbQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}, JM-aB0PSQP-C-4MMoMgiMQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}, upXNm-a4Sb-V8gBxqGdkaQ=ADNodeInfo{version=2.19.1, isEligibleDataNode=true}}
[2025-05-12T13:07:28,761][INFO ][o.o.t.c.ClusterEventListener] [plg-lms24-idx] Hash ring build result: true
[2025-05-12T13:07:28,763][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:28,782][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:29,074][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:29,379][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.plugins-ml-config][0][recover to plg-lms24-idx2] finalizing recovery took [88.4ms]
[2025-05-12T13:07:29,649][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:29,841][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opendistro_security][0][recover to plg-lms24-idx2] finalizing recovery took [153.2ms]
[2025-05-12T13:07:29,904][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-observability][0][recover to plg-lms24-idx2] delaying recovery of [.opensearch-observability][0] as it is not listed as assigned to target node {plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}
[2025-05-12T13:07:30,150][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:30,691][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:31,125][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-observability][0][recover to plg-lms24-idx2] finalizing recovery took [54.7ms]
[2025-05-12T13:07:32,557][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:33,244][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [top_queries-2025.05.12-81245][0][recover to plg-lms24-idx2] finalizing recovery took [104.1ms]
[2025-05-12T13:07:33,775][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:07:34,051][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-sap-log-types-config][0][recover to plg-lms24-idx2] delaying recovery of [.opensearch-sap-log-types-config][0] as it is not listed as assigned to target node {plg-lms24-idx2}{l9nMKU7-QGKWx-I6q6agbQ}{b1SPWKdkQaO2cyDz7S3V4Q}{10.17.229.224}{10.17.229.224:9300}{dim}{shard_indexing_pressure_enabled=true}
[2025-05-12T13:07:35,261][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [.opensearch-sap-log-types-config][0][recover to plg-lms24-idx2] finalizing recovery took [63.4ms]
[2025-05-12T13:07:35,292][INFO ][o.o.i.r.RecoverySourceHandler] [plg-lms24-idx] [security-auditlog-2025.05.12][0][recover to plg-lms24-idx2] finalizing recovery took [319.9ms]
[2025-05-12T13:07:35,462][INFO ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[security-auditlog-2025.05.12][0]]]).
[2025-05-12T13:07:35,838][WARN ][o.o.c.r.a.AllocationService] [plg-lms24-idx] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-05-12T13:09:57,522][INFO ][o.o.j.s.JobSweeper       ] [plg-lms24-idx] Running full sweep
[2025-05-12T13:12:28,756][INFO ][o.o.i.i.PluginVersionSweepCoordinator] [plg-lms24-idx] Canceling sweep ism plugin version job
[2025-05-12T13:14:57,523][INFO ][o.o.j.s.JobSweeper       ] [plg-lms24-idx] Running full sweep