OpenSearch issue with Helm chart on an IPv4 cluster: startup probe failed

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser): 3.2.0

Describe the issue: I installed bitnami/opensearch successfully on an IPv6 cluster and it works fine; all pods are up and running.
Running the same Helm chart on an IPv4 cluster (Rocky Linux release 8.10), startup always fails.
I increased the resources and raised the startup, readiness, and liveness probe windows up to 900s (15 minutes); still the same issue. The PV/PVC are also bound via a local storage class.

I tried the same opensearch-project Helm chart and we hit the same issue on the IPv4 cluster.

Configuration:

Relevant Logs or Screenshots:

k logs opensearch-master-0:

Defaulted container "opensearch" out of: opensearch, sysctl (init), copy-default-plugins (init)
opensearch 08:51:55.03 INFO  ==>
opensearch 08:51:55.03 INFO  ==> Welcome to the Bitnami opensearch container
opensearch 08:51:55.04 INFO  ==> NOTICE: Starting August 28th, 2025, only a limited subset of images/charts will remain available for free. Backup will be available for some time at the
opensearch 08:51:55.04 INFO  ==>
opensearch 08:51:55.08 INFO  ==> ** Starting Opensearch setup **
opensearch 08:52:01.59 INFO  ==> Configuring/Initializing Opensearch…
opensearch 08:52:01.62 INFO  ==> Custom configuration file detected, using it…
opensearch 08:52:01.64 INFO  ==> ** Opensearch setup finished! **

opensearch 08:52:01.66 INFO  ==> \*\* Starting Opensearch \*\*
WARNING: Using incubator modules: jdk.incubator.vector
WARNING: Unknown module: org.apache.arrow.memory.core specified to --add-opens

Pod describe:
k describe pod opensearch-master-0

Name:             opensearch-master-0
Namespace:        default
Priority:         0
Service Account:  default
Start Time:       Mon, 27 Oct 2025 14:59:27 +0530
Labels:           app=master

controller-revision-hash=opensearch-master-7f74d9ddfc
helm.sh/chart=opensearch-2.0.11

Annotations:      containerID: 16430b863a60b7db8128ad45b868b57b7bcecae19ce79cd321ede8940bb4a823
podIP: 172.21.114.60/32
podIPs: 172.21.114.60/32

[{
    "name": "calico",
    "ips": [
        "172.21.114.60"
    ],
    "default": true,
    "dns": {}
}]

[{
    "name": "calico",
    "ips": [
        "172.21.114.60"
    ],
    "default": true,
    "dns": {}
}]
Status:           Running
IP:               172.21.114.60
IPs:
IP:           172.21.114.60
Controlled By:  StatefulSet/opensearch-master
Init Containers:
sysctl:
Container ID:  robin://f7b5d4bcc4f851a1c7a355f61ea3314ac461a17be725983e9c9e6f41858fa21c
Image ID:  sha256:4ce5097738341ec0d3ddbc201e850a26364d9b581eb050461b3549e7ac3514a1
Port:
Host Port:
Command:
/bin/bash
-ec
CURRENT=`sysctl -n vm.max_map_count`;
DESIRED="262144";
if [ "$DESIRED" -gt "$CURRENT" ]; then
    sysctl -w vm.max_map_count=262144;
fi;
CURRENT=`sysctl -n fs.file-max`;
DESIRED="65536";
if [ "$DESIRED" -gt "$CURRENT" ]; then
    sysctl -w fs.file-max=65536;
fi;

State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Mon, 27 Oct 2025 14:59:29 +0530
  Finished:     Mon, 27 Oct 2025 14:59:29 +0530
Ready:          True
Restart Count:  0
Limits:
  cpu:     2
  memory:  6Gi
Requests:
  cpu:        1
  memory:     4Gi
Environment:  <none>
Mounts:       <none>

copy-default-plugins:

Container ID:  robin://69dd34d3e72245f90547f716c03e5a2b70ccf5779f3eb66b3ab4f736c8e590d5
Image ID:      sha256:612e46acf5c37432f5a5755110386bfa4b60c0edff73768db2fa86669af726e3
Port:
Host Port:
Command:
/bin/bash
Args:
-ec
. /opt/bitnami/scripts/liblog.sh
. /opt/bitnami/scripts/libfs.sh
. /opt/bitnami/scripts/opensearch-env.sh

  mkdir -p /emptydir/app-conf-dir /emptydir/app-plugins-dir
  info "Copying directories to empty dir"

  if ! is_dir_empty "$DB_DEFAULT_CONF_DIR"; then
      info "Copying default configuration"
      cp -nr --preserve=mode "$DB_DEFAULT_CONF_DIR"/* /emptydir/app-conf-dir
  fi
  if ! is_dir_empty "$DB_DEFAULT_PLUGINS_DIR"; then
      info "Copying default plugins"
      cp -nr "$DB_DEFAULT_PLUGINS_DIR"/* /emptydir/app-plugins-dir
  fi

  info "Copy operation completed"

State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Mon, 27 Oct 2025 14:59:30 +0530
  Finished:     Mon, 27 Oct 2025 14:59:30 +0530
Ready:          True
Restart Count:  0
Limits:
  cpu:     2
  memory:  6Gi
Requests:
  cpu:        1
  memory:     4Gi
Environment:  <none>
Mounts:
  /emptydir from empty-dir (rw)


Containers:
opensearch:
Container ID:   robin://755ef5f91362412ebe6026d7b747e02fdc61a74328633fccf961346259f4aa9c
Image ID:       docker-pullable://jfrogbng2.altiostar.com/dev-irmas-docker/bitnami/opensearch@sha256:612e46acf5c37432f5a5755110386bfa4b60c0edff73768db2fa86669af726e3
Ports:          9200/TCP, 9300/TCP
Host Ports:     0/TCP, 0/TCP
State:          Running
Started:      Tue, 28 Oct 2025 14:21:55 +0530
Last State:     Terminated
Reason:       Error
Exit Code:    143
Started:      Tue, 28 Oct 2025 13:58:55 +0530
Finished:     Tue, 28 Oct 2025 14:21:54 +0530
Ready:          False
Restart Count:  61
Limits:
cpu:     2
memory:  6Gi
Requests:
cpu:      1
memory:   4Gi
Liveness:   tcp-socket :rest-api delay=480s timeout=5s period=10s #success=1 #failure=60
Readiness:  exec [/opt/bitnami/scripts/opensearch/healthcheck.sh] delay=600s timeout=5s period=10s #success=1 #failure=10
Startup:    tcp-socket :rest-api delay=480s timeout=5s period=10s #success=1 #failure=90
Environment:
BITNAMI_DEBUG:                     false
MY_POD_NAME:                       opensearch-master-0 (v1:metadata.name)
OPENSEARCH_IS_DEDICATED_NODE:      yes
OPENSEARCH_NODE_ROLES:             master
OPENSEARCH_TRANSPORT_PORT_NUMBER:  9300
OPENSEARCH_HTTP_PORT_NUMBER:       9200
OPENSEARCH_CLUSTER_NAME:           open
OPENSEARCH_CLUSTER_HOSTS:          opensearch-master-hl.default.svc.cluster.local,opensearch-coordinating-hl.default.svc.cluster.local,opensearch-data-hl.default.svc.cluster.local,opensearch-ingest-hl.default.svc.cluster.local,
OPENSEARCH_TOTAL_NODES:            2
OPENSEARCH_CLUSTER_MASTER_HOSTS:   opensearch-master-0
OPENSEARCH_MINIMUM_MASTER_NODES:   1
OPENSEARCH_ADVERTISED_HOSTNAME:    $(MY_POD_NAME).opensearch-master-hl.default.svc.cluster.local
OPENSEARCH_HEAP_SIZE:              3072m
Mounts:
/bitnami/opensearch/data from data (rw)
/opt/bitnami/opensearch/config from empty-dir (rw,path="app-conf-dir")
/opt/bitnami/opensearch/logs from empty-dir (rw,path="app-logs-dir")
/opt/bitnami/opensearch/plugins from empty-dir (rw,path="app-plugins-dir")
/opt/bitnami/opensearch/tmp from empty-dir (rw,path="app-tmp-dir")
/tmp from empty-dir (rw,path="tmp-dir")
Conditions:
Type              Status
Initialized       True
Ready             False
ContainersReady   False
PodScheduled      True
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  data-opensearch-master-0
ReadOnly:   false
empty-dir:
Type:        EmptyDir (a temporary directory that shares a pod’s lifetime)
Medium:
SizeLimit:
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From     Message

---

Warning  Unhealthy  3m36s (x5536 over 23h)  kubelet  Startup probe failed: dial tcp 172.21.114.60:9200: connect: connection refused

@pkniet2010 did you make any changes to run bitnami/opensearch on the IPv6 cluster? Can you share your values.yaml file (redact any sensitive details), and can you provide the config for the startup probe?

Also, what IPs do you see when you run kubectl get po -o wide?
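To narrow things down, it may also help to check from inside the pod whether OpenSearch answers locally while the probe is failing. If it does, the process is probably bound to the wrong interface or address family rather than failing to start. The commands below are only a sketch and assume curl is available in the Bitnami image:

```shell
# Does the REST port answer on loopback inside the container?
kubectl exec opensearch-master-0 -c opensearch -- curl -sk http://127.0.0.1:9200

# Which bind/publish addresses did OpenSearch log at startup?
kubectl logs opensearch-master-0 -c opensearch | grep -iE 'bound|publish'
```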

@Anthony - I did not change any specific configuration in values.yaml for IPv6.

kubectl get po -o wide

NAME                        READY   STATUS    RESTARTS   AGE   IP                                        NODE           NOMINATED NODE   READINESS GATES
opensearch-coordinating-0   1/1     Running   0          46h   2d33:ca9b:3a09:868c:4492:2350:114a:d3e9   ckubeemsn2              
opensearch-data-0           1/1     Running   0          46h   2d33:ca9b:3a09:868c:abad:dc1a:1721:6bb    ckubeemsn3              
opensearch-ingest-0         1/1     Running   0          46h   2d33:ca9b:3a09:868c:4492:2350:112a:d3e0   ckubeemsn2              
opensearch-master-0         1/1     Running   0          46h   2d33:ca9b:3a09:868c:abad:dc1a:1720:9ad    c11kubeemsn3   

only changes in startup probe:

startupProbe:
  enabled: true
  initialDelaySeconds: 480
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 90

The same startup probe is used for the data, ingest, coordinating, and master nodes.
CPU limit is 2 and request is 1.

Thanks,

@pkniet2010 are you using Rocky Linux release 8.10 for both the IPv6 and IPv4 clusters?

can you provide the same output for the ipv4 cluster?

No, only the IPv4 cluster is Rocky Linux.
The IPv6 cluster is plain.

IPV6 output:
opensearch-coordinating-0 1/1 Running 0 46h 2d33:ca9b:3a09:868c:4492:2350:114a:d3e9 ckubeemsn2
opensearch-data-0 1/1 Running 0 46h 2d33:ca9b:3a09:868c:abad:dc1a:1721:6bb ckubeemsn3
opensearch-ingest-0 1/1 Running 0 46h 2d33:ca9b:3a09:868c:4492:2350:112a:d3e0 ckubeemsn2
opensearch-master-0 1/1 Running 0 46h 2d33:ca9b:3a09:868c:abad:dc1a:1720:9ad c11kubeemsn3

IPV4 output:

@pkniet2010 have you had a look at this forum case

This might be related to the same issue
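If that case points at the bind address, one thing worth trying is forcing an IPv4-friendly bind through the chart values. This is only a sketch — the key names below (extraConfig, extraEnvVars) are assumptions about the bitnami/opensearch chart and should be checked against the output of helm show values for your chart version:

```yaml
# values.yaml sketch -- key names assumed, verify with:
#   helm show values bitnami/opensearch
extraConfig:
  network.host: 0.0.0.0                 # bind on all IPv4 interfaces
extraEnvVars:
  - name: OPENSEARCH_JAVA_OPTS
    value: "-Djava.net.preferIPv4Stack=true"   # make the JVM prefer IPv4
```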

But I am not getting any exception in the logs.
Only the pod describe output shows the startup probe failure.