Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OS: Talos Linux 1.8.0
(base) raphy@raohy:~/.talos/openmetadata$ helm search repo opensearch
NAME                                       CHART VERSION   APP VERSION   DESCRIPTION
opensearch-operator/opensearch-cluster     3.1.0           2.8.0         A Helm chart for OpenSearch Cluster
opensearch-operator/opensearch-operator    2.8.0           2.8.0         The OpenSearch Operator Helm chart for Kubernetes
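For context, the operator itself came from the Helm repo listed above; the install sequence was roughly the standard one from the operator docs (the release name here is my assumption):

helm repo add opensearch-operator https://opensearch-project.github.io/opensearch-k8s-operator/
helm repo update
helm install opensearch-operator opensearch-operator/opensearch-operator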
Describe the issue:
(base) raphy@raohy:~/.talos/openmetadata/opensearch$ git clone https://github.com/opensearch-project/opensearch-k8s-operator.git
Configuration:
(base) raphy@raohy:~/.talos/openmetadata$ nano opensearch/opensearch-k8s-operator/opensearch-operator/examples/2.x/omd-os-cluster.yaml:

# Minimal configuration of a cluster with version 2.X of the operator.
# Note the replacement of 'master' role with 'cluster_manager' on line 49
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: omd-os-cluster
  namespace: default
spec:
  security:
    config:
    tls:
      http:
        generate: true
      transport:
        generate: true
        perNode: true
  general:
    httpPort: 9200
    serviceName: omd-os-cluster
    version: 2.14.0
    pluginsList: ["repository-s3"]
    drainDataNodes: true
  dashboards:
    tls:
      enable: true
      generate: true
    version: 2.14.0
    enable: true
    replicas: 1
    resources:
      requests:
        memory: "512Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "200m"
  nodePools:
    - component: masters
      replicas: 3
      resources:
        requests:
          memory: "8Gi"
          cpu: "1000m"
        limits:
          memory: "8Gi"
          cpu: "1000m"
      roles:
        - "data"
        - "cluster_manager"
      persistence:
        emptyDir: {}
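The cluster was created by applying this manifest. To check what the operator reconciled, the custom resource can be inspected (file path as above, namespace default):

kubectl apply -f opensearch/opensearch-k8s-operator/opensearch-operator/examples/2.x/omd-os-cluster.yaml
kubectl get opensearchclusters -n default
kubectl describe opensearchcluster omd-os-cluster -n default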
All of the OpenSearch pods are failing their startup probes:
(base) raphy@raohy:~/.talos/openmetadata$ kubectl get pods -o wide
NAME                                                       READY   STATUS             RESTARTS          AGE   IP            NODE            NOMINATED NODE   READINESS GATES
omd-os-cluster-bootstrap-0                                 0/1     Running            388 (5m25s ago)   26h   10.244.1.69   talos-mrt-ge0   <none>           <none>
omd-os-cluster-dashboards-5b9fbdfd45-hhjkg                 0/1     Running            323 (3m16s ago)   26h   10.244.1.73   talos-mrt-ge0   <none>           <none>
omd-os-cluster-masters-0                                   0/1     CrashLoopBackOff   410 (2m15s ago)   26h   10.244.1.68   talos-mrt-ge0   <none>           <none>
omd-os-cluster-securityconfig-update-jp8g8                 0/1     Unknown            0                 26h   <none>        talos-mrt-ge0   <none>           <none>
opensearch-operator-controller-manager-7448949c9b-gcwph    2/2     Running            118 (3m33s ago)   46h   10.244.1.70   talos-mrt-ge0   <none>           <none>
postgres-65d7c9cb49-wmswv                                  1/1     Running            41 (67m ago)      2d    10.244.1.71   talos-mrt-ge0   <none>           <none>
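For anyone reproducing this, the crashing containers' logs can be pulled with, for example (pod names from the listing above):

kubectl logs omd-os-cluster-masters-0 --previous
kubectl logs omd-os-cluster-bootstrap-0 --tail=100
kubectl logs opensearch-operator-controller-manager-7448949c9b-gcwph --all-containers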
(base) raphy@raohy:~/.talos/openmetadata$ kubectl describe pod omd-os-cluster-dashboards-5b9fbdfd45-hhjkg
Name:             omd-os-cluster-dashboards-5b9fbdfd45-hhjkg
Namespace:        default
Priority:         0
Service Account:  default
Node:             talos-mrt-ge0/37.59.120.237
Start Time:       Sat, 08 Nov 2025 12:18:24 +0100
Labels:           opensearch.cluster.dashboards=omd-os-cluster
                  pod-template-hash=5b9fbdfd45
Annotations:      checksum/dashboards.yml: 58dd97503c53a4255035e77f9df02ad465b99af8
Status:           Running
IP:               10.244.1.73
IPs:
  IP:           10.244.1.73
Controlled By:  ReplicaSet/omd-os-cluster-dashboards-5b9fbdfd45
Containers:
  dashboards:
    Container ID:  containerd://0b657473617926d36fd4fd0c4bcb0c77d49a4a12ca8a453a528d419fb5e343e0
    Image:         docker.io/opensearchproject/opensearch-dashboards:2.14.0
    Image ID:      docker.io/opensearchproject/opensearch-dashboards@sha256:94a42c94e179d8acbef4afc516d88686bb7424086279238c72cae2d03b64b081
    Port:          5601/TCP
    Host Port:     0/TCP
    Command:
      /bin/bash
      -c
      ./opensearch-dashboards-docker-entrypoint.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 09 Nov 2025 14:59:46 +0100
      Finished:     Sun, 09 Nov 2025 15:03:06 +0100
    Ready:          False
    Restart Count:  323
    Limits:
      cpu:     200m
      memory:  512Mi
    Requests:
      cpu:     200m
      memory:  512Mi
    Liveness:  http-get https://:5601/api/reporting/stats delay=10s timeout=5s period=20s #success=1 #failure=10
    Startup:   http-get https://:5601/api/reporting/stats delay=10s timeout=5s period=20s #success=1 #failure=10
    Environment:
      OPENSEARCH_HOSTS:     https://omd-os-cluster.default.svc.cluster.local:9200
      SERVER_HOST:          0.0.0.0
      OPENSEARCH_USERNAME:  kibanaserver
      OPENSEARCH_PASSWORD:  kibanaserver
    Mounts:
      /usr/share/opensearch-dashboards/certs from tls-cert (rw)
      /usr/share/opensearch-dashboards/config/opensearch_dashboards.yml from dashboards-config (rw,path="opensearch_dashboards.yml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jl4dp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tls-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  omd-os-cluster-dashboards-cert
    Optional:    false
  dashboards-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      omd-os-cluster-dashboards-config
    Optional:  false
  kube-api-access-jl4dp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From     Message
  ----     ------     ----                    ----     -------
  Normal   Killing    64m                     kubelet  Container dashboards failed startup probe, will be restarted
  Normal   Pulled     64m (x2 over 67m)       kubelet  Container image "docker.io/opensearchproject/opensearch-dashboards:2.14.0" already present on machine
  Normal   Created    64m (x2 over 67m)       kubelet  Created container dashboards
  Normal   Started    64m (x2 over 67m)       kubelet  Started container dashboards
  Warning  Unhealthy  38m (x6 over 66m)       kubelet  Startup probe failed: Get "https://10.244.1.73:5601/api/reporting/stats": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18m (x34 over 67m)      kubelet  Startup probe failed: Get "https://10.244.1.73:5601/api/reporting/stats": dial tcp 10.244.1.73:5601: connect: connection refused
  Warning  BackOff    7m54s (x100 over 47m)   kubelet  Back-off restarting failed container dashboards in pod omd-os-cluster-dashboards-5b9fbdfd45-hhjkg_default(e2848957-7fb9-4d7f-9072-6e00fbd5fbd0)
  Warning  Unhealthy  4m5s (x82 over 66m)     kubelet  Startup probe failed: HTTP probe failed with statuscode: 503
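To exercise the startup probe by hand, the endpoint can be reached through a port-forward (probe path and port taken from the describe output above; -k because the certificate is operator-generated and self-signed):

kubectl port-forward pod/omd-os-cluster-dashboards-5b9fbdfd45-hhjkg 5601:5601
curl -vk https://localhost:5601/api/reporting/stats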
What is wrong with this configuration, and how can I make it work?
Relevant Logs or Screenshots: