Helm v3 charts installation for v1.8.0 results in pending pods

I am trying to deploy Open Distro for Elasticsearch v1.8.0 on my local CentOS box.

    root@master>kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:45:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    root@master>kubectl version
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
    root@master>helm version
    version.BuildInfo{Version:"v3.3.0-rc.1", GitCommit:"5c2dfaad847df2ac8f289d278186d048f446c70c", GitTreeState:"dirty", GoVersion:"go1.14.4"}

helm package .

is successful and so is

helm install opends ./opendistro-es-1.8.0.tgz

If I do a listing:

root@master>helm list
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
opends  default         1               2020-07-10 15:19:31.197160318 +0000 UTC deployed        opendistro-es-1.8.0     1.8.0    

I can see the status as deployed, but the data and master pods are forever pending:

root@master>kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
opends-opendistro-es-client-5f7c7fd7cc-bcnh6   1/1     Running   0          16m
opends-opendistro-es-data-0                    0/1     Pending   0          16m
opends-opendistro-es-kibana-5bc564c795-qb4tl   1/1     Running   0          16m
opends-opendistro-es-master-0                  0/1     Pending   0          16m

What does kubectl describe tell you about the pending pod? Is it waiting on a PVC to be bound to a PV? Where we’ve seen a pod waiting indefinitely, it’s generally because a Persistent Volume Claim (PVC) is waiting for Kubernetes to bind it to a Persistent Volume (PV). This should show up in the Events section of the output from kubectl describe pod opends-opendistro-es-data-0. Even if that’s not the case here, the output should provide some clue as to why the pod is “pending”.
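
A quick way to check both the pod events and the claim status is something like the following (using the pod name from your output above):

    # The Events section at the bottom usually explains why scheduling failed
    kubectl describe pod opends-opendistro-es-data-0

    # Check whether the chart's PVCs are Bound or still Pending
    kubectl get pvc,pv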

The output of kubectl describe pod shows:

kubectl describe pod ods-opendistro-es-data-0
Name:           ods-opendistro-es-data-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=ods-opendistro-es
                chart=opendistro-es-1.9.0
                controller-revision-hash=ods-opendistro-es-data-845b7d976d
                heritage=Helm
                release=ods
                role=data
                statefulset.kubernetes.io/pod-name=ods-opendistro-es-data-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/ods-opendistro-es-data
Init Containers:
  init-sysctl:
    Image:      busybox:1.27.2
    Port:       <none>
    Host Port:  <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ods-opendistro-es-es-token-zvc5r (ro)
  fixmount:
    Image:      busybox:1.27.2
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chown -R 1000:1000 /usr/share/elasticsearch/data
    Environment:  <none>
    Mounts:
      /usr/share/elasticsearch/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from ods-opendistro-es-es-token-zvc5r (ro)
Containers:
  elasticsearch:
    Image:      amazon/opendistro-for-elasticsearch:1.9.0
    Port:       9300/TCP
    Host Port:  0/TCP
    Liveness:   tcp-socket :transport delay=60s timeout=1s period=10s #success=1 #failure=3
    Environment:
      cluster.name:          elasticsearch
      node.master:           false
      node.ingest:           false
      network.host:          0.0.0.0
      node.name:             ods-opendistro-es-data-0 (v1:metadata.name)
      discovery.seed_hosts:  ods-opendistro-es-discovery
      KUBERNETES_NAMESPACE:  default (v1:metadata.namespace)
      node.data:             true
      PROCESSORS:            node allocatable (limits.cpu)
      ES_JAVA_OPTS:          -Xms512m -Xmx512m
    Mounts:
      /usr/share/elasticsearch/config/logging.yml from config (rw,path="logging.yml")
      /usr/share/elasticsearch/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from ods-opendistro-es-es-token-zvc5r (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-ods-opendistro-es-data-0
    ReadOnly:   false
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ods-opendistro-es-es-config
    Optional:    false
  ods-opendistro-es-es-token-zvc5r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ods-opendistro-es-es-token-zvc5r
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "ods-opendistro-es-data-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "ods-opendistro-es-data-0": pod has unbound immediate PersistentVolumeClaims

For kubectl describe pv,pvc I get the following response. Like you pointed out, the PVCs are failing, but I am not sure of the troubleshooting steps.

Name:          data-ods-opendistro-es-data-0
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        app=ods-opendistro-es
               heritage=Helm
               release=ods
               role=data
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    ods-opendistro-es-data-0
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  4m4s (x182 over 49m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set


Name:          data-ods-opendistro-es-master-0
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        app=ods-opendistro-es
               heritage=Helm
               release=ods
               role=master
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    ods-opendistro-es-master-0
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  4m4s (x182 over 49m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

I think the “no storage class is set” message might be the key one. I believe the problem is that the pod asked for storage (i.e. created a PVC) but didn’t tell Kubernetes what kind of storage (i.e. which storageClass) it wanted, so Kubernetes didn’t know how to satisfy the request. Or maybe Kubernetes falls back to the cluster’s default StorageClass in that situation, and a kubeadm cluster doesn’t come with one out of the box. In any case, Kubernetes isn’t providing the requested storage.
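
You can check whether the cluster has any StorageClasses at all, and whether one of them is marked as the default, with:

    # A default StorageClass (if any) is shown with "(default)" next to its name
    kubectl get storageclass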

To specify a particular storageClass to use, add something like the following to the values.yaml file you are passing to Helm. You may already have some of these items in your values.yaml file; if so, just add the missing items, being careful to get the indentation and nesting right. Some storageClasses might require you to specify a size (commented out below).

elasticsearch:
  master:
    #for master nodes
    persistence:
      storageClass: some-storage-class-name-here
      #size: 8Gi
  data:
    # for data nodes
    persistence:
      storageClass: some-storage-class-name-here
      #size: 8Gi
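
Then re-apply the release with the updated values. Assuming the release name and packaged chart from your earlier commands, that would look something like:

    # Upgrade the existing release (or install it if missing) using the custom values
    helm upgrade --install opends ./opendistro-es-1.8.0.tgz -f values.yaml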

I provided local-storage as the storageClass but had no success:

 storageClass: "local-storage"

Is there something automatically provisioning space for that storageClass? If not, you may need to create the necessary Persistent Volumes manually. I found this page from kubernetesbyexample.com on creating PVs helpful.
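
If nothing is provisioning volumes for local-storage automatically, one common pattern (a sketch, assuming you want to create the local volumes yourself) is to define the class with no provisioner and then create matching PVs by hand:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    # No dynamic provisioner: PVs with this class must be created manually
    provisioner: kubernetes.io/no-provisioner
    # Don't bind a claim until a pod that uses it is being scheduled
    volumeBindingMode: WaitForFirstConsumer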

Let me check the link. I’m not very proficient with k8s at the moment.

So I created two PVs (only one of the manifests is shown here):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/usr/share/elasticsearch"

and now I can see the claims bound:

k get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                     STORAGECLASS    REASON   AGE
pv-volume       20Gi       RWO            Retain           Bound    default/data-ods-opendistro-es-master-0   local-storage            14h
pv-volume-two   20Gi       RWO            Retain           Bound    default/data-ods-opendistro-es-data-0     local-storage            14h

and I can also see all the pods running:

k get pods
NAME                                       READY   STATUS    RESTARTS   AGE
ods-opendistro-es-client-c7d978769-p4sgs   1/1     Running   0          13h
ods-opendistro-es-data-0                   1/1     Running   0          13h
ods-opendistro-es-kibana-8f4844bb7-zrnjj   1/1     Running   0          13h
ods-opendistro-es-master-0                 1/1     Running   0          13h

I still can’t access the endpoints, though.

k get all
NAME                                           READY   STATUS    RESTARTS   AGE
pod/ods-opendistro-es-client-c7d978769-p4sgs   1/1     Running   0          14h
pod/ods-opendistro-es-data-0                   1/1     Running   0          14h
pod/ods-opendistro-es-kibana-8f4844bb7-zrnjj   1/1     Running   0          14h
pod/ods-opendistro-es-master-0                 1/1     Running   0          14h

NAME                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
service/kubernetes                         ClusterIP   10.96.0.1       <none>        443/TCP                               47h
service/ods-opendistro-es-client-service   ClusterIP   10.108.37.203   <none>        9200/TCP,9300/TCP,9600/TCP,9650/TCP   14h
service/ods-opendistro-es-data-svc         ClusterIP   None            <none>        9300/TCP,9200/TCP,9600/TCP,9650/TCP   14h
service/ods-opendistro-es-discovery        ClusterIP   None            <none>        9300/TCP                              14h
service/ods-opendistro-es-kibana-svc       ClusterIP   10.99.200.242   <none>        443/TCP                               14h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ods-opendistro-es-client   1/1     1            1           14h
deployment.apps/ods-opendistro-es-kibana   1/1     1            1           14h

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ods-opendistro-es-client-c7d978769   1         1         1       14h
replicaset.apps/ods-opendistro-es-kibana-8f4844bb7   1         1         1       14h

NAME                                        READY   AGE
statefulset.apps/ods-opendistro-es-data     1/1     14h
statefulset.apps/ods-opendistro-es-master   1/1     14h

Elasticsearch seems to be unreachable.

k describe service/ods-opendistro-es-client-service
Name:              ods-opendistro-es-client-service
Namespace:         default
Labels:            app=ods-opendistro-es
                   app.kubernetes.io/managed-by=Helm
                   chart=opendistro-es-1.9.0
                   heritage=Helm
                   release=ods
                   role=client
Annotations:       meta.helm.sh/release-name: ods
                   meta.helm.sh/release-namespace: default
Selector:          role=client
Type:              ClusterIP
IP:                10.97.236.79
Port:              http  9200/TCP
TargetPort:        9200/TCP
Endpoints:         172.16.34.206:9200
Port:              transport  9300/TCP
TargetPort:        9300/TCP
Endpoints:         172.16.34.206:9300
Port:              metrics  9600/TCP
TargetPort:        9600/TCP
Endpoints:         172.16.34.206:9600
Port:              rca  9650/TCP
TargetPort:        9650/TCP
Endpoints:         172.16.34.206:9650
Session Affinity:  None
Events:            <none>

This is starting to feel like a futile exercise. The endpoints (k get endpoints) show:

ods-opendistro-es-client-service   172.16.249.9:9200,172.16.249.9:9300,172.16.249.9:9650 + 1 more...         34m
ods-opendistro-es-data-svc         172.16.217.201:9200,172.16.217.201:9300,172.16.217.201:9650 + 1 more...   34m
ods-opendistro-es-discovery        172.16.34.209:9300                                                        34m
ods-opendistro-es-kibana-svc       172.16.226.140:5601                                                       34m

In the client pod logs (k logs -f ods-opendistro-es-client-c7d978769-wbrwm) I can see:

[2020-07-16T08:45:11,354][WARN ][o.e.d.SeedHostsResolver  ] [ods-opendistro-es-client-c7d978769-wbrwm] failed to resolve host [ods-opendistro-es-discovery]
java.net.UnknownHostException: ods-opendistro-es-discovery
        at java.net.InetAddress$CachedAddresses.get(InetAddress.java:800) ~[?:?]
        at java.net.InetAddress.getAllByName0(InetAddress.java:1495) ~[?:?]
        at java.net.InetAddress.getAllByName(InetAddress.java:1354) ~[?:?]
        at java.net.InetAddress.getAllByName(InetAddress.java:1288) ~[?:?]
        at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:548) ~[elasticsearch-7.8.0.jar:7.8.0]
        at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:490) ~[elasticsearch-7.8.0.jar:7.8.0]
        at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:856) ~[elasticsearch-7.8.0.jar:7.8.0]
        at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.8.0.jar:7.8.0]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:636) ~[elasticsearch-7.8.0.jar:7.8.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]

Finally got ES running, at least. Phew.

root@master>kubectl exec -it ods-opendistro-es-master-0 -- /bin/bash
[root@ods-opendistro-es-master-0 elasticsearch]# curl -XGET https://localhost:9200 -u admin:admin --insecure
{
  "name" : "ods-opendistro-es-master-0",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XvGIKpvNTdeh2lAbx67csg",
  "version" : {
    "number" : "7.8.0",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
    "build_date" : "2020-06-14T19:35:50.234439Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

curl -XGET https://localhost:9200/_cat/nodes?pretty -u admin:admin --insecure

172.16.226.144 60 3 0 0.07 0.09 0.13 ir - ods-opendistro-es-client-c7d978769-x67vv
172.16.226.143 43 3 0 0.07 0.09 0.13 mr * ods-opendistro-es-master-0
172.16.249.13  58 3 0 0.03 0.04 0.05 dr - ods-opendistro-es-data-0

My main objective was to look at ES and Kibana, and I’m still grappling with the installation. Any pointers on how to get Kibana running?

The issue is that you cannot access Kibana from outside the Kubernetes cluster. You need to make it “visible” to the “outside world” somehow.

  • One approach is to use Kubernetes port-forwarding to make the Kibana port visible (see the sketch after this list). But that only lasts until you kill the port-forwarding process, so it isn’t very useful for anything other than testing.
  • A better solution is to expose the Kibana port through a NodePort Service. You can do that via the Helm chart, and it will remain in place until you remove it.
  • You could also use a Kubernetes Ingress. I think that’s supported in the Helm chart as well, but it is more involved than the NodePort approach. I believe Ingress would be the more suitable solution for “real”/“production” environments.
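
For the quick port-forwarding test, something like this should work (a sketch, assuming the Kibana service name and port shown in your kubectl get all output; the service listens on 443 in front of Kibana’s 5601):

    # Forward local port 5601 to the Kibana service's HTTPS port (443)
    kubectl port-forward service/ods-opendistro-es-kibana-svc 5601:443
    # Then open https://localhost:5601 in a browser
    # (accept the certificate warning if self-signed certs are in use)

    # Alternatively, switch the service to NodePort
    # (check the chart's values for a supported setting before patching by hand)
    kubectl patch service ods-opendistro-es-kibana-svc -p '{"spec": {"type": "NodePort"}}'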