Install via operator v3.0.0-alpha - No storageclass for Bootstrap

Versions: OpenSearch 3.4, Operator v3.0.0-alpha

Describe the issue: When deployed with the operator, the bootstrap PVC is created without a storage class. My cluster has no default storage class, so the PVC stays in the Pending state.
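For reference, the stuck claim can be confirmed with standard kubectl commands (namespace and claim name match the config below):

```shell
# Confirm the bootstrap PVC is stuck in Pending
kubectl get pvc -n opensearch
kubectl describe pvc my-first-cluster-bootstrap-data -n opensearch
# With no default StorageClass, the claim's events usually read:
#   "no persistent volumes available for this claim and no storage class is set"
```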

Cluster config:

```yaml
apiVersion: opensearch.org/v1
# apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-first-cluster
  namespace: opensearch
spec:
  security:
    config:
    tls:
      http:
        generate: true
      transport:
        generate: true
        perNode: true
  general:
    httpPort: 9200
    serviceName: my-first-cluster
    version: 3.4.0
    # pluginsList: ["repository-s3"]
    drainDataNodes: true
    setVMMaxMapCount: true
  dashboards:
    tls:
      enable: true
      generate: true
    version: 3.4.0
    enable: true
    replicas: 1
    resources:
      requests:
        memory: "512Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "200m"
  nodePools:
    - component: masters
      replicas: 3
      resources:
        requests:
          memory: "4Gi"
          cpu: "1000m"
        limits:
          memory: "4Gi"
          cpu: "1000m"
      roles:
        - "data"
        - "cluster_manager"
      persistence:
        emptyDir: {}
```


Generated bootstrap PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    opensearch.org/last-applied: >-
      UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAb3JpZ2luYWx0kEFPIzEMhf+Lz8nudtvdduYKEidUqUhwQD14Mp4SNU0G2wNCo/x3lBRRIcHNsZ9fvucZTqTYoyK0MwTsKEip0khRCNk9/Up8+H15WhcmUWJo4fRmB8+in61sIOKJvhnZLiUVZRxt/esslBFdUV/cwUB6jcQ7GogpOhJoH2fA0d8Ti0/xi7qivSzAQBeSO27L5jUF0ipUnsiAS1E5hVCIz52jjz20sB0p3lWbqw/8H+nBwOTLjmuWG/w7/LPrZrGyqzU6u+m7wS6Rhv/YUN/9aSDvswEZyZU7onMkcpv6mgR2hP0De6VtdAR7A0ySJq45Z2B6nki01qKJ8VBoFjceci6eijqVYc7vAQAA//9QSwcIAWzbnA4BAAC3AQAAUEsBAhQAFAAIAAgAAAAAAAFs25wOAQAAtwEAAAgAAAAAAAAAAAAAAAAAAAAAAG9yaWdpbmFsUEsFBgAAAAABAAEANgAAAEQBAAAAAA==
  creationTimestamp: '2026-02-10T08:42:42Z'
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    opensearch.org/opensearch-cluster: my-first-cluster
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:opensearch.org/last-applied: {}
          f:labels:
            .: {}
            f:opensearch.org/opensearch-cluster: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"c938a2f5-7914-47ac-8dbf-3aef6a9edb09"}: {}
        f:spec:
          f:accessModes: {}
          f:resources:
            f:requests:
              .: {}
              f:storage: {}
          f:volumeMode: {}
      manager: manager
      operation: Update
      time: '2026-02-10T08:42:42Z'
  name: my-first-cluster-bootstrap-data
  namespace: opensearch
  ownerReferences:
    - apiVersion: opensearch.org/v1
      blockOwnerDeletion: true
      controller: true
      kind: OpenSearchCluster
      name: my-first-cluster
      uid: c938a2f5-7914-47ac-8dbf-3aef6a9edb09
  resourceVersion: '587686453'
  uid: 2697864f-0d71-4360-954a-883b890eea83
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
```

Note that `spec` contains no `storageClassName`, so the claim can only bind if the cluster has a default StorageClass.

I tried to configure it with:

```yaml
spec:
  bootstrap:
    persistence:
      pcv:
        storageClassName: netapp-ssd-nfs-sc
        accessModes:
          - ReadWriteOnce
```

but then I get the error: `unknown field "spec.bootstrap.persistence"`.

@djeannerod The operator doesn't currently expose a `spec.bootstrap.persistence` field. The bootstrap PVC is created without a `storageClassName`, so on clusters without a default StorageClass the PVC stays Pending.
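You can check for yourself which fields the installed CRD accepts; a sketch, assuming the CRD is registered so that `kubectl explain` can resolve the `opensearchcluster` kind:

```shell
# List the fields the operator's CRD exposes under spec.bootstrap;
# "persistence" will simply be absent if the operator doesn't support it
kubectl explain opensearchcluster.spec.bootstrap
```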

As a workaround, I would recommend temporarily setting a default storage class and then running the deployment. You can remove the default again once the bootstrap PVC is bound.
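Marking a StorageClass as the cluster default uses the standard `storageclass.kubernetes.io/is-default-class` annotation; a sketch using the `netapp-ssd-nfs-sc` class from your config:

```shell
# Temporarily make an existing StorageClass the cluster default
kubectl patch storageclass netapp-ssd-nfs-sc \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Deploy the cluster, wait for the bootstrap PVC to bind, then revert
kubectl patch storageclass netapp-ssd-nfs-sc \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```

The bootstrap PVC keeps its binding after the annotation is reverted, since the default class only matters at provisioning time.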

I would also suggest raising an issue for this in the GitHub repo.


Thanks a lot for your answer. That's what I suspected, but I really appreciate the confirmation and advice. My cluster is managed by a provider, so I can't manage the default storage class myself; I will try to get this configured.
