Hi,

I am trying to mount config.yml using a secret, but the pod fails with the error below. The pod events and my values.yaml follow.
Normal Scheduled 2m36s default-scheduler Successfully assigned opensearch/opensearch-cluster-master-2 to wdc-tkg-wld-a1-md-0-c7d7c9d6f-s7lc4
Normal Pulling 2m20s kubelet Pulling image "harbor-repo.xxxxx.com/cnadatabase_prod/busybox:latest"
Normal Pulled 2m19s kubelet Successfully pulled image "harbor-repo.xxx.com/cnadatabase_prod/busybox:latest" in 765.329062ms
Normal Created 2m19s kubelet Created container fsgroup-volume
Normal Started 2m19s kubelet Started container fsgroup-volume
Normal Created 93s (x4 over 2m17s) kubelet Created container opensearch
Warning Failed 92s (x4 over 2m17s) kubelet Error: failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/100acb90-bfdd-4065-903a-660e25596d19/volume-subpaths/security-config/opensearch/1" to rootfs at "/usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml" caused: mount through procfd: not a directory: unknown
Warning BackOff 54s (x8 over 2m15s) kubelet Back-off restarting failed container
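For context, the secret being referenced (`security-config`, set as `securityConfig.configSecret` in the values below) has roughly this shape — a sketch with the file contents omitted; each key under `stringData` becomes a file in the mounted directory:

```yaml
# Sketch of the mounted secret (assumed shape; contents omitted).
# Each key is projected as a file named after the key when mounted as a volume.
apiVersion: v1
kind: Secret
metadata:
  name: security-config  # matches securityConfig.configSecret below
type: Opaque
stringData:
  config.yml: |
    # ... opensearch-security config.yml contents ...
```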
clusterName: "opensearch-cluster"
nodeGroup: "master"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest
  - data
  - remote_cluster_client

replicas: 3
minimumMasterNodes: 1

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: "harbor-repo.xxxx.com"

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch
# such as opensearch.yml and log4j2.properties
config:
  # Values must be YAML literal style scalar / YAML multiline string.
  # <filename>: |
  #   <formatted-value(s)>
  log4j2.properties: |
    status = error
    appender.console.type = Console
    appender.console.name = console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
    rootLogger.level = info
    rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster
    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0
    # # minimum_master_nodes need to be explicitly set when bound on a public IP
    # # set to 1 to allow single node clusters
    # discovery.zen.minimum_master_nodes: 1
    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # discovery.type: single-node
    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins.security.ssl.transport.pemcert_filepath: /usr/share/opensearch/config/certs/node1.pem
    plugins.security.ssl.transport.pemkey_filepath: /usr/share/opensearch/config/certs/node1-key.pem
    plugins.security.ssl.transport.pemtrustedcas_filepath: /usr/share/opensearch/config/certs/root-ca.pem
    plugins.security.ssl.transport.enforce_hostname_verification: false
    plugins.security.ssl.http.enabled: true
    plugins.security.ssl.http.pemcert_filepath: /usr/share/opensearch/config/certs/node1.pem
    plugins.security.ssl.http.pemkey_filepath: /usr/share/opensearch/config/certs/node1-key.pem
    plugins.security.ssl.http.pemtrustedcas_filepath: /usr/share/opensearch/config/certs/root-ca.pem
    plugins.security.authcz.admin_dn:
      - 'CN=ADMIN,OU=IT,O=ORG,L=PA,ST=CA,C=US'
    plugins.security.nodes_dn:
      - 'CN=opensearch-cluster-master-0,OU=IT,O=ORG,L=PA,ST=CA,C=US'
      - 'CN=opensearch-cluster-master-1,OU=IT,O=ORG,L=PA,ST=CA,C=US'
      - 'CN=opensearch-cluster-master-2,OU=IT,O=ORG,L=PA,ST=CA,C=US'
      - 'CN=opensearch-cluster*'
    plugins.security.allow_default_init_securityindex: true
    plugins.security.allow_unsafe_democertificates: false
    plugins.security.audit.type: internal_opensearch
    plugins.security.enable_snapshot_restore_privilege: true
    plugins.security.check_snapshot_restore_write_privileges: true
    plugins.security.restapi:
      roles_enabled: ["all_access", "security_rest_api_access"]
    plugins.security.system_indices:
      enabled: true
      indices:
        [
          ".opendistro-alerting-config",
          ".opendistro-alerting-alert*",
          ".opendistro-anomaly-results*",
          ".opendistro-anomaly-detector*",
          ".opendistro-anomaly-checkpoints",
          ".opendistro-anomaly-detection-state",
          ".opendistro-reports-*",
          ".opendistro-notifications-*",
          ".opendistro-notebooks",
          ".opendistro-asynchronous-search-response*",
        ]
  # log4j2.properties:
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs:
  - name: MY_ENVIRONMENT_VAR
    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom:
  - secretRef:
      name: env-secret
  - configMapRef:
      name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts:
  - name: certs-open
    secretName: certs-open
    path: /usr/share/opensearch/config/certs
  - name: ldap-certs
    secretName: ldap-certs
    path: /usr/share/opensearch/config/ldap-certs
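If it helps to see how these mount, my understanding is that each `secretMounts` entry is rendered by the chart as a secret volume plus a matching `volumeMount` — roughly this shape (a sketch, abridged):

```yaml
# Roughly what the first secretMounts entry above becomes in the pod spec (sketch):
volumes:
  - name: certs-open
    secret:
      secretName: certs-open
containers:
  - name: opensearch
    volumeMounts:
      - name: certs-open
        mountPath: /usr/share/opensearch/config/certs
```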
hostAliases:
  - ip: "127.0.0.1"
    hostnames:
      - "foo.local"
      - "bar.local"
image:
  repository: "cnadatabase_prod/opensearchproject/opensearch"
  # override image tag, which is .Chart.AppVersion by default
  tag: ""
  pullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additional labels
labels: {}

opensearchJavaOpts: "-Xmx2g -Xms2g"

resources:
  requests:
    cpu: "2000m"
    memory: "5Gi"
initResources:
  limits:
    cpu: "1000m"
    memory: "2Gi"
  requests:
    cpu: "1000m"
    memory: "2Gi"

sidecarResources: {}
  # limits:
  #   cpu: "25m"
  #   memory: "128Mi"
  # requests:
  #   cpu: "25m"
  #   memory: "128Mi"
networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir
persistence:
  enabled: true
  # Set to false to disable the fsgroup-volume initContainer that will
  # update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  image: cnadatabase_prod/busybox
  # override image tag, which is latest by default
  imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner. (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  storageClass: "default"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}
extraVolumes:
  - name: extras
    emptyDir: {}

extraVolumeMounts:
  - name: extras
    mountPath: /usr/share/extras
    readOnly: true

extraContainers:
  - name: do-something
    image: busybox
    command: ['do', 'something']

extraInitContainers:
  - name: do-somethings
    image: busybox
    command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "soft"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to false.
enableServiceLinks: true
protocol: https
httpPort: 9200
transportPort: 9300

service:
  labels: {}
  labelsHeadless: {}
  type: LoadBalancer
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges:
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
securityConfig:
  enabled: true
  path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
  actionGroupsSecret:
  configSecret: "security-config"
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-
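As the comments above describe, one alternative I have been considering is letting the chart create the secret itself by putting the files under `securityConfig.config.data` — a minimal sketch, with the file contents elided (not my actual config):

```yaml
# Hypothetical sketch of the single-secret alternative:
# the chart creates a secret from these keys and mounts each key as a file.
securityConfig:
  enabled: true
  config:
    data:
      config.yml: |-
        # ... opensearch-security config.yml contents ...
```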
# How long to wait for opensearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 2000

# Use an alternate scheduler.
# ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
schedulerName: ""

imagePullSecrets:
  - name: regsecret

nodeSelector: {}
tolerations:

# Enabling this will publicly expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  ingressClassName: nginx
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls:
    - secretName: chart-example-tls
      hosts:
        - chart-example.local
nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
keystore:

networkPolicy:
  create: false
  # Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  # In order for a Pod to access OpenSearch, it needs to have the following label:
  # {{ template "uname" . }}-client: "true"
  # Example for default configuration to access HTTP port:
  # opensearch-master-http-client: "true"
  # Example for default configuration to access transport port:
  # opensearch-master-transport-client: "true"
  http:
    enabled: false

# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

# Set optimal sysctl's. This requires privilege. Can be disabled if
# the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
# Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false

# Enable to add 3rd Party / Custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: