SSL/TLS Error When Trying To Connect To Elasticsearch

With the default installation, we are unable to connect any pods to the es-client. We get the following error from the Metricbeat pod:

2020-11-17T22:31:32.767Z	ERROR	[publisher_pipeline_output]	pipeline/output.go:106	Failed to connect to backoff(elasticsearch(http://opendistro-es-client-service.core.svc.cluster.local:9200)): Get http://opendistro-es-client-service.core.svc.cluster.local:9200: EOF
2020-11-17T22:31:32.767Z	INFO	[publisher_pipeline_output]	pipeline/output.go:99	Attempting to reconnect to backoff(elasticsearch(http://opendistro-es-client-service.core.svc.cluster.local:9200)) with 4 reconnect attempt(s)
2020-11-17T22:31:32.767Z	INFO	[publisher]	pipeline/retry.go:173	retryer: send wait signal to consumer
2020-11-17T22:31:32.767Z	INFO	[publisher]	pipeline/retry.go:175	  done
2020-11-17T22:31:32.953Z	INFO	module/wrapper.go:259	Error fetching data for metricset kubernetes.node: error doing HTTP request to fetch 'node' Metricset data: HTTP error 400 in : 400 Bad Request
2020-11-17T22:31:33.188Z	INFO	module/wrapper.go:259	Error fetching data for metricset kubernetes.container: error doing HTTP request to fetch 'container' Metricset data: HTTP error 400 in : 400 Bad Request
2020-11-17T22:31:35.193Z	INFO	module/wrapper.go:259	Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 400 in : 400 Bad Request
2020-11-17T22:31:37.047Z	INFO	module/wrapper.go:259	Error fetching data for metricset kubernetes.pod: error doing HTTP request to fetch 'pod' Metricset data: HTTP error 400 in : 400 Bad Request

And this is the corresponding error from the opendistro-es-client pod:

[2020-11-17T22:34:37,077][WARN ][o.e.h.AbstractHttpServerTransport] [opendistro-es-client-66bc694685-zc2tx] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/10.10.4.109:9200, remoteAddress=/10.10.4.69:35520}
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310d0a486f73743a206f70656e64697374726f2d65732d636c69656e742d736572766963652e636f72652e7376632e636c75737465722e6c6f63616c3a393230300d0a557365722d4167656e743a20476f2d687474702d636c69656e742f312e310d0a4163636570743a206170706c69636174696f6e2f6a736f6e0d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310d0a486f73743a206f70656e64697374726f2d65732d636c69656e742d736572766963652e636f72652e7376632e636c75737465722e6c6f63616c3a393230300d0a557365722d4167656e743a20476f2d687474702d636c69656e742f312e310d0a4163636570743a206170706c69636174696f6e2f6a736f6e0d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
	... 16 more
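
For what it's worth, the hex payload in the NotSslRecordException is plain ASCII. Decoded, it is an ordinary unencrypted HTTP request from Metricbeat's Go HTTP client:

GET / HTTP/1.1
Host: opendistro-es-client-service.core.svc.cluster.local:9200
User-Agent: Go-http-client/1.1
Accept: application/json
Accept-Encoding: gzip

So the client node's REST layer is expecting TLS on 9200 while Metricbeat is sending plain HTTP to it.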

Here is the values.yml we are using with the opendistro-es Helm chart:

# Copyright 2019 Viasat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

kibana:
  enabled: true
  image: amazon/opendistro-for-elasticsearch-kibana
  imageTag: 1.11.0
  ## Specifies the image pull policy. Can be "Always", "IfNotPresent", or "Never".
  ## Defaults to "Always".
  imagePullPolicy: ""
  replicas: 1
  port: 5601
  externalPort: 443
  resources: {}
  #  limits:
  #    cpu: 2500m
  #    memory: 2Gi
  #  requests:
  #    cpu: 500m
  #    memory: 512Mi
  readinessProbe: []
  livenessProbe: []
  startupProbe: []

  elasticsearchAccount:
    secret: ""
    keyPassphrase:
      enabled: false

  extraEnvs: []

  extraVolumes: []
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  ssl:
    kibana:
      enabled: true
      existingCertSecret: vault
      existingCertSecretCertSubPath: fullchain.pem
      existingCertSecretKeySubPath: tls-pkcs8.pem
      existingCertSecretRootCASubPath: ca.crt
    elasticsearch:
      enabled: true
      existingCertSecret: vault
      existingCertSecretCertSubPath: fullchain.pem
      existingCertSecretKeySubPath: tls-pkcs8.pem
      existingCertSecretRootCASubPath: ca.crt



  configDirectory: "/usr/share/kibana/config"
  certsDirectory: "/usr/share/kibana/certs"

  ingress:
    ## Set to true to enable ingress record generation
    enabled: false
    annotations: {}
    #  kubernetes.io/ingress.class: nginx
    #  kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    hosts:
      - chart-example.local
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local

  service:
    type: ClusterIP
    annotations: {}

  config: {}
    ## Default Kibana configuration from kibana-docker.
    # server.name: kibana
    # server.host: "0"

    ## Replace with Elasticsearch DNS name picked during Service deployment
    # elasticsearch.hosts: ${ELASTIC_URL}
    # elasticsearch.requestTimeout: 360000

    ## Kibana TLS Config
    # server.ssl.enabled: true
    # server.ssl.key: /usr/share/kibana/certs/kibana-key.pem
    # server.ssl.certificate: /usr/share/kibana/certs/kibana-crt.pem
    # elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/kibana-root-ca.pem

    # opendistro_security.cookie.secure: true
    # opendistro_security.cookie.password: ${COOKIE_PASS}



  ## Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  nodeSelector: {}

  ## Tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []

  affinity: {}

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

  podAnnotations: {}


global:
  clusterName: elasticsearch

  psp:
    create: true

  rbac:
    enabled: true

  # Optionally override the docker registry to use for images
  registry: docker.io

  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  # imagePullSecrets:
  #   - myRegistryKeySecretName


elasticsearch:
  ## Used when deploying hot/warm architecture. Allows second aliased deployment to find cluster.
  ## Default {{ template opendistro-es.fullname }}-discovery.
  discoveryOverride: ""
  securityConfig:
    enabled: true
    path: "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig"
    actionGroupsSecret:
    configSecret:
    internalUsersSecret:
    rolesSecret:
    rolesMappingSecret:
    tenantsSecret:
    # The following option simplifies securityConfig by using a single secret and
    # specifying the respective secrets in the corresponding files, instead of creating
    # different secrets for config, internal users, roles, roles mapping, and tenants.
    # Note that this is an alternative to the secrets above and shouldn't be used
    # if those secrets are used.
    config:
       securityConfigSecret:
       data: {}
        # config.yml: |-
        # internal_users.yml: |-
        # roles.yml: |-
        # rolesMapping.yml: |-
        # tenants.yml: |-

  extraEnvs: []

  extraInitContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraVolumes: []
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  initContainer:
    image: busybox
    imageTag: 1.27.2

  ## Set optimal sysctl's. This requires privilege. Can be disabled if
  ## the system has already been preconfigured.
  sysctl:
    enabled: true

  ssl:
    ## TLS is mandatory for the transport layer and can not be disabled
    transport:
      existingCertSecret: vault
      existingCertSecretCertSubPath: fullchain.pem
      existingCertSecretKeySubPath: tls-pkcs8.pem
      existingCertSecretRootCASubPath: ca.crt
    rest:
      enabled: true
      existingCertSecret: vault
      existingCertSecretCertSubPath: fullchain.pem
      existingCertSecretKeySubPath: tls-pkcs8.pem
      existingCertSecretRootCASubPath: ca.crt
    admin:
      enabled: true
      existingCertSecret: vault
      existingCertSecretCertSubPath: fullchain.pem
      existingCertSecretKeySubPath: tls-pkcs8.pem
      existingCertSecretRootCASubPath: ca.crt

  master:
    enabled: true
    replicas: 1
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to; useful in dev environments
      ## and when sharing one PV between multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      - topologyKey: "kubernetes.io/hostname"
    #        labelSelector:
    #          matchLabels:
    #            role: master
    podAnnotations: {}

    extraInitContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

    extraContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

  data:
    enabled: true
    replicas: 1
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to; useful in dev environments
      ## and when sharing one PV between multiple services.
      ##
      subPath: ""

      ## Open Distro data Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Weighted anti-affinity to avoid deploying data nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: data
    podAnnotations: {}

  client:
    enabled: true
    service:
      type: ClusterIP
      annotations: {}
        # # Defined ELB backend protocol as HTTPS to allow connection to Elasticsearch API
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

        # # ARN of ACM certificate registered to the deployed ELB for handling connections over TLS
        # # ACM certificate should be issued to the DNS hostname defined earlier (elk.sec.example.com)
        # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111222333444:certificate/c69f6022-b24f-43d9-b9c8-dfe288d9443d"
        # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
        # service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
        # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

        # # Annotation to create internal only ELB
        # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    replicas: 1
    javaOpts: "-Xms512m -Xmx512m"
    ingress:
      ## Set to true to enable ingress record generation
      enabled: false
      annotations: {}
      #  kubernetes.io/ingress.class: nginx
      #  kubernetes.io/tls-acme: "true"
      labels: {}
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local
    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Weighted anti-affinity to avoid deploying client nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: client
    podAnnotations: {}

  config: {}
    ## Example Config
    # opendistro_security.allow_unsafe_democertificates: false
    # opendistro_security.allow_default_init_securityindex: true
    # opendistro_security.audit.type: internal_elasticsearch
    # opendistro_security.enable_snapshot_restore_privilege: true
    # opendistro_security.check_snapshot_restore_write_privileges: true
    # cluster.routing.allocation.disk.threshold_enabled: false
    # opendistro_security.audit.config.disabled_rest_categories: NONE
    # opendistro_security.audit.config.disabled_transport_categories: NONE
    # cluster:
    #   name: ${CLUSTER_NAME}
    # node:
    #   master: ${NODE_MASTER}
    #   data: ${NODE_DATA}
    #   name: ${NODE_NAME}
    #   ingest: ${NODE_INGEST}
    #   max_local_storage_nodes: 1
    #   attr.box_type: hot

    # processors: ${PROCESSORS:1}

    # network.host: ${NETWORK_HOST}

    # thread_pool.bulk.queue_size: 800

    # path:
    #   data: /usr/share/elasticsearch/data
    #   logs: /usr/share/elasticsearch/logs

    # http:
    #   enabled: ${HTTP_ENABLE}
    #   compression: true

    # discovery:
    #   zen:
    #     ping.unicast.hosts: ${DISCOVERY_SERVICE}
    #     minimum_master_nodes: ${NUMBER_OF_MASTERS}

    # # TLS Configuration Transport Layer
    # opendistro_security.ssl.transport.pemcert_filepath: elk-transport-crt.pem
    # opendistro_security.ssl.transport.pemkey_filepath: elk-transport-key.pem
    # opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    # opendistro_security.ssl.transport.enforce_hostname_verification: false

    # # TLS Configuration REST Layer
    # opendistro_security.ssl.http.enabled: true
    # opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    # opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    # opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

  log4jConfig: ""

  loggingConfig:
    ## Default config
    ## you can override this by setting a system property, for example -Des.logger.level=DEBUG
    es.logger.level: INFO
    rootLogger: ${es.logger.level}, console
    logger:
      ## log action execution errors for easier debugging
      action: DEBUG
      ## reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN
    appender:
      console:
        type: console
        layout:
          type: consolePattern
          conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  transportKeyPassphrase:
    enabled: false
    passPhrase:

  sslKeyPassphrase:
    enabled: false
    passPhrase:

  maxMapCount: 262144

  image: amazon/opendistro-for-elasticsearch
  imageTag: 1.11.0
  ## Specifies the image pull policy. Can be "Always", "IfNotPresent", or "Never".
  ## Defaults to "Always".
  imagePullPolicy: ""

  configDirectory: /usr/share/elasticsearch/config

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:


nameOverride: ""
fullnameOverride: ""
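
Given that ssl.rest.enabled is true above, my working assumption is that anything talking to port 9200 has to use https:// and trust the same root CA. This is roughly what I think the Metricbeat output section would need; the mount path and the credential placeholders below are from our setup, not chart defaults:

output.elasticsearch:
  # https instead of http; the plain-HTTP URL is what triggers the EOF / NotSslRecordException
  hosts: ["https://opendistro-es-client-service.core.svc.cluster.local:9200"]
  username: "${ES_USERNAME}"    # placeholder; the security plugin requires auth
  password: "${ES_PASSWORD}"
  # root CA from the same "vault" secret, mounted into the Metricbeat pod
  ssl.certificate_authorities: ["/usr/share/metricbeat/certs/ca.crt"]

If hostname verification then fails against the in-cluster service name, my understanding is that ssl.verification_mode: certificate keeps CA validation but skips the hostname check.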

So I am wondering: does every client pod need the certificates (or at least the root CA) installed, and its Elasticsearch URL switched from http:// to https://, in order to communicate with the Elasticsearch pods? Any help here would be great.
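
For reference, this is the quick check I have been running from a debug pod to confirm whether the client service is actually serving TLS; the hostname and CA mount path are from our environment:

import socket
import ssl

# Minimal sketch: open a TLS connection to the ES client service and print
# what was negotiated. Assumes the root CA is mounted at /certs/ca.crt.
HOST = "opendistro-es-client-service.core.svc.cluster.local"
PORT = 9200

ctx = ssl.create_default_context(cafile="/certs/ca.crt")
ctx.check_hostname = False  # the cert subject may not match the in-cluster service name

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("TLS version:", tls.version())
        print("Peer subject:", tls.getpeercert()["subject"])

If this succeeds, the server side is presumably fine and only the clients need to be pointed at https:// with the CA installed.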