I cannot get metrics in the Dashboard

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

  • OpenTelemetry Helm chart 0.104.0
  • Data Prepper Helm chart 0.1.0
  • OpenSearch Helm chart 2.27
  • OpenSearch Dashboards Helm chart 2.21.1

Describe the issue:
Hello all,

I have an EKS cluster where I've installed OTEL (OpenTelemetry), Data Prepper, OpenSearch, and OpenSearch Dashboards using Helm charts managed from Terraform code.

As far as I can see there are no major errors in the pods, and they all stay up and running. If I check the Data Prepper logs I see this kind of message:
{"message":"DF8D9C8B-7A91-448F-95E8-BA42AFDAB349"}
{"message":"146278FB-1EAB-4B2E-AB93-F0DD5DEB2607"}
{"message":"3C515100-CCB7-43D3-A214-904B6477EB49"}
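
For reference, I pull those logs with something like the command below (the pod name is just an example; I use whatever kubectl get pods shows for Data Prepper):

# Hypothetical pod name: use whatever "kubectl get pods -n playground-logging" lists for Data Prepper
kubectl logs -n playground-logging playground-logging-processor-data-prepper-0 --tail=50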

I've run kubectl exec into the Data Prepper pod and curled OpenSearch with something like this:

https://logging-cluster-master.playground-logging.svc.cluster.local:9200/metrics-otel-v1-*/_search?size=10

That curl returns some metrics related to CPU, etc., so I think OpenSearch is receiving the metrics data correctly.
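
Roughly, the full command I run from inside the pod looks like this (the password is a placeholder; -k and basic auth are needed in my setup because the cluster uses self-signed certificates and the security plugin):

curl -sk -u "admin:${OPENSEARCH_PASSWORD}" \
  "https://logging-cluster-master.playground-logging.svc.cluster.local:9200/metrics-otel-v1-*/_search?size=10"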

However, when I port-forward to the OpenSearch Dashboards pod and access it with the browser, I can see indices like "metrics-otel-v1-2024.12.04" and "events-2024.12.05", but no index pattern is created automatically. So to view the logs I had to create an index pattern manually, e.g. "events-*" (a sketch of how that can be done via the API is after the questions below); once the index pattern existed I could see some logs in Discover. So my questions are:

  1. I thought the index pattern would be created automatically; is this the expected behaviour?
  2. Why can't I see the logs under the Logs section in Observability? I can only see them in the Discover section.
  3. I've created an index pattern for metrics-otel-v1-* as well, and I can see that data under the Discover section, but again, under Observability/Metrics I cannot see any index. Same for traces. What am I missing?
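
For reference, this is roughly how I create the index pattern manually. I actually used the UI, but the Dashboards saved objects API call below should be equivalent (the Dashboards service name, local port and password are placeholders for my setup):

# Hypothetical Dashboards service name; adjust to whatever "kubectl get svc -n playground-logging" shows
kubectl port-forward -n playground-logging svc/logging-dashboards 5601:5601

# In another terminal: create an "events-*" index pattern via the Dashboards saved objects API
curl -s -u "admin:${OPENSEARCH_PASSWORD}" \
  -H "osd-xsrf: true" -H "Content-Type: application/json" \
  -X POST "http://localhost:5601/api/saved_objects/index-pattern/events-pattern" \
  -d '{"attributes": {"title": "events-*"}}'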

Below I have put all my Helm configuration managed with Terraform. Because the values files are rendered as Terraform templates, you will see %%{} instead of %{}; this is expected, since %%{ is how a literal %{ is escaped in templatefile(). A minimal sketch of how the values are rendered is right below.
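
As a minimal sketch (the resource, repository and file names here are illustrative, not my exact code):

# Renders values/data-prepper.yaml.tpl, substituting ${logging_password} and turning
# %%{yyyy.MM.dd} back into the literal %{yyyy.MM.dd} that Data Prepper expects.
resource "helm_release" "data_prepper" {
  name       = "playground-logging-processor"
  repository = "https://opensearch-project.github.io/helm-charts"
  chart      = "data-prepper"
  namespace  = "playground-logging"

  values = [
    templatefile("${path.module}/values/data-prepper.yaml.tpl", {
      logging_password = var.logging_password
    })
  ]
}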

Configuration:
OpenTelemetry Collector in DaemonSet mode, so I collect data from each node:

mode: daemonset

image:
  repository: otel/opentelemetry-collector-k8s

presets:
  logsCollection:
    enabled: true
  hostMetrics:
    enabled: true
  kubeletMetrics:
    enabled: true
  kubernetesAttributes:
    enabled: true


config:
  exporters:
    otlp/data-processor-traces:
      endpoint: http://playground-logging-processor-data-prepper.playground-logging.svc.cluster.local:21890
      tls:
        insecure: true
    otlp/data-processor-metrics:
      endpoint: http://playground-logging-processor-data-prepper.playground-logging.svc.cluster.local:21891
      tls:
        insecure: true
    otlp/data-processor-logs:
      endpoint: http://playground-logging-processor-data-prepper.playground-logging.svc.cluster.local:21892
      tls:
        insecure: true

  service:
    pipelines:
      traces:
        exporters: [otlp/data-processor-traces]
      logs:
        exporters: [debug, otlp/data-processor-logs]
      metrics:
        exporters: [debug, otlp/data-processor-metrics]

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi

OpenTelemetry Collector in Deployment mode, so I collect cluster-wide data:

mode: deployment

image:
  repository: otel/opentelemetry-collector-k8s

replicaCount: 1

presets:
  kubernetesAttributes:
    enabled: true
  clusterMetrics:
    enabled: true
  kubernetesEvents:
    enabled: true

config:
  exporters:
    otlp/data-processor-traces:
      endpoint: http://playground-logging-processor-data-prepper.playground-logging.svc.cluster.local:21890
      tls:
        insecure: true
    otlp/data-processor-cluster-metrics:
      endpoint: http://playground-logging-processor-data-prepper.playground-logging.svc.cluster.local:21891
      tls:
        insecure: true
    otlp/data-processor-logs:
      endpoint: http://playground-logging-processor-data-prepper.playground-logging.svc.cluster.local:21892
      tls:
        insecure: true

  processors:
    batch:
      timeout: 20s
      send_batch_size: 256
    memory_limiter:
      limit_mib: 1024
      spike_limit_mib: 256
      check_interval: 1s

  service:
    pipelines:
      traces:
        exporters: [otlp/data-processor-traces]
      metrics:
        exporters: [debug, otlp/data-processor-cluster-metrics]
      logs:
        exporters: [debug, otlp/data-processor-logs]

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 512Mi

Data Prepper configuration:

pipelineConfig:
  config:
   otel-logs-pipeline:
     workers: 5
     delay: 10
     source:
       otel_logs_source:
         ssl: false
     buffer:
       bounded_blocking:
         buffer_size: 1024
         batch_size: 256
     sink:
      - opensearch:
         hosts: ["https://logging-cluster-master.playground-logging.svc.cluster.local:9200"]
         username: "admin"
         password: ${logging_password}
         insecure: true
         index_type: custom
         index: events-%%{yyyy.MM.dd}
         bulk_size: 4

   otel-trace-pipeline:
     workers: 8
     delay: "100"
     source:
       otel_traces_source:
         ssl: false
     buffer:
       bounded_blocking:
         buffer_size: 25600
         batch_size: 400
     sink:
       - pipeline:
           name: "raw-traces-pipeline"
       - pipeline:
           name: "otel-service-pipeline"

   raw-traces-pipeline:
    workers: 5
    delay: 3000
    source:
      pipeline:
        name: "otel-trace-pipeline"
    buffer:
      bounded_blocking:
        buffer_size: 25600
        batch_size: 400
    processor:
      - otel_traces_raw:
    sink:
      - opensearch:
          hosts: ["https://logging-cluster-master.playground-logging.svc.cluster.local:9200"]
          username: "admin"
          password: ${logging_password}
          insecure: true
          index_type: trace-analytics-raw

   otel-service-pipeline:
    workers: 5
    delay: 3000
    source:
      pipeline:
        name: "otel-trace-pipeline"
    processor:
      - service_map_stateful:
    buffer:
        bounded_blocking:
          buffer_size: 25600
          batch_size: 400
    sink:
      - opensearch:
          hosts: ["https://logging-cluster-master.playground-logging.svc.cluster.local:9200"]
          username: "admin"
          password: ${logging_password}
          insecure: true
          index_type: trace-analytics-service-map
          bulk_size: 4

   otel-metrics-pipeline:
     workers: 16
     delay: 1000
     source:
       otel_metrics_source:
         health_check_service: true
         ssl: false
     buffer:
       bounded_blocking:
         buffer_size: 4096
         batch_size: 1024
     processor:
       - otel_metrics:
           calculate_histogram_buckets: true
           calculate_exponential_histogram_buckets: true
           exponential_histogram_max_allowed_scale: 10
           flatten_attributes: false
     sink:
      - opensearch:
         hosts: ["https://logging-cluster-master.playground-logging.svc.cluster.local:9200"]
         username: "admin"
         password: ${logging_password}
         insecure: true
         index_type: custom
         index: metrics-otel-v1-%%{yyyy.MM.dd}
         bulk_size: 10

resources:
  requests:
    cpu: 500m
    memory: 2000Mi
  limits:
    cpu: 500m
    memory: 3000Mi

OpenSearch config:

clusterName: "logging-cluster"
nodeGroup: "master"
masterService: "logging-cluster-master"
extraEnvs:
  - name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
    value: ${logging_password}

OpenSearch Dashboards config:

opensearchHosts: "https://logging-cluster-master.playground-logging.svc.cluster.local:9200"

Relevant Logs or Screenshots:

Thank you in advance and regards