Trace Analytics, a new capability in Open Distro for Elasticsearch

I am new to Open Distro and Elasticsearch, and I am using the following steps as guidance:

Step 1: Clone the Data Prepper GitHub project

Step 2: Change to the example trace-analytics-sample-app directory

cd Data-Prepper/examples/trace-analytics-sample-app

Step 3: Launch the stack with docker-compose

docker-compose up -d
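
After launching, a quick way to sanity-check the stack before looking for the app in a browser (the service name below is an example; actual names come from the example's docker-compose.yml):

```shell
# List the services docker-compose started, with their state and port mappings
docker-compose ps

# Tail recent logs from one service to spot startup errors
# ("sample-app" is an assumed service name; adjust to what `ps` shows)
docker-compose logs --tail=50 sample-app
```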

I expect to see the app at the following address, but I get nothing:

Has anyone tested this new feature? Any tips would be greatly appreciated.

:wave: Welcome @sundhar

Can you confirm that the docker containers are running?

Hey @sundhar! As Kyle mentioned, could you please check and post the log output from the sample app container? It should be something like:

docker logs trace-analytics-sample-app_sample-app_1


Jeff, Kyle,
Thank you for your responses. I suspect I am currently running out of memory on my Docker host. I need to get this running on a new machine with more memory; I will set it up soon and report back with the results.


I upgraded to a new machine with more RAM, and the issue seems resolved. The virtual machine I am currently running is using 8 GB of RAM!

Thanks for all your help and guidance.


Glad to hear it! Yeah, dockerized environments are memory-hungry on your desktop, although to put it in context, you're doing the work of several machines.

Can anyone explain the data flow for the provided sample application, trace-analytics-sample-app?

docker-compose brings up the following 6 services:

#1 data-prepper

#2 opendistro-for-elasticsearch

#3 kibana

#4 otel-collector (a vendor-agnostic way to receive, process, and export telemetry data)

#5 mysql

#6 sample-app, which maps the following host ports:

  - '8082:8082' # inventory

  - '8083:8083' # database

  - '8084:8084' # payment

  - '8085:8085' # auth

  - '8086:8086' # recommendation

  - '8087:8087' # analytics-service

  - '8088:8088' # order

  - '8089:8089' # client-dashboard
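
Given these mappings, each microservice can be reached on localhost at its mapped port once the stack is healthy; a quick smoke test (assuming the client-dashboard answers plain HTTP at the root path):

```shell
# Print only the HTTP status code from the client-dashboard service
# (8089 is the host port from the docker-compose mapping above)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8089/
```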

How is data passed to data-prepper and the otel-collector from the Python code that is provided in the sample-app directory?

The sample app uses OpenTelemetry instrumentation and is configured to send trace data to the OpenTelemetry Collector (which is a separate app also started by docker-compose).

As an example, we're configuring the instrumentation here: data-prepper/ at main · opendistro-for-elasticsearch/data-prepper · GitHub.

This allows data to be sent to the OTel Collector, which then forwards it to Data Prepper.
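
For SDKs and agents that follow the OpenTelemetry conventions, the collector endpoint can generally be set through standard environment variables rather than in code; a sketch (the host, port, and service name below are assumptions, not taken from this example — check the otel-collector service in docker-compose.yml for the actual gRPC port):

```shell
# Standard OpenTelemetry SDK environment variables;
# 4317 is the conventional OTLP gRPC port and is an assumed default here
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=sample-service
```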


Very helpful information, thank you.

I am instrumenting the spring-petclinic application using aws-opentelemetry-agent.jar. The command I use to run it is as follows:

java -javaagent:/PATH/TO/aws-opentelemetry-agent.jar -jar target/*.jar

When I run the above command, I expect the traces to be sent to the collector by default. However, I get the following exception:

[ 2021-03-18 16:34:27:209 +0800] [grpc-default-executor-2] WARN io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter - Failed to export metrics
io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(
at io.grpc.internal.DelayedClientCall$DelayedListener$

The docker containers from the previous run are running as expected. Can anyone point out what I am missing?
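
For reference, the UNAVAILABLE status usually means nothing was reachable at the gRPC endpoint the agent exports to, so I plan to verify the collector's port and point the agent at it explicitly; a sketch (4317 and the otel.exporter.otlp.endpoint property follow OpenTelemetry Java agent conventions, and the collector in this example may listen on a different port — check its config):

```shell
# Confirm the otel-collector container is up and see which ports it publishes
docker ps --filter name=otel-collector

# Run the agent with an explicit OTLP endpoint instead of relying on defaults
java -javaagent:/PATH/TO/aws-opentelemetry-agent.jar \
     -Dotel.exporter.otlp.endpoint=http://localhost:4317 \
     -jar target/*.jar
```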