OpenSearch JSON logging

OpenSearch 2.9


I am running OpenSearch in docker and I would like the logging on stdout to be in json format.
Is there an easy way to configure this?

Yes, you can add a volume with a custom log4j2.properties file. Here’s what I have in docker-compose.yml:

      - './'

And then the log4j2.properties itself:

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = OpenSearchJsonLayout
appender.console.layout.type_name = json_logger

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
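As an aside, the volume line mounts the custom file over the default one shipped in the image. Assuming the official OpenSearch image layout (the container path below is my assumption, adjust for your setup), the mapping would look something like:

```yaml
    volumes:
      - './log4j2.properties:/usr/share/opensearch/config/log4j2.properties'
```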

@radu.gheorghe thank you, that works. One more question, though. Do you think it’s possible to have stack traces on one line? I checked the source code, and to me it seems there is no option to configure this and the trace will always be multiline, but maybe I missed something.

For example an exception looks like this with the above configuration:

{"type": "json_logger", "timestamp": "2023-09-15T12:37:28,402+02:00", "level": "WARN", "component": "o.o.r.t.s.ShardReplicationTask", "cluster.name": "oszto-log-a", "node.name": "oszto-log-02a.teszt.sna.internal", "message": " [site-b-oszto-ellenorzo-000156][0] Encountered a failure while executing in org.opensearch.replication.action.changes.GetChangesRequest@2c6a250. Retrying in 10 seconds.", "cluster.uuid": "FPTcJq6gRUu8VBb_V5ofPg", "node.id": "OjizL9_1QH6NMyJD1Mr5-w" ,
"stacktrace": ["org.opensearch.OpenSearchTimeoutException: global checkpoint not synced. Retry after a few miliseconds...",
"at org.opensearch.replication.action.changes.TransportGetChangesAction$asyncShardOperation$1.invokeSuspend(TransportGetChangesAction.kt:93) ~[opensearch-cross-cluster-replication-]",
"at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.6.0.jar:1.6.0-release-798(1.6.0)]",
"at [kotlinx-coroutines-core-jvm-1.6.0.jar:?]",
"at org.opensearch.common.util.concurrent.ThreadContext$ [opensearch-2.9.0.jar:2.9.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$ [?:?]",
"at [?:?]"] }

If there is no option for single-line stack traces, do you think it’s possible to join these lines into a single JSON line with rsyslog somehow?

As an rsyslog fanboy, I usually say that the answer to “do you think it’s possible to [insert requirement here] with rsyslog?” is yes :smiley:

Here’s my experimental rsyslog.conf:

module(load="imfile")       # file input
module(load="mmnormalize")  # message parsing

# read the test file (the path here is just for this experiment)
input(type="imfile" File="/tmp/test.json" Tag="opensearch")

# parse the whole message as JSON into the $!data variable
action(type="mmnormalize" rule=["rule=:%data:json%"])

set $.stitchedstacktrace = "";
foreach ($.trace in $!data!stacktrace) do {
  set $.stitchedstacktrace = $.stitchedstacktrace & " " & $.trace;
}

template(name="just-stacktrace" type="list"){
  # whole stacktrace: property(name="$!data!stacktrace")
  property(name="$.stitchedstacktrace") # the stacktrace we messed with
}

action(type="omfile" template="just-stacktrace" file="/tmp/output")

I’ll explain every chunk:

  • the file input module reads the test JSON that I copy-pasted from your post above (minus the newlines; if rsyslog does receive the event split across lines, we can stitch those, too)
  • the normalize message modifier module (mmnormalize) parses the JSON. It can parse other stuff too (think grok, but MUCH faster, though more limited, if you stick to grammar parsing and keep regex disabled). You can find a more complex example of mmnormalize here, but for now I’m just parsing the whole thing as JSON and putting it in the $!data variable (hence data:json)
  • the next bit is RainerScript: $.stitchedstacktrace is my local variable, which starts empty, but then I iterate through the stacktrace variable of the parsed JSON and concatenate each element to the existing string
  • the resulting $.stitchedstacktrace goes into the template, called just-stacktrace in this case (you’ll probably want all properties, maybe add some other parsed variable, but I’m just testing)
  • I’m using this template in the file output action, which writes to /tmp/output, but again you may have other outputs going to TCP, TLS, OpenSearch
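If you want to prototype the stitching logic outside rsyslog first, the same transformation can be sketched in a few lines of Python. The event below is an abbreviated, hypothetical sample shaped like the OpenSearchJsonLayout output above:

```python
import json

# Abbreviated, made-up log event: the stack trace arrives as an
# array of strings, one entry per original line.
event = {
    "type": "json_logger",
    "level": "WARN",
    "message": "Encountered a failure while executing. Retrying in 10 seconds.",
    "stacktrace": [
        "org.opensearch.OpenSearchTimeoutException: global checkpoint not synced.",
        "at org.opensearch.replication.action.changes.TransportGetChangesAction...",
    ],
}

# Mirror the rsyslog foreach: join the array into one space-separated string.
event["stacktrace"] = " ".join(event["stacktrace"])

# json.dumps emits no raw newlines, so the whole event is now one line.
print(json.dumps(event))
```

This is the same concatenation the foreach loop in the rsyslog config does, just easier to iterate on while you settle on a separator or field layout.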