OpenSearch 2.9
Hi!
I am running OpenSearch in docker and I would like the logging on stdout to be in json format.
Is there an easy way to configure this?
Yes, you can add a volume with a log4j2.properties. Here’s what I have in docker-compose.yml:
volumes:
- './log4j2.properties:/usr/share/opensearch/config/log4j2.properties'
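For context, here’s a rough sketch of how that mount can sit in a single-node service definition (the service name, image tag, port and discovery.type setting are just illustrative, not from my actual compose file):

services:
  opensearch:
    image: opensearchproject/opensearch:2.9.0
    environment:
      - discovery.type=single-node
    ports:
      - '9200:9200'
    volumes:
      - './log4j2.properties:/usr/share/opensearch/config/log4j2.properties'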
And then the log4j2.properties itself:
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = OpenSearchJsonLayout
appender.console.layout.type_name = json_logger
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
@radu.gheorghe thank you, that works. One more question, though: do you think it’s possible to have stack traces on one line? I checked the source code, and it seems to me there is no option to configure this and the trace will always be multi-line, but maybe I missed something.
For example an exception looks like this with the above configuration:
{"type": "json_logger", "timestamp": "2023-09-15T12:37:28,402+02:00", "level": "WARN", "component": "o.o.r.t.s.ShardReplicationTask", "cluster.name": "oszto-log-a", "node.name": "oszto-log-02a.teszt.sna.internal", "message": " [site-b-oszto-ellenorzo-000156][0] Encountered a failure while executing in org.opensearch.replication.action.changes.GetChangesRequest@2c6a250. Retrying in 10 seconds.", "cluster.uuid": "FPTcJq6gRUu8VBb_V5ofPg", "node.id": "OjizL9_1QH6NMyJD1Mr5-w" ,
"stacktrace": ["org.opensearch.OpenSearchTimeoutException: global checkpoint not synced. Retry after a few miliseconds...",
"at org.opensearch.replication.action.changes.TransportGetChangesAction$asyncShardOperation$1.invokeSuspend(TransportGetChangesAction.kt:93) ~[opensearch-cross-cluster-replication-2.9.0.0.jar:2.9.0.0]",
"at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.6.0.jar:1.6.0-release-798(1.6.0)]",
"at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) [kotlinx-coroutines-core-jvm-1.6.0.jar:?]",
"at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:849) [opensearch-2.9.0.jar:2.9.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]",
"at java.lang.Thread.run(Thread.java:833) [?:?]"] }
If there is no option for single line stack traces do you think it’s possible to join these lines into a single json line with rsyslog somehow?
As an rsyslog fanboy, I usually say that the answer to “do you think it’s possible to [insert requirement here] with rsyslog?” is yes.
Here’s my experimental rsyslog.conf:
module(load="imfile")
module(load="mmnormalize")
input(type="imfile"
File="/tmp/log.json"
tag="test"
)
action(type="mmnormalize"
rule=["rule=:%data:json%"])
# join the stacktrace array elements into one string
set $.stitchedstacktrace = "";
foreach ($.trace in $!data!stacktrace) do {
  set $.stitchedstacktrace = $.stitchedstacktrace & " " & $.trace;
}
template(name="just-stacktrace" type="list"){
# whole stacktrace: property(name="$!data!stacktrace")
property(name="$.stitchedstacktrace") # the stacktrace we messed with
constant(value="\n")
}
action(type="omfile" template="just-stacktrace" file="/tmp/output")
I’ll explain every chunk:
- the mmnormalize rule parses the whole message as JSON into the $!data variable (hence data:json in the rule)
- $.stitchedstacktrace is my local variable, which starts empty, but then I iterate through the stacktrace variable of the parsed JSON and concatenate each element to the existing string
- $.stitchedstacktrace goes into the template, called just-stacktrace in this case (you’ll probably want all properties, maybe add some other parsed variable, but I’m just testing)
- finally, the output goes to /tmp/output, but again you may have other outputs going to TCP, TLS, OpenSearch…
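If you’d rather write out the whole event on one line instead of just the stacktrace, here’s a rough, untested follow-up sketch. It assumes a $! subtree renders as JSON when referenced in a template (the same way the commented-out $!data!stacktrace property would) and that the array can be overwritten after an unset; the full-event template name and /tmp/output-full path are made up for illustration:

# replace the stacktrace array with the joined single-line string
unset $!data!stacktrace;
set $!data!stacktrace = $.stitchedstacktrace;

template(name="full-event" type="list") {
  property(name="$!data")   # the whole parsed event, stacktrace now on one line
  constant(value="\n")
}

action(type="omfile" template="full-event" file="/tmp/output-full")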