Stack trace in a single line

Hi!

I am trying to configure OpenSearch to log exceptions on a single line, so it’s easier to filter them with Logstash.

I tried this log4j2.properties configuration file:

status = error

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout

appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %replace{%m}{%n}{\u2028} %throwable{separator(\u2028)}%n

rootLogger.level = info
rootLogger.appenderRef.console.ref = console

The pattern seems to be working. Most stack traces get logged in a single line, for example:

[2023-08-10T11:53:12,701][ERROR][o.o.j.s.u.LockService    ] [node1] Exception occurred finding lock NoShardAvailableActionException[No shard available for [get [.opendistro-job-scheduler-lock][.opendistro-ism-config-mJx3FvlbQl61lQ03_3SO2g]: routing [null]]] 	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:256) 	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:231) 	at org.opensearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:125) 	at org.opensearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:78) 	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:218) 	at org.opensearch.indexmanagement.rollup.actionfilter.FieldCapsFilter.apply(FieldCapsFilter.kt:118) 	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216) 	at org.opensearch.indexmanagement.controlcenter.notification.filter.IndexOperationActionFilter.apply(IndexOperationActionFilter.kt:39) 	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216) 	at org.opensearch.performanceanalyzer.action.PerformanceAnalyzerActionFilter.apply(PerformanceAnalyzerActionFilter.java:78) 	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216) 	at org.opensearch.security.filter.SecurityFilter.apply0(SecurityFilter.java:320) 	at org.opensearch.security.filter.SecurityFilter.apply(SecurityFilter.java:165) 	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216) 	at org.opensearch.action.support.TransportAction.execute(TransportAction.java:188) 	at org.opensearch.action.support.TransportAction.execute(TransportAction.java:107) 	at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:110) 	at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:97) 	at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:476) 	at org.opensearch.client.support.AbstractClient.get(AbstractClient.java:572) 	at org.opensearch.jobscheduler.spi.utils.LockService.findLock(LockService.java:255) 	at org.opensearch.jobscheduler.spi.utils.LockService.lambda$acquireLockWithId$3(LockService.java:154) 	at org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:80) 	at org.opensearch.jobscheduler.spi.utils.LockService.createLockIndex(LockService.java:85) 	at org.opensearch.jobscheduler.spi.utils.LockService.acquireLockWithId(LockService.java:151) 	at org.opensearch.jobscheduler.spi.utils.LockService.acquireLock(LockService.java:122) 	at org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner$runJob$1$lock$1.invoke(ManagedIndexRunner.kt:235) 	at org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner$runJob$1$lock$1.invoke(ManagedIndexRunner.kt:235) 	at org.opensearch.indexmanagement.opensearchapi.OpenSearchExtensionsKt.suspendUntil(OpenSearchExtensions.kt:231) 	at org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner$runJob$1.invokeSuspend(ManagedIndexRunner.kt:235) 	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) 	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56) 	at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571) 	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738) 	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678) 	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

but I still see some exceptions spanning multiple lines:

uncaught exception in thread [DefaultDispatcher-worker-5]
NoShardAvailableActionException[No shard available for [get [.opendistro-job-scheduler-lock][.opendistro-ism-config-mJx3FvlbQl61lQ03_3SO2g]: routing [null]]]
	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:256)
	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:231)
	at org.opensearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:125)
	at org.opensearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:78)
	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:218)
	at org.opensearch.indexmanagement.rollup.actionfilter.FieldCapsFilter.apply(FieldCapsFilter.kt:118)
	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216)
	at org.opensearch.indexmanagement.controlcenter.notification.filter.IndexOperationActionFilter.apply(IndexOperationActionFilter.kt:39)
	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216)
	at org.opensearch.performanceanalyzer.action.PerformanceAnalyzerActionFilter.apply(PerformanceAnalyzerActionFilter.java:78)
	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216)
	at org.opensearch.security.filter.SecurityFilter.apply0(SecurityFilter.java:320)
	at org.opensearch.security.filter.SecurityFilter.apply(SecurityFilter.java:165)
	at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:216)
	at org.opensearch.action.support.TransportAction.execute(TransportAction.java:188)
	at org.opensearch.action.support.TransportAction.execute(TransportAction.java:107)
	at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:110)
	at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:97)
	at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:476)
	at org.opensearch.client.support.AbstractClient.get(AbstractClient.java:572)
	at org.opensearch.jobscheduler.spi.utils.LockService.findLock(LockService.java:255)
	at org.opensearch.jobscheduler.spi.utils.LockService.lambda$acquireLockWithId$3(LockService.java:154)
	at org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:80)
	at org.opensearch.jobscheduler.spi.utils.LockService.createLockIndex(LockService.java:85)
	at org.opensearch.jobscheduler.spi.utils.LockService.acquireLockWithId(LockService.java:151)
	at org.opensearch.jobscheduler.spi.utils.LockService.acquireLock(LockService.java:122)
	at org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner$runJob$1$lock$1.invoke(ManagedIndexRunner.kt:235)
	at org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner$runJob$1$lock$1.invoke(ManagedIndexRunner.kt:235)
	at org.opensearch.indexmanagement.opensearchapi.OpenSearchExtensionsKt.suspendUntil(OpenSearchExtensions.kt:231)
	at org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner$runJob$1.invokeSuspend(ManagedIndexRunner.kt:235)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
	at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

What am I missing?

Hey @szurokl

If I understand you correctly, you’re trying to take the stack trace from the OpenSearch log file and put it on a single line? Or are you trying to shorten the stack trace? I’m not quite sure what you want to achieve.

Yes, I would like the whole stack trace to be on a single line.

Hey @szurokl

Have you looked at Filebeat multiline configurations?

I haven’t used Logstash for multiline yet, but I found this.
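If you can read the OpenSearch log file directly, something like this might work (untested sketch; the path is just an example, adjust it to your setup):

```yaml
filebeat.inputs:
  - type: log
    # Path is an assumption -- point it at your OpenSearch log file
    paths:
      - /var/log/opensearch/*.log
    multiline:
      # OpenSearch log lines start with a bracketed timestamp; lines that
      # do NOT match (e.g. "\tat ...") are appended to the previous line,
      # stitching whole stack traces into one event
      pattern: '^\['
      negate: true
      match: after
```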

Hey @Gsmitt

Yes, I already looked at Filebeat’s multiline options. Unfortunately I use the Filebeat syslog input, which currently does not support multiline messages (Support multiline in syslog input · Issue #7594 · elastic/beats · GitHub)

In the logstash multiline codec plugins documentation the following is stated:

If you are using a Logstash input plugin that supports multiple hosts, such as the beats input plugin, you should not use the multiline codec to handle multiline events. Doing so may result in the mixing of streams and corrupted event data. In this situation, you need to handle multiline events before sending the event data to Logstash.

I do have multiple source hosts and I am using the beats input, so sadly I cannot use this option either.

Hey @szurokl

Well, that sucks on both accounts. I did a quick search, but if you have to use Filebeat with syslog and can’t use Logstash, I’m not sure what to suggest. I’ve used nxlog before; just an idea.

Usually, switching to a JSON log format helps to parse even multiline messages.
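For example, with stock Log4j 2 the console appender in log4j2.properties could be switched to JsonLayout; a sketch (verify the layout options against the Log4j version OpenSearch ships):

```properties
appender.console.type = Console
appender.console.name = console
# JsonLayout escapes newlines inside string values, so each log event
# (stack trace included) becomes a single line of JSON
appender.console.layout.type = JsonLayout
appender.console.layout.compact = true
appender.console.layout.eventEol = true
```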

Did anyone say Syslog? :smiley: Sorry, I’m a bit of a syslog nerd :see_no_evil:

I’m curious where exactly you’re using syslog, because most syslog protocols and daemons will happily process multiline messages (more details here). But my thinking is:

  • @mkhl is right: JSON strings don’t allow literal newlines, so if you make OpenSearch log in JSON, it will escape newlines for free. Typically something like Kibana will un-escape them, so you can still see stack traces in all their glory
  • Filebeat can read the OpenSearch log (File input), then do the multiline stitching (see @Gsmitt’s reply) and forward the data to either Logstash or OpenSearch. Even to Kafka if you’re feeling fancy. So I’m not sure where syslog would be in this scenario
  • If you need to use syslog (e.g. to send from one syslog daemon to the next), then most syslog daemons can happily stitch multiline exceptions for you. I’m an rsyslog guy - so if that’s an option for you, look for the startmsg.regex option of the file input module. But maybe you didn’t know that most syslog daemons can do pretty complex parsing and enriching of data and send it to OpenSearch/Elasticsearch. I’m talking Logstash-levels of complex. Here’s an example: Recipe: Apache Logs + rsyslog (parsing) + Elasticsearch - Sematext
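To illustrate the startmsg.regex idea, here’s a minimal rsyslog imfile sketch (file path and tag are assumptions, adjust to taste):

```
module(load="imfile")

# Only lines starting with a "[YYYY-..." timestamp begin a new message;
# "\tat ..." continuation lines get merged into the previous message
input(type="imfile"
      File="/var/log/opensearch/opensearch.log"
      Tag="opensearch:"
      startmsg.regex="^\\[[0-9]{4}-")
```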

I’ll shut up for now, but if you need me, just say “syslog” :stuck_out_tongue_closed_eyes:

Thanks everyone for the suggestions!
I will check them out.

For some more context on my scenario:

We run a bunch of services, including OpenSearch, in Docker containers on a number of physical machines.

We use Docker’s journald logging driver, so the logs from the containers end up in the systemd journal.

We use rsyslog as well: its imjournal module pulls the logs from the journal, and rsyslog then forwards everything to the Filebeat syslog input.
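Roughly, the rsyslog part of that pipeline looks like this (target address and port are placeholders):

```
# Pull container logs from the systemd journal
module(load="imjournal" StateFile="imjournal.state")

# Forward everything to the Filebeat syslog input (placeholder address/port)
action(type="omfwd" target="127.0.0.1" port="9000" protocol="udp")
```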

I would try to simplify the pipeline. One idea is to have Docker log to your local rsyslog using the syslog driver. UDP is probably the best bet because it supports multiline messages and it doesn’t block if rsyslog is down or its queue is full or something.
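On the Docker side that could be as simple as a daemon.json sketch like this (the address is a placeholder; the same options can be set per container with --log-driver/--log-opt):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://127.0.0.1:514",
    "tag": "opensearch"
  }
}
```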

And then from rsyslog you can do the parsing/enriching and send to the destination (which can be OpenSearch, see my link above). If using a SaaS is an option, you might want to try Sematext Cloud (disclaimer: I work for Sematext). You’d use it as you’d write to Elasticsearch/OpenSearch, here’s a sample rsyslog config: Sematext Rsyslog Integration

We actually support OpenSearch (among others) out of the box, so you’ll get nice dashboards, you can set up alerts, etc: OpenSearch Logs Integration