We are running out of space on /tmp on the Elasticsearch servers:
-rw-r--r-- 1 elasticsearch elasticsearch 2305568768 Aug 3 12:36 performance_analyzer_agent_stats.log
-rw-r--r-- 1 elasticsearch elasticsearch 15613952 Aug 3 12:37 PerformanceAnalyzer.log
In particular, performance_analyzer_agent_stats.log currently uses 2.2 GB (leaving almost nothing free).
I looked into the file. The first entry is from
EndTime=Wed, 29 Jan 2020 11:39:29 CET
So it seems the log is never rotated. Since the log lives in /tmp, I would not expect it to grow to this size at all.
What can I do to get rid of this log (it does not seem to contain any data useful for normal operations), or to have it rotated/truncated appropriately?
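As a stopgap until the agent rotates its own logs, a logrotate rule can cap the file size. This is a minimal sketch, not a confirmed fix: the paths, ownership, and the 100M threshold are assumptions for this installation. `copytruncate` is used because the agent keeps the file handle open, so renaming or deleting the file would not free the space until the process is restarted:

```
# /etc/logrotate.d/performance-analyzer (hypothetical file name)
/tmp/performance_analyzer_agent_stats.log /tmp/PerformanceAnalyzer.log {
    size 100M          # rotate once a file exceeds 100 MB (assumed threshold)
    rotate 3           # keep at most 3 rotated copies
    compress           # gzip the rotated copies
    missingok          # don't error if a file is absent
    notifempty         # skip rotation when the file is empty
    copytruncate       # copy, then truncate in place, since the agent holds the file open
}
```

Dropped into /etc/logrotate.d/, this runs with the system's regular logrotate schedule. Note that `copytruncate` can lose a few lines written between the copy and the truncate, which seems acceptable for a stats log.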
We still have this problem and have to manually delete the log and restart the whole cluster.
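As an interim workaround that avoids the restart, the file can be truncated in place instead of deleted: deleting a file that a process still holds open does not free the disk space until that process exits, whereas truncation frees it immediately and the writer keeps appending to the same inode. A small demonstration on a scratch file (substitute the real /tmp/performance_analyzer_agent_stats.log path in production):

```shell
# Demonstration with a scratch file; in production, point LOG at
# /tmp/performance_analyzer_agent_stats.log instead.
LOG=$(mktemp)
printf 'old log data\n' > "$LOG"

# Truncate in place: the file keeps its inode, so a process that
# has it open in append mode keeps writing without a restart.
truncate -s 0 "$LOG"

stat -c %s "$LOG"   # prints 0
rm -f "$LOG"
```

This only reclaims space once; it does not stop the log from growing again, so rotation is still the real fix.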
Is there any progress or new information on this? When will it be fixed? (Or is a configuration change possible?)
I would also like to bring attention to this issue. Is there documentation somewhere that would help us configure automatic rotation of the performance analyzer logs?
I'm also wondering whether these logs are even needed if we have /dev/shm/performanceanalyzer.