High memory usage on master nodes

Versions: OpenSearch 2.9.0

Java uses more memory than the allocated heap

I have 3 dedicated master nodes, 2 coordinator nodes, and 10 data nodes.
This is the heap allocation for the master nodes:

-Xms3g
-Xmx3g

I have a total of 6 GB of memory on the VM: 50% allocated to the Java heap, 50% left for the OS. Yet I am seeing 63% memory usage by Java:

2256083 opensea+ 20 0 7224776 3.5g 32 S 0.7 63.5 131:23.27 java

In my older Elasticsearch (5.x) cluster I run the master nodes with a 1 GB Java heap allocation and have had no problems.

These are some of the metrics from Prometheus:

FS IO Total Writes (one value per node, io/s)

Coordinator: 5M, 12M
Data: 307M, 185M, 183M, 258M, 241M, 179M, 131M, 180M, 117M, 180M
Master: 135M, 121M, 112M

FS IO Read (one value per node, io/s)

Coordinator: 339K, 397K
Data: 0, 9K, 9K, 9K, 9K, 9K, 9K, 9K, 33K, 9K
Master: 18M, 16M, 16M

While most of the memory used by the OpenSearch process is heap, there’s also some off-heap memory, such as metaspace and code cache.
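If you want to see that split for yourself, the node stats API exposes it. A quick sketch, assuming the cluster answers on localhost:9200 without security (adjust host and credentials to your setup):

curl -s "http://localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem&human&pretty"

heap_used vs. heap_max is the on-heap part; non_heap_used covers metaspace, the code cache and similar JVM overhead. Thread stacks and direct buffers come on top of that, which is why the resident size you see in top is always somewhat bigger than the heap.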

I see two options here:

  1. You can leave it as is; most Linux distros don't need more than 2.5 GB of memory to run OK, even if you run other small processes besides OpenSearch. With master/coordinator nodes you don't need the free space for OS caches that you normally need on data nodes, because there's no data there.
  2. You can try to reduce heap usage. To do that, I’d suggest monitoring OpenSearch (disclaimer - I work at Sematext and I helped with the OpenSearch integration in particular, so I’m biased in thinking that ours is the best, but we’re not the only ones for sure) and see how much heap is actually needed. In most clusters, you can make do with less than 3GB.

My rule of thumb is that if the lowest points of heap usage (i.e. after garbage collection) are under 50% of your heap, you can make do with less, because those low points are closer to what you really need. Otherwise you can give Java 100GB and it will use it all in time, delaying garbage collection.
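If you don't have that monitoring in place yet, a crude way to catch those low points is to sample heap usage periodically and note the dips right after collections. A rough sketch, again assuming localhost:9200 is reachable:

while true; do
  date
  curl -s "http://localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent"
  echo
  sleep 30
done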

The other problem, with master/coordinator nodes in particular, is that if you have a ton of traffic (for masters - cluster managers - that means events with lots of metadata changes, such as a full cluster restart), GC may fall behind. In that case you'll probably need to tweak your GC settings. The last link gives a lot of generic info, but I also wrote a blog post a while ago specifically about the workload of masters (cluster managers): Garbage Collection Settings for Elasticsearch Master Nodes
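If it comes to that, the GC flags go into the JVM options. Just as an illustration of the kind of tweak I mean (the file name is made up, the values are not a recommendation, and the bundled jvm.options may already set something similar, so check what you have before changing anything):

config/jvm.options.d/master-gc.options (hypothetical file, restart the node after creating it):
# start G1 mixed collections earlier, so bursts of cluster-state churn
# get cleaned up before the heap fills up
-XX:InitiatingHeapOccupancyPercent=30
# keep a bigger reserve to ride out those bursts
-XX:G1ReservePercent=25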

Should I just add more RAM, or would I be masking a possible problem? Like I mentioned, with ES I had no problems. Could it be down to the Java that I am running, the one bundled with OpenSearch?

openjdk version "17.0.7" 2023-04-18
OpenJDK Runtime Environment Temurin-17.0.7+7 (build 17.0.7+7)
OpenJDK 64-Bit Server VM Temurin-17.0.7+7 (build 17.0.7+7, mixed mode, sharing)
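(If it helps, the JVM each node is actually running can be double-checked via the nodes info API; rough sketch, assuming localhost:9200:

curl -s "http://localhost:9200/_nodes/jvm?filter_path=nodes.*.name,nodes.*.jvm.version,nodes.*.jvm.vm_name&pretty"

That should match the bundled JDK unless OPENSEARCH_JAVA_HOME points somewhere else.)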

I am attaching the Grafana monitors for the node in question to give you a better idea. Maybe there is something I am not seeing?

It looks like you have plenty of heap (usage drops below 20% very often). I think you can make do with 2GB of heap just fine. I wouldn’t add more RAM.

I think your Java version is fine (at least in my experience).

Where do you think my problem is? Is there something that isn't being released from the Java cache?

I think your problem is that some memory that used to be on-heap is now off-heap. So if you reduce your heap size, your overall memory usage should get close to where it used to be.
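If you want to try it, on your masters that would just mean dropping the heap from 3 GB to 2 GB and restarting them one at a time; with your setup that's the same two lines you already have, e.g. in config/jvm.options (or a file under config/jvm.options.d/), keeping the two values equal as you do now:

-Xms2g
-Xmx2g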

Sounds good, I will give that a try
