While most of the memory used by the OpenSearch process is heap, there’s also some off-heap memory, such as metaspace and code cache.
I see two options here:
- You can leave it as is: most Linux distros don't need more than 2.5GB of memory to run fine, even if you run other small processes besides OpenSearch. With master/coordinator nodes, you don't need the free space for OS caches that you normally need on data nodes, because there's no data to cache there.
- You can try to reduce heap usage. To do that, I'd suggest monitoring OpenSearch (disclaimer: I work at Sematext and I helped with the OpenSearch integration in particular, so I'm biased in thinking that ours is the best, but we're certainly not the only option) and seeing how much heap is actually needed. In most clusters, you can make do with less than 3GB.
My rule of thumb is that if the lowest points of heap usage (i.e. right after garbage collection) are under 50% of your heap, you can make do with less, because those low points are closer to what you really need. Peak usage isn't a useful signal: you can give Java 100GB and it will use it all in time, just delaying garbage collection.
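If you want to eyeball those low points without a full monitoring setup, here's a minimal sketch that polls the node stats API and tracks the lowest heap usage seen per node. It assumes an unauthenticated cluster at http://localhost:9200 and a 10-second sampling interval, both of which you'd adjust for your environment:

```python
# Minimal sketch: poll heap usage and track the post-GC low points.
# Assumes OpenSearch is reachable at http://localhost:9200 without auth;
# adjust the URL (and add credentials) for your cluster.
import json
import time
import urllib.request

STATS_URL = "http://localhost:9200/_nodes/stats/jvm"

lows = {}  # node name -> lowest heap_used_percent seen so far

for _ in range(60):  # sample for ~10 minutes at 10s intervals
    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.load(resp)
    for node in stats["nodes"].values():
        name = node["name"]
        used = node["jvm"]["mem"]["heap_used_percent"]
        lows[name] = min(lows.get(name, used), used)
    time.sleep(10)

# If the low points stay well under 50%, the heap is likely oversized.
for name, low in sorted(lows.items()):
    print(f"{name}: lowest heap usage seen = {low}%")
```

The longer you let it run (ideally across a few GC cycles under real traffic), the more trustworthy those low points are.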
The other problem, specific to master/coordinator nodes, is that under a ton of traffic (for masters, i.e. cluster managers, that means events generating lots of metadata changes, such as a full cluster restart), GC may fall behind. In this case you'll probably need to tweak your GC settings. The last link gives a lot of generic info, but I also wrote a blog post a while ago specifically about the workload of masters (cluster managers): Garbage Collection Settings for Elasticsearch Master Nodes
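Before tweaking anything, it helps to confirm that GC is actually falling behind. The node stats API exposes cumulative GC counts and times, so a rough check (again just a sketch, with the same http://localhost:9200 assumption as above) is to sample old-generation GC time twice and see what fraction of wall-clock time goes to it:

```python
# Minimal sketch: estimate how much wall-clock time each node spends in
# old-generation GC, a rough signal that GC is falling behind.
# Assumes the same unauthenticated http://localhost:9200 endpoint as above.
import json
import time
import urllib.request

STATS_URL = "http://localhost:9200/_nodes/stats/jvm"

def old_gc_millis():
    """Return cumulative old-gen GC time (ms) per node name."""
    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.load(resp)
    return {
        node["name"]: node["jvm"]["gc"]["collectors"]["old"]["collection_time_in_millis"]
        for node in stats["nodes"].values()
    }

INTERVAL = 60  # seconds between the two samples
before = old_gc_millis()
time.sleep(INTERVAL)
after = old_gc_millis()

for name in sorted(after):
    delta_ms = after[name] - before.get(name, after[name])
    pct = 100.0 * delta_ms / (INTERVAL * 1000)
    print(f"{name}: {pct:.2f}% of the last {INTERVAL}s spent in old-gen GC")
```

If a node spends more than a few percent of its time in old-gen collections, that's when the GC tuning from the blog post is likely to pay off.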