While most of the memory used by the OpenSearch process is heap, there’s also some off-heap memory, such as metaspace and code cache.
I see two options here:
You can leave it as is; most Linux distros don't need more than 2.5GB of memory to run OK, even if you run a few other small processes besides OpenSearch. On master/coordinator nodes you don't need the free memory for OS caches that you normally want on data nodes, because there's no data to cache there.
You can try to reduce heap usage. To do that, I'd suggest monitoring OpenSearch (disclaimer: I work at Sematext and I helped with the OpenSearch integration in particular, so I'm biased in thinking that ours is the best, but we're definitely not the only ones) and seeing how much heap is actually needed. In most clusters, you can make do with less than 3GB.
My rule of thumb is that if the lowest points of heap usage (i.e. right after garbage collection) are under 50% of your heap, you can make do with less, because those low points are closer to what you really need. Otherwise, you could give Java 100GB and it would eventually use it all, delaying garbage collection.
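If you don't have full monitoring handy, here's a minimal sketch of that check. It assumes a cluster reachable at http://localhost:9200 with security disabled (adjust the URL and add auth if needed), and the sampling interval and duration are arbitrary:

```python
import json
import time
import urllib.request

# Assumed endpoint; change for your cluster / add credentials as needed
STATS_URL = "http://localhost:9200/_nodes/stats/jvm"

def heap_used_percent():
    """Return {node_name: heap_used_percent} for every node in the cluster."""
    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.load(resp)
    return {
        node["name"]: node["jvm"]["mem"]["heap_used_percent"]
        for node in stats["nodes"].values()
    }

# Poll for a while and keep the lowest value seen per node. The troughs
# (right after garbage collection) approximate the heap you actually need:
# if they stay under ~50%, the node can likely run with a smaller heap.
troughs = {}
for _ in range(60):  # ~10 minutes at one sample every 10 seconds
    for name, pct in heap_used_percent().items():
        troughs[name] = min(troughs.get(name, 100), pct)
    time.sleep(10)

for name, pct in sorted(troughs.items()):
    print(f"{name}: lowest heap usage seen: {pct}%")
```

A proper monitoring tool will give you the same picture with more context (GC times, memory pool breakdown), but the troughs alone are enough for this rule of thumb.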
The other problem, with master/coordinator nodes in particular, is that if you have a ton of traffic (for masters - cluster managers - that means events with lots of metadata changes, such as a full cluster restart), GC may fall behind. In that case you'll probably need to tweak your GC settings. The last link gives a lot of generic info, but I also wrote a blog post a while ago specifically for the workload of masters (cluster managers): Garbage Collection Settings for Elasticsearch Master Nodes
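For context, GC settings (like heap size) live in config/jvm.options. The flags below are real G1 options, but the values are generic illustrations to show what tuning looks like, not the specific recommendations from that post:

```
## config/jvm.options - illustrative G1 settings only, not tuned advice
-XX:+UseG1GC
## start concurrent marking earlier (default is 45%), so G1 has headroom
## during bursts of metadata changes, e.g. a full cluster restart
-XX:InitiatingHeapOccupancyPercent=30
## keep more heap in reserve (default is 10%) to absorb allocation spikes
-XX:G1ReservePercent=25
```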
Should I just add more RAM, or would I be masking a possible problem? Like I mentioned, with ES I had no problem. Could it maybe be the Java that I'm running, the one bundled with OpenSearch?
openjdk version "17.0.7" 2023-04-18
OpenJDK Runtime Environment Temurin-17.0.7+7 (build 17.0.7+7)
OpenJDK 64-Bit Server VM Temurin-17.0.7+7 (build 17.0.7+7, mixed mode, sharing)
I am attaching Grafana monitors for the node in question, to give you a better idea. Maybe there is something I am not seeing?
It looks like you have plenty of heap (usage drops below 20% very often). I think you can make do with 2GB of heap just fine. I wouldn’t add more RAM.
I think your Java version is fine (at least in my experience).
I think that your problem is that some memory that used to be on the heap is now allocated off-heap. So if you reduce your heap size, your total memory usage should get back close to where it used to be.
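If you do shrink the heap, it's set in config/jvm.options (or via the OPENSEARCH_JAVA_OPTS environment variable); the 2g value below is just a guess based on your graphs, so adjust it to your own troughs:

```
## config/jvm.options - set min and max heap to the same value
## to avoid resizing pauses; 2g is a guess from your Grafana graphs
-Xms2g
-Xmx2g
```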