The memory used bytes reported by the OpenSearch Stats API (/_nodes/stats/os?pretty) is much higher than what the kubectl top pods command reports.
Below is the response we got from the Stats API (/_nodes/stats/os?pretty):
"mem" : {
"total_in_bytes" : 4294967296,
"free_in_bytes" : 32768,
"used_in_bytes" : 4294934528,
"free_percent" : 0,
"used_percent" : 100
},
But when we checked memory usage with the kubectl top command, it reported only 2535Mi.
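For reference, the per-node values from the Stats API can be pulled with something like the following (a minimal sketch, assuming the cluster is reachable at http://localhost:9200 without authentication; adjust the URL and auth for your setup). The output can then be compared by hand with kubectl top pods:

```python
# Sketch: print per-node OS memory stats from the Nodes Stats API.
# Assumes http://localhost:9200 and no authentication (adjust as needed).
import json
import urllib.request

STATS_URL = "http://localhost:9200/_nodes/stats/os"

with urllib.request.urlopen(STATS_URL) as resp:
    stats = json.load(resp)

for node_id, node in stats["nodes"].items():
    mem = node["os"]["mem"]
    used_mib = mem["used_in_bytes"] / (1024 * 1024)
    total_mib = mem["total_in_bytes"] / (1024 * 1024)
    print(f'{node["name"]}: {used_mib:.0f}Mi used of {total_mib:.0f}Mi '
          f'({mem["used_percent"]}%)')
```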
AFAIK, what you see in the Stats API is what OpenSearch sees as the OS-level used memory, while the top output is specifically the memory used by the OpenSearch process. I think that explains why OpenSearch reports much more: it also sees the other containers on the same node.
I wouldn’t say that it needs to, but it’s nice to know when you’re running out of system memory. The Stats API doesn’t assume you’re running in a container; in a bare-metal or VM deployment it probably makes more sense. For a containerized environment, I guess you’ll get the same metrics from other sources.
@chirumanem the only hypothesis I have left is that you are using cgroups v2, could you confirm that? JVM support for cgroups v2 is coming [1] in the JDK 11.0.16 release.
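One way to confirm that (a minimal sketch, assuming you can run it inside the OpenSearch container or on the node): on cgroups v2 the unified hierarchy exposes /sys/fs/cgroup/cgroup.controllers, which does not exist on cgroups v1.

```python
# Sketch: detect whether this container/node is on cgroups v2.
# /sys/fs/cgroup/cgroup.controllers only exists in the cgroups v2
# unified hierarchy; it is absent on cgroups v1.
from pathlib import Path

def cgroup_version() -> int:
    return 2 if Path("/sys/fs/cgroup/cgroup.controllers").exists() else 1

if __name__ == "__main__":
    print(f"cgroups v{cgroup_version()}")
```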
Hi @radu.gheorghe,
As per the output below, it is clear that data-0, data-1, data-5 and data-6 are not sharing their nodes with other pods/containers, whereas data-2, data-3, data-4 and data-7 are.
Example 1: Below are the Stats API response and kubectl top pods output for data-1, whose node does not host other pods/containers:
Response from Stats API
eric-data-search-engine-data-1
"mem" : {
"total_in_bytes" : 4294967296,
"free_in_bytes" : 9699328,
"used_in_bytes" : 4285267968,
"free_percent" : 0,
"used_percent" : 100
},
Output from kubectl top pod
eric-data-search-engine-data-1 860m 2641Mi
Example 2: Below are the Stats API response and kubectl top pods output for data-3, whose node also hosts other pods/containers:
Response from Stats API
eric-data-search-engine-data-3
"mem" : {
"total_in_bytes" : 4294967296,
"free_in_bytes" : 28512256,
"used_in_bytes" : 4266455040,
"free_percent" : 1,
"used_percent" : 99
},
Output from kubectl top pod
eric-data-search-engine-data-3 888m 2622Mi
In both examples, the OpenSearch Stats API reports a much higher used_in_bytes value than kubectl top pods does.
Oh, sorry. I didn’t realize used_in_bytes also accounts for buffered/cached memory. So anything that’s memory-mapped will also show up there. Which makes this stat… not terribly useful, unless your index size is supposed to fit in your OS cache (and this way you can check).
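To illustrate the difference, here is a sketch based on /proc/meminfo. It assumes used_in_bytes roughly tracks MemTotal - MemFree (which includes buffers and page cache), while a working-set-style number such as kubectl’s is closer to MemTotal - MemAvailable:

```python
# Sketch: contrast "used" memory including buffers/cache (roughly what
# used_in_bytes reflects, assuming it is MemTotal - MemFree) with "used"
# memory excluding reclaimable cache (MemTotal - MemAvailable).
def read_meminfo() -> dict:
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are in kiB
    return info

m = read_meminfo()
used_with_cache = m["MemTotal"] - m["MemFree"]
used_without_cache = m["MemTotal"] - m["MemAvailable"]
print(f"used incl. cache: {used_with_cache // 1024} MiB")
print(f"used excl. cache: {used_without_cache // 1024} MiB")
```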
Also, this has nothing to do with containers. Here’s what I see on a VM:
Because I’m also monitoring it with Sematext Cloud (which collects most OS metrics from the OS itself, not the Stats API), I know for a fact that a big chunk of those 64GB are OS caches: