Default OS memory consumption

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

OpenSearch 1.2.4

Describe the issue:
Could you please explain the default memory consumption of OpenSearch with log insertion at 1000 TPS?

Total log throughput at the data container:

| Log throughput (events/sec, avg 500 bytes/event) | Replica |
| --- | --- |
| 1000 | ingest 1, data 2, master 3 |
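Just to put that rate in context, it works out to roughly 0.5 MB/s of raw log data, or about 40 GiB per day before replica copies and indexing overhead (rough figures derived from the table above):

```
# Back-of-the-envelope figures only; ignores replica copies and index overhead.
$ echo "$((1000 * 500)) bytes/sec"                             # 500000 bytes/sec ≈ 0.5 MB/s
$ echo "$((1000 * 500 * 86400 / 1024 / 1024 / 1024)) GiB/day"  # ≈ 40 GiB of raw data per day
```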

I have set the memory limit of my data container to 2 GB.

Since memory consumption is almost at the limit, the graph shows about 95% usage.

1. I stopped the input from Logstash and deleted the indices, and found that the memory usage dropped to nearly 60%. At the same time, the memory reported by `kubectl top` is still nearly 1.5 GB:

```
kubectl top pod data-0 -n perftest
NAME     CPU(cores)   Memory(bytes)
data-0   36m          1493Mi
```

Could you please explain the behavior here?

I assume that the operating system used memory for caching. Once you removed data, some of that cache memory was freed up.
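One quick way to see the split is to compare the JVM heap with what OpenSearch reports as OS memory, for example via `_cat/nodes`. This is just a sketch; it assumes the cluster is reachable on localhost:9200 from inside the pod and that security is disabled (otherwise add credentials):

```
# heap.current / heap.percent  -> JVM heap actually in use by OpenSearch
# ram.current / ram.percent    -> OS memory in use, which on Linux generally
#                                 includes the page cache holding index files
curl -s "http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,ram.current,ram.percent"
```

If the heap is well below the pod limit while ram.percent sits around 95%, that is the page cache at work rather than a leak.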

You should see this if you do something like `free -m` on the box. Or you can get more detailed memory metrics via something like Sematext Cloud (disclosure: I work for Sematext), especially since it sounds like you’re using Kubernetes and we have good support for both Kubernetes and OpenSearch (running in k8s).
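For illustration, the output would look something like this (made-up numbers for a box with 2 GB of RAM; note that `free` inside a container usually reports the host’s memory, not the cgroup limit):

```
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           2048        1100         150           0         798         850
Swap:             0           0           0
```

Here `used` is mostly the JVM heap, while `buff/cache` is page cache that the kernel frees automatically under memory pressure, which lines up with the usage dropping once you deleted the indices.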


Thanks @radu.gheorghe.
