What happens when you go over 32GiB of heap memory?

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
AWS OpenSearch Service (Managed Cluster) OpenSearch_2_9_R20230926-P1

Describe the issue:
Elasticsearch/OpenSearch recommend not setting the heap size over 32GiB, because a pointer optimization called compressed ordinary object pointers (compressed oops) is only available when the heap is under roughly 32GiB (1) (2).
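On a self-managed node (the heap is not directly editable on the AWS managed service), one way to stay under the cutoff and verify that the JVM actually enabled compressed oops is via jvm.options; this is a sketch, and the 31g value is just a commonly used size safely below the cutoff:

```
# jvm.options — sketch for a self-managed OpenSearch node
# Keep initial and max heap equal, and below the compressed-oops cutoff
-Xms31g
-Xmx31g
# JDK 9+ unified logging: prints the compressed oops mode at startup
-Xlog:gc+heap+coops=debug
```

With the logging flag set, the JVM prints a "Compressed Oops mode" line at startup, which confirms whether the optimization is in effect for the chosen heap size.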

However, I have found little discussion of exactly how large the impact on CPU and memory is. Does anyone have insights into this?

We are deciding between having 12 very large nodes (e.g. 32 CPUs, 128-256 GiB memory) or having 24 smaller nodes (16 CPUs, 64 GiB memory). If we go with the large nodes, we would want to go over the 32GiB soft limit.
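Under one rough model of the trade-off, the two options can be compared by their compressed-oops-equivalent heap. The key assumption (the reference fraction `F`) is hypothetical; real workloads vary widely:

```python
# Rough sizing sketch: effective (compressed-oops-equivalent) heap per option.
# ASSUMPTION: a fraction F of heap bytes are object references; with compressed
# oops they take 4 bytes, without they take 8, so an uncompressed heap of H GiB
# holds roughly the same objects as a compressed heap of H / (1 + F) GiB.
F = 0.4  # hypothetical reference fraction, not a measured value

def effective_heap_gib(heap_gib: float, compressed: bool, f: float = F) -> float:
    """Compressed-oops-equivalent capacity of a single node's heap."""
    return heap_gib if compressed else heap_gib / (1 + f)

# Option A: 24 smaller nodes, ~31.5 GiB heap each (compressed oops enabled)
option_a = 24 * effective_heap_gib(31.5, compressed=True)
# Option B: 12 large nodes, 48 GiB heap each (over the cutoff, plain oops)
option_b = 12 * effective_heap_gib(48.0, compressed=False)

print(f"Option A: {option_a:.0f} GiB effective")
print(f"Option B: {option_b:.0f} GiB effective")
```

Under this toy model the smaller nodes come out ahead on effective heap, though the comparison ignores CPU, shard distribution, and failure-domain considerations.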

Hi @ssanti,

If my understanding is correct, a heap above 32GiB is indeed affected by the loss of pointer compression. The penalty is most noticeable if you raise the heap only slightly, to say 34-35GiB; to come out ahead, the jump should be larger, into the 45-48GiB range.
Both of your options above are valid and you should pick one based on your use case.


N.B. I found an answer on the Elasticsearch site:

Once you cross that magical ~32 GB boundary, the pointers switch back to ordinary object pointers. The size of each pointer grows, more CPU-memory bandwidth is used, and you effectively lose memory. In fact, it takes until around 40–50 GB of allocated heap before you have the same effective memory of a heap just under 32 GB using compressed oops.
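The 40–50 GB break-even quoted above can be reproduced with back-of-the-envelope arithmetic. The reference fraction is an assumption, not a measurement, but plausible values land in the quoted range:

```python
# Back-of-the-envelope check of the ~40-50 GB break-even quoted above.
# ASSUMPTION: a fraction f of heap bytes are object references; dropping
# compressed oops doubles each reference from 4 to 8 bytes, so the heap must
# grow by roughly a factor of (1 + f) to hold the same object graph.
def break_even_heap_gib(compressed_heap_gib: float, ref_fraction: float) -> float:
    """Plain-oops heap size with roughly the same effective capacity."""
    return compressed_heap_gib * (1 + ref_fraction)

# With 30-50% of the heap spent on references, the break-even for a heap
# just under 32 GiB lands in the quoted 40-50 GB range.
for f in (0.3, 0.4, 0.5):
    print(f"ref fraction {f:.0%}: break-even ~= {break_even_heap_gib(31.5, f):.1f} GiB")
```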

There are some additional blog posts related to this, such as A Heap of Trouble: Managing Elasticsearch's Managed Heap | Elastic Blog