Is Opensearch free and what is its memory requirement?

I am new and exploring OpenSearch. I see that there are two ways: 1. Self-hosted 2. AWS OpenSearch.

Can someone help me understand the pricing for Self-hosted and memory requirements in comparison to AWS hosted and Elasticsearch?


Hello @ishat. Yes, OpenSearch is free (Apache 2.0 licensed). But keep in mind that, compared to Elastic, it requires manual configuration in areas where Elastic handles things out of the box. I can’t tell you the AWS pricing, but with the self-hosted option it all depends on how much you can spend on hardware. Do you have more specific questions?


Thanks a lot! We are planning to build a log-analytics use case with OpenSearch. We will be using Linux CentOS 8 and Java 11 LTS. Log files will be passed to Fluent Bit, then to Data Prepper, which will forward them to OpenSearch.
We are planning to go basic (minimum hardware): 1 cluster, 2 nodes, with 3 copies of each shard (1 primary + 2 replicas). How do you calculate the memory requirement for this use case?
I saw that OpenSearch uses 4 GB of RAM via its settings, and that 2 instances of Fluent Bit use about 72 MB according to their website. I am unsure about the rest of the memory requirement. Could you please help me understand how to calculate it?
I also read: “By default, OpenSearch Service uses 50% of an instance’s RAM for the JVM heap, up to a heap size of 32 GiB.”
OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m - sets the size of the Java heap (we recommend half of system RAM).
And for Linux: ensure vm.max_map_count is set to at least 262144, per the documentation.
“We strongly recommend maintaining a minimum of 30% free disk space in normal operations. This is to ensure there is sufficient available space for OpenSearch to perform operations.”
Could you please help with memory requirement calculations?
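The settings quoted above can be sketched as a small host-prep script. This is an illustrative sketch, not an official install procedure: the 8 GB machine size is an assumed example, and the sysctl/persistence steps are shown as comments because they need root and distribution-specific paths.

```shell
# Sketch: preparing a host for a self-hosted OpenSearch node,
# based on the settings quoted above. Example machine size assumed.

# 1. Raise vm.max_map_count to the documented minimum (requires root):
#      sysctl -w vm.max_map_count=262144
#    and persist it in /etc/sysctl.conf.

# 2. Size the JVM heap to half of system RAM, capped at 32 GB:
total_ram_gb=8                         # assumed example machine
heap_gb=$(( total_ram_gb / 2 ))
if [ "$heap_gb" -gt 32 ]; then heap_gb=32; fi
export OPENSEARCH_JAVA_OPTS="-Xms${heap_gb}g -Xmx${heap_gb}g"
echo "$OPENSEARCH_JAVA_OPTS"
```

On an 8 GB host this yields `-Xms4g -Xmx4g`, matching the “half of system RAM” recommendation.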

Thanks a lot! We are planning to build a log-analytics use case with OpenSearch. We will be using Linux CentOS 8 and Java 11 LTS. Log files will be passed to Fluent Bit, then to Data Prepper, which will forward them to OpenSearch.

Okay, I won’t comment on this use case. We are using Ubuntu 20.04 + Elastic Beats OSS + Logstash with the OpenSearch output plugin (mainly Filebeat). For now.

We are planning to go basic (minimum hardware): 1 cluster, 2 nodes, with 3 copies of each shard (1 primary + 2 replicas). How do you calculate the memory requirement for this use case?
I saw that OpenSearch uses 4 GB of RAM via its settings, and that 2 instances of Fluent Bit use about 72 MB according to their website. I am unsure about the rest of the memory requirement. Could you please help me understand how to calculate it?

It all depends on how much data you plan to store and for how long. But your calculation is strange. You have 2 nodes — are they both master and data nodes? What are their specs?

  • You can store about 20 shards per 1 GB of JVM heap.
  • If you have, for example, 64 GB of RAM, then the JVM heap is at most 32 GB. So do the math.
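The rule of thumb above can be sketched as arithmetic. This is only an illustration of the stated rule (~20 shards per GB of heap, heap = half of RAM capped at 32 GB); the 64 GB figure is the example from this thread.

```shell
# Sketch of the shard-capacity rule of thumb from this thread.
ram_gb=64                              # example node from the thread
heap_gb=$(( ram_gb / 2 ))              # heap = half of RAM...
if [ "$heap_gb" -gt 32 ]; then heap_gb=32; fi   # ...capped at 32 GB
max_shards=$(( heap_gb * 20 ))         # ~20 shards per GB of heap
echo "heap=${heap_gb}GB max_shards=${max_shards}"
```

So a 64 GB node works out to a 32 GB heap and roughly 640 shards at most.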

Here is an example of our cluster:
2 Dashboards
3 Masters
12 Data
1 Cold (For snapshot restore)
1 ML (In future)

1 Data node = 64 GB RAM (32 GB JVM heap) + 500 GB free disk space
1 index is capped at 120 GB (that is the maximum I limited the index to, based on the calculation: if 1 shard is at most 50 GB, we have a maximum of 480 GB)
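Combining the disk-side rules mentioned in this thread (keep at least 30% of the disk free, cap each shard at roughly 50 GB) gives a quick per-node estimate. This is a sketch with illustrative numbers, not the poster’s exact calculation.

```shell
# Sketch: how many ~50 GB shards fit on one data node,
# after reserving 30% of the disk as free space (thread guidelines).
disk_gb=500                            # free disk per data node (example above)
usable_gb=$(( disk_gb * 70 / 100 ))    # reserve 30% as free space
shard_cap_gb=50                        # recommended max shard size
shards_per_node=$(( usable_gb / shard_cap_gb ))
echo "usable=${usable_gb}GB shards_per_node=${shards_per_node}"
```

With 500 GB per node that leaves about 350 GB usable, i.e. around 7 full-size shards per node under these assumptions.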


Wow, this is really helpful! Thanks a lot 🙂

@ishat Here you can calculate your cluster:
https://gbaptista.github.io/elastic-calculator/
