Max shards per node clarification

I understand that having too many shards on a node can cause performance degradation, since OpenSearch needs to store metadata for each shard, but reading the docs it’s not clear what the actual limit is: Choosing the number of shards - Amazon OpenSearch Service
The docs say that the max_shards_per_node setting is configured to 1,000 for OpenSearch up to 2.15, and to 1,000 per 16 GiB of data node heap for 2.17. But the next paragraph says you shouldn’t have more than 25 shards per GiB of Java heap, which gives only 400 shards for a 16 GiB heap, significantly less than the 1,000 limit mentioned above.
So what’s the actual limit that we should consider?
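For context, here is a minimal sketch of how the effective value of max_shards_per_node can be read from the cluster settings API (the endpoint at localhost:9200 and the use of the `requests` library are assumptions for illustration; adjust host and auth for your domain):

```python
import requests

# Fetch cluster.max_shards_per_node, including the default value
# when it has not been explicitly overridden.
resp = requests.get(
    "http://localhost:9200/_cluster/settings",
    params={
        "include_defaults": "true",
        "filter_path": "**.cluster.max_shards_per_node",
    },
)
resp.raise_for_status()
print(resp.json())
# e.g. {'defaults': {'cluster': {'max_shards_per_node': '1000'}}}
```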

Hi @abize,

I would read it as: 1,000 is the hard limit, while “no more than 25 shards per GiB of Java heap” is the recommended best practice.
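To make that distinction concrete, here is a small sketch using the numbers from the docs you quoted (`heap_gb` is just an illustrative parameter, not an API value):

```python
def shard_limits(heap_gb: float) -> dict:
    """Contrast the hard cap with the recommended ceiling for one data node.

    hard_limit:  cluster.max_shards_per_node -- creating an index fails once
                 the cluster-wide shard count would exceed this per node.
    recommended: the "25 shards per GiB of heap" best practice from the docs.
    """
    return {
        "hard_limit": 1000,             # default cluster.max_shards_per_node
        "recommended": int(25 * heap_gb),
    }

print(shard_limits(16))  # {'hard_limit': 1000, 'recommended': 400}
```

So a 16 GiB heap hits the recommended ceiling (400) long before the hard limit (1,000) would ever reject an index creation.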

Here are some other recommendations:

"For logging, shard sizes between 10 and 50 GB usually perform well.

For search operations, 20-25 GB is usually a good shard size.

Another rule of thumb takes into account your overall heap size. You should aim for having 20 shards per GB of heap."
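If it helps, here is a rough sizing sketch that applies those rules of thumb together (all inputs and the default shard-size target are illustrative assumptions, not settings from any API):

```python
def suggest_shards(total_data_gb: float, heap_gb: float,
                   target_shard_gb: float = 30.0) -> dict:
    """Apply the quoted rules of thumb to a back-of-envelope estimate.

    target_shard_gb: 10-50 GB works for logging, ~20-25 GB for search.
    The heap rule caps the count at 20 shards per GB of heap per node.
    """
    by_size = max(1, round(total_data_gb / target_shard_gb))
    heap_cap = int(20 * heap_gb)
    return {
        "shards_by_size": by_size,
        "heap_cap_per_node": heap_cap,
        "within_heap_rule": by_size <= heap_cap,
    }

# e.g. 1 TB of log data on a single data node with a 16 GB heap:
print(suggest_shards(1024, 16))
# {'shards_by_size': 34, 'heap_cap_per_node': 320, 'within_heap_rule': True}
```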

Best,
mj