How to allocate shards evenly across nodes for two main indexes

There are 190 NVMe disks in the OpenSearch cluster. Each NVMe disk is used by one OpenSearch data-node Docker instance, so there are 190 data nodes in the cluster.

About 70 data nodes sit at 100% CPU usage most of the time. Is this what is called “shard skew”?
There are two major indexes for massive logs:
filebeat-log-a: 90 shards and 4.5 TB per day.
filebeat-log-b: 60 shards and 3 TB per day.

Is there a way to place the shards of these two indexes on different data nodes (NVMe disks)? If so, the load would be spread evenly across the data nodes.

It’s very challenging to pick the right shard count for these indexes. To leverage all the NVMe disks and data-node resources, having 190 shards per index might be ideal, but then the total would exceed 7,200 shards within 20 days (2 indexes × 190 shards × 20 days = 7,600). We keep these indexed logs for 30~60 days, and I recall that a total shard count in the thousands can lead to performance degradation.

Note: new log indexes are created at 00:00 UTC.

Regards // Hugo

Hi @hugok

Is there a way to place the shards of these two indexes on different data nodes (NVMe disks)? If so, the load would be spread evenly across the data nodes.

I think Index-level shard allocation filtering is what you are looking for:
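As a concrete sketch of that approach, assuming daily indices matching filebeat-log-a-* and a made-up attribute name index_group: tag the reserved nodes in opensearch.yml, then require the attribute through an index template so each new daily index lands only on those nodes:

```
# opensearch.yml on the nodes reserved for filebeat-log-a
# (the attribute name "index_group" is a placeholder):
#   node.attr.index_group: log-a

# Index template so every new daily index requires those nodes:
PUT _index_template/filebeat-log-a
{
  "index_patterns": ["filebeat-log-a-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 90,
      "index.routing.allocation.require.index_group": "log-a"
    }
  }
}
```

Repeating the same pattern with index_group: log-b on the remaining nodes keeps the two indexes on disjoint sets of NVMe disks.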

If you are using replicas, then choosing 95 shards with 1 replica will make the shards evenly balanced.
Otherwise, you need to add a node attribute indicating which index you would like allocated on that node, and then use what @reta suggested.
You could also use ISM-based policies to update the index allocation routing settings so that warm indices are allocated appropriately (see the sketch after this list).
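For the ISM route in the last point, a rough sketch of such a policy (the attribute name "temp", the value "warm", and the 7d threshold are placeholders):

```
PUT _plugins/_ism/policies/move_to_warm
{
  "policy": {
    "description": "Re-allocate older indices to warm nodes (sketch)",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "warm", "conditions": { "min_index_age": "7d" } }
        ]
      },
      {
        "name": "warm",
        "actions": [
          { "allocation": { "require": { "temp": "warm" } } }
        ],
        "transitions": []
      }
    ]
  }
}
```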

Thanks @jathin12 and @reta

choosing 95 shards with 1 replica

This doesn’t work for two indexes (roughly 100 replica shards each) on 190 data nodes; the shard allocations of the two indexes would overlap.

Index-level shard allocation filtering

This seems to work. I can have 100 nodes with node.attr.index1 and another 90 with node.attr.index2.
The only challenge is shard size: if an index1 shard is around 30 GB while an index2 shard is around 18 GB, disk usage across the 190 NVMe disks will be uneven.

Problem 1: The data nodes with node.attr.index1 will have higher overall disk usage after a few days. I’m thinking about how to prevent that. Still, this is a good suggestion.

Problem 2:
Is it possible to specify different node.attr values for data nodes managed by a k8s Helm chart? I guess it’s not.
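If it is possible, I imagine it would take something like one Helm release per node group — a rough sketch (the chart’s config key and every name below are guesses on my part, based on the opensearch-project chart):

```yaml
# values-log-a.yaml — data nodes dedicated to filebeat-log-a
nodeGroup: data-log-a
replicas: 100
config:
  opensearch.yml: |
    node.attr.index_group: log-a

# values-log-b.yaml would mirror this with nodeGroup: data-log-b,
# replicas: 90, and node.attr.index_group: log-b
```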

Hugo

We moved away from daily indices and use ISM policies to get almost equal-sized indices. An ISM policy can roll an index over automatically when it reaches a certain size/age. Since we use 1 replica, I tend to set the number of primaries to 50% of the number of nodes, so that every node holds either a primary or a replica. If you do that for all your indices, every node ends up with more or less equal indexing load and disk usage.
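As a rough sketch of what such a rollover policy looks like (the policy name, alias, and thresholds below are examples, not our actual values):

```
PUT _plugins/_ism/policies/rollover_logs
{
  "policy": {
    "description": "Roll over at a target size or age (sketch)",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [
          { "rollover": { "min_size": "2tb", "min_index_age": "1d" } }
        ],
        "transitions": []
      }
    ]
  }
}

# The write index needs a rollover alias, e.g.:
PUT filebeat-log-a-000001
{
  "aliases": { "filebeat-log-a": { "is_write_index": true } },
  "settings": {
    "plugins.index_state_management.rollover_alias": "filebeat-log-a"
  }
}
```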

I suggested index-level shard allocation filtering because I assumed you were not using ISM rollover initially.