Difference between the current index size and the size reported by the ISM policy

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch version: opensearch-2.10.0-1

Describe the issue:
We have configured an ISM policy with the following conditions: if the primary shard is bigger than 30 GB or the index is older than 31 days, roll over the index. I would say it is working fine, but we see a discrepancy between the size reported by the policy and the current primary shard size. The former reports 28 GB (see screenshot), while the _cat/shards API reports 33.3 GB (see below).
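
For reference, the rollover part of the policy looks roughly like this (a minimal sketch, not our exact policy; the policy name is a placeholder, and min_primary_shard_size/min_index_age carry the thresholds mentioned above):

    PUT _plugins/_ism/policies/lsdf-storage-rollover
    {
      "policy": {
        "description": "Roll over when the primary shard exceeds 30 GB or the index is 31 days old",
        "default_state": "hot",
        "states": [
          {
            "name": "hot",
            "actions": [
              {
                "rollover": {
                  "min_primary_shard_size": "30gb",
                  "min_index_age": "31d"
                }
              }
            ],
            "transitions": []
          }
        ]
      }
    }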

lsdf-storage-2024.10-000001 0     p      STARTED 73407567 33.3gb XXXX:XXXX:X:XXXX::XX elastic-XXX-04
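
The line above was produced with a _cat/shards query along these lines (the column selection is just what we find convenient):

    GET _cat/shards/lsdf-storage-2024.10-000001?v&h=index,shard,prirep,state,docs,store,ip,node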

Has anyone observed a similar discrepancy? I know that the policies are checked every 10 minutes, but a gap this large cannot be explained by a 10-minute evaluation delay.
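
In case it helps anyone digging into the same thing, the ISM explain API shows how the policy is progressing for the index (current state, action, and step information):

    GET _plugins/_ism/explain/lsdf-storage-2024.10-000001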

Thanks in advance!
Samuel

Hi,

Taking a look at the primary shard size today, I see that it is 30 GB and the index was rolled over as expected. I think the discrepancy between the shard size reported by the _cat/shards API and the size seen by the ISM policy lies somewhere in the internal segment merging performed by OpenSearch: while segments are being merged, the old and new segments coexist on disk until cleanup, which temporarily inflates the reported store size, and the ISM policy apparently gets a more up-to-date view of this than the _cat/shards API does.
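
If you want to look at this yourself, the segment-level breakdown behind that store size can be inspected with something like the following (the columns are just a selection of what _cat/segments offers):

    GET _cat/segments/lsdf-storage-2024.10-000001?v&h=index,shard,prirep,segment,docs.count,docs.deleted,size,committed,searchable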

I am going to close this issue now. So, please do not panic if the _cat/shards API shows a primary shard larger than the threshold you expect to trigger the rollover. I am also aware that the default interval for checking the policies is 10 minutes, so if your index is ingesting heavily, the rollover size will not match the threshold exactly, but it will be close.
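
For completeness, the evaluation interval can be tuned through the plugins.index_state_management.job_interval cluster setting (the value is in minutes); a sketch, in case anyone wants the policies checked more often:

    PUT _cluster/settings
    {
      "persistent": {
        "plugins.index_state_management.job_interval": 5
      }
    }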

Cheers,
Samuel