Is there any clean-up process for internal indexes (.kibana, .monitoring...)

Hello.

I manage a small cluster in AWS using the managed service. The instances are small, with only 8 GB of RAM, so OpenSearch uses 4 GB for the heap. Following AWS recommendations, the cluster should not have more than 25 shards per GiB of Java heap, which works out to roughly 100 shards per node here.

Currently, I have ~600 shards in total, with each node handling ~200 shards. The issue is that ~65% of these shards are internal, automatically created as backups when the cluster is updated. I understand the need to retain some old backups in case a rollback is required, but keeping the current data plus three old backups seems excessive.

Additionally, some of these internal indexes are created with 5 primary shards and 1 replica, meaning they use 10 shards for only about 20 documents!
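If the service lets you modify those indexes at all (AWS may block settings changes on some system indexes), the replica count can at least be dropped via the settings API. A sketch, where the index name and domain endpoint are placeholders you would substitute:

```sh
# Reduce replicas from 1 to 0 on an over-provisioned internal index.
# ".kibana_backup_example" is a hypothetical name; use the real index name.
curl -s -X PUT "https://<your-domain-endpoint>/.kibana_backup_example/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'
```

Note this only halves the shard usage: the primary shard count of an existing index cannot be changed in place (that would require a shrink or reindex).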

We all know that oversharding is a concern, but in this case, the problem is caused by OpenSearch itself, not by me. What can we do to clean up these internal indexes? Is there an automated process for managing them, or will they keep accumulating over time?
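For inspection, the `_cat/indices` API can at least show where the shards are going. A sketch, with a placeholder domain endpoint; `expand_wildcards=all` is needed because hidden indexes are excluded from wildcard matches by default:

```sh
# List indexes whose names start with a dot (internal/system indexes),
# with primary shard count, replica count, doc count, and size.
curl -s "https://<your-domain-endpoint>/_cat/indices/.*?v&h=index,pri,rep,docs.count,store.size&expand_wildcards=all"
```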

Thanks for any help!

Perhaps you could use the Index State Management (ISM) plugin to automate this?
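An ISM policy that deletes matching indexes after a fixed age could look roughly like the sketch below. The `index_patterns` value and the 30-day threshold are assumptions for illustration only — you would first need to confirm which patterns are safe to target, and whether ISM is allowed to act on system indexes on the managed service at all:

```json
{
  "policy": {
    "description": "Sketch: delete old internal backup indexes after 30 days",
    "default_state": "active",
    "states": [
      {
        "name": "active",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": [".kibana_*"],
      "priority": 100
    }
  }
}
```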

But the problem is that it’s unclear which internal indexes can be safely deleted without risking the cluster’s stability. Additionally, these old indexes can’t easily be managed by age: whether a backup is still needed depends on when the cluster was last updated, not on elapsed time. If the cluster is never updated, there’s no straightforward time-based rule for removing them.
My proposal is that OpenSearch should automatically manage the cleanup of these internal indexes, perhaps retaining only two old backups instead of multiple ones.