How to reduce a node's disk usage?

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch 2.7.0

Describe the issue:
In particular, os-manager-node2 shows high disk usage. What should I do to reduce disk usage?

Finally, os-manager-node2 has filled up completely :sob::sob::sob:

os-manager-node2    | [2023-08-04T02:38:22,139][INFO ][o.o.a.m.EntityColdStarter] [os-manager-node2] Succeeded in training entity: 4YVSvokBuKdzxfzqsC3w_entity_vGOlOGUeT-e-6ZBzb3TmhA
os-manager-node2    | [2023-08-04T02:38:22,350][WARN ][o.o.i.e.Engine           ] [os-manager-node2] [security-auditlog-2023.08.04][0] failed engine [index id[etRpvokBtQ2bsGDJCjMd] origin[PRIMARY] seq#[2432]]
os-manager-node2    | java.nio.file.FileSystemException: /usr/share/opensearch/data/nodes/0/indices/TCFbFTfgREGtC1jzuUkznw/0/index/_204.fdm: No space left on device
os-manager-node2    |   at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
os-manager-node2    |   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) ~[?:?]
os-manager-node2    |   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
os-manager-node2    |   at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) ~[?:?]
os-manager-node2    |   at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:484) ~[?:?]
os-manager-node2    |   at java.nio.file.Files.newOutputStream(Files.java:228) ~[?:?]
os-manager-node2    |   at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:394) ~[lucene-core-9.5.0.jar:9.5.0 13803aa6ea7fee91f798cfeded4296182ac43a21 - 2023-01-25 16:44:59]
os-manager-node2    |   at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:387) ~[lucene-core-9.5.0.jar:9.5.0 13803aa6ea7fee91f798cfeded4296182ac43a21 - 2023-01-25 16:44:59]
os-manager-node2    | Error grabbing logs: unexpected EOF 

First, you can manually move shards off os-manager-node2 to other nodes. Then check whether any unusually large shards exist; if so, split the index, or reindex into a new index with more primary shards. For example (test and node1 below are placeholders for your index name and a node with free space):

POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "test",
        "shard": 0,
        "from_node": "os-manager-node2",
        "to_node": "node1"
      }
    }
  ]
}
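
To see how full each node's disk is and which shards are the biggest, the _cat APIs are handy:

GET _cat/allocation?v

GET _cat/shards?v&s=store:desc

The first shows per-node disk usage; the second lists all shards sorted by store size, largest first, so the heavy indices stand out immediately.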
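
If a single index turns out to hold a very large shard, splitting is one option. A rough sketch, assuming the source index is called test and has 1 primary shard (the target shard count must be a multiple of the source's, and test-split is just an example name):

# The source index must be write-blocked before it can be split
PUT /test/_settings
{
  "index.blocks.write": true
}

# Split into a new index with more primary shards
POST /test/_split/test-split
{
  "settings": {
    "index.number_of_shards": 2
  }
}

Once the split completes, point your clients or aliases at test-split and delete the original index to reclaim the space.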
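
One more thing: since the node actually ran out of space, OpenSearch has most likely applied the flood-stage block (index.blocks.read_only_allow_delete) to indices with shards on that node. Recent versions release the block automatically once disk usage drops back below the high watermark, but after freeing space you can also clear it manually:

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}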