Disk usage on host running OpenSearch in Docker containers

I have OpenSearch running in Docker containers on a single CentOS VM.
I had an index policy to delete indices after 14 days. I changed this to 7 days after noticing that disk usage was going up. I can see that the setting has taken effect by looking at the available indices, but I do not see any disk space being released.

Here is the output from running df on the host.
What are the suggested steps to take here, before my disk space is completely consumed?
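For reference, index-level disk usage can also be checked from OpenSearch itself with the _cat APIs; the node address and admin credentials below are placeholders for my setup:

# list indices sorted by on-disk size (-k skips TLS verification, -u passes basic auth)
curl -k -u admin:admin "https://localhost:9200/_cat/indices?v&s=store.size:desc"

# disk used and available per node, as seen by OpenSearch
curl -k -u admin:admin "https://localhost:9200/_cat/allocation?v"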

Used this command, from How to list docker logs size for all containers? - Stack Overflow, to check the size of the container log files:

sudo du -h $(docker inspect --format='{{.LogPath}}' $(docker ps -qa))
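docker system df also gives a quick overview of what Docker itself is using; as far as I can tell it does not count the container log files, so the du command above is still needed for those:

# summary of space used by images, containers (writable layers), local volumes and build cache
docker system df

# add -v for a per-container / per-volume breakdown
docker system df -v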

Now I need a way to delete the old entries and to limit log growth in the future.
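For the existing entries, one option is to truncate the current log file of a noisy container in place; I have not verified this on my setup, and <container> below is a placeholder for the container name or ID:

# zero out a container's json log file without deleting it
sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' <container>)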

Got around this by adding a logging section to all of the services in the docker-compose file:
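Roughly like this under each service; the json-file driver with max-size and max-file enables log rotation, and the exact limits below are placeholder values, not necessarily the ones I used:

logging:
  driver: "json-file"   # Docker's default driver, now with rotation enabled
  options:
    max-size: "10m"     # rotate each log file once it reaches 10 MB
    max-file: "3"       # keep at most 3 files per container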

And did a docker-compose up -d to restart my Docker containers.

It appears that Logstash is logging every log entry sent to it, hence the growth of the log files.
Still looking for a setting to prevent this.

Here is the output section from one of my pipelines:
output {
  opensearch {
    hosts => ["https://odfe-node1:9200","https://odfe-node2:9200"] # change to https when using https
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    ssl_certificate_verification => false
    user => "logstash"
    password => "logstash"
  }
  stdout {
    # rubydebug echoes every event to the container's stdout, which Docker captures in its log file
    codec => rubydebug
  }
}


Commenting out this section in the Logstash pipeline config files did it (the result is shown below).
After the change, docker-compose up did not pick it up; I had to do a docker restart of the logstash container.
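Assuming it is the stdout block that gets commented out (and the container name in the restart command is also a guess), the output section ends up like this:

output {
  opensearch {
    hosts => ["https://odfe-node1:9200","https://odfe-node2:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    ssl_certificate_verification => false
    user => "logstash"
    password => "logstash"
  }
  # stdout {
  #   codec => rubydebug
  # }
}

docker restart logstash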

