Kafka broker disks getting full

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

Describe the issue:
On my Kafka brokers, old data is not being cleaned up, so the disks keep filling.

Configuration:

I have set up a Kafka cluster on three EC2 instances. After creating a topic, I set retention overrides on it with:

./bin/kafka-configs.sh --bootstrap-server 10.50.0.232:9092,10.50.1.247:9092,10.50.2.175:9092 --alter --entity-type topics --entity-name mskkafkatopic --add-config retention.ms=86400000,retention.bytes=1073741824,delete.retention.ms=129600000

I am sending logs from Fluent Bit to Kafka. Even after setting the retention period to one day via retention.ms, messages are not being deleted from the topic.
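
For reference, the overrides can be confirmed on the topic with --describe; a minimal check, assuming the same brokers and topic as the command above:

./bin/kafka-configs.sh --bootstrap-server 10.50.0.232:9092,10.50.1.247:9092,10.50.2.175:9092 \
  --describe --entity-type topics --entity-name mskkafkatopic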

Relevant Logs or Screenshots:

This is my Fluent Bit configuration:
data:
  custom_parsers.conf: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Decode_Field_As json log
  fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level debug
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    [INPUT]
        Name tail
        Exclude_Path /var/log/containers/logstash*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 100MB
        Skip_Long_Lines On
        Buffer_Chunk_Size 128K
        Buffer_Max_Size 4096k
        Refresh_Interval 100

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [OUTPUT]
        Name kafka
        Match *
        Format json
        Brokers 10.50.0.232:9092,10.50.1.247:9092,10.50.2.175:9092
        Topics mskkafkatopic
        timestamp_format iso8601
        Retry_Limit false
        queue_full_retries 10
        rdkafka.message.timeout.ms 3600000
        rdkafka.log.connection.close false
        rdkafka.queue.buffering.max.kbytes 10240
        rdkafka.request.required.acks 1
        storage.total_limit_size 500MB

Hey @vamsi.ramisetti

Is this something like what you're experiencing?
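
One thing worth checking, assuming the configs in your post are complete: retention.ms and retention.bytes only ever delete closed log segments; the active segment is never eligible. With the default segment.bytes of 1 GiB and your retention.bytes of exactly 1073741824, a partition's active segment may never roll, so nothing becomes deletable. A sketch that lowers the roll interval so retention can kick in (same brokers and topic as your command; segment.ms=3600000 rolls segments hourly):

./bin/kafka-configs.sh --bootstrap-server 10.50.0.232:9092,10.50.1.247:9092,10.50.2.175:9092 \
  --alter --entity-type topics --entity-name mskkafkatopic \
  --add-config segment.ms=3600000

Once segments roll, the broker deletes any closed segment older than retention.ms on its next retention check (log.retention.check.interval.ms, 5 minutes by default).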