Logstash bulk 401 error

Hello everyone,
I'm running OpenSearch 2.6.0, logstash-oss-with-opensearch-output-plugin 8.4.0, and Filebeat 8.4.3.
Everything works fine and I'm using the right credentials; I know this because I can see the logs in OpenSearch.
But once a day I get this error:

[2023-08-07T04:33:26,460][ERROR][logstash.outputs.opensearch][main][ced00a4ad08859f1e5b7b64b15bda94bbd12664a76730d86d29c9830584648ee] Encountered a retryable error (will retry with exponential backoff) {:code=>401, :url=>"https://opensearch:9200/_bulk", :content_length=>135541}
[2023-08-07T09:33:28,131][ERROR][logstash.outputs.opensearch][main][ced00a4ad08859f1e5b7b64b15bda94bbd12664a76730d86d29c9830584648ee] Encountered a retryable error (will retry with exponential backoff) {:code=>401, :url=>"https://opensearch:9200/_bulk", :content_length=>27313}

I don't know where I should look for the cause of this error.
Would increasing bulk_max_size help?

401 means Unauthorized. What does your output config look like in Logstash?
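
To rule out the credentials themselves, you can hit the cluster by hand with the same user Logstash uses. A minimal sketch with placeholder credentials (-k skips certificate verification, for a setup that doesn't verify certs):

curl -k -u 'logstash_user:logstash_password' 'https://opensearch:9200/_cluster/health?pretty'

# The security plugin can also report which user and roles the request authenticated as:
curl -k -u 'logstash_user:logstash_password' 'https://opensearch:9200/_plugins/_security/authinfo?pretty'

If both consistently return 200, the credentials are fine and the 401s are intermittent on the server side.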

@zelanastasia According to the documentation, the log collectors should be of the versions listed above.

output {
    opensearch {
        hosts => ["coordinator1:9200", "coordinator2:9200"]
        user => "{{ ELASTICSEARCH_USER }}"
        password => "{{ ELASTICSEARCH_PASSWORD }}"
        ssl => true
        ssl_certificate_verification => false
        index => "xxxx-%{+YYYY.MM.dd}"
        timeout => 120
    }
}

Our credentials are right, so I don't understand how we can get a 401 error. Maybe it's because we send logs to two coordinator hosts?

The errors report opensearch:9200, but in the config you have coordinator1:9200 and coordinator2:9200.

Did you redact the error message? If so, which OpenSearch node does Logstash keep reporting?

Please share opensearch.yml from both coordinator1 and coordinator2 nodes.

Yes, I just hid the real host names in the logs and the config. Sorry for the confusion. They are the same hosts.
I have two coordinator nodes and two servers, each running a Logstash container. I get the error for coordinator1 and for coordinator2 too.
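
For what it's worth, a quick way to check each coordinator separately with the same credentials (a sketch; user, password, and host names are placeholders):

for host in coordinator1 coordinator2; do
  # Print just the HTTP status code per node; -k skips certificate verification.
  curl -k -s -o /dev/null -w "$host -> %{http_code}\n" \
    -u 'logstash_user:logstash_password' "https://$host:9200/"
done

If authentication works against both nodes, each line should print 200.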

---
## Opendistro main configuration
cluster.name: odfe-cluster
node.name: "coordinator1"
node.master: false
node.data: false
node.ingest: false
path.data: /usr/share/elasticsearch/data
path.logs: "/var/log/opensearch"
network.host: ["127.0.0.1","x"]
http.port: 9200
discovery.seed_hosts: ["x","x","x","x","x","x","x","x"]
cluster.initial_master_nodes: ["x","x","x"]

## Opendistro security plugin configuration
plugins.security.ssl.transport.pemcert_filepath: "coordinator1.pem"
plugins.security.ssl.transport.pemkey_filepath: "coordinator1.key"
plugins.security.ssl.transport.pemtrustedcas_filepath: RootCA.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: admin.pem
plugins.security.ssl.http.pemkey_filepath: admin-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: RootCA.pem
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
    - "CN=ADMIN,OU=infra,O=x,L=SPB,ST=SPB,C=RU"
plugins.security.nodes_dn:
    - 'CN=master1,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=master2,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=master3,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=data01,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=data02,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=data03,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=coordinator1,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=coordinator2,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=logs01,OU=infra,O=x,L=SPB,ST=SPB,C=RU'
    - 'CN=logs02,OU=infra,O=x,L=SPB,ST=SPB,C=RU'

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]

And the same config for coordinator2, but with a different node name and IP.
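
Since the config sets plugins.security.audit.type: internal_opensearch, failed authentication attempts should be recorded in an audit index on the cluster itself. A sketch of a query to inspect them around the time of the 401s (the security plugin's default audit index pattern is assumed; admin credentials are placeholders):

# Fetch the most recent FAILED_LOGIN audit events.
curl -k -u 'admin:admin_password' \
  -H 'Content-Type: application/json' \
  'https://coordinator1:9200/security-auditlog-*/_search?pretty' \
  -d '{"size": 5, "sort": [{"@timestamp": "desc"}], "query": {"match": {"audit_category": "FAILED_LOGIN"}}}'

The matching documents should include the rejected user and the REST path, which would show whether it is the Logstash user failing on /_bulk.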

@zelanastasia They’re both in the same cluster, correct?

You’ve mentioned that you get the error once a day. Is it at the same time each day?

Is this issue a recent problem, or have you had it ever since you configured Logstash?

Yes, they are in the same cluster.
No, it's always at a different time; sometimes it happens two or three times a day, but usually just once.
I don't know how it was before we had Logstash, but with Logstash we have had this error, I think, the whole time Logstash has existed in our cluster.