Plugin log levels

Hello, I am using OpenSearch 1.3.8.
Can you tell me how to set the log level for plugins?

I know that I can configure log.level in log4j2.properties, but can I set the logging level for individual plugins?
I am using:
opensearch-cross-cluster-replication
opensearch-index-management
opensearch-job-scheduler
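
For example, is it just a matter of adding extra logger entries to log4j2.properties? Below is a rough sketch of what I have in mind; the logger names are only my guess at the plugin packages, so please correct me if this is not the supported way:

# Sketch only: logger names guessed from the plugin packages, not verified
logger.ism.name = org.opensearch.indexmanagement
logger.ism.level = warn

logger.ccr.name = org.opensearch.replication
logger.ccr.level = error

logger.jobscheduler.name = org.opensearch.jobscheduler
logger.jobscheduler.level = warn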

I have plenty of INFO and WARN logs like the ones below.
They show up roughly every 5 seconds, but I don't think any of them are critical or need to be watched.

Index Management

Feb 12 10:52:15 apesmaster02-spc apesmaster-node2-elastic[48992]: [2023-02-12T09:52:15,567][INFO ][o.o.i.i.ManagedIndexRunner] [apesmaster-node2] Executing attempt_transition_step for xxxx

Feb 12 10:52:15 apesmaster02-spc apesmaster-node2-elastic[48992]: [2023-02-12T09:52:15,567][INFO ][o.o.i.i.ManagedIndexRunner] [apesmaster-node2] Finished executing attempt_transition_step for xxxx

This happens when I replicate indices from another cluster:

Feb 12 11:49:49 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:49,670][WARN ][o.o.r.t.s.ShardReplicationTask] [peshot-node1] [xxx-aws-000005][0] Encountered a failure while executing in org.opensearch.replication.action.changes.GetChangesRequest@58af9c49. Retrying in 10 seconds.
Feb 12 11:49:50 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:50,327][INFO ][o.o.i.i.ManagedIndexRunner] [peshot-node1] Executing attempt_transition_step for xxx-000011
Feb 12 11:49:50 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:50,327][INFO ][o.o.i.i.ManagedIndexRunner] [peshot-node1] Finished executing attempt_transition_step for xxx-000011
Feb 12 11:49:51 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:51,280][INFO ][o.o.r.t.s.ShardReplicationTask] [peshot-node1] [xxx-aws-000005][0] opensearch[peshot-node1][replication_follower][T#9]: Got 55 changes starting from seqNo: 2904538
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:53,561][WARN ][o.o.r.t.s.ShardReplicationTask] [peshot-node1] [xxx-aws-000005][0] Encountered a failure while executing in org.opensearch.replication.action.changes.GetChangesRequest@1732c692. Retrying in 10 seconds.
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: org.opensearch.OpenSearchTimeoutException: global checkpoint not synced. Retry after a few miliseconds...
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at org.opensearch.replication.action.changes.TransportGetChangesAction$asyncShardOperation$1.invokeSuspend(TransportGetChangesAction.kt:93) ~[opensearch-cross-cluster-replication-1.3.8.0.jar:1.3.8.0]
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.3.72.jar:1.3.72-release-468 (1.3.72)]
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56) [kotlinx-coroutines-core-1.3.5.jar:?]
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:733) [opensearch-1.3.8.jar:1.3.8]
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
Feb 12 11:49:53 peshot01-spc peshot-node1-elastic[1588]: #011at java.lang.Thread.run(Thread.java:829) [?:?]

Feb 12 11:49:57 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:57,654][INFO ][o.o.r.t.s.ShardReplicationTask] [peshot-node1] [xxx-aws-000005][0] opensearch[peshot-node1][replication_follower][T#10]: Timed out waiting for new changes. Current seqNo: 0. OpenSearchTimeoutException[1m]

Feb 12 11:49:59 peshot01-spc peshot-node1-elastic[1588]: [2023-02-12T10:49:59,438][ERROR][o.o.r.t.i.IndexReplicationTask] [peshot-node1] [xxx-aws-000004] Error in getting the required metadata [xxx-aws-000004] IndexNotFoundException[no such index [xxx-aws-000004]]

I don't know why I get so many WARN and failure logs from replication when everything looks fine.
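
In the meantime I would like to quiet down just these two loggers. If I am expanding the abbreviated names correctly, o.o.i.i.ManagedIndexRunner should be org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner and o.o.r.t.s.ShardReplicationTask should be org.opensearch.replication.task.shard.ShardReplicationTask, so I would try something like this in log4j2.properties (again, the names are my guess from the log abbreviations):

# Guessed logger names, expanded from the abbreviated class names in the logs above
logger.managed_index_runner.name = org.opensearch.indexmanagement.indexstatemanagement.ManagedIndexRunner
logger.managed_index_runner.level = warn

logger.shard_replication_task.name = org.opensearch.replication.task.shard.ShardReplicationTask
logger.shard_replication_task.level = error

Would that hide the per-step INFO messages while still letting real errors through, or do the plugins expose a proper setting for this?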