Is it normal to have 10k+ write/bulk tasks?

Trying to understand if there's any potential issue when seeing 10k+ write/bulk[s] tasks.

% cat tasks.txt | awk '{ print $1 }' | sort | uniq -c
1 action
1 cluster:monitor/nodes/info
6 cluster:monitor/prometheus/metrics
3 cluster:monitor/state
1 cluster:monitor/tasks/lists
263 cluster:monitor/tasks/lists[n]
21 indices:admin/seq_no/global_checkpoint_sync
14 indices:admin/seq_no/global_checkpoint_sync[p]
287 indices:data/write/bulk
11976 indices:data/write/bulk[s]
316 indices:data/write/bulk[s][p]
1 indices:data/write/index
3 indices:monitor/stats
16 indices:monitor/stats[n]

indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047587 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227493 02:07:07 147.6ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047591 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227494 02:07:07 146ms       10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047593 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227495 02:07:07 145.2ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     1KaBwCuFREOUqebuVrIVqQ:3897700902 951T-C51RKWK046jWqjVIQ:2747047593 transport 1653617227496 02:07:07 2.1ms       10.42.6.58   opensearch-cluster-data-56
indices:data/write/bulk[s][p]                  1KaBwCuFREOUqebuVrIVqQ:3897700903 1KaBwCuFREOUqebuVrIVqQ:3897700902 direct    1653617227496 02:07:07 1.9ms       10.42.6.58   opensearch-cluster-data-56
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047594 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227495 02:07:07 144.8ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047597 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227496 02:07:07 143.7ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047595 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227496 02:07:07 144.5ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047598 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227497 02:07:07 143.3ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047602 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227498 02:07:07 141.7ms     10.42.3.57   opensearch-cluster-client-10
indices:data/write/bulk[s]                     951T-C51RKWK046jWqjVIQ:2747047603 951T-C51RKWK046jWqjVIQ:2747047522 transport 1653617227499 02:07:07 141.4ms     10.42.3.57   opensearch-cluster-client-10
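The per-node distribution below can be regenerated from the same task listing; a minimal sketch, assuming the node name is the last whitespace-separated field of each line (a two-line inline sample stands in for the real tasks.txt here):

```shell
# Sketch: derive the per-node task distribution from the task listing.
# The node name is the last whitespace-separated field of each line.
printf '%s\n' \
  'indices:data/write/bulk[s] id1 pid1 transport 1653617227493 02:07:07 147ms 10.42.3.57 opensearch-cluster-client-10' \
  'indices:data/write/bulk[s] id2 pid2 transport 1653617227494 02:07:07 146ms 10.42.3.57 opensearch-cluster-client-10' \
  > tasks.txt
# Count tasks per node, most-loaded node first.
awk '{ print $NF }' tasks.txt | sort | uniq -c | sort -rn
```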

The task distribution across nodes also looks suspicious.

1131 opensearch-cluster-client-56
 915 opensearch-cluster-client-16
 611 opensearch-cluster-client-44
 564 opensearch-cluster-client-42
 522 opensearch-cluster-client-52
 495 opensearch-cluster-client-45
 474 opensearch-cluster-client-36
 440 opensearch-cluster-client-28
 439 opensearch-cluster-client-66
 419 opensearch-cluster-client-33
 411 opensearch-cluster-client-5
 369 opensearch-cluster-client-29
 359 opensearch-cluster-client-11
 319 opensearch-cluster-client-17
 295 opensearch-cluster-client-54
 293 opensearch-cluster-client-63
 271 opensearch-cluster-client-59
 253 opensearch-cluster-client-32
 241 opensearch-cluster-client-20
 241 opensearch-cluster-client-10
 222 opensearch-cluster-client-43
 209 opensearch-cluster-client-25
 198 opensearch-cluster-data-26
 197 opensearch-cluster-client-39
 191 opensearch-cluster-client-7
 186 opensearch-cluster-client-65
 172 opensearch-cluster-client-21
 158 opensearch-cluster-client-23
 140 opensearch-cluster-data-20
 118 opensearch-cluster-client-12
 117 opensearch-cluster-client-6
 109 opensearch-cluster-client-35
 105 opensearch-cluster-client-31
 103 opensearch-cluster-client-18
 100 opensearch-cluster-client-0
  91 opensearch-cluster-client-41
  89 opensearch-cluster-client-55

This could happen when your clients are not using the bulk index API… you may want to review the clients ingesting data and check whether bulk indexing is enabled.

@jathin12
In Filebeat, bulk_max_size is configured to 1024 on all my servers. I assume it should call the bulk API to index data.

bulk_max_size

The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 50.
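For reference, that setting lives under the Elasticsearch output section of filebeat.yml. A minimal sketch of the relevant excerpt (the hosts value is a placeholder, not from this thread):

```yaml
# filebeat.yml (excerpt) — hosts below are hypothetical
output.elasticsearch:
  hosts: ["https://opensearch-cluster-client:9200"]
  bulk_max_size: 1024   # max events per bulk request; default is 50 per the docs quoted above
```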