Shard allocation taking a long time during an upgrade

Hi,
I’m trying to upgrade from Elasticsearch 7.10.2 to OpenSearch 1.2.4 in a Kubernetes environment (SLES 15).
During this upgrade, I’m shutting down the data nodes one by one and waiting for all shards to reach the assigned state before upgrading the next data pod.
The shard allocation time varies a lot here: sometimes it takes nearly 10 minutes, sometimes around 2 minutes, and so on.
Total number of indices: 10; total number of shards: 50 primary and 50 replica (5 primary shards per index, with 1 replica each).
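
For reference, this is roughly how I verify that all shards are assigned before moving on to the next pod. It’s only a minimal sketch against the _cluster/health API; the endpoint URL is a placeholder, and authentication/TLS settings will differ in a real cluster.

```python
# Minimal sketch (not the exact script) of the check run between data pods:
# poll _cluster/health until the cluster is green and nothing is unassigned.
# OPENSEARCH_URL is a placeholder; auth/TLS settings will differ per cluster.
import time
import requests

OPENSEARCH_URL = "https://localhost:9200"  # placeholder endpoint

def wait_for_all_shards_assigned(timeout_s=1800, poll_s=10):
    """Block until cluster status is green and unassigned_shards == 0."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        health = requests.get(f"{OPENSEARCH_URL}/_cluster/health", verify=False).json()
        if health["status"] == "green" and health["unassigned_shards"] == 0:
            return health
        print(f'status={health["status"]}, unassigned shards={health["unassigned_shards"]}')
        time.sleep(poll_s)
    raise TimeoutError("shards were not fully assigned within the timeout")

if __name__ == "__main__":
    wait_for_all_shards_assigned()
```
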
Sometimes the cluster also goes into a red state, with the following explanation:

"explanation" : "a copy of this shard is already allocated to this node [[index3][1], node[z0eHDX-hShuVkzWJsjHejw], [P], s[STARTED], a[id=-jUwPrtYRwa93PWXB7JEgg]]"

Is this expected behaviour during an upgrade? Or is there any way to estimate the shard allocation time for a given number of shards/indices?

Hi all,
Is there any formula/logic to estimate the upgrade time for each node based on the number of data nodes, number of shards, data size, shard size, etc.?