Master and data nodes have different cluster UUIDs

Hi team,

We have a cluster hosted on AWS EC2 instances, consisting of:
- 1 Kibana node (8GB root volume, non-persistent)
- 3 master nodes (8GB root volume, non-persistent)
- 12 data nodes (200GB persistent disk attached per node)

We are using the EC2 discovery plugin to discover the cluster nodes. For some reason, our master and Kibana autoscaling groups recreated their instances, and the older instances were replaced.
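
For context, our EC2 discovery setup in `elasticsearch.yml` looks roughly like this (the tag name and value are illustrative, not our exact settings):

```yaml
# elasticsearch.yml -- EC2 discovery (illustrative values)
discovery.seed_providers: ec2
# Only EC2 instances carrying this tag are considered as seed hosts
discovery.ec2.tag.escluster: my-es-cluster
```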

Current situation:
The master and Kibana nodes have a new cluster UUID, while the data nodes are still running with the old cluster UUID. Because of this, we are getting the following errors:

[2022-06-20T12:36:58,952][WARN ][o.e.c.c.Coordinator      ] [] failed to validate incoming join request from node [{}{ADsjli4JQzKtHRdmdp-bDA}{ZoGZPrbFScmb2xVutbI3cA}{}{}{dilrt}{aws_availability_zone=eu-west-1a, ml.machine_memory=33252052992, ml.max_open_jobs=20, node_type=hot, xpack.installed=true, transform.node=true}]
org.elasticsearch.transport.RemoteTransportException: [][][internal:cluster/coordination/join/validate]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid vJoCmAqBRmS_zw5Q1g02iQ than local cluster uuid lY1hx_21SVmxrdXAmuYElA, rejecting
        at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$4( ~[elasticsearch-7.8.0.jar:7.8.0]
        ...
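
The mismatch is also visible from the root endpoint of each node. A quick way to compare (hostnames below are placeholders for our actual nodes):

```shell
# Print the cluster UUID reported by a data node (old cluster)
curl -s http://data-node-1:9200/ | grep cluster_uuid

# Print the cluster UUID reported by a newly recreated master (new cluster)
curl -s http://master-node-1:9200/ | grep cluster_uuid
```

The two commands report different `cluster_uuid` values, matching the UUIDs in the log line above.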

How can we make these new master nodes join the old cluster? Any help would be highly appreciated.

We are also getting "Open Distro security plugin not initialized" errors on the master and Kibana nodes. I believe this is a consequence of the issue described above.