Read a billion records

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
2.13.0

Describe the issue:
I want to pull billions of docs from OpenSearch. I tried point-in-time (PIT) with search_after, but the performance is poor: pulling 5 million docs in batches of 1,000 takes 6.5 hours, and anything larger gets timed out. Is there a better approach to pull a billion records in a reasonable time?

Configuration:

Relevant Logs or Screenshots:

What's the target after you pull the records? If the target is also an OpenSearch cluster, I suggest using snapshot/restore, which is far faster than the other approaches.
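Roughly, the snapshot/restore flow is three REST calls. A minimal sketch below builds them as (method, path, body) tuples you could feed to any HTTP client; the names (`my_repo`, the `fs` repository type, the location path) are placeholder assumptions, and S3 or other repository types work the same way:

```python
def snapshot_restore_requests(repo, snapshot, indices, location):
    """Return the REST calls for a snapshot -> restore copy,
    as (method, path, body) tuples."""
    return [
        # 1. Register a shared-filesystem repository. For type "fs",
        #    the location must be listed under path.repo on every node.
        ("PUT", f"/_snapshot/{repo}",
         {"type": "fs", "settings": {"location": location}}),
        # 2. Snapshot only the indices you want to copy.
        ("PUT", f"/_snapshot/{repo}/{snapshot}",
         {"indices": indices}),
        # 3. On the target cluster (with the same repository registered),
        #    restore those indices.
        ("POST", f"/_snapshot/{repo}/{snapshot}/_restore",
         {"indices": indices}),
    ]
```

Because restore copies segment files directly instead of re-indexing doc by doc, it sidesteps the pagination problem entirely when both sides are OpenSearch.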
If snapshot/restore is not suitable, then when using PIT with search_after you can consider increasing the batch size (maybe 10,000 or 20,000), and even slicing the pagination across multiple threads to achieve better performance.
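A minimal sketch of that sliced PIT + search_after loop is below. The `fetch` callable stands in for a real client call (e.g. opensearch-py's `client.search(body=body)`); the body field names follow the OpenSearch PIT search API, and each slice can run in its own thread against the same PIT id:

```python
def pit_search_body(pit_id, batch_size, keep_alive="10m",
                    slice_id=None, max_slices=None, search_after=None):
    """Build one search request body for PIT + search_after pagination."""
    body = {
        "size": batch_size,
        # The PIT pins a consistent view of the index; keep_alive is
        # refreshed on every request.
        "pit": {"id": pit_id, "keep_alive": keep_alive},
        # _shard_doc is a cheap, unique tiebreaker sort for PIT searches.
        "sort": [{"_shard_doc": "asc"}],
    }
    if slice_id is not None:
        # Slicing splits the scan into max_slices independent streams,
        # one per worker thread.
        body["slice"] = {"id": slice_id, "max": max_slices}
    if search_after is not None:
        body["search_after"] = search_after
    return body


def scan_slice(fetch, pit_id, batch_size, slice_id=None, max_slices=None):
    """Yield every hit from one slice. `fetch(body)` performs the actual
    search request and returns the parsed response."""
    search_after = None
    while True:
        body = pit_search_body(pit_id, batch_size,
                               slice_id=slice_id, max_slices=max_slices,
                               search_after=search_after)
        hits = fetch(body)["hits"]["hits"]
        if not hits:
            return  # this slice is exhausted
        yield from hits
        # Resume after the last doc's sort values on the next request.
        search_after = hits[-1]["sort"]
```

With N slices you issue N of these loops in parallel, which is where most of the throughput gain over a single sequential search_after stream comes from.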

Thanks for the response. The target could be the same cluster or an external system; the idea is to build a scanner that can walk through OpenSearch docs (worst case, all of them) and apply different user-defined actions.
Perhaps I should investigate the batch size more. My requests fail for batches above 2,000 at some point in the pagination: a run can go for an hour and then fail. This will be a sandbox, not PROD. Thanks, I will look into the other options.