Facing issue in reindexing

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

Describe the issue

POST _reindex
{
  "source": {
    "index": "source_index",
    "query": {
      "bool": {
      }
    }
  },
  "dest": {
    "index": "destination_index"
  }
}

Currently, this is the reindexing query I use to copy the required data to a separate index when findings are created. But I have a couple of issues:

  1. How do I sync events to the destination index when the query result exceeds 10,000 documents?
  2. I am also facing an issue when the number of indexes grows, because I am using an alias name as the source index.

Configuration:

Created a rollover index per day with a common alias name. Using that common alias name as the source, I am trying to reindex the data to a separate index.

Relevant Logs or Screenshots

{
  "error": {
    "root_cause": [
      {
        "type": "rejected_execution_exception",
        "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "Partial shards failure",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 3,
        "index": "file-delete-000043",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000002",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000003",
        "node": "jgfXpVMtRSCp7hefXhj8Cg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000004",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000006",
        "node": "jgfXpVMtRSCp7hefXhj8Cg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000008",
        "node": "jgfXpVMtRSCp7hefXhj8Cg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000010",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 3,
        "index": "image-load-000011",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 4,
        "index": "network-connection-000014",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      },
      {
        "shard": 4,
        "index": "network-connection-000015",
        "node": "jzVcas0aQ8-03m8iXRRrgg",
        "reason": {
          "type": "rejected_execution_exception",
          "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."
        }
      }
    ]
  },
  "status": 429
}

I am getting the same error for more indexes; I reduced the output above to make it easier to read.

Any help or suggestion to fix this reindexing issue is welcome.

@Aravinth Thank you for the questions, I will try to address them one at a time.

The 10k limit applies to search operations, whereas reindex uses scroll internally, so that limit is not an issue here.

The error you are seeing occurs because each scroll search consumes a scroll context on the coordinating node. Reindexing from an alias that resolves to many indices and shards opens many concurrent scrolls. The limit of 500 protects the cluster, since memory usage grows with the number of open contexts and can cause issues. You can of course increase this limit, but that is not recommended as a long-term solution.
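If you do need a temporary workaround, the limit can be inspected and raised through the cluster settings API. The setting name comes straight from the error message; the value 1000 below is only an illustration, and as noted above this should not be the long-term fix:

GET _cluster/settings?include_defaults=true&filter_path=**.max_open_scroll_context

PUT _cluster/settings
{
  "persistent": {
    "search.max_open_scroll_context": 1000
  }
}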

Instead, you would have to rethink the approach you are using. Below are two possible options:

  1. Tag the documents at ingest time using an ingest pipeline.
    You can tag the documents based on the search criteria you are using, something like the following:
PUT _ingest/pipeline/tag_findings
{
  "description": "Tag suspicious events with finding=true",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "if (ctx.event != null && ctx.event.action == 'failed_login') { ctx.finding = true; ctx.finding_type = 'failed_login'; }"
      }
    }
  ]
}

When the documents are ingested the appropriate tag would be added, which you can then use to retrieve the relevant documents, skipping reindexing altogether.
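As a sketch of how this fits together (the template name and index pattern below are illustrative; adjust them to your rollover setup): attach the pipeline as the default pipeline in your index template so every new rollover index applies it, then query the tag directly instead of reindexing.

PUT _index_template/events-template
{
  "index_patterns": ["image-load-*"],
  "template": {
    "settings": {
      "index.default_pipeline": "tag_findings"
    }
  }
}

GET image-load-alias/_search
{
  "query": {
    "term": {
      "finding": true
    }
  }
}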

  2. Use Data Prepper to index into a findings index at ingest time.
    Using Data Prepper conditional routing, you can route matching events to an additional index, again avoiding heavy reindex operations.

If you need examples of exact ingest or data prepper pipelines, please provide additional details about the documents being indexed.

Hope this helps.

@Anthony Thanks for the reply.

The two earlier options won’t work for my current scenario. I’m trying to store events related to findings in a separate index because the main events index is managed by an ISM policy and gets deleted after a certain period. I want the findings-related data to be retained permanently in OpenSearch. The challenge is that I can only identify the events associated with a finding after the finding is generated, so I cannot tag or route these events at ingestion time through Data Prepper.

Now that you understand the issue, what I need is a way to limit scroll context usage for a specific reindexing query. I’ve already tried slicing, and although it solved the problem, I’m not fully sure how slicing works or how other users typically handle this situation with the reindex API. I’m looking for standard, recommended approaches to address this problem.

@Aravinth There’s no way to reduce the scroll context limit for a specific _reindex operation only. Other factors may have affected the result when you used slices: fewer shards being hit, no other active scrolls, etc. So slicing alone is not a reliable way to reduce scroll contexts.

In general, it is recommended to reduce the number of shards the scroll operates on, for example by adding a time range to the reindex query:

POST _reindex
{
  "source": {
    "index": "image-load-alias",
    "query": {
      "range": {
        "@timestamp": {
          "gte": "2025-11-30T00:00:00Z",
          "lte": "2025-12-01T23:59:59Z"
        }
      }
    }
  },
  "dest": { "index": "findings-events" }
}

Alternatively, you can manually slice during reindexing using following approach:

POST _reindex?refresh
{
  "source": {
    "index": "image-load-alias",
    "slice": {
      "id": 0,
      "max": 3
    },
    "query": {
      "bool": {
        "must": [
          {
            "exists": {
              "field": "finding_id"
            }
          }
        ]
      }
    }
  },
  "dest": {
    "index": "findings-events"
  }
}

In the above case, this is the first of three slices. This reduces the number of concurrent scroll contexts and gives you more control, but it is more manual, and you need to decide how many slices to use. If your alias points to 10 indices with 5 shards each, that’s 50 total shards, so you could try max: 50. The default wait_for_completion=true causes each request to block until it finishes, which keeps the sliced reindex operations from running concurrently as long as you issue them one after another.
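The remaining slices are then submitted the same way, each after the previous request returns (a sketch reusing the alias, field, and destination names from the example above):

POST _reindex?refresh
{
  "source": {
    "index": "image-load-alias",
    "slice": {
      "id": 1,
      "max": 3
    },
    "query": {
      "bool": {
        "must": [
          {
            "exists": {
              "field": "finding_id"
            }
          }
        ]
      }
    }
  },
  "dest": {
    "index": "findings-events"
  }
}

Repeat with "id": 2 for the final slice.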

Hope this helps