krusha
March 6, 2026, 10:32am
1
Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
Describe the issue : I want to create dynamic index name for target index in transform job
Configuration:
"transform": {
  "description": "Daily aggregation for sample",
  "enabled": true,
  "continuous": false,
  "page_size": 100,
  "schedule": {
    "cron": {
      "expression": "57 15 * * *",
      "timezone": "Asia/Kolkata"
    }
  },
  "source_index": "sample-usage4-2026*",
  "data_selection_query": {
    "range": {
      "@timestamp": {
        "gte": "now-1d/d",
        "lt": "now/d"
      }
    }
  },
  "target_index": "<job-afteraggregation-{now-1d/d}-v2>",   <- this is not working
Relevant Logs or Screenshots :
It gives an error like org.opensearch.ResourceAlreadyExistsException, and the job fails to run and process the complete data.
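For context, the angle-bracket target name uses OpenSearch date-math index naming: `{now-1d/d}` takes "now", subtracts one day, and rounds down to the start of the day, rendering it by default in `yyyy.MM.dd` format. A minimal Python sketch of how that single pattern resolves (an illustration only, assuming the default date format and UTC; real date math supports more units, rounding, and explicit formats):

```python
from datetime import datetime, timedelta, timezone

def resolve_date_math(pattern: str) -> str:
    """Resolve the {now-1d/d} date-math expression used in this thread.

    Only models this one expression; real OpenSearch date math is richer.
    """
    yesterday = datetime.now(timezone.utc) - timedelta(days=1)
    # The <...> wrapper marks the name as date math; strip it after resolving
    return pattern.strip("<>").replace("{now-1d/d}", yesterday.strftime("%Y.%m.%d"))

print(resolve_date_math("<job-afteraggregation-{now-1d/d}-v2>"))
# e.g. "job-afteraggregation-2026.03.05-v2" when run on 2026-03-06 UTC
```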
pablo
March 6, 2026, 10:59am
2
@krusha As per OpenSearch documentation, the schedule field in the transform job expects interval instead of cron.
Aside from using OpenSearch Dashboards, you can also use the REST API to create, start, stop, and complete other operations relative to transform jobs. (Transforms APIs)
Where did you get your example?
krusha
March 6, 2026, 1:54pm
3
I simply tried this and it worked. May I know if we can use dynamic naming for this transform job?
pablo
March 6, 2026, 5:59pm
4
@krusha The pattern seems to be working for me. I'm running OpenSearch 3.5.
This is my sample transform job.
PUT _plugins/_transform/sample3
{
"transform": {
"transform_id": "job1",
"schema_version": 27,
"schedule": {
"interval": {
"start_time": 1772794235,
"period": 1,
"unit": "Minutes"
}
},
"metadata_id": null,
"updated_at": 1772794235437,
"enabled": true,
"enabled_at": 1772794235437,
"description": "",
"source_index": "security-auditlog-2026.03.06",
"data_selection_query": {
"match_all": {
"boost": 1
}
},
"target_index": "<job-afteraggregation-{now-1d/d}-v2>",
"page_size": 1000,
"groups": [
{
"date_histogram": {
"fixed_interval": "1m",
"source_field": "@timestamp",
"target_field": "@timestamp_date_histogram_1_m_fixed",
"timezone": "UTC",
"format": null
}
}
],
"aggregations": {},
"continuous": false
}
}
Do you get this error when running with cron or manually?
krusha
March 7, 2026, 10:11am
5
This is my job
PUT _plugins/_transform/k3-usage4-tests
{
"transform": {
"description": "Daily aggregation for k3-usage4",
"transform_id": "job1",
"enabled": true,
"continuous": false,
"page_size": 1000,
"schedule": {
"interval": {
"period": 1,
"unit": "Minutes"
}
},
"source_index": "k3-usage4-2026*",
"data_selection_query": {
"range": {
"@timestamp": {
"gte": "now-1d/d",
"lt": "now/d"
}
}
},
"target_index": "<testing-afteraggregation-{now-1d/d}-v2>",
"groups": [
{
"terms": {
"source_field": "exit_code",
"target_field": "exit_code"
}
},
{
"terms": {
"source_field": "type.keyword",
"target_field": "type"
}
},
{
"terms": {
"source_field": "cloud_name.keyword",
"target_field": "cloud_name"
}
},
{
"date_histogram": {
"source_field": "@timestamp",
"fixed_interval": "1m",
"target_field": "@timestamp",
"format": "yyyy-MM-dd'T'HH:mm:ss"
}
}
],
"aggregations": {
"total_time": { "sum": { "field": "total_time" } },
"s3dl_time": { "sum": { "field": "s3dl_time" } },
"run_time": { "sum": { "field": "run_time" } },
"brsync_time": { "sum": { "field": "brsync_time" } }
}
}
}
krusha
March 7, 2026, 10:12am
6
This is what I see:
{
"k3-usage4-tests": {
"metadata_id": "HnTjn2C9i0i9nbYCVqcmow",
"transform_metadata": {
"transform_id": "k3-usage4-tests",
"after_key": {
"@timestamp": "2026-03-06T09:25:00",
"cloud_name": "prod-scmcs-2-80",
"exit_code": null,
"type": "usage4"
},
"last_updated_at": 1772878134038,
"status": "failed",
"failure_reason": "Failed to index the documents",
"stats": {
"pages_processed": 1,
"documents_processed": 6543,
"documents_indexed": 1000,
"index_time_in_millis": 10729,
"search_time_in_millis": 94
}
}
}
}
krusha
March 7, 2026, 10:29am
7
The problem is that the target index is getting created, but the job only processes one page: documents_indexed stops at 1000 (my page_size), and then the job goes into the failed state.
pablo
March 9, 2026, 4:18pm
8
@krusha I did some further testing, and I’ve got the same error. When I increased the page size, the error appeared exactly after indexing 3000 documents.
What is your OpenSearch version?
krusha
March 10, 2026, 3:25am
9
My OpenSearch version is 2.19.0.
pablo
March 10, 2026, 12:35pm
10
@krusha I think the issue is related to the target index pattern.
The index management plugin correctly resolves the target index pattern and creates the index. However, when a new page is created, the check against the existing index uses the pattern rather than the resolved index name.
// To update the retry policy with updated settings
clusterService.clusterSettings.addSettingsUpdateConsumer(
TransformSettings.TRANSFORM_JOB_INDEX_BACKOFF_MILLIS,
TransformSettings.TRANSFORM_JOB_INDEX_BACKOFF_COUNT,
) { millis, count ->
backoffPolicy = BackoffPolicy.constantBackoff(millis, count)
}
}
private suspend fun createTargetIndex(targetIndex: String, targetFieldMappings: Map<String, Any>) {
if (!clusterService.state().routingTable.hasIndex(targetIndex)) {
val transformTargetIndexMapping = TargetIndexMappingService.createTargetIndexMapping(targetFieldMappings)
val request = CreateIndexRequest(targetIndex).mapping(transformTargetIndexMapping)
// TODO: Read in the actual mappings from the source index and use that
val response: CreateIndexResponse = client.admin().indices().suspendUntil { create(request, it) }
if (!response.isAcknowledged) {
logger.error("Failed to create the target index $targetIndex")
throw TransformIndexException("Failed to create the target index")
}
}
}
That’s why you see this error.
org.opensearch.indexmanagement.transform.exceptions.TransformIndexException: Failed to index the documents
...
Caused by: org.opensearch.ResourceAlreadyExistsException: index [testing-afteraggregation-2026.03.09-v2/HGsAOfIQT_qY_mys5OZtSA] already exists
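To make that failure mode concrete, here is a small Python model of the mismatch (an assumption sketched from the snippet above, not the plugin's actual code): the existence check compares the raw pattern string against the routing table, while index creation resolves date math first, so every page after the first collides with the index the first page already created.

```python
from datetime import datetime, timedelta, timezone

def resolve_date_math(name: str) -> str:
    """Resolve a <...{now-1d/d}...> pattern to a concrete index name."""
    if not (name.startswith("<") and name.endswith(">")):
        return name
    yesterday = datetime.now(timezone.utc) - timedelta(days=1)
    return name[1:-1].replace("{now-1d/d}", yesterday.strftime("%Y.%m.%d"))

class FakeCluster:
    """Toy stand-in for cluster state and the index-creation API."""
    def __init__(self):
        self.indices = set()

    def has_index(self, name: str) -> bool:
        # Models clusterService.state().routingTable.hasIndex():
        # a plain string comparison, date math is NOT resolved here.
        return name in self.indices

    def create_index(self, name: str) -> None:
        # Models CreateIndexRequest: date math IS resolved before creation.
        resolved = resolve_date_math(name)
        if resolved in self.indices:
            raise RuntimeError(f"index [{resolved}] already exists")
        self.indices.add(resolved)

def index_page(cluster: FakeCluster, target_index: str) -> None:
    # Mirrors createTargetIndex(): create the target only if "missing"
    if not cluster.has_index(target_index):
        cluster.create_index(target_index)

cluster = FakeCluster()
pattern = "<testing-afteraggregation-{now-1d/d}-v2>"
index_page(cluster, pattern)      # page 1: index created under the resolved name
try:
    index_page(cluster, pattern)  # page 2: raw-pattern check misses, creation collides
except RuntimeError as e:
    print(e)                      # index [...] already exists
```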
I suggest reporting this in the Index Management plugin GitHub repo.
If you do so, please share the link here.