Hi Experts,
I am trying to migrate my OpenSearch 1.1 cluster to Elastic Cloud 7.18. I have created a Logstash pipeline for this, whose configuration looks like this:
```
input {
  opensearch {
    hosts => ["url"]
    user => ""
    password => ""
    index => "*"
    size => 100
    scroll => "1m"
    query => '{ "query": { "match_all": {} } }'
    docinfo => true
  }
}

filter {
}

output {
  elasticsearch {
    hosts => [""]
    user => ""
    password => ""
    index => "%{[@metadata][_index]}"
  }
  stdout { codec => rubydebug { metadata => true } }
}
```
Some indices contain around 50 million documents, and I am not sure how error handling should be done for such a large load if something goes wrong during migration, so I have the following questions:
- Does Logstash provide any error handling out of the box that will make sure the entire migration (roughly 60 GB of data) completes seamlessly?
- If the migration fails due to a network error, how does Logstash handle this? Will the migration resume from the record where it failed, and how does Logstash avoid duplication in this scenario? (See the first sketch after this list for the idea I had.)
- Can we do batch migration in Logstash? If yes, how? (I have put a rough per-index sketch at the end of this post.)
- If any record in a batch encounters an issue during batch migration, does Logstash support any kind of rollback and retry to make sure everything gets migrated successfully?
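
For the duplication question, the only idea I had was to reuse the source document's `_id` as the destination document ID, so that re-running the pipeline after a failure overwrites documents instead of duplicating them. This is just a sketch built on the docinfo metadata the input already exposes; I don't know if it is the recommended approach:

```
output {
  elasticsearch {
    hosts => [""]
    user => ""
    password => ""
    index => "%{[@metadata][_index]}"
    # Reuse the source _id so a re-run updates existing documents
    # instead of inserting duplicates (idempotent retries).
    document_id => "%{[@metadata][_id]}"
  }
}
```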
I am new to Logstash, so can someone suggest an industry-accepted way to configure a pipeline to migrate ~60 GB seamlessly from one cluster version to another?
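
For the batch question, the only approach I could think of is to run the pipeline once per index (or per index pattern) instead of `index => "*"`, so that a failure only affects the current batch and I can re-run just that part. The index pattern below is made up for illustration:

```
input {
  opensearch {
    hosts => ["url"]
    user => ""
    password => ""
    # Hypothetical pattern: migrate one group of indices per run
    # instead of "*", so each run is an independent, retryable batch.
    index => "logs-2021-*"
    size => 100
    scroll => "1m"
    query => '{ "query": { "match_all": {} } }'
    docinfo => true
  }
}
```

I also noticed a slices option in the elasticsearch input plugin for running parallel scroll slices; I am assuming the opensearch input supports the same option since it is a fork of that plugin, but I have not verified that.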