I have a 3-node cluster with different hardware, and I want to benchmark the performance in production with real queries from customers. So I want to be able to move all the query/ingest traffic from one node to another.
Of course I want to keep the cluster functional: if the active node fails, the next one should take over.
I have an HAProxy in front of my cluster (with 1 node live and 2 nodes as backups).
It’s possible but not recommended. Nodes in an OpenSearch cluster don’t run independently; they communicate with each other constantly, and if all the traffic lands on a single node it becomes a hot node, which may cause it to fail. By default every node can receive incoming requests, so you can use HAProxy to distribute requests across all data nodes and configure a health check. A health check is a good way to detect a failed node and ensure that incoming requests are no longer sent to it.
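For the "1 live + 2 backup" setup described above, a minimal HAProxy backend sketch could look like this (node names, addresses, and check intervals are assumptions; OpenSearch listens on 9200 by default):

```
# haproxy.cfg fragment -- hypothetical node names and addresses
backend opensearch
    option httpchk GET /_cluster/health
    http-check expect status 200
    default-server inter 3s fall 3 rise 2
    server node1 10.0.0.1:9200 check        # active node, takes all traffic
    server node2 10.0.0.2:9200 check backup # used only if node1 is down
    server node3 10.0.0.3:9200 check backup
```

With the `backup` keyword, HAProxy sends traffic to node2/node3 only when node1 fails its health check, which matches the "one server at a time" test you describe; swapping which server carries the `backup` keyword lets you rotate the active node each day.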
I agree it’s not a good way, but at the moment I’m unable to tell whether my servers are adequate for the load or not.
I don’t mind for the moment forcing a mode where only one server handles the requests, and changing it every day to see the difference. Then of course I’ll use 3 identical servers once I’ve found the right model…
Since you have an LB in front, you can use search preference. Set it to `_local`. Then, if HAProxy decides to hit just one node (while it’s available), that node will take all the search load.
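For example, a search request with the preference set (the index name is a placeholder):

```
# Ask the receiving node to execute the search on its own shard copies
curl -s "http://localhost:9200/my-index/_search?preference=_local" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match_all": {}}}'
```

Without `_local`, the coordinating node would fan the search out to shard copies on the other nodes, so your per-node benchmark numbers would be mixed.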
This assumes all 3 nodes hold all the data (number_of_replicas=2). And preference applies only to searches; indexing load will be about the same on all 3 nodes anyway (unless you use segment replication).
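For every node to hold a full copy of the data, each shard needs 3 copies (1 primary + 2 replicas). If your indices aren’t set up that way yet, the replica count can be changed live (the index name is a placeholder):

```
PUT /my-index/_settings
{
  "index": { "number_of_replicas": 2 }
}
```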