How to separate ML ingestion workload from search workload

OpenSearch 2.11.0

How can I separate ML ingestion workloads from search workloads on the same index? My cluster has 2 master nodes, 2 data nodes, and 2 (ingest + ML) nodes.

I created both sparse and dense indices and noticed very high CPU utilization during ingestion, which drastically slowed down searches on those indices.

Any recommendations?

Also, is there a way to force ingestion pipelines to use specific ML nodes, so that other ML nodes always stay free for search workloads?

Thanks

Hi @asfoorial, you can set plugins.ml_commons.allow_custom_deployment_plan to true and then deploy a model to specific nodes. That can help shift some of the ML model workload. However, the ingestion and search workloads are hard to split completely, since they operate on the same index.
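For illustration, a minimal sketch of what that could look like (the model ID and node ID below are placeholders; substitute the values from your own cluster):

```
# Allow deploying models to specific nodes instead of all eligible ML nodes
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.allow_custom_deployment_plan": true
  }
}

# Deploy the model only to the chosen ML node(s)
# <model_id> and <ml_node_id> are placeholders
POST /_plugins/_ml/models/<model_id>/_deploy
{
  "node_ids": ["<ml_node_id>"]
}
```

You can look up node IDs and roles with `GET /_cat/nodes?v&h=id,name,node.role` to decide which node should host the ingestion-side model and which should stay free for search.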

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.