Evaluation Metrics for ML Commons Supported Algorithms in OpenSearch 3.1.0

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

3.1.0

Describe the issue:

I’m evaluating the ML Commons “supported algorithms” (e.g. Random Cut Forest, K-means, Linear Regression) via the Train and Predict REST APIs. The documentation clearly shows how to train models and generate predictions, but I can’t find any built-in endpoints or parameters for retrieving evaluation metrics (e.g. precision/recall for anomaly detection, silhouette score for clustering, RMSE for regression).
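
For context, here is roughly the flow I’m using, as a Python sketch against the documented K-means Train and Predict endpoints. The localhost URL, index name, and field names follow the iris example from the docs and are assumptions about my setup; a real cluster would need auth/TLS:

```python
import requests

BASE = "http://localhost:9200"  # assumes a local, unauthenticated cluster

# Train a K-means model on an existing index (the documented Train API).
# "iris_data" and the two petal fields are the example dataset from the docs.
train_resp = requests.post(
    f"{BASE}/_plugins/_ml/_train/kmeans",
    json={
        "parameters": {"centroids": 3, "iterations": 10, "distance_type": "COSINE"},
        "input_query": {"_source": ["petal_length_in_cm", "petal_width_in_cm"], "size": 10000},
        "input_index": ["iris_data"],
    },
)
model_id = train_resp.json()["model_id"]

# Generate cluster assignments with the trained model (the documented Predict API).
predict_resp = requests.post(
    f"{BASE}/_plugins/_ml/_predict/kmeans/{model_id}",
    json={
        "input_query": {"_source": ["petal_length_in_cm", "petal_width_in_cm"], "size": 10000},
        "input_index": ["iris_data"],
    },
)
# The response carries prediction_result (column_metas + rows) but, as far as
# I can tell, no evaluation metrics of any kind.
print(predict_resp.json()["prediction_result"])
```
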

  • Question 1: Do any of the supported algorithms expose evaluation metrics as part of their API?

  • Question 2: If not, how are other users measuring model performance in OpenSearch? Are there recommended best practices or community-driven tools for exporting prediction results and calculating metrics externally? (A rough sketch of what I have in mind is below.)
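
Regarding Question 2, the only workaround I’ve found so far is to export the Predict output and score it client-side, e.g. with scikit-learn. A minimal sketch of that idea follows; the model ID, index, and field names are placeholders carried over from the K-means example above, and it naively assumes the two queries return documents in the same order (noted in the comments):

```python
import numpy as np
import requests
from sklearn.metrics import silhouette_score

BASE = "http://localhost:9200"          # same local-cluster assumption as above
MODEL_ID = "REPLACE_WITH_MODEL_ID"      # model_id returned by the Train call
FIELDS = ["petal_length_in_cm", "petal_width_in_cm"]

# 1. Pull the raw feature vectors that the model was scored on.
hits = requests.post(
    f"{BASE}/iris_data/_search",
    json={"_source": FIELDS, "size": 10000},
).json()["hits"]["hits"]
X = np.array([[h["_source"][f] for f in FIELDS] for h in hits])

# 2. Pull the cluster assignments from the Predict API. NOTE: this assumes the
# search above and the predict input_query return documents in the same order;
# a robust version would join both result sets on a stable document key.
pred = requests.post(
    f"{BASE}/_plugins/_ml/_predict/kmeans/{MODEL_ID}",
    json={"input_query": {"_source": FIELDS, "size": 10000}, "input_index": ["iris_data"]},
).json()["prediction_result"]
labels = np.array([row["values"][0]["value"] for row in pred["rows"]])

# 3. Score the clustering client-side, since ML Commons returns no metrics.
print("silhouette score:", silhouette_score(X, labels))
```

The same pattern would presumably work for regression (export predictions, compute RMSE with sklearn.metrics.mean_squared_error), but it feels like something the plugin could expose natively, hence the question.
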
