Deploying custom TEXT_EMBEDDING model fails with "model content changed" despite matching SHA-256

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

  • OpenSearch Dashboards: v3.1.0
  • Browser: Chrome

Describe the issue:

When trying to deploy a custom model from the OpenSearch Dashboards DevTools console, using
_deploy or _load, I get this error:

"error": """{"6QRZMXpNSAS5HIQRKHag1A":"model content changed"}"""

I tried using only a single node and letting OpenSearch chunk the model itself, and I checked the hash value by downloading my model from the URL to make sure it had not changed, but the error is still there.

Cluster ML settings

{
  "plugins.ml_commons.allow_registering_model_via_url": true,
  "plugins.ml_commons.only_run_on_ml_node": false
}
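
For completeness, those two settings were applied along these lines (a sketch; in my case as persistent cluster settings):

PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.allow_registering_model_via_url": true,
    "plugins.ml_commons.only_run_on_ml_node": false
  }
}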

More details
I’m trying to deploy a custom TEXT_EMBEDDING model via DevTools (_deploy / _load). The deployment consistently fails with:

"error": """{"<node_id>":"model content changed"}"""

The SHA-256 of the model served by the URL matches the model_content_hash_value I register with, and the file size also matches.

Steps to reproduce

  1. Register the model
POST _plugins/_ml/models/_register
{
  "name": "tenderbert-model",
  "version": "0.0.8",
  "model_group_id": "9zOWYZkBkPSFK9bhhcJ8",
  "description": "TenderBERT model for construction domain",
  "function_name": "TEXT_EMBEDDING",
  "model_format": "TORCH_SCRIPT",
  "model_content_size_in_bytes": 817806115,
  "model_content_hash_value": "f241c56af1e83a91fbb3ed4d386b969308ab7fa726ac5cd7bffd1eaec3d3f613",
  "url": "https://xxx/tenderbert_opensearch_19092025.zip",
  "model_config": {
    "model_type": "bert",
    "embedding_dimension": 768,
    "framework_type": "sentence_transformers",
    "all_config": "{ ... large HF config json ... }"
  }
}

  2. Deploy the model
POST _plugins/_ml/models/<MODEL_ID>/_deploy

  3. Actual result
    Deployment task ends in FAILED; the task record (read as sketched after this list) shows:
"error": """{"<node_id>":"model content changed"}"""

Expected result
Model deploys successfully.

What I have tried

  • Verified the exact bytes from the model URL on a Linux host:

    curl -L "https://xxx/tenderbert_opensearch_19092025.zip" -o /tmp/tenderbert.zip
    sha256sum /tmp/tenderbert.zip   # f241c56a...3d3f613
    stat -c%s /tmp/tenderbert.zip   # 817806115
    
    

    → Matches model_content_hash_value and model_content_size_in_bytes.

  • Deployed with the default plan (no node_ids), and also tried targeting a single node. Same result.

  • Re-registered under a new version (new model_id) and re-deployed. Same result.

  • Set plugins.ml_commons.only_run_on_ml_node to false so data nodes can run the model (the cluster has no dedicated ML nodes). Same result.

  • I cannot query system indices like .plugins-ml-model-chunk from DevTools due to proxy/method restrictions (e.g., Method Not Allowed), so I’m unable to confirm the chunk count/mapping from my side.
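
Since the system indices are off-limits to me, the closest cross-check I can run from DevTools is against the model metadata itself; if I read the API correctly, the response echoes back the registered hash, size, state and (I believe) a total_chunks count, which I can at least compare with my local download (sketch):

GET _plugins/_ml/models/<MODEL_ID>

The model_content_hash_value and model_content_size_in_bytes fields in that response should line up with the sha256sum/stat output above.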

Diagnostics I can provide if needed

  • Output of:

    • GET _plugins/_ml/models/<MODEL_ID>

    • GET _cat/plugins?v

    • GET _nodes?filter_path=nodes.*.name,nodes.*.roles

    • GET _cluster/settings?include_defaults=true&flat_settings=true

  • Any other logs or endpoints you suggest.
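
If it helps, I can collect all of those in one go from a host with direct cluster access, roughly like this (a sketch; the endpoint and credentials below are placeholders for my environment):

    OS_URL="https://my-cluster:9200"    # placeholder endpoint
    AUTH='-u admin:<password> -k'       # placeholder credentials; -k only because of a self-signed cert
    curl -s $AUTH "$OS_URL/_plugins/_ml/models/<MODEL_ID>" > model.json
    curl -s $AUTH "$OS_URL/_cat/plugins?v" > plugins.txt
    curl -s $AUTH "$OS_URL/_nodes?filter_path=nodes.*.name,nodes.*.roles" > nodes.json
    curl -s $AUTH "$OS_URL/_cluster/settings?include_defaults=true&flat_settings=true" > settings.json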

Thanks in advance for any guidance.
