Load and Deploy a 20GB Language Model from a Local File

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
I'm using OpenSearch version 2.11.0.

Describe the issue:
I need to upload a custom language model (T5), which is around 20 GB. I have followed the instructions in https://opensearch-project.github.io/opensearch-py-ml/examples/demo_deploy_cliptextmodel.html and [GitHub - aws-samples/semantic-search-with-amazon-opensearch]. The size of the zipped model is around 20 GB. I can register the model, but it fails to deploy.

Error: Exception: Model file size exceeds the limit of 4GB.

My shard maximum is 1000. How can I use a large language model in OpenSearch?


Configuration:
3 nodes
max shards: 1000

Relevant Logs or Screenshots:

Hi, unfortunately we don't support locally hosted models larger than the size limit. You can use the ML extensibility feature instead: Connecting to remote models - OpenSearch documentation.

The idea is that you host your LLM externally and then connect that model to your OpenSearch cluster via a connector.
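To make that concrete, here is a minimal sketch of what a connector definition might look like. The connector-create endpoint (`POST /_plugins/_ml/connectors/_create`) is the documented ML Commons API; everything else — the endpoint URL, the header names, and the request-body template — is a placeholder you would replace with your own model host's details:

```python
import json

# Sketch of a connector payload for a self-hosted model.
# All URLs, names, and credentials below are placeholders, not real values.
connector_payload = {
    "name": "remote-t5-connector",
    "description": "Connector to an externally hosted T5 model",
    "version": 1,
    "protocol": "http",  # use "aws_sigv4" for SageMaker/Bedrock-style hosts
    "parameters": {
        "endpoint": "https://my-model-host.example.com",  # placeholder host
    },
    "credential": {
        "api_key": "<your-api-key>",  # placeholder credential
    },
    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": "https://my-model-host.example.com/v1/predict",  # placeholder
            "headers": {"Content-Type": "application/json"},
            # ${parameters.inputs} is substituted by ML Commons at predict time
            "request_body": "{ \"inputs\": \"${parameters.inputs}\" }",
        }
    ],
}

def connector_create_request(payload: dict) -> tuple[str, str]:
    """Return the (path, JSON body) pair for the ML Commons connector-create call."""
    return "/_plugins/_ml/connectors/_create", json.dumps(payload)

path, body = connector_create_request(connector_payload)
# Send this with your preferred HTTP client against the cluster, e.g.:
#   client.transport.perform_request("POST", path, body=json.loads(body))
```

After the connector is created, you register a remote model that references the returned `connector_id`, then deploy it as usual; inference requests are forwarded to your external host instead of loading the 20 GB model into the cluster.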

Please let me know if you have any further questions on this. Thanks.
