Versions (relevant - OpenSearch/Dashboard/Server OS/Browser): OpenSearch 2.16.0
Describe the issue:
Hi. I’m writing an ML connector blueprint for my own embedding API so I can implement semantic search.
I’ve looked at the predefined blueprints (OpenAI, Cohere, SageMaker), and they all seem to expect the model to be passed a String array in the request body.
However, my embedding API only accepts a single String of text per request.
In this situation, can I still integrate my API with OpenSearch ML?
If so, will there be any problems when using text chunking, since chunking produces multiple strings per document? A rough sketch of the connector I have in mind is included below.
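For context, here is a rough sketch (Python, calling the ML Commons connector create API over REST) of what I’m considering. The endpoint URL, credentials, and my API’s request shape (`{"text": "..."}`) are placeholders specific to my service, and I’m not sure whether a single-string `request_body` like this can actually work for the text-embedding use case, which is essentially my question.

```python
# Rough draft of the connector I have in mind -- a sketch for discussion,
# not a tested blueprint. My API's request format and all URLs/credentials
# below are placeholders.
import requests

OPENSEARCH_URL = "https://localhost:9200"            # my cluster (placeholder)
MY_EMBEDDING_ENDPOINT = "https://example.com/embed"  # my single-text embedding API (placeholder)

connector_body = {
    "name": "My single-text embedding connector",
    "description": "Connector for an embedding API that accepts one String per request",
    "version": 1,
    "protocol": "http",
    "credential": {
        "api_key": "<MY_API_KEY>"  # placeholder
    },
    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": MY_EMBEDDING_ENDPOINT,
            "headers": {
                "Authorization": "Bearer ${credential.api_key}",
                "Content-Type": "application/json"
            },
            # My API takes ONE string, not an array. This is the part I'm unsure
            # about, since the predefined blueprints all send something like
            # {"input": [...]}. ${parameters.input} would be filled at predict time.
            "request_body": "{ \"text\": \"${parameters.input}\" }"
        }
    ]
}

# Register the connector via the ML Commons connector API.
resp = requests.post(
    f"{OPENSEARCH_URL}/_plugins/_ml/connectors/_create",
    json=connector_body,
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,             # self-signed dev cert; not for production
)
print(resp.json())
```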
Configuration:
Relevant Logs or Screenshots: