OpenSearch Multimodal search - pre_process_function

I am trying to use Amazon Bedrock's `amazon.titan-embed-image-v1` model for multimodal search in OpenSearch. As part of this, I tried creating a remote OpenSearch-Bedrock connector, but I am unsure which `pre_process_function` to use when creating the connector.

Below is the `pre_process_function` for Bedrock models from the OpenSearch documentation:

```painless
StringBuilder builder = new StringBuilder();
builder.append("\"");
String first = params.text_docs[0];
builder.append(first);
builder.append("\"");
def parameters = "{" + "\"inputText\":" + builder + "}";
return "{" + "\"parameters\":" + parameters + "}";
```

The above function is only suitable for Bedrock text embedding models, since the request body for fetching a text embedding from Bedrock is:

"{ \"inputText\": \"${parameters.inputText}\" }"

For Bedrock multimodal embeddings, however, the request body contains both text and image inputs:

{ "inputText": "Green iguana on tree branch", "inputImage": input_image }

So, what `pre_process_function` should I use in the connector payload to account for both `inputText` and `inputImage`?

Please assist with some code reference. Thanks in advance.

Hi @Praveen ,

Could you please take a look at this doc and see if that solves your issue?
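In case it helps as a starting point, here is a rough sketch of a custom `pre_process_function` for the multimodal case. This is an untested sketch with assumptions: it assumes the text arrives as `params.text_docs[0]` and the base64-encoded image as `params.text_docs[1]`, which depends on how your ingest pipeline or query populates `text_docs`, so please verify against the documentation for your version.

```painless
// Sketch only: assumes text_docs[0] = text, text_docs[1] = base64-encoded image.
StringBuilder parameters = new StringBuilder("{");
if (params.text_docs.size() > 0 && params.text_docs[0] != null) {
    parameters.append("\"inputText\":\"").append(params.text_docs[0]).append("\"");
}
if (params.text_docs.size() > 1 && params.text_docs[1] != null) {
    if (parameters.length() > 1) {
        parameters.append(",");
    }
    parameters.append("\"inputImage\":\"").append(params.text_docs[1]).append("\"");
}
parameters.append("}");
return "{\"parameters\":" + parameters.toString() + "}";
```

Also worth checking: recent OpenSearch versions ship built-in pre-process helpers for Bedrock (e.g. `connector.pre_process.bedrock.multimodal_embedding`, if available in your version), which may save you from writing this by hand.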

