Hello OpenSearch Community,
I am working on setting up the OpenSearch Assistant feature in my OpenSearch 3.1.0 environment. My goal is to use a locally hosted Large Language Model, and I’m running into an IllegalArgumentException when I try to execute my conversational agent.
Here is a summary of my setup and the issue I’m facing:
Environment:
- OpenSearch Version: 3.1.0
- LLM: Deepseek R1 1.5B, running locally.
What I Have Accomplished So Far:
- Model Registration: I have successfully connected my local Deepseek model to OpenSearch as a remote model.
- Direct Model Inference: I can successfully send requests to the model directly via the `_predict` API, and it returns valid responses. This confirms that the connection between OpenSearch and my local LLM is working correctly.
- Agent Creation: I have created an ML Agent that is configured to use this registered model.
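For context, the connector behind the registered model follows the standard OpenAI-compatible chat-completions blueprint. The sketch below is simplified: the endpoint URL is a placeholder for my local server and I have trimmed headers/credentials, but the request_body matches what appears in the error message later:

```json
POST /_plugins/_ml/connectors/_create
{
  "name": "Local Deepseek chat connector",
  "description": "Connector to a locally hosted deepseek-r1:1.5b endpoint",
  "version": 1,
  "protocol": "http",
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "http://localhost:11434/v1/chat/completions",
      "headers": { "Content-Type": "application/json" },
      "request_body": "{ \"model\": \"deepseek-r1:1.5b\", \"messages\": ${parameters.messages} }"
    }
  ]
}
```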
The Problem
While direct model inference works, I encounter an error when I try to interact with the agent using the `_execute` API.
Here is the successful direct model inference call using `_predict`:
Request:
```json
POST /_plugins/_ml/models/<your_model_id>/_predict
{
  "parameters": {
    "messages": [
      { "role": "user", "content": "Hello" }
    ]
  }
}
```
Successful Response:
JSON{ "inference_results": [ { "output": [ { "name": "response", "dataAsMap": { "id": "chatcmpl-fa68bc91-48d2-43ee-aaad-0870288b74ec", "model": "deepseek-r1:1.5b", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "<think>\n\n</think>\n\nHello! How can I assist you today? 😊" }, "finish_reason": "stop" } ] } } ], "status_code": 200 } ] }
However, when I try to use the agent with the `_execute` API, it fails.
Failing Agent Execution Request:
```json
POST /_plugins/_ml/agents/<your_agent_id>/_execute
{
  "parameters": {
    "question": "what's the population increase of Seattle from 2021 to 2023"
  }
}
```
Error Response:
JSON{ "status": 400, "error": { "type": "IllegalArgumentException", "reason": "Invalid Request", "details": "Invalid payload: { \"model\": \"deepseek-r1:1.5b\", \"messages\": ${parameters.messages} }" } }
The error message, specifically `Invalid payload: { ... "messages": ${parameters.messages} }`, suggests that the `${parameters.messages}` variable is not being resolved before the request body is sent to the model. The agent appears to pass the variable name through as a literal string instead of interpolating the conversation history.
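For comparison, I would expect the templated request body to resolve to a concrete payload like the following before it is sent to the model (illustrative values only):

```json
{
  "model": "deepseek-r1:1.5b",
  "messages": [
    { "role": "user", "content": "what's the population increase of Seattle from 2021 to 2023" }
  ]
}
```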
Backend Logs:
The OpenSearch backend logs show a series of IllegalArgumentException, StringIndexOutOfBoundsException, and NullPointerException errors around the time of the agent execution requests.
{"type":"log","@timestamp":"2025-07-18T14:39:33Z","tags":["error","opensearch","data"],"pid":1,"message":"[IllegalArgumentException]: Invalid Request"} {"type":"log","@timestamp":"2025-07-18T14:40:22Z","tags":["error","opensearch","data"],"pid":1,"message":"[IllegalArgumentException]: Invalid Request"} {"type":"log","@timestamp":"2025-07-18T14:53:19Z","tags":["error","opensearch","data"],"pid":1,"message":"[IllegalArgumentException]: Invalid Request"} {"type":"log","@timestamp":"2025-07-18T14:55:14Z","tags":["error","opensearch","data"],"pid":1,"message":"[IllegalArgumentException]: Invalid Request"} {"type":"log","@timestamp":"2025-07-18T14:15:01Z","tags":["error","opensearch","data"],"pid":1,"message":"[StringIndexOutOfBoundsException]: System Error"} {"type":"log","@timestamp":"2025-07-18T14:19:46Z","tags":["error","opensearch","data"],"pid":1,"message":"[NullPointerException]: System Error"}
My Question for the Community:
- Could there be a misconfiguration in my agent definition? Specifically, how should the `llm` block and `prompt.template` be structured to ensure the `${parameters.messages}` variable is correctly handled for a remote, self-hosted model?
- Does the agent framework require a specific request format that I am missing in the `_execute` call?
- Has anyone successfully configured a similar setup with a local model, and could you share an example of your agent definition? For reference, my current definition is sketched after this list.
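To make the question concrete, here is roughly how my agent is registered. This is a simplified sketch: the model_id is a placeholder and I have omitted the tool list and other settings to keep it short:

```json
POST /_plugins/_ml/agents/_register
{
  "name": "Local Deepseek conversational agent",
  "type": "conversational",
  "description": "Agent backed by a locally hosted deepseek-r1:1.5b model",
  "llm": {
    "model_id": "<your_model_id>",
    "parameters": {
      "max_iteration": 5,
      "response_filter": "$.choices[0].message.content"
    }
  },
  "memory": {
    "type": "conversation_index"
  }
}
```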
Any help or guidance on how to resolve this IllegalArgumentException would be greatly appreciated. Thank you in advance!