External Hosted Model Error: Invalid payload

Versions (OpenSearch Dashboards v3.0.0):

Describe the issue:
An Azure OpenAI external host connector is added, but the model returns IllegalArgumentException: Invalid payload: { \"messages\": ${parameters.messages}, \"temperature\": 0.7 }
Strangely, this issue appears when I test the model following this guide: RAG chatbot with a conversational flow agent - OpenSearch Documentation
But when I use the test described in the blueprint, it works fine.
I also tried to register an agent with the config below, but it throws an error as well.
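
For reference, the test described in the blueprint that works is a simple predict call along these lines (model ID and message are placeholders):

POST /_plugins/_ml/models/your_model_id/_predict
{
  "parameters": {
    "messages": [
      { "role": "user", "content": "Hello!" }
    ]
  }
}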

Configuration:
I followed the connector blueprint: ml-commons/docs/remote_inference_blueprints/azure_openai_connector_chat_blueprint.md at main · opensearch-project/ml-commons · GitHub

Relevant Logs or Screenshots:

  1. Agent Configuration
POST /_plugins/_ml/agents/_register
{
  "name": "Plan execute and reflect agent",
  "type": "plan_execute_and_reflect",
  "description": "this is a test agent",
  "llm": {
    "model_id": "dMEk-JYBuC4BoDwKx4IS"
  },
  "memory": {
    "type": "conversation_index"
  },
  "parameters": {
    "_llm_interface": "openai/v1/chat/completions"
  },
  "tools": [
    {
      "type": "ListIndexTool"
    },
    {
      "type": "SearchIndexTool"
    },
    {
      "type": "IndexMappingTool"
    }
  ],
  "app_type": "os_chat"
}
  2. Connector Config

Hi @wijamw

Simple predict calls and agent use require the request_body to be configured differently.

To use the agent with your OpenAI connector, you need to ensure that the connector is configured appropriately. Specifically, the following fields must be present in your request_body, formatted the way Azure OpenAI accepts them (a sketch follows the list below):

- ${parameters.system_prompt} -- where the system prompt is passed in the body
- ${parameters._chat_history:-} -- where the chat history is passed
- ${parameters.prompt} -- where the actual user prompt is provided
- ${parameters._interactions:-} -- where interactions between tool uses are inserted
- ${parameters.tool_configs:-} -- where the tools are provided to the model via function calling
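
For example, with an OpenAI-style chat completions endpoint, a request_body wired for agent use looks roughly like this (a sketch only; the model field applies to the plain OpenAI API and is not needed when the deployment is encoded in an Azure URL):

"request_body": "{ \"model\": \"${parameters.model}\", \"messages\": [{\"role\":\"system\",\"content\":\"${parameters.system_prompt}\"},${parameters._chat_history:-}{\"role\":\"user\",\"content\":\"${parameters.prompt}\"}${parameters._interactions:-}]${parameters.tool_configs:-} }"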

Refer to this document for how you can configure openai-gpt-4o with this agent:

It should be similar for Azure OpenAI as well. Let me know if you need any additional help.

Thank you. I did what you recommended and used the system_prompt parameter during agent creation, but I got this error when I tested the agent.

I tried to look around but could not find any articles about this issue.
Do you have any insight into what the problem could be here? My configuration, or the AI model's endpoint?

Can you please share the latest connector/model creation and agent creation scripts?

The error implies that the host is unreachable, which probably means there is an issue with the connector setup.

I apologise for not including it before.
Agent creation script:

POST /_plugins/_ml/agents/_register
{
  "name": "My Plan Execute Reflect Agent",
  "type": "plan_execute_and_reflect",
  "description": "Agent for dynamic task planning and reasoning",
  "llm": {
    "model_id": "input_your_model_id",
    "parameters": {
      "prompt": "${parameters.question}"
    }
  },
  "memory": {
    "type": "conversation_index"
  },
  "parameters": {
    "_llm_interface": "openai/v1/chat/completions",
    "system_prompt": "You are part of an OpenSearch cluster. When you deliver your final result, include a comprehensive report. This report MUST:\n1. List every analysis or step you performed.\n2. Summarize the inputs, methods, tools, and data used at each step.\n3. Include key findings from all intermediate steps — do NOT omit them.\n4. Clearly explain how the steps led to your final conclusion.\n5. Return the full analysis and conclusion in the 'result' field, even if some of this was mentioned earlier.\n\nThe final response should be fully self-contained and detailed, allowing a user to understand the full investigation without needing to reference prior messages. Always respond in JSON format."
  },
  "tools": [
    { "type": "ListIndexTool" },
    { "type": "SearchIndexTool" },
    { "type": "IndexMappingTool" }
  ],
  "app_type": "os_chat"
}
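
For reference, I test the agent with an execute call along these lines (agent ID and question are placeholders):

POST /_plugins/_ml/agents/your_agent_id/_execute
{
  "parameters": {
    "question": "Which index has the most documents?"
  }
}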

Connector Settings:

{
  "name": "azure-openai-agent-connector",
  "version": "1",
  "description": "Connector for Azure OpenAI using GPT-4.1",
  "protocol": "http",
  "parameters": {
    "api-version": "2024-12-01-preview",
    "endpoint": "resource_endpoint.openai.azure.com",
    "temperature": "0.7",
    "model": "gpt-4.1",
    "deploy-name": "deployment_name"
  },
  "actions": [
    {
      "action_type": "PREDICT",
      "method": "POST",
      "url": "https://${parameters.endpoint}/openai/deployments/${parameters.deployment-name}/chat/completions?api-version=${parameters.api-version}",
      "headers": {
        "api-key": "${credential.api_key}"
      },
      "request_body": """{ "messages": [ {"role":"system","content":"${parameters.system_prompt}"},${parameters._chat_history:-}{"role":"user","content":"${parameters.user_prompt}"} ], "temperature": ${parameters.temperature}, "max_tokens": 4096 }"""
    }
  ],
  "created_time": 1750826267478,
  "last_updated_time": 1750826267478
}

Your request body should look like this:

"request_body": "{ \"model\": \"${parameters.model}\", \"messages\": [{\"role\":\"developer\",\"content\":\"${parameters.system_prompt}\"},${parameters._chat_history:-}{\"role\":\"user\",\"content\":\"${parameters.prompt}\"}${parameters._interactions:-}]${parameters.tool_configs:-} }"

Please also ensure that prompt, _interactions, and tool_configs are part of the request body. The :- suffix (as in ${parameters._chat_history:-}) marks a placeholder as optional, so it resolves to an empty string when the parameter is not supplied.

Secondly, ${parameters.deployment-name} in your URL has no corresponding parameter; your parameters block defines deploy-name instead. Because of this, the URL being hit is probably malformed or not found. Please make sure the URL is accurate: internally, the parameter values are substituted into the URL. If you wish, you can put in the entire URL itself without using parameters.
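
Putting this together, a corrected PREDICT action could look roughly like the following. This is a sketch, not a verified config: it keeps your deploy-name parameter (so the URL and the parameters block now match) and uses the system role, which Azure OpenAI chat completions accepts; adjust if your api-version expects developer instead.

"actions": [
  {
    "action_type": "PREDICT",
    "method": "POST",
    "url": "https://${parameters.endpoint}/openai/deployments/${parameters.deploy-name}/chat/completions?api-version=${parameters.api-version}",
    "headers": {
      "api-key": "${credential.api_key}"
    },
    "request_body": """{ "messages": [ {"role":"system","content":"${parameters.system_prompt}"},${parameters._chat_history:-}{"role":"user","content":"${parameters.prompt}"}${parameters._interactions:-} ]${parameters.tool_configs:-} }"""
  }
]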