Error: interactionId is required, in Assistant Dashboards

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser): 2.18.0

Describe the issue:
Hi,

I have a local RAG application that works perfectly with an OpenAI API key to use an LLM. I am trying to use the dashboards assistant plugin as an interface for this previous work. I have created a connector and an agent, and linked that agent to “os_chat_root_agent”.

The first question always works correctly, but when I keep asking the model questions, it works fine locally, while in the Dashboards app I see the error shown in the image:
Error from response
InteractionId is required

I have tried many things, such as modifying my application so that an interactionId is always generated, but it keeps behaving the same.

Does anyone know how to solve this?
Thanks

Relevant Logs or Screenshots:

@alexc can you send the commands you used to create the models, agent, and connectors, please? Also, have you tried this with any other APIs, like Ollama or Anthropic?

Hi @Anthony, thanks for your reply.

I have tried Ollama, but the models I used were not very good, so I decided to go with OpenAI.

Here is how I created the models, agents, and connectors:

Connector:

{
  "name": "mychat",
  "description": "Proxy to my FastAPI /chat",
  "version": 1,
  "protocol": "http",
  "parameters": {
    "endpoint": "http://127.0.0.1:8000/chat"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "${parameters.endpoint}",
      "headers": { "Content-Type": "application/json" },
      "request_body": "{\"prompt\":\"${parameters.prompt}\"}",
      "response_filter": "$.response"
    }
  ]
}

Model:

{
  "name": "mymodel",
  "function_name": "remote",
  "connector_id": "previous_id_connector",
  "description": "..."
}

Agent:

{
  "name": "myagent",
  "type": "conversational_flow",
  "description": "...",
  "app_type": "os_chat",
  "memory": { "type": "conversation_index" },
  "tools": [
    {
      "type": "MLModelTool",
      "name": "response",
      "parameters": {
        "model_id": "previous_id_model",
        "prompt": "${parameters.question}",
        "response_field": "response"
      },
      "include_output_in_agent_response": true
    }
  ]
}
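For reference, I post these bodies to the ml-commons create/register APIs, chaining each returned ID into the next body (connector_id into the model, model_id into the agent):

```
# Create the connector (connector body above)
POST /_plugins/_ml/connectors/_create

# Register the model (model body above, connector_id from the connector response)
POST /_plugins/_ml/models/_register

# Register the agent (agent body above, model_id from the model response)
POST /_plugins/_ml/agents/_register
```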

Then I register the agent as the os_chat_root_agent. Running these AI models locally, they work perfectly. However, the error above persists: the Assistant Dashboards UI requires an interactionId after the first answer. I haven't been able to fix this.

Thanks @Anthony

@alexc This appears to be a bug in how OpenSearch Dashboards handles conversation memory in OS2.18.

I tested this in OS3.3 using the following configuration (wrapping a flow agent around your conversational agent) and it works as expected:

# For ease of testing
PUT /_cluster/settings
{
  "persistent": {
    "plugins.ml_commons.only_run_on_ml_node": false
  }
}

# Create connector
POST /_plugins/_ml/connectors/_create
{
  "name": "OpenAI Chat Connector",
  "description": "Connector for OpenAI with conversation memory",
  "version": 1,
  "protocol": "http",
  "parameters": {
    "endpoint": "api.openai.com",
    "model": "gpt-4o-mini"
  },
  "credential": {
    "openAI_key": "openAI_KEY"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://${parameters.endpoint}/v1/chat/completions",
      "headers": {
        "Authorization": "Bearer ${credential.openAI_key}",
        "Content-Type": "application/json"
      },
      "request_body": "{ \"model\": \"${parameters.model}\", \"messages\": [{\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},{\"role\":\"user\",\"content\":\"${parameters.prompt}\"}] }"
    }
  ]
}

# Register a model
POST /_plugins/_ml/models/_register
{
  "name": "OpenAI gpt-4o-mini",
  "function_name": "remote",
  "connector_id": "<connector_ID>"
}

# Register an agent (conversational_flow)
POST /_plugins/_ml/agents/_register
{
  "name": "OpenAI Chat Agent",
  "type": "conversational_flow",
  "description": "Conversational agent with memory",
  "app_type": "os_chat",
  "memory": {
    "type": "conversation_index"
  },
  "tools": [
    {
      "type": "MLModelTool",
      "name": "chat_model",
      "description": "General purpose chat model",
      "parameters": {
        "model_id": "<Model_ID>",
        "prompt": "${parameters.chat_history:-}Human: ${parameters.question}\n\nAssistant:"
      },
      "include_output_in_agent_response": true
    }
  ]
} 


# Register a second agent (wrapper)
POST /_plugins/_ml/agents/_register
{
  "name": "Dashboard Root Agent",
  "type": "flow",
  "description": "Root agent for Dashboard Assistant",
  "tools": [
    {
      "type": "AgentTool",
      "name": "LLMResponseGenerator",
      "parameters": {
        "agent_id": "<Agent_ID_conversational>"
      },
      "include_output_in_agent_response": true
    }
  ],
  "memory": {
    "type": "conversation_index"
  }
}

# Point the Dashboards assistant at the wrapper flow agent
PUT /.plugins-ml-config/_doc/os_chat
{
  "type": "os_chat_root_agent",
  "configuration": {
    "agent_id": "<Agent_ID_flow>"
  }
}
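To confirm the memory works outside the UI, you can also call the conversational agent directly via the agent execute API; pass the memory_id returned by the first call on follow-up turns (IDs below are placeholders):

```
# First turn (creates a new memory)
POST /_plugins/_ml/agents/<Agent_ID_conversational>/_execute
{
  "parameters": {
    "question": "What is OpenSearch?"
  }
}

# Follow-up turn (reuses the memory returned above)
POST /_plugins/_ml/agents/<Agent_ID_conversational>/_execute
{
  "parameters": {
    "question": "What did I just ask you?",
    "memory_id": "<memory_id_from_first_response>"
  }
}
```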

You should now be able to ask the AI Assistant multiple consecutive questions. Hope this helps.

Thanks @Anthony !!! I’m really close now.

Now that I understand the process, how can I introduce the Llama model hosted locally, which is the one doing the embeddings?

The process my locally built FastAPI follows is: embed the user's question, compare the question's embedding with the ones stored in each document of an index (which were also built with the same Llama embedding model), and use the OpenAI model to answer.
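For context, if the local Llama embedding model were registered in ml-commons (e.g., via its own HTTP connector), the embed-and-compare step I describe could be moved into OpenSearch itself with a text_embedding ingest pipeline and a neural query — this is a sketch, and all index, pipeline, and model names below are hypothetical:

```
# Hypothetical ingest pipeline that embeds documents with the local model
PUT /_ingest/pipeline/rag-embed-pipeline
{
  "processors": [
    {
      "text_embedding": {
        "model_id": "<local_llama_embedding_model_id>",
        "field_map": { "text": "text_embedding" }
      }
    }
  ]
}

# Hypothetical retrieval: embed the question with the same model and run k-NN
GET /my-rag-index/_search
{
  "query": {
    "neural": {
      "text_embedding": {
        "query_text": "user question here",
        "model_id": "<local_llama_embedding_model_id>",
        "k": 5
      }
    }
  }
}
```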

Thanks!!!