Dashboards assistant "disabled"/grayed out

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

Describe the issue:

I've followed the docs/tutorials to enable the dashboards assistant on my homelab (a single server running multiple Docker containers, using OpenAI as the external LLM), and it works.

I've then attempted the same in a real-world (work) environment: all the agents work from Dev Tools, and the same ml_commons cluster settings are applied as on my homelab. I've also confirmed the settings in opensearch_dashboards.yml and restarted Dashboards.

The assistant UI widget appears, but it is "disabled"/grayed out.
Another symptom that may be related: all the LLM responses in my conversations are empty strings, yet I do see responses when I hit the agents from Dev Tools.
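For reference, this is roughly the Dev Tools request I use to verify that the agent itself responds (the agent ID is a placeholder; the parameter name matches what my conversational agent expects):

```
POST /_plugins/_ml/agents/<your_agent_id>/_execute
{
  "parameters": {
    "question": "hello"
  }
}
```

This returns a normal response, which is why the empty strings in the chat UI are so puzzling.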

I'm having trouble finding any logs that indicate why the chat UI field is disabled/grayed out.
In this environment I've got Ollama exposed as a public service, configured with the Mistral LLM.

I completely realize this is probably not a "supported" configuration pattern; I'm just looking for any pointers on troubleshooting why the chat UI is "disabled"/grayed out.


opensearch_dashboards.yml:

```yaml
assistant.chat.enabled: true
observability.query_assist.enabled: true
observability.query_assist.ppl_agent_name: "PPL agent"
```

(### below is my URL, which I don't want to share here.)

ml_commons cluster settings:

```json
"plugins": {
  "ml_commons": {
    "rag_pipeline_feature_enabled": "true",
    "agent_framework_enabled": "true",
    "memory_feature_enabled": "true",
    "only_run_on_ml_node": "true",
    "trusted_connector_endpoints_regex": [ "###" ],
    "model_access_control_enabled": "true",
    "native_memory_threshold": "100",
    "trusted_url_regex": "^https://###/.
```

Relevant Logs or Screenshots:

I think I found the code that sets the disabled param on the form field: dashboards-assistant/public/tabs/chat/chat_page.tsx at commit 3991de2f7a732ddd87c29ae30ad03e8ee768cd10 in the opensearch-project/dashboards-assistant repo on GitHub.

So that at least gives me an idea of what is going on there.

I do find that if I hack the form field out of its "disabled" state with the browser dev tools, I am able to submit a "hello" and get the expected response from the agent/LLM.
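For anyone wanting to reproduce that workaround, this is a minimal browser-console sketch of what I do. The selector is a guess and may not match the actual markup of the assistant widget; inspect the real element in your DOM first:

```javascript
// Browser-console sketch: find the (assumed) chat input element and
// clear its "disabled" attribute so a message can be submitted.
function enableChatInput(doc) {
  // Hypothetical selector -- the real chat input may be a different element.
  const input = doc.querySelector('textarea[disabled], input[disabled]');
  if (input) input.removeAttribute('disabled');
  return input;
}

// In the browser console, run: enableChatInput(document);
```

This only unblocks the UI for a single message; it doesn't address whatever check in chat_page.tsx is disabling the field in the first place.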