GenerativeQAResponseProcessor failed in precessing response

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser): 2.12.0

Describe the issue:

I have run through this: Conversational search - OpenSearch Documentation and all seems to work.

Setting it up to use my own index, I get a 500 error:

GenerativeQAResponseProcessor failed in precessing response

Just wondering if anybody knows what this means?

Configuration:

Relevant Logs or Screenshots:

For a 500 error you have to check your log files, which should give the proper cause of the error.

I am running this in Docker without -d, and the logs have not been much help. I have actually worked out what the problem was: even though I thought I was deploying the model (doing the setup via a script), it was not being deployed. Deploying the model manually afterwards seems to have made it work.
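For anyone hitting the same thing, a minimal sketch of what "deploying manually" can look like with the ML Commons REST API (replace `<model_id>` with the ID returned when you registered your model; the index/field names are not from this thread):

```json
POST /_plugins/_ml/models/<model_id>/_deploy
```

You can then confirm it actually deployed before wiring it into a pipeline:

```json
GET /_plugins/_ml/models/<model_id>
```

The response includes a `model_state` field; it should read `DEPLOYED` rather than `REGISTERED` or `DEPLOY_FAILED`.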

I do still get this error occasionally. I have an index running with neural search and a search pipeline that uses a Hugging Face model for the embeddings and OpenAI for the chat completion. Generally it works quite well, but I do still get the error I first mentioned sometimes.

Hi @matpol, I get that same error. What do you mean by ‘deploying the model manually’, please?

Depending on what you are doing, you can test whether the model is working by using Postman or similar. If you look here: Neural search tutorial - OpenSearch Documentation, it tells you how to test an embedding model. It also tells you how to deploy it. Similar for a search pipeline model: Conversational search - OpenSearch Documentation
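As a sketch of the kind of test the tutorial describes, you can call the predict API directly on the embedding model (again `<model_id>` is a placeholder for your own model's ID; the sample sentence is arbitrary):

```json
POST /_plugins/_ml/_predict/text_embedding/<model_id>
{
  "text_docs": ["today is sunny"],
  "return_number": true,
  "target_response": ["sentence_embedding"]
}
```

If the model is deployed and healthy you get back a vector; if not, the error returned here is usually much more specific than the 500 from the search pipeline.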

Thank you for your help

I think another reason you may get this error is that the context contains too many tokens. I have reduced the context value from 5 to 2 and I get the error a lot less.
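For reference, the value being reduced here is the `context_size` parameter in `generative_qa_parameters`, which caps how many search results are passed to the LLM as context. A hedged sketch of where it sits in the search request (the index, pipeline, field, and question below are made-up examples, not from this thread):

```json
GET /my_rag_index/_search?search_pipeline=my_rag_pipeline
{
  "query": {
    "neural": {
      "embedding_field": {
        "query_text": "What is conversational search?",
        "model_id": "<embedding_model_id>",
        "k": 5
      }
    }
  },
  "ext": {
    "generative_qa_parameters": {
      "llm_model": "gpt-3.5-turbo",
      "llm_question": "What is conversational search?",
      "context_size": 2,
      "timeout": 60
    }
  }
}
```

With `context_size: 2`, only the top two hits are sent to the LLM, which keeps the prompt well under the model's token limit even when individual documents are long.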