[Feedback] Conversational Search and Retrieval Augmented Generation Using Search Pipeline - Experimental Release

In OpenSearch 2.10, we launched two new features that bring generative AI capabilities to OpenSearch. The first is Memory, a building block that lets search applications and agents store and retrieve conversational history. The second is a new search processor for RAG (Retrieval-Augmented Generation), which combines search results, large language models, and conversational memory to answer users’ questions. RAG in OpenSearch relies on the remote inference framework and the connector feature. When you put all of these pieces together to have conversations over your data, we also recommend trying hybrid search, combining BM25 and k-NN, to get the most out of OpenSearch.
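As a sketch of how these pieces fit together, a RAG search pipeline can be created with the `retrieval_augmented_generation` response processor. The pipeline name, model ID, and field names below are placeholders, not values from this thread:

```json
PUT /_search/pipeline/rag_pipeline
{
  "response_processors": [
    {
      "retrieval_augmented_generation": {
        "tag": "conversation_demo",
        "description": "Demo pipeline for conversational search",
        "model_id": "<remote model ID from the connector setup>",
        "context_field_list": ["text"],
        "system_prompt": "You are a helpful assistant",
        "user_instructions": "Answer the question using the provided search results"
      }
    }
  ]
}
```

At query time, the question and conversation parameters are passed in the search request’s `ext.generative_qa_parameters` object, so the processor can combine the retrieved hits with the LLM call.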

We look forward to the community’s feedback on these features. We are excited to make them available in the 2.10 release and to have people try out conversational search. We think this new mode of interacting with data helps users get better search results. Please try it out and help us make it even better.

For a more detailed discussion on this, you can check out our RFC - https://github.com/opensearch-project/ml-commons/issues/1150.

How can I build a RAG setup with a model other than OpenAI, Cohere, or SageMaker?
Can I use a Hugging Face transformer or BERT model for predicting sentences, without any Hugging Face key?

I am using version 2.10.

If yes, how do I build an HTTP connector for it? Or how can I load the model?

I tried the request below:

POST /_plugins/_ml/models/_upload

{
  "name": "huggingface/TheBloke/vicuna-13B-1.1-GPTQ",
  "version": "1.0.1",
  "model_format": "TORCH_SCRIPT"
}

But after registering the model, I get the error below. Can I load any other LLM for RAG?

{
    "error": {
        "root_cause": [
            {
                "type": "m_l_exception",
                "reason": "plugins.ml_commons.rag_pipeline_feature_enabled is not enabled."
            }
        ],
        "type": "m_l_exception",
        "reason": "plugins.ml_commons.rag_pipeline_feature_enabled is not enabled."
    },
    "status": 500
}

I have already enabled RAG, but I still get the above error:

PUT /_cluster/settings

{
  "persistent": {
    "plugins.ml_commons.rag_pipeline_feature_enabled": "true"
  }
}

This is fixed by adding the model’s endpoint to the list of trusted connector endpoints.
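For reference, trusted endpoints are configured through the `plugins.ml_commons.trusted_connector_endpoints_regex` cluster setting. The regex patterns below are only examples, not the exact values used in this thread:

```json
PUT /_cluster/settings
{
  "persistent": {
    "plugins.ml_commons.trusted_connector_endpoints_regex": [
      "^https://api\\.openai\\.com/.*$",
      "^https://api-inference\\.huggingface\\.co/.*$"
    ]
  }
}
```

Note that this setting replaces the existing list rather than appending to it, so include every endpoint your connectors need in one request.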

Some examples in the conversational search documentation are wrong; please correct them.

  1. Extra space in the OpenAI connector’s request_body, in the temperature value

  2. The model group creation endpoint is wrong; it should be:

POST /_plugins/_ml/model_groups/_register

https://www.awesomescreenshot.com/image/44260054?key=33f05fb84a8c6a60a640407a927edb23
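For anyone following along, a minimal request against that endpoint looks like the following; the name and description here are placeholders:

```json
POST /_plugins/_ml/model_groups/_register
{
  "name": "conversational_search_models",
  "description": "Model group for conversational search experiments"
}
```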

It would be nice if RAG could use highlight results in addition to the _source fields.

@sribalajivelan , these two are fixed now.