In OpenSearch 2.10, we launched two new features that bring generative AI capabilities to OpenSearch. The first, Memory, is a building block that lets search applications and agents store and retrieve conversational history. The second is a new search processor for retrieval-augmented generation (RAG), which combines search results, large language models, and conversational memory to answer users' questions. RAG in OpenSearch relies on the remote inference framework and the connector feature. When you put these pieces together to have conversations over your data, we also recommend trying hybrid search, which combines BM25 and k-NN, to get the most out of OpenSearch.
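As a sketch of how these pieces fit together (the pipeline name, index name, field names, model ID, and questions below are placeholders, not values from this post), you can create a search pipeline with the RAG response processor and then pass conversational parameters at query time:

```json
PUT /_search/pipeline/rag_pipeline
{
  "response_processors": [
    {
      "retrieval_augmented_generation": {
        "tag": "rag_demo",
        "description": "Demo RAG pipeline using a remote LLM",
        "model_id": "<model ID of a remote model registered via a connector>",
        "context_field_list": ["text"]
      }
    }
  ]
}

GET /my_index/_search?search_pipeline=rag_pipeline
{
  "query": { "match": { "text": "conversational search" } },
  "ext": {
    "generative_qa_parameters": {
      "llm_model": "gpt-3.5-turbo",
      "llm_question": "What is conversational search?",
      "conversation_id": "<memory ID created via the Memory API>"
    }
  }
}
```

Passing a `conversation_id` is what ties the query to stored conversational history in Memory; omit it for a single-turn question.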
We are excited to make these features available in the 2.10 release and look forward to the community's feedback on conversational search. We believe this new way of interacting with data helps users get better search results. Please try it out and help us make it even better.
How can I build a RAG setup with a model other than OpenAI, Cohere, or SageMaker?
Can I use a Hugging Face transformer or BERT model for predicting sentences, without having a Hugging Face API key?
I am using version 2.10.
If yes, how do I build an HTTP connector for it, or how can I load the model?