This diagram illustrates an approach to implementing an AI-based chatbot for Kontext on Azure. It leverages open-source frameworks and LLMs (large language models) to implement RAG (retrieval-augmented generation).
The chatbot retrieves domain-specific information from the Kontext search engine as context and passes it to a backend API service. The backend uses Microsoft Semantic Kernel to construct the question, sends it to the LLM service to generate an answer, and then returns the answer to the frontend.
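The sketch below illustrates this backend flow under a few assumptions: the Kontext search endpoint (KONTEXT_SEARCH_URL) and its response shape are hypothetical, and the Azure OpenAI client is called directly in place of Semantic Kernel orchestration for brevity; deployment names and parameters are illustrative only.

```python
import os
import requests
from openai import AzureOpenAI

# Hypothetical Kontext search endpoint used to fetch context passages (assumption).
KONTEXT_SEARCH_URL = "https://kontext.tech/api/search"

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def answer_question(question: str) -> str:
    # 1. Retrieve domain-specific context from the Kontext search engine.
    resp = requests.get(KONTEXT_SEARCH_URL, params={"q": question}, timeout=10)
    resp.raise_for_status()
    context = "\n\n".join(hit["snippet"] for hit in resp.json().get("results", []))

    # 2. Construct the grounded prompt (the role Semantic Kernel plays in the diagram).
    messages = [
        {"role": "system",
         "content": "Answer the question using only the context below.\n\n" + context},
        {"role": "user", "content": question},
    ]

    # 3. Send the prompt to the LLM service and return the generated answer to the frontend.
    completion = client.chat.completions.create(
        model="gpt-4o",  # Azure OpenAI deployment name; adjust to your own deployment
        messages=messages,
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(answer_question("How do I read a Parquet file with PySpark?"))
```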
Alternatives
In the diagram, the LLMs can be replaced by other models, such as OpenAI services, Azure OpenAI Service, or any other suitable language model. Microsoft Semantic Kernel can also be replaced with LangChain. A brief sketch of that substitution follows.
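As one illustration of swapping the orchestration layer, the prompt-construction and LLM-call step from the earlier sketch could be expressed with LangChain instead. This assumes the langchain-openai package, the same Azure OpenAI environment variables, and the same illustrative deployment name.

```python
# pip install langchain-openai
# Assumes AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT and OPENAI_API_VERSION
# are set in the environment, and that "gpt-4o" is your deployment name (assumption).
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(azure_deployment="gpt-4o")

def answer_with_langchain(question: str, context: str) -> str:
    # Same grounded-prompt idea as above, expressed through LangChain chat messages.
    messages = [
        ("system", "Answer the question using only the context below.\n\n" + context),
        ("human", question),
    ]
    return llm.invoke(messages).content
```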