This interactive notebook uses LangChain to split fictional workplace documents into passages, then uses OpenAI to transform those passages into embeddings and stores them in Elasticsearch.
When we ask a question, we retrieve the relevant passages from the vector store and use LangChain and OpenAI to generate a summarized answer.
Install required packages
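A minimal install cell might look like the following; versions are not pinned here, so pin them as needed for your environment:

```python
# Install the packages used in this notebook (run in a notebook cell).
!python3 -m pip install -qU langchain openai elasticsearch tiktoken
```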
Connect to Elasticsearch
ℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. If you don't have an Elastic Cloud deployment, sign up here for a free trial.
Because we are using an Elastic Cloud deployment, we'll use the Cloud ID to identify it. To find the Cloud ID for your deployment, go to https://cloud.elastic.co/deployments and select your deployment.
We will use ElasticsearchStore to connect to our Elastic Cloud deployment, which makes it easy to create an index and ingest data. In the ElasticsearchStore instance, we set the embedding to OpenAIEmbeddings to embed the texts, and we set the Elasticsearch index name that will be used in this example.
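A sketch of the connection step, assuming the default `elastic` user and an index named `workplace_index` (both are illustrative; adjust for your deployment):

```python
from getpass import getpass

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import ElasticsearchStore

# Prompt for credentials rather than hard-coding them in the notebook.
CLOUD_ID = getpass("Elastic Cloud ID: ")
CLOUD_PASSWORD = getpass("Elastic password: ")
OPENAI_API_KEY = getpass("OpenAI API key: ")

vector_store = ElasticsearchStore(
    es_cloud_id=CLOUD_ID,
    es_user="elastic",
    es_password=CLOUD_PASSWORD,
    index_name="workplace_index",  # index used throughout this example
    embedding=OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY),
)
```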
Indexing Data into Elasticsearch
Let's download the sample dataset and deserialize the document.
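A sketch of the download step; the URL below is a placeholder for wherever your copy of the sample workplace dataset lives:

```python
import json
from urllib.request import urlopen

# Hypothetical location of the sample dataset; substitute your own URL.
url = "https://raw.githubusercontent.com/elastic/elasticsearch-labs/main/datasets/workplace-documents.json"

response = urlopen(url)
workplace_docs = json.loads(response.read())
print(f"Loaded {len(workplace_docs)} documents")
```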
Split Documents into Passages
We'll chunk documents into passages to improve retrieval specificity and to ensure we can fit multiple passages within the context window of the final question-answering prompt.
Here we chunk documents into 800-token passages with an overlap of 400 tokens.
Here we use a simple splitter, but LangChain also offers more advanced splitters that reduce the chance of context being lost. A sketch of this step follows below.
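A minimal sketch using LangChain's token-aware splitter with the sizes above; the `content` and `name` fields are assumptions about the shape of the sample dataset:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split on token counts so passages match the 800/400 sizes described above.
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=800, chunk_overlap=400
)

# "content" and "name" are assumed field names in the sample dataset.
docs = text_splitter.create_documents(
    [doc["content"] for doc in workplace_docs],
    metadatas=[{"name": doc["name"]} for doc in workplace_docs],
)
```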
Bulk Import Passages
Now that we have split each document into 800-token chunks, we can index the data into Elasticsearch using ElasticsearchStore.from_documents.
We will use the Cloud ID, password, and index name values set in the Connect to Elasticsearch step.
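A sketch of the bulk import, reusing the credentials gathered earlier (`workplace_index` remains an assumed index name):

```python
# Embeds each passage with OpenAI and bulk-indexes the results.
vector_store = ElasticsearchStore.from_documents(
    docs,
    embedding=OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY),
    es_cloud_id=CLOUD_ID,
    es_user="elastic",
    es_password=CLOUD_PASSWORD,
    index_name="workplace_index",
)
```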
Asking a question
Now that the passages are stored in Elasticsearch, we can ask a question and retrieve the relevant passages.
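A minimal sketch using LangChain's RetrievalQA chain; the question itself is just an example against the fictional workplace dataset:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # stuff all retrieved passages into one prompt
    retriever=vector_store.as_retriever(),
)

answer = qa.run("How does the compensation work?")
print(answer)
```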
Add Source Tracing
RAG can provide clear traceability of the source knowledge used to answer a question. This is important for compliance and regulatory reasons, and it helps limit LLM hallucinations. This is known as source tracing.
In this example, we extend the prompt template to ask the LLM to cite the source of the answer.
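One way to do this is a custom prompt passed to the "stuff" chain (a sketch; `{context}` and `{question}` are the variables the chain fills in, and the wording of the instructions is illustrative):

```python
from langchain.prompts import PromptTemplate

# Ask the model to cite which passage(s) the answer came from.
prompt = PromptTemplate.from_template(
    """Answer the question using only the passages below, and cite the
source name of each passage you used.

Passages:
{context}

Question: {question}

Answer (with sources):"""
)

qa_with_sources = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
)
```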
Returning Passages with Answer
In this example, we extend the chain to return the source passages along with the answer. This is helpful for a UI that displays the source passages, should the user want to read more on the topic.
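A sketch using RetrievalQA's `return_source_documents` flag (the query is again just an example):

```python
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
    return_source_documents=True,  # include retrieved passages in the output
)

result = qa({"query": "How does the compensation work?"})
print(result["result"])  # the answer itself
for doc in result["source_documents"]:
    print(doc.metadata.get("name"), "->", doc.page_content[:100])
```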
Conversational Question Answering
We can now get answers to individual questions, but what if we want to ask follow-up questions? We can use the answer from the previous question as context for the next question. This is known as conversational question answering.
In this example, we extend the chain to use the answer from the previous question as the context for the next question.
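A sketch using LangChain's ConversationalRetrievalChain, which condenses a follow-up plus the chat history into a standalone question before retrieving (both questions here are illustrative):

```python
from langchain.chains import ConversationalRetrievalChain

chat_qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vector_store.as_retriever(),
)

chat_history = []

question = "Who is responsible for the sales team?"
result = chat_qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))

# The follow-up can use a pronoun; the chain rewrites it using the history.
followup = chat_qa(
    {"question": "What are their responsibilities?", "chat_history": chat_history}
)
print(followup["answer"])
```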
Next Steps
We have shown how to use LangChain and Elasticsearch to build a question-answering system: indexing data into Elasticsearch, asking questions against it, and using the answer from the previous question as context for follow-up questions.