- QA RAG with Self Evaluation II
- Using RunnableParallel to carry forward intermediate outputs
- Using Global variables to save intermediate steps
- Using callbacks
QA RAG with Self Evaluation II
For this variation, we make a change to the evaluation procedure. In addition to the question-answer pair, we also pass the retrieved context to the evaluator LLM.
To accomplish this, we add another itemgetter call in the second RunnableParallel to collect the context string and pass it on to the new qa_eval_prompt_with_context prompt template.
rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    qa_eval_prompt_with_context |
    llm_selfeval |
    json_parser
)
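The qa_eval_prompt_with_context template itself is not shown in this snippet. A minimal sketch of what it could look like follows; the wording and the grade JSON format are assumptions, the only hard requirement is that it accepts the question, answer and context keys produced by the second RunnableParallel:

from langchain_core.prompts import ChatPromptTemplate

# Hypothetical sketch -- the real template may be worded differently,
# but it must accept the "question", "answer" and "context" keys
# produced by the second RunnableParallel step.
qa_eval_prompt_with_context = ChatPromptTemplate.from_template(
    """You are an evaluator. Given a question, the retrieved context and a
generated answer, judge whether the answer is supported by the context.
Respond with a JSON object of the form {{"grade": "correct"}} or {{"grade": "incorrect"}}.

Question: {question}
Context: {context}
Answer: {answer}"""
)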
Implementation Flowchart:
One of the common pain points of a chain implementation like LCEL is the difficulty of accessing intermediate variables, which is important for debugging pipelines. Below we look at a few options that still let us access any intermediate variables we are interested in through small manipulations of the LCEL chain.
Using RunnableParallel to carry forward intermediate outputs
As we saw earlier, RunnableParallel allows us to carry multiple arguments forward to the next step in the chain, so we use this ability to carry the required intermediate values all the way to the end.
In the example below, we modify the original self-eval RAG chain to output the retrieved context text along with the final self-evaluation output. The primary change is that we add a RunnableParallel object to every step of the chain to carry the context variable forward.
Additionally, we use the itemgetter function to clearly specify the inputs for the subsequent steps. For example, in the last two RunnableParallel objects, we use itemgetter("input") to ensure that only the input key from the previous step is passed on to the LLM / JSON parser objects.
rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | llm_selfeval, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | json_parser, context=itemgetter("context"))
)
The output from this chain looks like the following:
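Since the final step is itself a RunnableParallel, the result is a dictionary with the two keys defined there. The sketch below shows only the shape; the grade field is an assumption about what the self-evaluation JSON might contain, and the context text is a placeholder:

# Illustrative shape only -- real values depend on your documents, prompts and LLM.
example_output = {
    "input": {"grade": "correct"},                  # output of json_parser (assumed format)
    "context": "First retrieved chunk...\n\nSecond retrieved chunk...",  # carried forward unchanged
}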
A more concise variation:
rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt | llm_selfeval | json_parser, context=itemgetter("context"))
)
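A usage sketch, assuming the chain has been built as above (the question string is a placeholder):

# Invoke the chain with a plain question string; the first RunnableParallel
# routes it to both the retriever and the passthrough.
result = rag_chain.invoke("What does the document say about X?")

self_evaluation = result["input"]      # parsed JSON verdict from the evaluator LLM
retrieved_context = result["context"]  # the formatted context carried forward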
Using Global variables to save intermediate steps
This method essentially uses the principle of a logger: we introduce a new function that saves its input to a global variable, giving us access to the intermediate value through that global variable.
context = None  # global variable that will hold the retrieved context

def save_context(x):
    global context
    context = x
    return x
rag_chain = (
    RunnableParallel(context=retriever | format_docs | save_context, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question")) |
    qa_eval_prompt |
    llm_selfeval |
    json_parser
)
Here we define a global variable called context and a function called save_context that saves its input to the global context variable before returning the same input unchanged. In the chain, we add save_context as the last component of the context-retrieval step.
This option allows you to access any intermediate steps without making major changes to the chain.
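For example (a sketch, assuming the chain and save_context above are in scope; the question string is a placeholder):

# Run the chain normally; save_context stores the formatted context as a side effect.
result = rag_chain.invoke("What does the document say about X?")

# The retrieved context is now available for inspection via the global variable.
print(context)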
Accessing intermediate variables using global variables
Using callbacks
Attaching callbacks to your chain is another common method for logging intermediate variable values. There's a lot to cover on the topic of callbacks in LangChain, so I will cover them in detail in a separate post.
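As a minimal teaser, a custom handler can be attached per invocation via the config argument. The sketch below simply prints the output of every chain step; the handler name and the level of logging detail are placeholders:

from langchain_core.callbacks import BaseCallbackHandler

class IntermediateLogger(BaseCallbackHandler):
    """Prints the outputs of each (sub-)chain as it finishes."""

    def on_chain_end(self, outputs, **kwargs):
        print("Chain step finished with outputs:", outputs)

# Attach the handler for a single invocation.
result = rag_chain.invoke(
    "What does the document say about X?",
    config={"callbacks": [IntermediateLogger()]},
)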