Conversational Retrieval-Augmented Search (Conversational Search)
Conversational Search is an intelligent search technique that combines natural language processing with contextual understanding. It interprets the user's query intent, keeps the context consistent across multiple dialogue turns, and returns more precise results.
Core Features
- Context understanding:
  - Remembers the conversation history (typically 3-5 turns)
  - Resolves references such as "it" or "this one"
  - Example: after the user asks "What's the weather like in Beijing?", the follow-up "What about Shanghai?" is understood as a weather query about Shanghai (see the sketch after this list)
- Intent recognition:
  - Distinguishes query types (information lookup, transactions, advice requests, etc.)
  - Understands implicit intent (e.g. "I have a headache" may imply a medical query)
- Multimodal responses:
  - Combines text, images, video, and other formats
  - Returns structured answers (tables, charts) rather than just web links
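As a rough sketch of the first two points above, the snippet below keeps a bounded window of recent turns and formats it into a rewrite prompt that asks a model to turn a follow-up such as "那上海呢?" into a standalone query. The ConversationMemory class and build_condense_prompt helper are names invented for this illustration, not part of any library.
from collections import deque

class ConversationMemory:
    """Keeps only the most recent turns of the dialogue (e.g. 3-5)."""
    def __init__(self, max_turns=5):
        # one user message + one assistant message per turn
        self.turns = deque(maxlen=max_turns * 2)

    def add(self, role, text):
        self.turns.append((role, text))

    def as_text(self):
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

def build_condense_prompt(memory, follow_up):
    """Format the bounded history plus the follow-up into a standalone-question rewrite prompt."""
    return (
        "Given the conversation below, rewrite the follow-up question "
        "as a standalone question.\n\n"
        f"Chat history:\n{memory.as_text()}\n\n"
        f"Follow-up question: {follow_up}\nStandalone question:"
    )

memory = ConversationMemory(max_turns=5)
memory.add("user", "北京天气如何?")
memory.add("assistant", "北京今天晴,气温 25 度。")
# A rewrite model given this prompt should return something like "上海天气如何?"
print(build_condense_prompt(memory, "那上海呢?"))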
Technical Implementation
- Architecture components (see the sketch after this list):
  - Dialogue state tracker (DST)
  - Natural language understanding (NLU) module
  - Retrieval-augmented generation (RAG) system
  - Response generator
- Key technologies:
  - Transformer architectures (BERT, GPT, etc.)
  - Vector databases for storage and retrieval
  - Knowledge graph integration
  - Reinforcement learning for optimization
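The sketch below wires the four architecture components together for a single dialogue turn, purely to show the data flow. Every class here is a placeholder invented for this example (the retriever ranks by naive keyword overlap and the generator is a stub), not a production implementation.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    # (role, text) pairs accumulated by the dialogue state tracker
    history: list = field(default_factory=list)

class DialogueStateTracker:
    def update(self, state, role, text):
        state.history.append((role, text))
        return state

class NLUModule:
    def parse(self, utterance):
        # A real NLU module would run intent classification and entity extraction here
        return {"intent": "information_query", "text": utterance}

class Retriever:
    def __init__(self, documents):
        self.documents = documents
    def retrieve(self, query, k=2):
        # Stand-in for vector search: rank documents by naive keyword overlap
        scored = sorted(self.documents, key=lambda d: -sum(w in d for w in query.split()))
        return scored[:k]

class ResponseGenerator:
    def generate(self, intent, context):
        # Stand-in for an LLM call conditioned on the retrieved context
        return f"Answer to '{intent['text']}' based on: {context}"

# Wire the components together for a single dialogue turn
state, dst = DialogueState(), DialogueStateTracker()
nlu = NLUModule()
doc_retriever = Retriever(["sam worked at home", "harrison worked at kensho"])
generator = ResponseGenerator()

user_input = "where did sam work?"
dst.update(state, "user", user_input)
reply = generator.generate(nlu.parse(user_input), doc_retriever.retrieve(user_input))
dst.update(state, "assistant", reply)
print(reply)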
Application Scenarios
- Customer service:
  - Handling complex inquiries
  - Automated troubleshooting
  - Example: a telecom agent walking through the full diagnostic flow for "Why is my network so slow?"
- E-commerce:
  - Personalized product recommendations
  - Cross-category product comparison
  - Example: "I want a pair of Bluetooth earphones under 2,000 yuan, suitable for sports" (see the sketch after this list)
- Healthcare:
  - Preliminary symptom analysis
  - Drug information lookup
  - Reminders about precautions
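As a toy illustration of how the e-commerce query above could be mapped to structured search filters, the snippet below extracts a price ceiling and a couple of keywords with a regular expression. A real system would use an NLU model rather than hand-written rules; the keyword vocabulary is hard-coded purely for the example.
import re

def parse_shopping_query(query):
    """Extract a rough price ceiling and category keywords from a Chinese shopping query."""
    filters = {"max_price": None, "keywords": []}
    price = re.search(r"(\d+)\s*元以下", query)
    if price:
        filters["max_price"] = int(price.group(1))
    for keyword in ["蓝牙耳机", "运动"]:  # hard-coded vocabulary, purely for illustration
        if keyword in query:
            filters["keywords"].append(keyword)
    return filters

print(parse_shopping_query("我想要一款2000元以下的蓝牙耳机,适合运动使用"))
# -> {'max_price': 2000, 'keywords': ['蓝牙耳机', '运动']}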
Challenges and Solutions
- Challenges:
  - Disambiguation
  - Maintaining consistency over long dialogues
  - Keeping domain knowledge up to date
- Solutions:
  - Proactive clarification ("Do you mean X or Y?")
  - Dialogue-history summarization (see the sketch after this list)
  - Real-time knowledge base synchronization
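For the dialogue-history summarization technique, a minimal sketch is shown below. It reuses the ChatOpenAI and prompt classes that the full example later in this article already depends on; the summarize_history helper and the keep_last threshold are assumptions made only for illustration.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Summarize older turns so the prompt stays short while earlier facts remain available
SUMMARY_PROMPT = ChatPromptTemplate.from_template(
    "Summarize the following conversation in a few sentences, keeping any facts "
    "that later questions might refer back to:\n\n{history}"
)

def summarize_history(turns, keep_last=4):
    """Replace all but the last `keep_last` turns with a single summary line."""
    if len(turns) <= keep_last:
        return turns
    older, recent = turns[:-keep_last], turns[-keep_last:]
    chain = SUMMARY_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser()
    summary = chain.invoke({"history": "\n".join(older)})
    return [f"Summary of earlier conversation: {summary}"] + recent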
Recent Developments
- Hybrid architectures:
  - Combine retrieval-based and generative approaches
  - Example: retrieve relevant documents with vector search, then generate the answer with an LLM (this is what the runnable example below demonstrates)
- Personalization:
  - User profile integration
  - Learning the user's interaction style
  - Long-term preference memory
- Multilingual support:
  - Cross-lingual query understanding
  - Culturally adapted response generation
Installing Dependencies
pip install --upgrade --quiet langchain-core langchain-community langchain-openai
Code Implementation
from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string
from langchain_core.prompts import format_document
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai import OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from operator import itemgetter
from langchain_community.vectorstores import DocArrayInMemorySearch
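# Build a small in-memory vector store over three example sentences and expose it as a retriever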
vectorstore = DocArrayInMemorySearch.from_texts(
    ["wuzikang worked at earth", "sam worked at home", "harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
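# Prompt that condenses the chat history plus a follow-up question into a standalone question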
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
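# Prompt that answers the standalone question using only the retrieved context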
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
ANSWER_PROMPT = ChatPromptTemplate.from_template(template)
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")
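# Helper: format each retrieved document and join them into a single context string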
def _combine_documents(
    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)
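# Step 1: serialize the chat history and rewrite the follow-up into a standalone question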
_inputs = RunnableParallel(
    standalone_question=RunnablePassthrough.assign(
        chat_history=lambda x: get_buffer_string(x["chat_history"])
    )
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
)
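# Step 2: retrieve documents for the standalone question and collect the answer-prompt inputs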
_context = {
    "context": itemgetter("standalone_question") | retriever | _combine_documents,
    "question": lambda x: x["standalone_question"],
}
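# Full conversational retrieval chain: condense question -> retrieve context -> generate answer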
conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()
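# With an empty chat history, the pronoun "his" has no referent to resolve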
message1 = conversational_qa_chain.invoke(
    {
        "question": "what is his name?",
        "chat_history": [],
    }
)
print(f"message1: {message1}")
message2 = conversational_qa_chain.invoke(
    {
        "question": "where did sam work?",
        "chat_history": [],
    }
)
print(f"message2: {message2}")
message3 = conversational_qa_chain.invoke(
    {
        "question": "where did he work?",
        "chat_history": [
            HumanMessage(content="Who wrote this notebook?"),
            AIMessage(content="Harrison"),
        ],
    }
)
print(f"message3: {message3}")
Code Explanation
During initialization, three document strings are indexed:
- "wuzikang worked at earth"
- "sam worked at home"
- "harrison worked at kensho"
The rest of the code applies the prompt templates and runs three questions through the chain:
- "question": "what is his name?"
- "question": "where did sam work?"
- "question": "where did he work?"
The referents of "his" and "he" are never stated explicitly; the model infers the answers from the indexed documents and, for the last question, from the accompanying chat history.
Output
message1: The name of the person we were just talking about is Wuzikang.
message2: Sam worked at home.
message3: Harrison worked at Kensho.