LangChain 07 - Multiple Chains
Overview - Multiple Chain Chaining
Multiple Chain Chaining is an advanced technique in LLM application development. It connects multiple LLM calls (Chains) in a specific order to build more complex processing flows. This pattern is similar to the pipeline concept in programming, where each Chain handles a specific sub-task and passes output to the next Chain as input.
Key Features
- Modular Design: Decompose complex tasks into multiple independent processing steps
- Sequential Execution: Output from previous Chain automatically becomes input for next Chain
- Flexible Combination: Freely combine different types of Chains as needed
- State Passing: Context information flows through the entire chain
Typical Application Scenarios
- Multi-stage Text Processing: Like first generating summary, then performing sentiment analysis
- Q&A System: First retrieve relevant documents, then generate answers based on documents
- Content Moderation: First detect sensitive content, then decide whether to proceed with subsequent processing
- Data Analysis: First extract structured data, then perform statistical analysis
Implementation Method
In the LangChain framework, multiple chain chaining can be achieved with the classic SequentialChain class, or, in current versions, by composing Runnables with the | operator (LangChain Expression Language, LCEL).
Advanced Usage
- Conditional Branching: Decide subsequent execution path based on intermediate results
- Parallel Processing: Some steps can execute in parallel then merge results
- Loop Structure: Iterate certain steps until conditions are met
- Error Handling: Set up fallback Chain to handle exceptions
Best Practices
- Design clear input-output specifications for each Chain
- Limit the complexity of individual Chains to keep the system maintainable
- Add appropriate intermediate result verification mechanism
- Consider adding monitoring and logging functions
- Consider adding caching for long chains
Performance Considerations
- Chaining multiple Chains increases overall latency
- More Chains generally mean higher total token consumption, which needs budgeting
- Errors propagate along the chain and need proper handling
- In some scenarios may need to consider async execution
Runnable Interface
Runnables provide a simple and flexible way to combine multiple processing chains (Chains). Through the Runnable interface, developers can easily chain multiple processing steps together to build complex processing flows.
Specifically, Runnable chaining has these characteristics:
- Simple Connection: Use the pipe() method or the | operator to connect multiple Runnable instances into a processing pipeline
- Schema Awareness: Runnables expose input and output schemas, which helps catch mismatched connections early
- Flexible Combination: Support multiple combination methods (sequential execution, parallel execution, conditional branching, loop processing)
- Easy Debugging: Can insert logs or checkpoints at any point in the chain
Code Example
```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
prompt2 = ChatPromptTemplate.from_template(
    "what country is the city {city} in? respond in {language}"
)
model = ChatOpenAI(model="gpt-3.5-turbo")

# chain1 answers the first question and parses the reply to a plain string.
chain1 = prompt1 | model | StrOutputParser()

# chain2 maps chain1's output to the {city} variable, while {language}
# is pulled straight from the original input dict via itemgetter.
chain2 = (
    {"city": chain1, "language": itemgetter("language")}
    | prompt2
    | model
    | StrOutputParser()
)

message = chain2.invoke({"person": "obama", "language": "spanish"})
print(f"message: {message}")
```
Running Result (model output is nondeterministic; exact wording may vary between runs)
message: Chicago, Illinois, se encuentra en los Estados Unidos.