LangChain 07 - Multiple Chains

Overview - Multiple Chain Chaining

Multiple Chain Chaining is an advanced technique in LLM application development. It connects multiple LLM calls (Chains) in a specific order to build more complex processing flows. This pattern is similar to the pipeline concept in programming, where each Chain handles a specific sub-task and passes output to the next Chain as input.

Key Features

  1. Modular Design: Decompose complex tasks into multiple independent processing steps
  2. Sequential Execution: Output from previous Chain automatically becomes input for next Chain
  3. Flexible Combination: Freely combine different types of Chains as needed
  4. State Passing: Context information flows through the entire chain
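The features above can be sketched without any framework at all: each "chain" is just a function whose output feeds the next one. The stub functions below stand in for real LLM calls (they are assumptions for illustration, not LangChain APIs).

```python
def summarize(text: str) -> str:
    # Stub standing in for an LLM call that produces a summary.
    return f"summary({text})"

def analyze_sentiment(summary: str) -> str:
    # Stub standing in for an LLM call that classifies sentiment.
    return f"sentiment({summary})"

def run_pipeline(text: str) -> str:
    # Sequential execution: the output of each step becomes the
    # input of the next, exactly like chained Chains.
    result = text
    for step in (summarize, analyze_sentiment):
        result = step(result)
    return result

print(run_pipeline("LangChain connects LLM calls into pipelines."))
# → sentiment(summary(LangChain connects LLM calls into pipelines.))
```

Swapping a stub for a real Chain does not change the structure; that modularity is the point of the pattern.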

Typical Application Scenarios

  1. Multi-stage Text Processing: e.g., first generate a summary, then run sentiment analysis on it
  2. Q&A Systems: first retrieve relevant documents, then generate answers grounded in those documents
  3. Content Moderation: first detect sensitive content, then decide whether to continue with downstream processing
  4. Data Analysis: first extract structured data, then perform statistical analysis on it
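The content-moderation scenario is a good illustration of why ordering matters: a cheap check runs first and gates the expensive generation step. The sketch below is framework-free; the denylist and the `generate_answer` stub are assumptions for illustration only.

```python
# Assumed denylist for illustration; a real moderation chain would
# typically be another (cheaper) model call.
BLOCKED_WORDS = {"password", "ssn"}

def passes_moderation(text: str) -> bool:
    # First chain: detect sensitive content.
    return not any(word in text.lower() for word in BLOCKED_WORDS)

def generate_answer(text: str) -> str:
    # Stub for the downstream (expensive) generation chain.
    return f"answer({text})"

def handle(text: str) -> str:
    # The second chain only runs if the first one approves the input.
    if not passes_moderation(text):
        return "[rejected by moderation chain]"
    return generate_answer(text)
```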

Implementation Method

In the LangChain framework, multiple chains can be connected with the legacy SequentialChain class or, in current versions, by composing Runnables with the LangChain Expression Language (LCEL) pipe operator (`|`), as shown in the code example below.

Advanced Usage

  1. Conditional Branching: Decide subsequent execution path based on intermediate results
  2. Parallel Processing: Some steps can execute in parallel then merge results
  3. Loop Structure: Iterate certain steps until conditions are met
  4. Error Handling: Set up fallback Chain to handle exceptions
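Conditional branching and fallback handling can also be sketched with plain functions. Here the "primary" and "fallback" chains are stubs (assumptions for illustration); in LangChain the same pattern is expressed with constructs such as fallbacks on a Runnable.

```python
def primary_chain(query: str) -> str:
    # Stub primary chain; simulate an upstream failure for some inputs.
    if "fail" in query:
        raise RuntimeError("model unavailable")
    return f"primary({query})"

def fallback_chain(query: str) -> str:
    # Stub fallback chain, e.g. a cheaper or more robust model.
    return f"fallback({query})"

def run_with_fallback(query: str) -> str:
    # Error handling: if the primary chain raises, the fallback
    # chain takes over instead of the whole pipeline failing.
    try:
        return primary_chain(query)
    except RuntimeError:
        return fallback_chain(query)
```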

Best Practices

  1. Design clear input-output specifications for each Chain
  2. Limit the complexity of each individual Chain to keep it maintainable
  3. Add appropriate intermediate result verification mechanism
  4. Consider adding monitoring and logging functions
  5. Consider adding caching for long chains
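Practice 5 (caching) can be as simple as memoizing a deterministic step of the chain. The sketch below uses the standard library's lru_cache; the `cached_step` stub stands in for an expensive LLM call, and the cache is only sound if the step is deterministic for a given input (an assumption, roughly true for temperature-0 style calls).

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def cached_step(prompt: str) -> str:
    # Stub for an expensive, deterministic chain step.
    # Repeated calls with the same prompt hit the cache
    # instead of re-running the step.
    return f"response({prompt})"

cached_step("what is the city obama is from?")
cached_step("what is the city obama is from?")  # served from cache
print(cached_step.cache_info())  # hits=1, misses=1
```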

Performance Considerations

  1. Chaining multiple Chains increases end-to-end latency, since the calls run one after another
  2. Each additional LLM call adds token consumption, so total cost grows with chain length
  3. Errors propagate along the chain, so each step needs proper error handling
  4. Steps that are independent of each other are candidates for async or parallel execution
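Point 4 above can be sketched with asyncio: two steps that do not depend on each other's output run concurrently instead of back-to-back, cutting the wall-clock latency. The coroutines below are stubs (assumptions); the sleeps stand in for LLM latency.

```python
import asyncio

async def fetch_city(person: str) -> str:
    await asyncio.sleep(0.01)  # stands in for LLM latency
    return f"city({person})"

async def fetch_language_note(language: str) -> str:
    await asyncio.sleep(0.01)  # stands in for LLM latency
    return f"note({language})"

async def main() -> list[str]:
    # The two calls are independent, so gather() runs them
    # concurrently; total latency is max(a, b), not a + b.
    return await asyncio.gather(
        fetch_city("obama"),
        fetch_language_note("spanish"),
    )

results = asyncio.run(main())
print(results)
```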

Runnable Interface

Runnables provide a simple and flexible way to combine multiple processing chains: through the Runnable interface, developers can chain processing steps together to build complex flows.

Specifically, Runnable chaining has these characteristics:

  1. Simple Connection: chain Runnable instances with the `|` operator (or the equivalent pipe() method) to form a processing pipeline
  2. Schema Awareness: each Runnable exposes input and output schemas, which helps verify that chained steps are compatible
  3. Flexible Combination: supports multiple composition patterns (sequential execution, parallel execution, conditional branching, loop processing)
  4. Easy Debugging: logs or checkpoints can be inserted at any point in the chain

Code Example

from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# First prompt: find the city a person is from.
prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
# Second prompt: consumes the first chain's output ({city}) plus an
# extra input ({language}) carried through from the original request.
prompt2 = ChatPromptTemplate.from_template(
    "what country is the city {city} in? respond in {language}"
)

model = ChatOpenAI(
    model="gpt-3.5-turbo",
)

# chain1: prompt -> model -> plain-string output
chain1 = prompt1 | model | StrOutputParser()

# chain2: the dict literal is coerced into a RunnableParallel that runs
# chain1 to fill "city" and uses itemgetter to pass the caller's
# "language" value through unchanged.
chain2 = (
    {"city": chain1, "language": itemgetter("language")}
    | prompt2
    | model
    | StrOutputParser()
)

message = chain2.invoke({"person": "obama", "language": "spanish"})
print(f"message: {message}")

Running Result

message: Chicago, Illinois, se encuentra en los Estados Unidos.