LangChain 10 - Agents LangChainHub Guide

LangChainHub Shared Prompts

LangChainHub is an open-source prompt-sharing platform aimed mainly at developers using the LangChain framework. It offers developers the following core functions:

Platform Features

  1. Prompt Sharing and Discovery: users can upload and download tested prompt templates, browse by task type and domain, and rate prompts and leave feedback
  2. Prompt Version Control: prompts are versioned, the iteration history of each version is recorded, and you can roll back to an earlier version
  3. Collaboration Features: you can fork prompts shared by others for further development, edit collaboratively as a team, and exchange prompt-optimization experience in the discussion area
  4. Testing and Verification: a built-in prompt testing environment lets you batch-test a prompt against different LLMs and evaluate performance metrics

Technical Features

  1. Deep Integration with LangChain: prompts can be exported directly as LangChain-usable prompt templates, are compatible with the various LangChain components, and are accessible programmatically through an API
  2. Standardized Format: a unified prompt description format that includes metadata and supports variable interpolation and conditional logic
  3. Security Mechanisms: sensitive-information filtering, usage permission control, and prompt source verification
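
The variable interpolation mentioned above works like f-string substitution: a hub prompt declares placeholders such as {tools} and {input}, which are filled in at run time. Below is a minimal stdlib-only sketch where plain str.format stands in for the substitution that a LangChain prompt template performs; the template text is abridged from the prompt used later in this article.

```python
# A minimal sketch of the variable interpolation hub prompts rely on.
# Real hub prompts are LangChain prompt template objects; here plain
# str.format stands in for the same {variable} substitution.
template = (
    "You are a helpful assistant. "
    "You have access to the following tools:\n"
    "{tools}\n"
    "Question: {input}"
)

filled = template.format(
    tools="search: Search things about current events.",
    input="whats the weather in New york?",
)
print(filled)
```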

Application Scenarios

  1. Rapid Development: new projects can directly reuse verified prompts, reducing duplicated design effort and enabling cross-team knowledge sharing
  2. Education and Research: learn from well-designed prompt patterns and analyze how different prompts affect results
  3. Enterprise Applications: build an internal prompt knowledge base and standardize the prompt development process

Usage Suggestions

  1. Search Tips: use precise keyword combinations, and prefer highly rated, actively maintained prompts
  2. Contribution Guidelines: provide a clear prompt description, and include test cases and expected outputs
  3. Optimization Process: start from a basic prompt and improve it step by step, recording the effect of each change

Currently, the platform supports prompt management for multiple mainstream large language models (such as GPT, Claude, and LLaMA) and continues to add features to meet developer needs.

The idea behind LangChainHub is really good: prompts are shared through the hub, and anyone can use a shared prompt with just a few lines of code. The author is quite optimistic about this project. LangChain officially recommends using LangChainHub; although the langchainhub package hasn't seen a GitHub update in about a year, the prompt data on the hub is still being updated.


Install Dependencies

pip install langchainhub

Prompt Template

HUMAN

You are a helpful assistant. Help the user answer any questions.

You have access to the following tools:

{tools}

In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>

For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:

<tool>search</tool><tool_input>weather in SF</tool_input>

<observation>64 degrees</observation>

When you are done, respond with a final answer between <final_answer></final_answer>. For example:

<final_answer>The weather in SF is 64 degrees</final_answer>

Begin!

Previous Conversation:

{chat_history}

Question: {input}

{agent_scratchpad}
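
The template above defines an XML-tag protocol: the model emits a `<tool>`/`<tool_input>` pair to call a tool, or a `<final_answer>` block when it is done, and the XMLAgentOutputParser used in the code below extracts one or the other from the model's reply. As a simplified illustration of that parsing (a hypothetical helper, not the actual parser implementation), a stdlib regex sketch:

```python
import re

def parse_xml_action(text: str):
    """Simplified stand-in for XMLAgentOutputParser: return either
    ("tool", name, tool_input) or ("final", answer, None)."""
    final = re.search(r"<final_answer>(.*?)</final_answer>", text, re.DOTALL)
    if final:
        return ("final", final.group(1).strip(), None)
    tool = re.search(r"<tool>(.*?)</tool>", text, re.DOTALL)
    # The model is stopped at </tool_input>, so the closing tag may be absent.
    tool_input = re.search(r"<tool_input>(.*?)(?:</tool_input>|$)", text, re.DOTALL)
    if tool:
        return ("tool", tool.group(1).strip(),
                tool_input.group(1).strip() if tool_input else "")
    raise ValueError("no tool call or final answer found")

print(parse_xml_action("<tool>search</tool><tool_input>weather in SF"))
# ('tool', 'search', 'weather in SF')
print(parse_xml_action("<final_answer>64 degrees</final_answer>"))
# ('final', '64 degrees', None)
```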

Write Code

The core of the code is defining a tool and having the agent execute it to simulate a search engine, letting GPT use tools to extend its own knowledge and thereby complete more complex tasks.

from langchain import hub
from langchain.agents import AgentExecutor, tool
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-3.5-turbo",
)


@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"


tool_list = [search]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/xml-agent-convo")


# Logic for going from intermediate steps to a string to pass into model
# This is pretty tied to the prompt
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log


# Logic for converting tools to string to go in prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])


agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_intermediate_steps(
            x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tool_list)
message = agent_executor.invoke({"input": "whats the weather in New york?"})
print(f"message: {message}")
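
To see exactly what the two helper functions feed into the prompt's {tools} and {agent_scratchpad} slots, they can be run standalone. The SimpleNamespace objects below are illustrative stand-ins for the @tool-decorated function and LangChain's AgentAction, not actual LangChain classes:

```python
from types import SimpleNamespace

# Same helpers as in the agent code above.
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])

# Stand-ins for the @tool-decorated function and an AgentAction.
search = SimpleNamespace(name="search", description="Search things about current events.")
action = SimpleNamespace(tool="search", tool_input="weather in New york")

print(convert_tools([search]))
# search: Search things about current events.
print(convert_intermediate_steps([(action, "32 degrees")]))
# <tool>search</tool><tool_input>weather in New york</tool_input><observation>32 degrees</observation>
```

This also explains the final answer below: the stub search tool always returns "32 degrees", which the model sees inside `<observation>` tags and repeats in its `<final_answer>`.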

Running Result

➜ python3 test10.py
message: {'input': 'whats the weather in New york?', 'output': 'The weather in New York is 32 degrees'}