The complete walkthrough of the example follows. What this example does: it builds an email-processing assistant (Alfred) that reads an incoming email, uses an LLM to classify it as spam or legitimate, drafts a preliminary reply for legitimate mail, and notifies Mr. Hugg with the draft.
Key points:
- Define the State class comprehensively enough to track all the important information.
- The messages object carries the user-defined prompt content. Each node should update messages before it finishes, i.e., append new content to the existing messages.
- Each node's return value should be a dict of state updates.
- The LLM does its work inside each node: call model.invoke(prompt) to have the model act (consistent with what an LLM is for).
- Once all components are defined, create a StateGraph and add every component to it to assemble the graph. Remember to wire the END node into the StateGraph.
- Use an observability tool such as Langfuse to trace and monitor the agent (a minimal tracing sketch follows this list). A Langfuse API key is free to obtain: the Hobby plan requires no credit card and includes 50,000 units per month, two users, 30-day data retention, and community support.
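A minimal tracing sketch, assuming the Langfuse v2 Python SDK and that LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST are set in the environment (initial_state is a placeholder for the input dict used in step 5 below):

from langfuse.callback import CallbackHandler

# The handler reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment
langfuse_handler = CallbackHandler()

# Pass the handler when invoking the compiled graph so every node and LLM call is traced
result = compiled_graph.invoke(initial_state, config={"callbacks": [langfuse_handler]})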
LangChain's design is more object-oriented, while frameworks such as smolagents and LlamaIndex typically use lightweight dictionaries, but the semantics and functionality are the same.
The messages list in smolagents:
messages = [
    {"role": "system", "content": "You are a tech support agent."},
    {"role": "user", "content": "I can't log in."},
    {"role": "assistant", "content": "Have you tried resetting your password?"},
    {"role": "user", "content": "Yes, but I didn't receive the email."}
]
This is functionally identical to LangChain's message objects: different frameworks wrap the same concept, the conversation context, in different ways, and the core idea is the same.
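The same conversation expressed with LangChain's message classes:

from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage(content="You are a tech support agent."),
    HumanMessage(content="I can't log in."),
    AIMessage(content="Have you tried resetting your password?"),
    HumanMessage(content="Yes, but I didn't receive the email.")
]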
Setup
Create a conda virtual environment from environment.yml:
name: aiagent
dependencies:
  - python=3.10.12
  - pip
  - pip:
      - langgraph
      - langchain_openai  # provides ChatOpenAI, imported below
      - langchain_ollama  # provides ChatOllama for the local-model variant
conda env create --file environment.yml
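After the environment is created, activate it before running anything:
conda activate aiagent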
The steps:
import os
from typing import TypedDict, List, Dict, Any, Optional
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
1 Define the State; its messages field grows as the nodes run:
class EmailState(TypedDict):
    # The email being processed
    email: Dict[str, Any]  # Contains subject, sender, body, etc.
    # Category of the email (inquiry, complaint, etc.)
    email_category: Optional[str]
    # Reason why the email was marked as spam
    spam_reason: Optional[str]
    # Analysis and decisions
    is_spam: Optional[bool]
    # Response generation
    email_draft: Optional[str]
    # Processing metadata
    messages: List[Dict[str, Any]]  # Track conversation with LLM for analysis
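By default, LangGraph replaces a key's value with whatever a node returns, which is why the nodes below append manually via state.get("messages", []) + [...]. As an alternative (a sketch, not used in this example; ChatState is a hypothetical name), LangGraph's add_messages reducer makes appends automatic:

from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages

class ChatState(TypedDict):
    # With the reducer attached, messages returned by a node are appended rather than overwritten
    messages: Annotated[list, add_messages]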
2 Define the Nodes:
# Initialize LLM
model = ChatOpenAI(temperature=0)

def read_email(state: EmailState):
    """Alfred reads and logs the incoming email"""
    email = state["email"]
    # Here we might do some initial preprocessing
    print(f"Alfred is processing an email from {email['sender']} with subject: {email['subject']}")
    # No state changes needed here
    return {}

def classify_email(state: EmailState):
    """Alfred uses an LLM to determine if the email is spam or legitimate"""
    email = state["email"]

    # Prepare our prompt for the LLM
    prompt = f"""
    As Alfred the butler, analyze this email and determine if it is spam or legitimate.

    Email:
    From: {email['sender']}
    Subject: {email['subject']}
    Body: {email['body']}

    First, determine if this email is spam. If it is spam, explain why.
    If it is legitimate, categorize it (inquiry, complaint, thank you, etc.).
    """

    # Call the LLM
    messages = [HumanMessage(content=prompt)]
    response = model.invoke(messages)

    # Simple logic to parse the response (in a real app, you'd want more robust parsing)
    response_text = response.content.lower()
    is_spam = "spam" in response_text and "not spam" not in response_text

    # Extract a reason if it's spam
    spam_reason = None
    if is_spam and "reason:" in response_text:
        spam_reason = response_text.split("reason:")[1].strip()

    # Determine category if legitimate
    email_category = None
    if not is_spam:
        categories = ["inquiry", "complaint", "thank you", "request", "information"]
        for category in categories:
            if category in response_text:
                email_category = category
                break

    # Update messages for tracking
    new_messages = state.get("messages", []) + [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response.content}
    ]

    # Return state updates
    return {
        "is_spam": is_spam,
        "spam_reason": spam_reason,
        "email_category": email_category,
        "messages": new_messages
    }
def handle_spam(state: EmailState):
    """Alfred discards spam email with a note"""
    print(f"Alfred has marked the email as spam. Reason: {state['spam_reason']}")
    print("The email has been moved to the spam folder.")
    # We're done processing this email
    return {}

def draft_response(state: EmailState):
    """Alfred drafts a preliminary response for legitimate emails"""
    email = state["email"]
    category = state["email_category"] or "general"

    # Prepare our prompt for the LLM
    prompt = f"""
    As Alfred the butler, draft a polite preliminary response to this email.

    Email:
    From: {email['sender']}
    Subject: {email['subject']}
    Body: {email['body']}

    This email has been categorized as: {category}

    Draft a brief, professional response that Mr. Hugg can review and personalize before sending.
    """

    # Call the LLM
    messages = [HumanMessage(content=prompt)]
    response = model.invoke(messages)

    # Update messages for tracking
    new_messages = state.get("messages", []) + [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response.content}
    ]

    # Return state updates
    return {
        "email_draft": response.content,
        "messages": new_messages
    }

def notify_mr_hugg(state: EmailState):
    """Alfred notifies Mr. Hugg about the email and presents the draft response"""
    email = state["email"]

    print("\n" + "="*50)
    print(f"Sir, you've received an email from {email['sender']}.")
    print(f"Subject: {email['subject']}")
    print(f"Category: {state['email_category']}")
    print("\nI've prepared a draft response for your review:")
    print("-"*50)
    print(state["email_draft"])
    print("="*50 + "\n")

    # We're done processing this email
    return {}
3 Define the routing logic that decides which path to take after classification:
def route_email(state: EmailState) -> str:
    """Determine the next step based on spam classification"""
    if state["is_spam"]:
        return "spam"
    else:
        return "legitimate"
4 Create the StateGraph and wire everything together:
# Create the graph
email_graph = StateGraph(EmailState)
# Add nodes
email_graph.add_node("read_email", read_email)
email_graph.add_node("classify_email", classify_email)
email_graph.add_node("handle_spam", handle_spam)
email_graph.add_node("draft_response", draft_response)
email_graph.add_node("notify_mr_hugg", notify_mr_hugg)
# Start the edges
email_graph.add_edge(START, "read_email")
# Add edges - defining the flow
email_graph.add_edge("read_email", "classify_email")
# Add conditional branching from classify_email
email_graph.add_conditional_edges(
    "classify_email",
    route_email,
    {
        "spam": "handle_spam",
        "legitimate": "draft_response"
    }
)
# Add the final edges
email_graph.add_edge("handle_spam", END)
email_graph.add_edge("draft_response", "notify_mr_hugg")
email_graph.add_edge("notify_mr_hugg", END)
# Compile the graph
compiled_graph = email_graph.compile()
5 Run the application:
# Example legitimate email
legitimate_email = {
    "sender": "john.smith@example.com",
    "subject": "Question about your services",
    "body": "Dear Mr. Hugg, I was referred to you by a colleague and I'm interested in learning more about your consulting services. Could we schedule a call next week? Best regards, John Smith"
}
# Example spam email
spam_email = {
    "sender": "winner@lottery-intl.com",
    "subject": "YOU HAVE WON $5,000,000!!!",
    "body": "CONGRATULATIONS! You have been selected as the winner of our international lottery! To claim your $5,000,000 prize, please send us your bank details and a processing fee of $100."
}
# Process the legitimate email
print("\nProcessing legitimate email...")
legitimate_result = compiled_graph.invoke({
    "email": legitimate_email,
    "is_spam": None,
    "spam_reason": None,
    "email_category": None,
    "email_draft": None,
    "messages": []
})
# Process the spam email
print("\nProcessing spam email...")
spam_result = compiled_graph.invoke({
    "email": spam_email,
    "is_spam": None,
    "spam_reason": None,
    "email_category": None,
    "email_draft": None,
    "messages": []
})
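invoke returns the final state as a plain dict, so the fields that the nodes filled in can be inspected directly:

# Inspect the final state returned by invoke
print(legitimate_result["email_category"])  # e.g. "inquiry"
print(spam_result["is_spam"], spam_result["spam_reason"])
print(legitimate_result["email_draft"])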
6 Visualize the graph:
compiled_graph.get_graph().draw_mermaid_png()
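draw_mermaid_png() returns the PNG image as raw bytes; outside a notebook, one way to view it is to write those bytes to a file:

# Save the rendered graph to a file for viewing
png_bytes = compiled_graph.get_graph().draw_mermaid_png()
with open("email_graph.png", "wb") as f:
    f.write(png_bytes)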
This demonstrates how to drive LLMs through LangGraph and showcases LangGraph's orchestration capability, i.e., its ability to orchestrate complex workflows.
Problems
model = ChatOpenAI(api_key="your OpenAI_API_Key", temperature=0) requires you to have an OpenAI API key.
Using an LLM deployed locally with Ollama
from langchain_ollama import ChatOllama

# 11434 is the default port of the Ollama server.
# Note: no http_proxy/https_proxy settings are needed; Ollama is a local API server, not an HTTP proxy.
infer_server_url = "http://localhost:11434"
model_name = "deepseek-r1:1.5b"

model = ChatOllama(
    model=model_name,
    base_url=infer_server_url,  # local Ollama API address; no real API key is required
    temperature=0,
)
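Alternatively, since Ollama also exposes an OpenAI-compatible endpoint at /v1 (which is what the original openai_api_base / openai_api_key parameters were presumably aiming at), ChatOpenAI can be pointed at it; a sketch:

model = ChatOpenAI(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="none",  # Ollama does not check the key; any placeholder works
    temperature=0,
)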
Before running the script, make sure the Ollama server is already running and the model has been pulled (ollama pull deepseek-r1:1.5b).
KAQ: The local LLM is indeed being used, but its output doesn't seem to match expectations?
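One plausible cause (an assumption, not verified here): deepseek-r1 is a reasoning model that wraps its chain of thought in <think>...</think> tags, and a 1.5B model often ignores the requested output format, so the naive keyword parsing in classify_email (searching for "spam" / "not spam") can misfire. A minimal sketch that strips the tags before parsing:

import re

def strip_think(text: str) -> str:
    """Remove deepseek-r1 style <think>...</think> blocks before keyword parsing."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# In classify_email, parse the cleaned text instead:
# response_text = strip_think(response.content).lower()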
KAQ: A command-line-only Linux machine cannot display the graph; how can it be viewed?
Print the Mermaid source, then paste it into an online Mermaid editor to view it.
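A sketch of that approach, using the graph's built-in Mermaid export:

# Print the Mermaid source; paste the output into an online editor such as https://mermaid.live
print(compiled_graph.get_graph().draw_mermaid())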