LangGraph Paradigm: Plan-and-Execute

The core idea is to first draw up a multi-step plan and then execute it item by item. After completing a given task, you can revisit the plan and revise it as needed.

More broadly, this paradigm is inspired by the BabyAGI project and the Plan-and-Solve paper: decompose the task into multiple steps (the plan), execute those steps one by one, and replan when necessary. Unlike the traditional ReAct paradigm (reasoning while acting), Plan-and-Execute emphasizes explicit long-term planning, which suits tasks that require multi-stage reasoning.

(Figure: the Plan-and-Execute workflow of plan, execute, and replan)

The main steps in the figure above are as follows (a minimal sketch of this loop in plain Python follows the list):

Planning

  • Use an LLM (large language model) to generate a multi-step plan, where each step is an independent task.
  • The plan is represented as a list of strings (e.g. ["step 1", "step 2", "step 3"]) so that the steps are clear and ordered.

Execution

  • An execution agent (agent executor) works through the plan one step at a time, using tools (such as search or calculation) or LLM reasoning.
  • The result of each step is recorded in the state and may trigger an adjustment of the plan.

Replanning

  • Based on the execution results, the plan is re-evaluated: remaining steps are adjusted or a new plan is generated.
  • This keeps the task on track in a dynamic environment.
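
Stripped of the framework, the control flow of this paradigm boils down to the loop below. This is a minimal sketch with hypothetical make_plan / run_step / replan callables, not the LangGraph implementation shown later in this post:

from typing import Callable, Dict, List, Tuple

def plan_and_execute(
    objective: str,
    make_plan: Callable[[str], List[str]],
    run_step: Callable[[str], str],
    replan: Callable[[str, List[str], List[Tuple[str, str]]], Dict],
) -> str:
    plan = make_plan(objective)                          # Planning: a list of step descriptions
    past_steps: List[Tuple[str, str]] = []
    while plan:
        step = plan[0]                                   # Execution: always work on the first remaining step
        result = run_step(step)
        past_steps.append((step, result))
        decision = replan(objective, plan, past_steps)   # Replanning: answer now or keep going
        if "response" in decision:
            return decision["response"]                  # the replanner decided the objective is met
        plan = decision["plan"]                          # otherwise continue with the updated plan
    return past_steps[-1][1] if past_steps else ""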

We have looked at several other paradigms before, so this is a good place to compare them.

Comparison with other paradigms

  • ReAct (Reasoning + Acting)
    • Reasons and acts in an interleaved way, deciding the next step one at a time; good for simple tasks. In other words, it figures things out as it goes.
    • Drawback: no long-term plan, so it can get stuck in local optima or in loops.
  • Multi-Agent Supervisor
    • A centralized supervisor coordinates multiple workers; good for moderately complex tasks.
    • Drawback: the supervisor can become a single-point bottleneck (everything reports up to one Supervisor).
  • Hierarchical Agent Teams
    • Hierarchical supervisors manage sub-teams; good for complex, multi-team tasks.
    • Drawback: complex architecture and high communication overhead (many layers).
  • Multi-Agent Collaboration
    • Decentralized collaboration in which agents interact through shared state; good for dynamic tasks.
    • Drawback: state management is complex and the overall direction can be unclear (more state, more complexity).
  • Plan-and-Execute
    • Plan first, then execute, with an emphasis on long-term planning and dynamic adjustment.
    • Advantage: suits complex tasks, and the individual steps can be executed by smaller models.
    • Drawback: the planning stage depends on LLM quality, and the initial plan may be imperfect.

Overall, none of these paradigms has to be used in isolation; they can be combined according to your actual scenario. Presented one by one like this, each paradigm simply gives you a design approach to draw on when building your own system.

The figure below corresponds to the first diagram, just expressed in a form closer to the code.

(Figure: the Plan-and-Execute graph, mapped onto the code below)
import os
import f_common
import asyncio
###############################################################################################
# Tools: the execution agent will use Tavily web search.
###############################################################################################
from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=3)]
###############################################################################################
# Now create the execution agent that will carry out the tasks. Note that in this example we use the same execution agent for every task, but this is not required.
###############################################################################################
from langchain import hub
from langchain_openai import ChatOpenAI

from langgraph.prebuilt import create_react_agent

# Choose the LLM that will drive the agent
#llm = ChatOpenAI(model="gpt-4-turbo-preview")
llm = f_common.my_grok_llm  # the author's locally configured LLM, exposed via the f_common helper module

prompt = "You are a helpful assistant."
agent_executor = create_react_agent(llm, tools, prompt=prompt)

agent_executor.invoke({"messages": [("user", "who is the winner of the us open")]})
###############################################################################################
# Define the State
# Next, define the state and trajectory for this agent.
# First, we need to track the current plan, represented as a list of strings.
# Then we track the previously executed steps, represented as a list of tuples (each tuple holds a step and its result).
# Finally, we need state for the final response as well as the original input.
###############################################################################################
import operator
from typing import Annotated, List, Tuple
from typing_extensions import TypedDict

class PlanExecute(TypedDict):
    input: str
    plan: List[str]
    past_steps: Annotated[List[Tuple], operator.add]
    response: str
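# Note: the operator.add reducer on past_steps makes LangGraph append each node's returned list
# to the existing value instead of overwriting it. Purely illustrative, outside the workflow:
#   existing = [("step 1", "result 1")]
#   update   = [("step 2", "result 2")]
#   operator.add(existing, update)  # -> [("step 1", "result 1"), ("step 2", "result 2")]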
###############################################################################################
# Planning step: a structured Plan model and the planner chain
###############################################################################################
from pydantic import BaseModel, Field

class Plan(BaseModel):
    """Plan to follow in future"""

    steps: List[str] = Field(
        description="different steps to follow, should be in sorted order"
    )

from langchain_core.prompts import ChatPromptTemplate

planner_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """For the given objective, come up with a simple step by step plan. \
This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \
The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.""",
        ),
        ("placeholder", "{messages}"),
    ]
)
planner = planner_prompt | f_common.my_grok_llm.with_structured_output(Plan)

planner.invoke(
    {
        "messages": [
            ("user", "what is the hometown of the current Australia open winner?")
        ]
    }
)
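# Because the planner uses with_structured_output, it returns a Plan object rather than free text.
# The exact steps depend on the model, but the result has roughly this shape (illustrative only):
#   Plan(steps=[
#       "Find out who won the current Australian Open.",
#       "Find the hometown of that winner.",
#       "Return the hometown as the final answer.",
#   ])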
###############################################################################################
# Replan: decide whether to respond to the user or continue with an updated plan
###############################################################################################
from typing import Union
class Response(BaseModel):
    """Response to user."""

    response: str

class Act(BaseModel):
    """Action to perform."""

    action: Union[Response, Plan] = Field(
        description="Action to perform. If you want to respond to user, use Response. "
        "If you need to further use tools to get the answer, use Plan."
    )

replanner_prompt = ChatPromptTemplate.from_template(
    """For the given objective, come up with a simple step by step plan. \
This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \
The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.

Your objective was this:
{input}

Your original plan was this:
{plan}

You have currently done the following steps:
{past_steps}

Update your plan accordingly. If no more steps are needed and you can return to the user, then respond with that. Otherwise, fill out the plan. Only add steps to the plan that still NEED to be done. Do not return previously done steps as part of the plan."""
)

replanner = replanner_prompt | f_common.my_grok_llm.with_structured_output(Act)
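# The replanner returns an Act that wraps one of two outcomes (shapes are illustrative only):
#   Act(action=Plan(steps=["Find the hometown of the winner."]))    # more steps still needed
#   Act(action=Response(response="The winner's hometown is ..."))   # done, answer the user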
###############################################################################################
# Graph nodes: execute a step, create the initial plan, and re-plan
###############################################################################################
from typing import Literal
from langgraph.graph import END

async def execute_step(state: PlanExecute):
    plan = state["plan"]
    plan_str = "\n".join(f"{i+1}. {step}" for i, step in enumerate(plan))
    task = plan[0]
    task_formatted = f"""For the following plan:
{plan_str}\n\nYou are tasked with executing step {1}, {task}."""
    agent_response = await agent_executor.ainvoke(
        {"messages": [("user", task_formatted)]}
    )
    return {
        "past_steps": [(task, agent_response["messages"][-1].content)],
    }

async def plan_step(state: PlanExecute):
    plan = await planner.ainvoke({"messages": [("user", state["input"])]})
    return {"plan": plan.steps}

async def replan_step(state: PlanExecute):
    output = await replanner.ainvoke(state)
    if isinstance(output.action, Response):
        return {"response": output.action.response}
    else:
        return {"plan": output.action.steps}

def should_end(state: PlanExecute):
    if "response" in state and state["response"]:
        return END
    else:
        return "agent"

from langgraph.graph import StateGraph, START
workflow = StateGraph(PlanExecute)
# Add the plan node
workflow.add_node("planner", plan_step)
# Add the execution step
workflow.add_node("agent", execute_step)
# Add a replan node
workflow.add_node("replan", replan_step)
workflow.add_edge(START, "planner")
# From plan we go to agent
workflow.add_edge("planner", "agent")
# From agent, we replan
workflow.add_edge("agent", "replan")
workflow.add_conditional_edges(
    "replan",
    # Next, we pass in the function that will determine which node is called next.
    should_end,
    ["agent", END],
)

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()
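# Optionally, render the compiled graph to double-check the wiring. This assumes LangGraph's
# Mermaid export and its optional drawing dependencies are available in your environment:
#   from IPython.display import Image, display
#   display(Image(app.get_graph(xray=True).draw_mermaid_png()))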

config = {"recursion_limit": 50}
inputs = {"input": "what is the hometown of the mens 2024 Australia open winner?"}

async def stream_events():
    async for event in app.astream(inputs, config=config):
        for k, v in event.items():
            if k != "__end__":
                print(v)

asyncio.run(stream_events())

Reference output from the demo

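The exact answer depends on the model and on live search results, but the streamed updates follow the shape of each node's return value. A run looks roughly like this (illustrative, not a verbatim transcript):

{'plan': ["Find out who won the mens 2024 Australian Open.", "Find the hometown of that winner.", "Return the hometown as the final answer."]}
{'past_steps': [("Find out who won the mens 2024 Australian Open.", "... agent summary of the search results ...")]}
{'plan': ["Find the hometown of that winner.", "Return the hometown as the final answer."]}
{'past_steps': [("Find the hometown of that winner.", "... agent summary of the search results ...")]}
{'response': "The hometown of the mens 2024 Australian Open winner is ..."}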
