Information-Gathering Prompting refers to workflows in LangGraph, designed through prompt engineering, that systematically collect information from the user or from external data sources. It is typically used for tasks that require multi-turn interaction, such as collecting customer information, running questionnaires, diagnostic dialogues, or task decomposition. LangGraph's graph structure is particularly well suited to this kind of task.

Key characteristics:
- Dynamic prompt adjustment: the prompt is adapted to ask for whatever information is still missing; in other words, the user's input determines the final prompt content (see the routing sketch after this list).
- Input-driven routing: user input is routed to different nodes (e.g., asking a clarifying question or validating data).
- State preservation: interaction state is maintained so the conversation stays coherent and the collected information stays complete.
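As a minimal sketch of the routing idea, the function below sends the conversation back to a clarifying node while required fields are still missing. The field names, the `collected` state key, and the `ask_missing` node name are all illustrative assumptions, not part of the tutorial code further down.

```python
from langgraph.graph import END

# Hypothetical fields the workflow needs to collect from the user.
REQUIRED_FIELDS = ["name", "email", "phone"]


def route_by_completeness(state: dict) -> str:
    """Route back to a clarifying node while fields are missing, otherwise finish."""
    collected = state.get("collected", {})  # assumed state key holding gathered values
    missing = [f for f in REQUIRED_FIELDS if f not in collected]
    return "ask_missing" if missing else END
```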

Prompt Engineering

The core of Information-Gathering Prompting is prompt design. The key strategies are:

- Dynamic prompts: use template variables (such as {missing_info}) to generate concrete questions from the current state (a small sketch follows this list).
- Context preservation: include the conversation history (messages) in the prompt so that generated questions fit the context.
- User-friendliness: prompts should be concise, clear, and unambiguous. For example, "Please provide your email address" is better than "Please enter your contact information".
- Error handling: include instructions that guide the user to correct mistakes, e.g., "The phone number format is invalid; please retry using the format 123-456-7890."
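As a minimal sketch of the dynamic-prompt and error-handling strategies, the helper below fills a `{missing_info}` template and falls back to a correction hint when validation fails. The template text, the `is_valid_phone` helper, and the field name are illustrative assumptions.

```python
import re

# Illustrative template and correction hint, not part of the tutorial code below.
ASK_TEMPLATE = "To continue, please provide your {missing_info}."
PHONE_HINT = "The phone number format is invalid; please retry using the format 123-456-7890."


def is_valid_phone(value: str) -> bool:
    """Check the assumed 123-456-7890 format."""
    return re.fullmatch(r"\d{3}-\d{3}-\d{4}", value) is not None


def next_question(missing_info: str, last_value: str | None = None) -> str:
    """Build the next question, prefixing a correction hint when validation fails."""
    if missing_info == "phone number" and last_value and not is_valid_phone(last_value):
        return PHONE_HINT
    return ASK_TEMPLATE.format(missing_info=missing_info)
```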

```python
from typing import List

from langchain_core.messages import SystemMessage
from pydantic import BaseModel

import f_common  # local helper module that exposes the configured chat model (myQwen_LLM)

###############################################################################################
# Gather the requirements from the user                                                      #
###############################################################################################
template = """Your job is to get information from a user about what type of prompt template they want to create.

You should get the following information from them:

- What the objective of the prompt is
- What variables will be passed into the prompt template
- Any constraints for what the output should NOT do
- Any requirements that the output MUST adhere to

If you are not able to discern this info, ask them to clarify! Do not attempt to wildly guess.

After you are able to discern all the information, call the relevant tool."""


def get_messages_info(messages):
    return [SystemMessage(content=template)] + messages


# This schema specifies the parameters and format the final prompt must follow.
class PromptInstructions(BaseModel):
    """Instructions on how to prompt the LLM."""

    objective: str
    variables: List[str]
    constraints: List[str]
    requirements: List[str]


# Swap in the LLM of your choice here.
llm_with_tool = f_common.myQwen_LLM.bind_tools([PromptInstructions])


def info_chain(state):
    messages = get_messages_info(state["messages"])
    response = llm_with_tool.invoke(messages)
    return {"messages": [response]}
```
```python
###############################################################################################
# Generate the prompt from the gathered requirements                                         #
###############################################################################################
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# New system prompt
prompt_system = """Based on the following requirements, write a good prompt template:

{reqs}"""


# Function to get the messages for the prompt
# Will only get messages AFTER the tool call
def get_prompt_messages(messages: list):
    tool_call = None
    other_msgs = []
    for m in messages:
        if isinstance(m, AIMessage) and m.tool_calls:
            tool_call = m.tool_calls[0]["args"]
        elif isinstance(m, ToolMessage):
            continue
        elif tool_call is not None:
            other_msgs.append(m)
    return [SystemMessage(content=prompt_system.format(reqs=tool_call))] + other_msgs


def prompt_gen_chain(state):
    messages = get_prompt_messages(state["messages"])
    response = f_common.myQwen_LLM.invoke(messages)
    return {"messages": [response]}


from typing import Literal

from langgraph.graph import END


# Decide where to go next based on the last message in the state.
def get_state(state) -> Literal["add_tool_message", "info", "__end__"]:
    messages = state["messages"]
    if isinstance(messages[-1], AIMessage) and messages[-1].tool_calls:
        return "add_tool_message"
    elif not isinstance(messages[-1], HumanMessage):
        return END
    return "info"
```
```python
###############################################################################################
# Standard graph wiring                                                                      #
###############################################################################################
from typing import Annotated

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from typing_extensions import TypedDict


class State(TypedDict):
    messages: Annotated[list, add_messages]


memory = MemorySaver()
workflow = StateGraph(State)
workflow.add_node("info", info_chain)
workflow.add_node("prompt", prompt_gen_chain)


@workflow.add_node
def add_tool_message(state: State):
    # Answer the pending tool call so the graph can move on to prompt generation.
    return {
        "messages": [
            ToolMessage(
                content="Prompt generated!",
                tool_call_id=state["messages"][-1].tool_calls[0]["id"],
            )
        ]
    }


workflow.add_conditional_edges("info", get_state, ["add_tool_message", "info", END])
workflow.add_edge("add_tool_message", "prompt")
workflow.add_edge("prompt", END)
workflow.add_edge(START, "info")
graph = workflow.compile(checkpointer=memory)

import uuid

# Canned replies used when stdin is unavailable (e.g. in a notebook).
cached_human_responses = ["hi!", "rag prompt", "1 rag, 2 none, 3 no, 4 no", "red", "q"]
cached_response_index = 0
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
while True:
    try:
        user = input("User (q/Q to quit): ")
    except:
        user = cached_human_responses[cached_response_index]
        cached_response_index += 1
        # print(f"User (q/Q to quit): {user}")
    if user in {"q", "Q"}:
        print("AI: Byebye")
        break
    output = None
    for output in graph.stream(
        {"messages": [HumanMessage(content=user)]}, config=config, stream_mode="updates"
    ):
        last_message = next(iter(output.values()))["messages"][-1]
        last_message.pretty_print()
    if output and "prompt" in output:
        print("Done!")
```
Below are partial screenshots of a run: with each user input, the assistant gathers more information and uses it to assemble the final prompt content.

RA/SD 衍生者AI训练营. Published by 稻草人; please credit the source when reposting: https://www.shxcj.com/archives/9615