{
"cells": [
{
"cell_type": "markdown",
"id": "ea566837",
"metadata": {},
"source": [
"# Long-term Chat Memory Agent with LangGraph\n",
"---\n",
"\n",
"This tutorial walks you through how to build an agent with long-term memory capabilities. \n",
"With this setup, the agent can store, recall, and leverage memories to create richer, more personalized interactions with users.\n",
"\n",
"In this tutorial, \"memory\" takes two main forms:\n",
"\n",
"- Text-based insights generated by the memory agent\n",
"- Structured knowledge about entities, stored as (subject, predicate, object) triples\n",
"\n",
"These memories can later be retrieved or queried semantically, allowing the agent to provide user-specific context across conversations.\n",
"The key idea here is that by saving memories, the agent can retain information about users that persists across multiple conversation threads.\n",
"\n",
"- Reference: https://python.langchain.com/docs/versions/migrating_memory/long_term_memory_agent/"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2d5666f",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from openai import AzureOpenAI\n",
"from dotenv import load_dotenv, find_dotenv\n",
"\n",
"load_dotenv()\n",
"\n",
"aoai_api_endpoint = os.getenv(\"AZURE_OPENAI_ENDPOINT\")\n",
"aoai_api_key = os.getenv(\"AZURE_OPENAI_API_KEY\")\n",
"aoai_api_version = os.getenv(\"AZURE_OPENAI_API_VERSION\")\n",
"aoai_deployment_name = os.getenv(\"AZURE_OPENAI_DEPLOYMENT_NAME\")\n",
"aoai_emb_deployment_name = os.getenv(\"AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME\")\n",
"\n",
"if not aoai_api_version:\n",
" aoai_api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
"\n",
"try:\n",
" print(\"=== Initialized AzuureOpenAI client ===\")\n",
" print(f\"AZURE_OPENAI_ENDPOINT={aoai_api_endpoint}\")\n",
" print(f\"AZURE_OPENAI_API_VERSION={aoai_api_version}\")\n",
" print(f\"AZURE_OPENAI_DEPLOYMENT_NAME={aoai_deployment_name}\")\n",
" print(f\"AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME={aoai_emb_deployment_name}\")\n",
"except (ValueError, TypeError) as e:\n",
" print(e)"
]
},
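{
"cell_type": "markdown",
"id": "a1f20c77",
"metadata": {},
"source": [
"As a quick sanity check, we can send a minimal chat completion through the client we just initialized. This is only a sketch; it assumes the environment variables above point at a valid deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3e94d12",
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: a minimal completion against the configured deployment.\n",
"# Assumes the environment variables above are set and the deployment exists.\n",
"response = client.chat.completions.create(\n",
"    model=aoai_deployment_name,\n",
"    messages=[{\"role\": \"user\", \"content\": \"ping\"}],\n",
"    max_tokens=5,\n",
")\n",
"print(response.choices[0].message.content)"
]
},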
{
"cell_type": "markdown",
"id": "14eb04d1",
"metadata": {},
"source": [
"<br>\n",
"\n",
"## π§ͺ 1. Preparation and Define the Agentic Architecture\n",
"---\n",
"### PDF retrieval"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25957ecf",
"metadata": {},
"outputs": [],
"source": [
"from azure_genai_utils.rag.pdf import PDFRetrievalChain\n",
"\n",
"pdf_path = \"../../sample-docs/AutoGen-paper.pdf\"\n",
"\n",
"pdf = PDFRetrievalChain(\n",
" source_uri=[pdf_path],\n",
" loader_type=\"PDFPlumber\",\n",
" model_name=aoai_deployment_name,\n",
" embedding_name=aoai_emb_deployment_name,\n",
" chunk_size=500,\n",
" chunk_overlap=50,\n",
").create_chain()\n",
"\n",
"pdf_retriever = pdf.retriever\n",
"pdf_chain = pdf.chain"
]
},
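{
"cell_type": "markdown",
"id": "c4d81a05",
"metadata": {},
"source": [
"Before wiring the retriever into a tool, it helps to smoke-test it directly. The query string below is just an illustrative example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5f72b16",
"metadata": {},
"outputs": [],
"source": [
"# Smoke test: retrieve chunks for a sample query (illustrative query string).\n",
"docs = pdf_retriever.invoke(\"What is AutoGen?\")\n",
"print(f\"Retrieved {len(docs)} chunks\")\n",
"if docs:\n",
"    print(docs[0].page_content[:300])"
]
},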
{
"cell_type": "markdown",
"id": "b22ef5da",
"metadata": {},
"source": [
"### Define vector store "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36b9c940",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from typing import List, Literal, Optional\n",
"\n",
"import tiktoken\n",
"from langchain_core.documents import Document\n",
"from langchain_core.messages import get_buffer_string\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableConfig\n",
"from langchain_core.tools import tool\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import AzureChatOpenAI\n",
"from langchain_openai.embeddings import AzureOpenAIEmbeddings\n",
"from langgraph.checkpoint.memory import MemorySaver\n",
"from langgraph.graph import END, START, MessagesState, StateGraph\n",
"from langgraph.prebuilt import ToolNode\n",
"\n",
"embeddings = AzureOpenAIEmbeddings(\n",
" model=aoai_emb_deployment_name,\n",
" chunk_size=1000,\n",
")\n",
"recall_vector_store = InMemoryVectorStore(embeddings)"
]
},
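{
"cell_type": "markdown",
"id": "e6a83c27",
"metadata": {},
"source": [
"A minimal sketch of how the recall store will be used: add a `Document` tagged with a `user_id`, then retrieve it with a metadata filter. This is the same pattern the memory tools below rely on; the probe text and `user_id` are placeholders."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f7b94d38",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: store a document tagged with a user_id, then search with a filter.\n",
"# The probe text and the \"demo\" user_id are placeholders for illustration.\n",
"probe = Document(page_content=\"probe memory\", metadata={\"user_id\": \"demo\"})\n",
"ids = recall_vector_store.add_documents([probe])\n",
"hits = recall_vector_store.similarity_search(\n",
"    \"probe\", k=1, filter=lambda doc: doc.metadata.get(\"user_id\") == \"demo\"\n",
")\n",
"print(hits)\n",
"recall_vector_store.delete(ids)  # remove the probe so it doesn't pollute recall"
]
},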
{
"cell_type": "markdown",
"id": "644bcb72",
"metadata": {},
"source": [
"### Define tools"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65772621",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"\n",
"def get_user_id(config: RunnableConfig) -> str:\n",
" user_id = config[\"configurable\"].get(\"user_id\")\n",
" if user_id is None:\n",
" raise ValueError(\"User ID needs to be provided to save a memory.\")\n",
"\n",
" return user_id\n",
"\n",
"\n",
"@tool\n",
"def save_recall_memory(memory: str, config: RunnableConfig) -> str:\n",
" \"\"\"Save memory to vectorstore for later semantic retrieval.\"\"\"\n",
" user_id = get_user_id(config)\n",
" document = Document(\n",
" page_content=memory, id=str(uuid.uuid4()), metadata={\"user_id\": user_id}\n",
" )\n",
" recall_vector_store.add_documents([document])\n",
" return memory\n",
"\n",
"\n",
"@tool\n",
"def search_recall_memories(query: str, config: RunnableConfig) -> List[str]:\n",
" \"\"\"Search for relevant memories.\"\"\"\n",
" user_id = get_user_id(config)\n",
"\n",
" def _filter_function(doc: Document) -> bool:\n",
" return doc.metadata.get(\"user_id\") == user_id\n",
"\n",
" documents = recall_vector_store.similarity_search(\n",
" query, k=3, filter=_filter_function\n",
" )\n",
" return [document.page_content for document in documents]\n",
"\n",
"\n",
"@tool\n",
"def pdf_retrieve(query: str, config: RunnableConfig):\n",
" \"\"\"Retrieve information regarding AutoGen paper. If the query asks for details about AutoGen, use this tool.\"\"\"\n",
" print(\"\\n==== [RETRIEVE] ====\\n\")\n",
" documents = pdf_retriever.invoke(query)\n",
" return [document.page_content for document in documents]"
]
},
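{
"cell_type": "markdown",
"id": "08c05e49",
"metadata": {},
"source": [
"The memory tools can be exercised outside the graph. A parameter typed as `RunnableConfig` is injected from the config passed as the second argument to `.invoke()`. The `user_id` below is a throwaway value so this check stays separate from the real users later on."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "19d16f50",
"metadata": {},
"outputs": [],
"source": [
"# Exercise the memory tools directly with a throwaway user_id.\n",
"test_config = {\"configurable\": {\"user_id\": \"smoke-test\"}}\n",
"save_recall_memory.invoke(\"The test user likes LangGraph.\", test_config)\n",
"print(search_recall_memories.invoke(\"What does the test user like?\", test_config))"
]
},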
{
"cell_type": "markdown",
"id": "310e87f4",
"metadata": {},
"source": [
"### Define Web search tool"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b760cbc",
"metadata": {},
"outputs": [],
"source": [
"from azure_genai_utils.tools import BingSearch\n",
"\n",
"WEB_SEARCH_FORMAT_OUTPUT = False\n",
"\n",
"web_search_tool = BingSearch(\n",
" max_results=1,\n",
" locale=\"en-US\",\n",
" include_news=False,\n",
" include_entity=False,\n",
" format_output=WEB_SEARCH_FORMAT_OUTPUT,\n",
")\n",
"\n",
"# Define the tools to be used in the state graph\n",
"tools = [save_recall_memory, search_recall_memories, pdf_retrieve, web_search_tool]"
]
},
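{
"cell_type": "markdown",
"id": "2ae27061",
"metadata": {},
"source": [
"Optionally, the search tool can be tried on its own. This sketch assumes `BingSearch` follows the standard LangChain tool interface (it must, since it is passed to `ToolNode` later), so `.invoke()` accepts the query string directly."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3bf38172",
"metadata": {},
"outputs": [],
"source": [
"# Optional: try the web search tool directly (sample query; assumes the\n",
"# standard LangChain tool interface with a single query-string input).\n",
"print(web_search_tool.invoke(\"Microsoft AutoGen\"))"
]
},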
{
"cell_type": "markdown",
"id": "282f7b31",
"metadata": {},
"source": [
"### Define the prompt template for the agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b47dcb90",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"\n",
"You are a helpful assistant with advanced long-term memory capabilities. \n",
"Powered by a stateless LLM, you must rely on external memory to store information between conversations. \n",
"Utilize the available memory tools to store and retrieve important details that will help you better attend to the user's needs and understand their context.\n",
"\n",
"## Memory Usage Guidelines:\n",
"1. Actively use memory tools (save_core_memory, save_recall_memory) to build a comprehensive understanding of the user.\n",
"2. Make informed suppositions and extrapolations based on stored memories.\n",
"3. Regularly reflect on past interactions to identify patterns and preferences.\n",
"4. Update your mental model of the user with each new piece of information.\n",
"5. Cross-reference new information with existing memories for consistency.\n",
"6. Prioritize storing emotional context and personal values alongside facts.\n",
"7. Use memory to anticipate needs and tailor responses to the user's style.\n",
"8. Recognize and acknowledge changes in the user's situation or perspectives over time.\n",
"9. Leverage memories to provide personalized examples and analogies.\n",
"10. Recall past challenges or successes to inform current problem-solving.\n",
"\n",
"## Constraint\n",
"1. Review the provided context thoroughly and extract key details related to the question.\n",
"2. Craft a precise answer based on the relevant information.\n",
"3. Keep the answer concise but logical/natural/in-depth.\n",
"4. If the retrieved context does not contain relevant information or no context is available, respond with: 'I can't find the answer to that question in the context.'\n",
"\n",
"## Recall Memories\n",
"Recall memories are contextually retrieved based on the current conversation:\n",
"{recall_memories}\n",
"\n",
"## Instructions\n",
"Engage with the user naturally, as a trusted colleague or friend. There's no need to explicitly mention your memory capabilities. \n",
"Instead, seamlessly incorporate your understanding of the user into your responses. \n",
"Be attentive to subtle cues and underlying emotions. Adapt your communication style to match the user's preferences and current emotional state. \n",
"Use tools to persist information you want to retain in the next conversation. \n",
"If you do call tools, all text preceding the tool call is an internal message. \n",
"Respond AFTER calling the tool, once you have confirmation that the tool completed successfully.\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"placeholder\", \"{messages}\"),\n",
" ]\n",
")"
]
},
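{
"cell_type": "markdown",
"id": "4c049283",
"metadata": {},
"source": [
"To see what the model will actually receive, we can render the template with sample values. The recall-memory string below is a placeholder in the same `<recall_memory>` format the agent node builds."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d150394",
"metadata": {},
"outputs": [],
"source": [
"# Preview the rendered prompt with placeholder values (illustrative only).\n",
"preview = prompt.format_messages(\n",
"    recall_memories=\"<recall_memory>\\n(example memory)\\n</recall_memory>\",\n",
"    messages=[(\"user\", \"Hello!\")],\n",
")\n",
"print(preview[0].content[:300])  # start of the rendered system message\n",
"print(preview[-1].content)  # the user message"
]
},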
{
"cell_type": "markdown",
"id": "287f45f5",
"metadata": {},
"source": [
"### Define state, nodes and edges\n",
"\n",
"- agent: Process the current state (state contains previous messages and memory) and generate a response. \n",
"- load_memories: Load memories for the current conversation.\n",
"- route_tools: Determine whether to use tools or end the conversation based on the last message."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0b9b2001",
"metadata": {},
"outputs": [],
"source": [
"model = AzureChatOpenAI(model_name=aoai_deployment_name)\n",
"model_with_tools = model.bind_tools(tools)\n",
"tokenizer = tiktoken.encoding_for_model(aoai_deployment_name)\n",
"\n",
"from typing import List\n",
"from typing_extensions import TypedDict, Annotated\n",
"\n",
"\n",
"class State(MessagesState):\n",
" # add memories that will be retrieved based on the conversation context\n",
" recall_memories: Annotated[List[str], \"List of recall memories\"]\n",
"\n",
"\n",
"def agent(state: State) -> State:\n",
" \"\"\"Process the current state and generate a response using the LLM.\n",
"\n",
" Args:\n",
" state (schemas.State): The current state of the conversation.\n",
"\n",
" Returns:\n",
" schemas.State: The updated state with the agent's response.\n",
" \"\"\"\n",
" bound = prompt | model_with_tools\n",
" recall_str = (\n",
" \"<recall_memory>\\n\" + \"\\n\".join(state[\"recall_memories\"]) + \"\\n</recall_memory>\"\n",
" )\n",
" prediction = bound.invoke(\n",
" {\n",
" \"messages\": state[\"messages\"],\n",
" # \"context\": format_docs(state[\"documents\"]),\n",
" \"recall_memories\": recall_str,\n",
" }\n",
" )\n",
" return {\n",
" \"messages\": [prediction],\n",
" }\n",
"\n",
"\n",
"def load_memories(state: State, config: RunnableConfig) -> State:\n",
" \"\"\"Load memories for the current conversation.\n",
"\n",
" Args:\n",
" state (schemas.State): The current state of the conversation.\n",
" config (RunnableConfig): The runtime configuration for the agent.\n",
"\n",
" Returns:\n",
" State: The updated state with loaded memories.\n",
" \"\"\"\n",
" convo_str = get_buffer_string(state[\"messages\"])\n",
" convo_str = tokenizer.decode(tokenizer.encode(convo_str)[:2048])\n",
" recall_memories = search_recall_memories.invoke(convo_str, config)\n",
" return {\n",
" \"recall_memories\": recall_memories,\n",
" }\n",
"\n",
"\n",
"def route_tools(state: State):\n",
" \"\"\"Determine whether to use tools or end the conversation based on the last message.\n",
"\n",
" Args:\n",
" state (schemas.State): The current state of the conversation.\n",
"\n",
" Returns:\n",
" Literal[\"tools\", \"__end__\"]: The next step in the graph.\n",
" \"\"\"\n",
" msg = state[\"messages\"][-1]\n",
" if msg.tool_calls:\n",
" return \"tools\"\n",
"\n",
" return END"
]
},
{
"cell_type": "markdown",
"id": "3a0e8229",
"metadata": {},
"source": [
"### Construct the state graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b20e1537",
"metadata": {},
"outputs": [],
"source": [
"# Create the graph and add nodes\n",
"builder = StateGraph(State)\n",
"\n",
"builder.add_node(load_memories)\n",
"builder.add_node(agent)\n",
"builder.add_node(\"tools\", ToolNode(tools))\n",
"\n",
"# Add edges to the graph\n",
"builder.add_edge(START, \"load_memories\")\n",
"builder.add_edge(\"load_memories\", \"agent\")\n",
"builder.add_conditional_edges(\"agent\", route_tools, [\"tools\", END])\n",
"builder.add_edge(\"tools\", \"agent\")\n",
"\n",
"# Compile the graph\n",
"memory = MemorySaver()\n",
"graph = builder.compile(checkpointer=memory)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd7aec2c",
"metadata": {},
"outputs": [],
"source": [
"from azure_genai_utils.graphs import visualize_langgraph\n",
"\n",
"visualize_langgraph(graph, xray=True)"
]
},
{
"cell_type": "markdown",
"id": "723ac0bd",
"metadata": {},
"source": [
"<br>\n",
"\n",
"## π§ͺ 2. Run the agent\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6697192b",
"metadata": {},
"outputs": [],
"source": [
"def pretty_print_stream_chunk(chunk):\n",
" for node, updates in chunk.items():\n",
" print(f\"π Update from Node: \\033[1;36m{node}\\033[0m π\")\n",
" if \"messages\" in updates:\n",
" updates[\"messages\"][-1].pretty_print()\n",
" else:\n",
" print(updates)\n",
"\n",
" print(\"\\n\")"
]
},
{
"cell_type": "markdown",
"id": "82e3009c",
"metadata": {},
"source": [
"### Person 1. Daekeun\n",
"\n",
"This is a scenario for user Daekeun. Daekeun mentions his role and interests when starting a conversation, and these are stored in memory. In subsequent questions, the agent extracts the most relevant context to the memory as top_k and conducts a conversation based on that context. The memory is continuously managed for the same user, and only similar memories are extracted as top_k, so the context is not forgotten even if the conversation is long."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a6f0563c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"\n",
"config = RunnableConfig(\n",
" recursion_limit=10, configurable={\"user_id\": \"1\", \"thread_id\": \"1\"}\n",
")\n",
"\n",
"for chunk in graph.stream(\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"user\",\n",
" \"Daekeun is a Machine Learning geek. He loves to learn AIML new things.\",\n",
" )\n",
" ]\n",
" },\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "markdown",
"id": "61044068",
"metadata": {},
"source": [
"Note that the response. You can see recall_memories are updated."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "19fd1c58",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"Daekeun provides AIML technology support at Microsoft.\")]},\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
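{
"cell_type": "markdown",
"id": "6e2614a5",
"metadata": {},
"source": [
"At this point we can peek directly into the recall store to confirm what has been persisted for Daekeun (user ID 1). The query string is just an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f3725b6",
"metadata": {},
"outputs": [],
"source": [
"# Inspect what has been stored for user \"1\" so far (sample query).\n",
"stored = recall_vector_store.similarity_search(\n",
"    \"Daekeun\", k=3, filter=lambda doc: doc.metadata.get(\"user_id\") == \"1\"\n",
")\n",
"for doc in stored:\n",
"    print(doc.page_content)"
]
},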
{
"cell_type": "markdown",
"id": "01061e5b",
"metadata": {},
"source": [
"This code cell shows the agent tries to retrieve the information from the PDF document."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "776b9339",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"Daekeun wants to know AutoGen\")]}, config=config\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c034f9ac",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"What is AutoGen's main featrues?\")]}, config=config\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "markdown",
"id": "f7d42301",
"metadata": {},
"source": [
"The agent tries to find materials from the web with the saved memories."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45f92e32",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"user\",\n",
" \"Daekeun wants to study AutoGen in 2 weeks. Please recommend Microsoft's website or appropriate learning material.\",\n",
" )\n",
" ]\n",
" },\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
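{
"cell_type": "markdown",
"id": "804836c7",
"metadata": {},
"source": [
"Because the graph was compiled with a `MemorySaver` checkpointer, the per-thread conversation state can be read back with `get_state`. Here we inspect the current snapshot for Daekeun's thread."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "915947d8",
"metadata": {},
"outputs": [],
"source": [
"# Read back the checkpointed state for this thread via the checkpointer.\n",
"snapshot = graph.get_state(config)\n",
"print(f\"Messages in thread: {len(snapshot.values['messages'])}\")\n",
"print(f\"Recall memories loaded: {snapshot.values.get('recall_memories')}\")"
]
},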
{
"cell_type": "markdown",
"id": "05a7ce1d",
"metadata": {},
"source": [
"### Person 2. Hyo\n",
"\n",
"Switch the topic to another User Hyo. ββSince memory is managed per user, please make sure that the memory is empty for the new user."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8908707",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"\n",
"config = RunnableConfig(\n",
" recursion_limit=10, configurable={\"user_id\": \"2\", \"thread_id\": \"1\"}\n",
")\n",
"\n",
"\n",
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"Hyo is a big fan of Microsoft\")]}, config=config\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d44ab64e",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"Hyo is interested in AutoGen and Semantic Kernel\")]},\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d26af86c",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"Where is learning materials?\")]}, config=config\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36fa1929",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\"messages\": [(\"user\", \"what's the address for joe's in greenwich village?\")]},\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "markdown",
"id": "3de5c10f",
"metadata": {},
"source": [
"<br>\n",
"\n",
"## π§ͺ 3. Adding structured memories\n",
"---\n",
"\n",
"We have represented memory as a string so far. This is the baseline for storing memory in vector storage and is simple to implement. However, if you find it useful to use a different persistence backend, such as a graph database, you can update your application to create a memory."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "889ca9ae",
"metadata": {},
"outputs": [],
"source": [
"recall_vector_store = InMemoryVectorStore(embeddings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f3c7ef4",
"metadata": {},
"outputs": [],
"source": [
"from typing_extensions import TypedDict\n",
"\n",
"\n",
"class KnowledgeTriple(TypedDict):\n",
" subject: str\n",
" predicate: str\n",
" object_: str\n",
"\n",
"\n",
"### Baseline implementation of save_recall_memory (Represented memory as a string)\n",
"# @tool\n",
"# def save_recall_memory(memory: str, config: RunnableConfig) -> str:\n",
"# \"\"\"Save memory to vectorstore for later semantic retrieval.\"\"\"\n",
"# user_id = get_user_id(config)\n",
"# document = Document(\n",
"# page_content=memory, id=str(uuid.uuid4()), metadata={\"user_id\": user_id}\n",
"# )\n",
"# recall_vector_store.add_documents([document])\n",
"# return memory\n",
"\n",
"\n",
"@tool\n",
"def save_recall_memory(memories: List[KnowledgeTriple], config: RunnableConfig) -> str:\n",
" \"\"\"Save memory to vectorstore for later semantic retrieval.\"\"\"\n",
" user_id = get_user_id(config)\n",
" for memory in memories:\n",
" serialized = \" \".join(memory.values())\n",
" document = Document(\n",
" serialized,\n",
" id=str(uuid.uuid4()),\n",
" metadata={\n",
" \"user_id\": user_id,\n",
" **memory,\n",
" },\n",
" )\n",
" recall_vector_store.add_documents([document])\n",
" return memories"
]
},
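{
"cell_type": "markdown",
"id": "a26a58e9",
"metadata": {},
"source": [
"The triple-based tool can also be invoked directly. Multi-argument tools take a dict of arguments; the sample triple and `user_id` below are hypothetical."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37b69fa",
"metadata": {},
"outputs": [],
"source": [
"# Direct invocation sketch with a hypothetical triple and throwaway user_id.\n",
"demo_config = {\"configurable\": {\"user_id\": \"demo\"}}\n",
"save_recall_memory.invoke(\n",
"    {\n",
"        \"memories\": [\n",
"            {\"subject\": \"Wonchan\", \"predicate\": \"is_interested_in\", \"object_\": \"AutoGen\"}\n",
"        ]\n",
"    },\n",
"    demo_config,\n",
")"
]
},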
{
"cell_type": "code",
"execution_count": null,
"id": "83c3281c",
"metadata": {},
"outputs": [],
"source": [
"tools = [save_recall_memory, search_recall_memories, pdf_retrieve, web_search_tool]\n",
"model_with_tools = model.bind_tools(tools)\n",
"\n",
"# Create the graph and add nodes\n",
"builder = StateGraph(State)\n",
"builder.add_node(load_memories)\n",
"builder.add_node(agent)\n",
"builder.add_node(\"tools\", ToolNode(tools))\n",
"\n",
"# Add edges to the graph\n",
"builder.add_edge(START, \"load_memories\")\n",
"builder.add_edge(\"load_memories\", \"agent\")\n",
"builder.add_conditional_edges(\"agent\", route_tools, [\"tools\", END])\n",
"builder.add_edge(\"tools\", \"agent\")\n",
"\n",
"# Compile the graph\n",
"memory = MemorySaver()\n",
"graph = builder.compile(checkpointer=memory)"
]
},
{
"cell_type": "markdown",
"id": "f4c0ae28",
"metadata": {},
"source": [
"### Person 3. Wonchan\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6d017b3f",
"metadata": {},
"outputs": [],
"source": [
"config = {\"configurable\": {\"user_id\": \"3\", \"thread_id\": \"1\"}}\n",
"\n",
"for chunk in graph.stream({\"messages\": [(\"user\", \"Hi I am Wonchan.\")]}, config=config):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42f8d3a1",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"user\",\n",
" \"I am non-tech, but interested in Microsoft's multi-agent strategy and tech stack like AutoGen.\",\n",
" )\n",
" ]\n",
" },\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "markdown",
"id": "30b523c0",
"metadata": {},
"source": [
"The memories generated from one thread are accessed in another thread from the same user:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06ebca1a",
"metadata": {},
"outputs": [],
"source": [
"config = {\"configurable\": {\"user_id\": \"3\", \"thread_id\": \"2\"}}\n",
"\n",
"for chunk in graph.stream(\n",
" {\n",
" \"messages\": [\n",
" (\"user\", \"Recommend me a website where I can easily try AutoGen hands-on\")\n",
" ]\n",
" },\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b78bec73",
"metadata": {},
"outputs": [],
"source": [
"for chunk in graph.stream(\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"user\",\n",
" \"Recommend other multi-agent frameworks to me inorder to learn about other companies' multi-agent strategies\",\n",
" )\n",
" ]\n",
" },\n",
" config=config,\n",
"):\n",
" pretty_print_stream_chunk(chunk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5051041a",
"metadata": {},
"outputs": [],
"source": [
"records = recall_vector_store.similarity_search(\n",
" \"multi-agent\", k=3, filter=lambda doc: doc.metadata[\"user_id\"] == \"3\"\n",
")\n",
"print(records)"
]
},
{
"cell_type": "markdown",
"id": "78f11853",
"metadata": {},
"source": [
"Optionally, for illustrative purposes we can visualize the knowledge graph extracted by the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "564738b8",
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import networkx as nx\n",
"\n",
"# Fetch records\n",
"records = recall_vector_store.similarity_search(\n",
" \"multi-agent\", k=2, filter=lambda doc: doc.metadata[\"user_id\"] == \"3\"\n",
")\n",
"\n",
"\n",
"# Plot graph\n",
"plt.figure(figsize=(6, 4), dpi=80)\n",
"G = nx.DiGraph()\n",
"\n",
"for record in records:\n",
" G.add_edge(\n",
" record.metadata[\"subject\"],\n",
" record.metadata[\"object_\"],\n",
" label=record.metadata[\"predicate\"],\n",
" )\n",
"\n",
"pos = nx.spring_layout(G)\n",
"nx.draw(\n",
" G,\n",
" pos,\n",
" with_labels=True,\n",
" node_size=3000,\n",
" node_color=\"lightblue\",\n",
" font_size=10,\n",
" font_weight=\"bold\",\n",
" arrows=True,\n",
")\n",
"edge_labels = nx.get_edge_attributes(G, \"label\")\n",
"nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels, font_color=\"red\")\n",
"plt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "py312-dev",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}