{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用 Transformers Agents 构建具有工具调用超能力的智能体 🦸\n",
"\n",
"_作者: [Aymeric Roucher](https://huggingface.co/m-ric)_\n",
"\n",
"\n",
"这个 notebook 展示了如何使用 [**Transformers Agents**](https://huggingface.co/docs/transformers/en/agents) 来构建出色的**智能体**!\n",
"\n",
"什么是**智能体**?智能体是由大型语言模型(LLM)驱动的系统,它们使得 LLM(通过精心设计的提示和输出解析)能够使用特定的*工具*来解决问题。\n",
"\n",
"这些*工具*基本上是 LLM 自身无法很好执行的功能:例如,对于像 [Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) 这样的文本生成 LLM,这可能是一个图像生成工具、网络搜索工具、计算器...\n",
"\n",
"什么是 **Transformers Agents** ?它是我们 `transformers` 库的一个扩展,提供了构建自己的智能体的构建块!在[文档](https://huggingface.co/docs/transformers/en/agents)中了解更多信息。\n",
"\n",
"让我们看看如何使用它,以及它能解决哪些用例。\n",
"\n",
"我们从源代码安装 transformers agents ,你可以使用 `pip install transformers[agents]` 轻松安装。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: smolagents in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (1.0.0)\n",
"Requirement already satisfied: torch in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (2.3.0)\n",
"Requirement already satisfied: torchaudio in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (2.3.0)\n",
"Requirement already satisfied: torchvision in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (0.18.0)\n",
"Requirement already satisfied: transformers>=4.0.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (4.47.1)\n",
"Requirement already satisfied: requests>=2.32.3 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (2.32.3)\n",
"Requirement already satisfied: rich>=13.9.4 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (13.9.4)\n",
"Requirement already satisfied: pandas>=2.2.3 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (2.2.3)\n",
"Requirement already satisfied: jinja2>=3.1.4 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (3.1.4)\n",
"Requirement already satisfied: pillow>=11.0.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (11.1.0)\n",
"Requirement already satisfied: markdownify>=0.14.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (0.14.1)\n",
"Requirement already satisfied: gradio>=5.8.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (5.9.1)\n",
"Requirement already satisfied: duckduckgo-search>=6.3.7 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (7.2.0)\n",
"Requirement already satisfied: python-dotenv>=1.0.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (1.0.1)\n",
"Requirement already satisfied: e2b-code-interpreter>=1.0.3 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (1.0.3)\n",
"Requirement already satisfied: litellm>=1.55.10 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from smolagents) (1.57.0)\n",
"Requirement already satisfied: click>=8.1.7 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from duckduckgo-search>=6.3.7->smolagents) (8.1.7)\n",
"Requirement already satisfied: primp>=0.9.3 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from duckduckgo-search>=6.3.7->smolagents) (0.9.3)\n",
"Requirement already satisfied: lxml>=5.3.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from duckduckgo-search>=6.3.7->smolagents) (5.3.0)\n",
"Requirement already satisfied: attrs>=21.3.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from e2b-code-interpreter>=1.0.3->smolagents) (23.2.0)\n",
"Requirement already satisfied: e2b<2.0.0,>=1.0.4 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from e2b-code-interpreter>=1.0.3->smolagents) (1.0.5)\n",
"Requirement already satisfied: httpx<1.0.0,>=0.20.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from e2b-code-interpreter>=1.0.3->smolagents) (0.27.2)\n",
"Requirement already satisfied: aiofiles<24.0,>=22.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (23.2.1)\n",
"Requirement already satisfied: anyio<5.0,>=3.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (3.7.1)\n",
"Requirement already satisfied: fastapi<1.0,>=0.115.2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.115.6)\n",
"Requirement already satisfied: ffmpy in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.4.0)\n",
"Requirement already satisfied: gradio-client==1.5.2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (1.5.2)\n",
"Requirement already satisfied: huggingface-hub>=0.25.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.27.1)\n",
"Requirement already satisfied: markupsafe~=2.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (2.1.5)\n",
"Requirement already satisfied: numpy<3.0,>=1.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (2.1.3)\n",
"Requirement already satisfied: orjson~=3.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (3.10.11)\n",
"Requirement already satisfied: packaging in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (24.2)\n",
"Requirement already satisfied: pydantic>=2.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (2.9.2)\n",
"Requirement already satisfied: pydub in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.25.1)\n",
"Requirement already satisfied: python-multipart>=0.0.18 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.0.20)\n",
"Requirement already satisfied: pyyaml<7.0,>=5.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (6.0.1)\n",
"Requirement already satisfied: ruff>=0.2.2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.3.4)\n",
"Requirement already satisfied: safehttpx<0.2.0,>=0.1.6 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.1.6)\n",
"Requirement already satisfied: semantic-version~=2.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (2.10.0)\n",
"Requirement already satisfied: starlette<1.0,>=0.40.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.41.3)\n",
"Requirement already satisfied: tomlkit<0.14.0,>=0.12.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.12.0)\n",
"Requirement already satisfied: typer<1.0,>=0.12 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.12.5)\n",
"Requirement already satisfied: typing-extensions~=4.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (4.12.2)\n",
"Requirement already satisfied: uvicorn>=0.14.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio>=5.8.0->smolagents) (0.30.6)\n",
"Requirement already satisfied: fsspec in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio-client==1.5.2->gradio>=5.8.0->smolagents) (2024.3.1)\n",
"Requirement already satisfied: websockets<15.0,>=10.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from gradio-client==1.5.2->gradio>=5.8.0->smolagents) (12.0)\n",
"Requirement already satisfied: aiohttp in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from litellm>=1.55.10->smolagents) (3.9.3)\n",
"Requirement already satisfied: importlib-metadata>=6.8.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from litellm>=1.55.10->smolagents) (8.5.0)\n",
"Requirement already satisfied: jsonschema<5.0.0,>=4.22.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from litellm>=1.55.10->smolagents) (4.22.0)\n",
"Requirement already satisfied: openai>=1.55.3 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from litellm>=1.55.10->smolagents) (1.59.3)\n",
"Requirement already satisfied: tiktoken>=0.7.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from litellm>=1.55.10->smolagents) (0.8.0)\n",
"Requirement already satisfied: tokenizers in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from litellm>=1.55.10->smolagents) (0.21.0)\n",
"Requirement already satisfied: beautifulsoup4<5,>=4.9 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from markdownify>=0.14.1->smolagents) (4.12.3)\n",
"Requirement already satisfied: six<2,>=1.15 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from markdownify>=0.14.1->smolagents) (1.16.0)\n",
"Requirement already satisfied: python-dateutil>=2.8.2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from pandas>=2.2.3->smolagents) (2.9.0.post0)\n",
"Requirement already satisfied: pytz>=2020.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from pandas>=2.2.3->smolagents) (2024.1)\n",
"Requirement already satisfied: tzdata>=2022.7 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from pandas>=2.2.3->smolagents) (2024.1)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from requests>=2.32.3->smolagents) (3.3.2)\n",
"Requirement already satisfied: idna<4,>=2.5 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from requests>=2.32.3->smolagents) (3.6)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from requests>=2.32.3->smolagents) (2.0.7)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from requests>=2.32.3->smolagents) (2023.11.17)\n",
"Requirement already satisfied: markdown-it-py>=2.2.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from rich>=13.9.4->smolagents) (3.0.0)\n",
"Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from rich>=13.9.4->smolagents) (2.18.0)\n",
"Requirement already satisfied: filelock in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from transformers>=4.0.0->smolagents) (3.13.1)\n",
"Requirement already satisfied: regex!=2019.12.17 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from transformers>=4.0.0->smolagents) (2024.5.10)\n",
"Requirement already satisfied: safetensors>=0.4.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from transformers>=4.0.0->smolagents) (0.4.3)\n",
"Requirement already satisfied: tqdm>=4.27 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from transformers>=4.0.0->smolagents) (4.66.1)\n",
"Requirement already satisfied: sympy in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from torch->smolagents) (1.12)\n",
"Requirement already satisfied: networkx in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from torch->smolagents) (3.3)\n",
"Requirement already satisfied: sniffio>=1.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from anyio<5.0,>=3.0->gradio>=5.8.0->smolagents) (1.3.0)\n",
"Requirement already satisfied: soupsieve>1.2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from beautifulsoup4<5,>=4.9->markdownify>=0.14.1->smolagents) (2.5)\n",
"Requirement already satisfied: httpcore<2.0.0,>=1.0.5 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from e2b<2.0.0,>=1.0.4->e2b-code-interpreter>=1.0.3->smolagents) (1.0.7)\n",
"Requirement already satisfied: protobuf<6.0.0,>=3.20.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from e2b<2.0.0,>=1.0.4->e2b-code-interpreter>=1.0.3->smolagents) (5.29.0)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from httpcore<2.0.0,>=1.0.5->e2b<2.0.0,>=1.0.4->e2b-code-interpreter>=1.0.3->smolagents) (0.14.0)\n",
"Requirement already satisfied: zipp>=3.20 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from importlib-metadata>=6.8.0->litellm>=1.55.10->smolagents) (3.21.0)\n",
"Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from jsonschema<5.0.0,>=4.22.0->litellm>=1.55.10->smolagents) (2023.12.1)\n",
"Requirement already satisfied: referencing>=0.28.4 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from jsonschema<5.0.0,>=4.22.0->litellm>=1.55.10->smolagents) (0.35.1)\n",
"Requirement already satisfied: rpds-py>=0.7.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from jsonschema<5.0.0,>=4.22.0->litellm>=1.55.10->smolagents) (0.18.1)\n",
"Requirement already satisfied: mdurl~=0.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from markdown-it-py>=2.2.0->rich>=13.9.4->smolagents) (0.1.2)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from openai>=1.55.3->litellm>=1.55.10->smolagents) (1.8.0)\n",
"Requirement already satisfied: jiter<1,>=0.4.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from openai>=1.55.3->litellm>=1.55.10->smolagents) (0.7.1)\n",
"Requirement already satisfied: annotated-types>=0.6.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from pydantic>=2.0->gradio>=5.8.0->smolagents) (0.6.0)\n",
"Requirement already satisfied: pydantic-core==2.23.4 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from pydantic>=2.0->gradio>=5.8.0->smolagents) (2.23.4)\n",
"Requirement already satisfied: shellingham>=1.3.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from typer<1.0,>=0.12->gradio>=5.8.0->smolagents) (1.5.4)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from aiohttp->litellm>=1.55.10->smolagents) (1.3.1)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from aiohttp->litellm>=1.55.10->smolagents) (1.4.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from aiohttp->litellm>=1.55.10->smolagents) (6.0.5)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from aiohttp->litellm>=1.55.10->smolagents) (1.9.4)\n",
"Requirement already satisfied: mpmath>=0.19 in /Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages (from sympy->torch->smolagents) (1.3.0)\n"
]
}
],
"source": [
"!pip install smolagents"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"!pip install datasets huggingface_hub langchain sentence-transformers faiss-cpu serpapi google-search-results openai -q"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. 🏞️ 多模态 + 🌐 网络浏览助手\n",
"\n",
"对于这个用例,我们想要展示一个能够浏览网络并能够生成图像的智能体。\n",
"\n",
"为了构建它,我们只需要准备两个工具:图像生成和网络搜索。\n",
"- 对于图像生成,我们从 Hub 加载一个工具,该工具使用 HF 推理 API(无服务器)使用 Stable Diffusion 生成图像。\n",
"- 对于网络搜索,我们加载一个 LangChain 工具。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from transformers import Tool, load_tool, CodeAgent, InferenceClientModel\n",
"\n",
"# Import tool from Hub\n",
"image_generation_tool = load_tool(\"m-ric/text-to-image\", trust_remote_code=True)\n",
"\n",
"# Import tool from LangChain\n",
"from langchain.agents import load_tools\n",
"\n",
"search_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n",
"\n",
"\n",
"model = InferenceClientModel(\"meta-llama/Llama-3.1-70B-Instruct\")\n",
"# Initialize the agent with both tools\n",
"agent = CodeAgent(\n",
" tools=[image_generation_tool, search_tool], model=model\n",
")\n",
"\n",
"# Run it!\n",
"result = agent.run(\n",
" \"Generate me a photo of the car that James bond drove in the latest movie.\",\n",
")\n",
"result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. 📚💬 带有迭代查询优化和来源选择的 RAG\n",
"快速定义:检索增强生成(RAG)是 ___“使用大型语言模型(LLM)来回答用户查询,但基于从知识库检索到的信息来构建答案”___。\n",
"\n",
"这种方法相比使用普通或微调的 LLM 有许多优势:列举一些,它允许将答案建立在真实事实的基础上并减少虚构,它允许为 LLM 提供特定领域的知识,并且它允许对知识库中的信息访问进行细粒度控制。\n",
"\n",
"- 现在假设我们想要执行 RAG,但增加了动态生成某些参数的约束。例如,根据用户查询,我们可能想要将搜索限制在知识库的特定子集,或者我们可能想要调整检索到的文档数量。难点在于:**如何根据用户查询动态调整这些参数?**\n",
"\n",
"- RAG 的一个常见失败案例是基于用户查询的检索没有返回任何相关的支持文档。**有没有一种方法,在之前的结果不相关时,通过修改查询重新调用检索器来进行迭代?**\n",
"\n",
"🔧 好吧,我们可以以简单的方式解决上述问题:我们将**让我们的智能体控制检索器的参数!**\n",
"\n",
"➡️ 让我们展示如何做到这一点。我们首先加载一个我们想要执行 RAG 的知识库:这个数据集是许多 `huggingface` 包的文档页面汇总,以 markdown 格式存储。\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/aymeric/.pyenv/versions/3.12.0/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"import datasets\n",
"\n",
"knowledge_base = datasets.load_dataset(\"m-ric/huggingface_doc\", split=\"train\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们通过处理数据集并将其存储到向量数据库中来准备知识库,以便检索器使用。我们将使用 LangChain,因为它具有用于向量数据库的优秀工具:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_community.embeddings import HuggingFaceEmbeddings\n",
"\n",
"source_docs = [\n",
" Document(page_content=doc[\"text\"], metadata={\"source\": doc[\"source\"].split(\"/\")[1]})\n",
" for doc in knowledge_base\n",
"]\n",
"\n",
"docs_processed = RecursiveCharacterTextSplitter(chunk_size=500).split_documents(\n",
" source_docs\n",
")[:1000]\n",
"\n",
"embedding_model = HuggingFaceEmbeddings(model_name=\"thenlper/gte-small\")\n",
"vectordb = FAISS.from_documents(documents=docs_processed, embedding=embedding_model)"
]
},
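{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before handing the vector store to an agent, it can help to sanity-check it directly. The snippet below is a minimal sketch (not part of the original walkthrough); the query string is just an illustrative example.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick sanity check of the vector store (illustrative query, assumed for this sketch)\n",
"sample_docs = vectordb.similarity_search(\"How do I fine-tune a model with LoRA?\", k=2)\n",
"for doc in sample_docs:\n",
"    print(doc.metadata[\"source\"], \"->\", doc.page_content[:200])"
]
},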
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们已经准备好了数据库,让我们构建一个基于它回答用户查询的 RAG 系统!\n",
"\n",
"我们希望我们的系统根据查询只从最相关的信息来源中选择。\n",
"\n",
"我们的文档页面来自以下来源:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['evaluate', 'course', 'deep-rl-class', 'peft', 'hf-endpoints-documentation', 'blog', 'gradio', 'datasets', 'datasets-server', 'transformers', 'optimum', 'hub-docs', 'pytorch-image-models', 'diffusers']\n"
]
}
],
"source": [
"all_sources = list(set([doc.metadata[\"source\"] for doc in docs_processed]))\n",
"print(all_sources)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from smolagents import Tool\n",
"from langchain_core.vectorstores import VectorStore\n",
"\n",
"\n",
"class RetrieverTool(Tool):\n",
" name = \"retriever\"\n",
" description = \"Retrieves some documents from the knowledge base that have the closest embeddings to the input query.\"\n",
" inputs = {\n",
" \"query\": {\n",
" \"type\": \"text\",\n",
" \"description\": \"The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.\",\n",
" },\n",
" \"source\": {\"type\": \"text\", \"description\": \"\"},\n",
" \"number_of_documents\": {\n",
" \"type\": \"text\",\n",
" \"description\": \"the number of documents to retrieve. Stay under 10 to avoid drowning in docs\",\n",
" },\n",
" }\n",
" output_type = \"text\"\n",
"\n",
" def __init__(self, vectordb: VectorStore, all_sources: str, **kwargs):\n",
" super().__init__(**kwargs)\n",
" self.vectordb = vectordb\n",
" self.inputs[\"source\"][\n",
" \"description\"\n",
" ] = f\"The source of the documents to search, as a str representation of a list. Possible values in the list are: {all_sources}. If this argument is not provided, all sources will be searched.\"\n",
"\n",
" def forward(self, query: str, source: str = None, number_of_documents=7) -> str:\n",
" assert isinstance(query, str), \"Your search query must be a string\"\n",
" number_of_documents = int(number_of_documents)\n",
"\n",
" if source:\n",
" if isinstance(source, str) and \"[\" not in str(\n",
" source\n",
" ): # if the source is not representing a list\n",
" source = [source]\n",
" source = json.loads(str(source).replace(\"'\", '\"'))\n",
"\n",
" docs = self.vectordb.similarity_search(\n",
" query,\n",
" filter=({\"source\": source} if source else None),\n",
" k=number_of_documents,\n",
" )\n",
"\n",
" if len(docs) == 0:\n",
" return \"No documents found with this filtering. Try removing the source filter.\"\n",
" return \"Retrieved documents:\\n\\n\" + \"\\n===Document===\\n\".join(\n",
" [doc.page_content for doc in docs]\n",
" )"
]
},
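{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to try the tool locally before pushing it to or loading it from the Hub, you can instantiate it and call its `forward` method directly. This is a minimal sketch (not part of the original walkthrough); the query and source values are illustrative assumptions.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Instantiate the retriever tool locally and run one illustrative query (assumed example values)\n",
"local_retriever_tool = RetrieverTool(vectordb, all_sources)\n",
"print(\n",
"    local_retriever_tool.forward(\n",
"        query=\"LoRA finetuning script\",\n",
"        source=\"['peft']\",\n",
"        number_of_documents=\"3\",\n",
"    )\n",
")"
]
},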
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 可选:将你的检索器工具分享到 Hub\n",
"\n",
"要将你的工具分享到 Hub,首先将检索器工具定义单元格中的代码复制粘贴到一个名为例如 `retriever.py` 的新文件中。\n",
"\n",
"当工具从单独的文件加载后,你可以使用以下代码将其推送到 Hub(确保使用具有`写入`访问权限的 token 登录)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"share_to_hub = False\n",
"\n",
"if share_to_hub:\n",
" from huggingface_hub import login\n",
" from retriever import RetrieverTool\n",
"\n",
" login(\"your_token\")\n",
"\n",
" tool = RetrieverTool(vectordb, all_sources)\n",
"\n",
" tool.push_to_hub(repo_id=\"m-ric/retriever-tool\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 运行智能体!"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"A new version of the following files was downloaded from https://huggingface.co/spaces/m-ric/retriever-tool:\n",
"- retriever.py\n",
". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
"\u001b[33;1m======== New task ========\u001b[0m\n",
"\u001b[37;1mPlease show me a LORA finetuning script\u001b[0m\n",
"\u001b[33;1mCalling tool: 'retriever' with arguments: {'number_of_documents': '5', 'query': 'LORA finetuning script', 'source': \"['transformers', 'blog']\"}\u001b[0m\n",
"\u001b[33;1mCalling tool: 'retriever' with arguments: {'number_of_documents': '5', 'query': 'LORA finetuning script'}\u001b[0m\n",
"\u001b[33;1mCalling tool: 'retriever' with arguments: {'number_of_documents': '5', 'query': 'train_text_to_image_lora.py'}\u001b[0m\n",
"\u001b[33;1mCalling tool: 'final_answer' with arguments: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Final output:\n",
"https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py\n"
]
}
],
"source": [
"from smolagents import HfModel, ToolCallingAgent, load_tool\n",
"\n",
"model = HfModel(\"meta-llama/Meta-Llama-3-70B-Instruct\")\n",
"\n",
"retriever_tool = load_tool(\n",
" \"m-ric/retriever-tool\", vectordb=vectordb, all_sources=all_sources\n",
")\n",
"agent = ToolCallingAgent(tools=[retriever_tool], model=model, verbose=0)\n",
"\n",
"agent_output = agent.run(\"Please show me a LORA finetuning script\")\n",
"\n",
"print(\"Final output:\")\n",
"print(agent_output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"发生了什么?首先,智能体启动了检索器,并考虑了特定的来源(`['transformers', 'blog']`)。\n",
"\n",
"但是这次检索没有产生足够的结果 ⇒ 没关系!智能体可以迭代之前的结果,因此它只是用不那么严格的搜索参数重新运行了它的检索。\n",
"\n",
"因此,研究成功了!\n",
"\n",
"请注意,**使用调用检索器作为工具并可以动态修改查询和其他检索参数的 LLM 智能体**是 RAG 的**更一般的表述**,这也涵盖了像迭代查询优化这样的许多 RAG 改进技术。\n",
"\n",
"## 3. 💻 调试 Python 代码"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[33;1m======== New task ========\u001b[0m\n",
"\u001b[37;1mI have some code that creates a bug: please debug it and return the final code\n",
"You have been provided with these initial arguments: {'code': '\\nlist=[0, 1, 2]\\n\\nfor i in range(4):\\n print(list(i))\\n'}.\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\n",
"\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m)\u001b[39m\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;139m4\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[31;20mFailed while trying to execute the code below:\n",
"\u001b[0mlist=[0, 1, 2]\n",
"print(list)\n",
"for i in range(4):\n",
" print(list[i])\u001b[0m\n",
"This failed due to the following error:\n",
"list index out of range\u001b[0m\n",
"Traceback (most recent call last):\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/agents.py\", line 823, in step\n",
" result = self.python_evaluator(code_action, available_tools, state=self.state)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 511, in evaluate_python_code\n",
" line_result = evaluate_ast(node, state, tools)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 404, in evaluate_ast\n",
" return evaluate_for(expression, state, tools)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 313, in evaluate_for\n",
" line_result = evaluate_ast(node, state, tools)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 401, in evaluate_ast\n",
" return evaluate_ast(expression.value, state, tools)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 365, in evaluate_ast\n",
" return evaluate_call(expression, state, tools)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 215, in evaluate_call\n",
" args = [evaluate_ast(arg, state, tools) for arg in call.args]\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 423, in evaluate_ast\n",
" return evaluate_subscript(expression, state, tools)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/python_interpreter.py\", line 236, in evaluate_subscript\n",
" return value[int(index)]\n",
" ~~~~~^^^^^^^^^^^^\n",
"IndexError: list index out of range\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/agents.py\", line 623, in run\n",
" final_answer = self.step()\n",
" ^^^^^^^^^^^\n",
" File \"/Users/aymeric/Documents/Code/original_transformers/transformers/src/transformers/agents/agents.py\", line 832, in step\n",
" raise AgentExecutionError(error_msg)\n",
"transformers.agents.agents.AgentExecutionError: Failed while trying to execute the code below:\n",
"\u001b[0mlist=[0, 1, 2]\n",
"print(list)\n",
"for i in range(4):\n",
" print(list[i])\u001b[0m\n",
"This failed due to the following error:\n",
"list index out of range\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\n",
"\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m)\u001b[39m\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;139m3\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m[0, 1, 2]\n",
"0\n",
"1\n",
"2\n",
"\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\n",
"\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m)\u001b[39m\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;139m3\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m[0, 1, 2]\n",
"0\n",
"1\n",
"2\n",
"\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\n",
"\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m)\u001b[39m\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlen\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m[0, 1, 2]\n",
"0\n",
"1\n",
"2\n",
"\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\n",
"\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m)\u001b[39m\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;139m3\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlist\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\n",
"\u001b[38;5;7mfinal_answer\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;7mcode\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m[0, 1, 2]\n",
"0\n",
"1\n",
"2\n",
"\u001b[0m\n",
"\u001b[33;1m>>> Final answer:\u001b[0m\n",
"\u001b[32;20m\n",
"list=[0, 1, 2]\n",
"\n",
"for i in range(4):\n",
" print(list(i))\n",
"\u001b[0m\n"
]
}
],
"source": [
"from smolagents import CodeAgent\n",
"\n",
"agent = CodeAgent(tools=[])\n",
"\n",
"code = \"\"\"\n",
"list=[0, 1, 2]\n",
"\n",
"for i in range(4):\n",
" print(list(i))\n",
"\"\"\"\n",
"\n",
"final_answer = agent.run(\n",
" \"I have some code that creates a bug: please debug it and return the final code\",\n",
" code=code,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"正如你所看到的,智能体尝试了给定的代码,遇到错误,分析错误,纠正代码,并在验证代码可以正常工作后返回它!\n",
"\n",
"最终的代码是纠正后的代码:\n"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"list=[0, 1, 2]\n",
"\n",
"for i in range(4):\n",
" print(list(i))\n",
"\n"
]
}
],
"source": [
"print(final_answer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. 创建你自己的 LLM 引擎(OpenAI)\n",
"\n",
"设置你自己的 LLM 引擎真的非常简单:\n",
"它只需要一个具有以下标准的`__call__`方法:\n",
"1. 接受[ChatML 格式](https://huggingface.co/docs/transformers/main/en/chat_templating#introduction)的消息列表作为输入并输出答案。\n",
"2. 接受一个 `stop_sequences` 参数,以传递生成停止的序列。\n",
"3. 根据你的 LLM 接受哪种类型的消息角色,你可能还需要转换一些消息角色。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[33;1m======== New task ========\u001b[0m\n",
"\u001b[37;1mI have some code that creates a bug: please debug it and return the final code\n",
"You have been provided with these initial arguments: {'code': '\\nlist=[0, 1, 2]\\n\\nfor i in range(4):\\n print(list(i))\\n'}.\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;7mmy_list\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;60;03m# Renamed the list to avoid using the built-in name\u001b[39;00m\n",
"\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlen\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;7mmy_list\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;60;03m# Changed the range to be within the length of the list\u001b[39;00m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;7mmy_list\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;60;03m# Corrected the list access syntax\u001b[39;00m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m0\n",
"1\n",
"2\n",
"\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;7mmy_list\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;139m0\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m1\u001b[39m\u001b[38;5;7m,\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;139m2\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;60;03m# Renamed the list to avoid using the built-in name\u001b[39;00m\n",
"\n",
"\u001b[38;5;109;01mfor\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01min\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;109mrange\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;109mlen\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;7mmy_list\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m:\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;60;03m# Changed the range to be within the length of the list\u001b[39;00m\n",
"\u001b[38;5;7m \u001b[39m\u001b[38;5;109mprint\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;7mmy_list\u001b[39m\u001b[38;5;7m[\u001b[39m\u001b[38;5;7mi\u001b[39m\u001b[38;5;7m]\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;60;03m# Corrected the list access syntax\u001b[39;00m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m0\n",
"1\n",
"2\n",
"\u001b[0m\n",
"\u001b[33;1m==== Agent is executing the code below:\u001b[0m\n",
"\u001b[0m\u001b[38;5;7mcorrected_code\u001b[39m\u001b[38;5;7m \u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7m \u001b[39m\u001b[38;5;144m'''\u001b[39m\n",
"\u001b[38;5;144mmy_list = [0, 1, 2] # Renamed the list to avoid using the built-in name\u001b[39m\n",
"\n",
"\u001b[38;5;144mfor i in range(len(my_list)): # Changed the range to be within the length of the list\u001b[39m\n",
"\u001b[38;5;144m print(my_list[i]) # Corrected the list access syntax\u001b[39m\n",
"\u001b[38;5;144m'''\u001b[39m\n",
"\n",
"\u001b[38;5;7mfinal_answer\u001b[39m\u001b[38;5;7m(\u001b[39m\u001b[38;5;7manswer\u001b[39m\u001b[38;5;109;01m=\u001b[39;00m\u001b[38;5;7mcorrected_code\u001b[39m\u001b[38;5;7m)\u001b[39m\u001b[0m\n",
"\u001b[33;1m====\u001b[0m\n",
"\u001b[33;1mPrint outputs:\u001b[0m\n",
"\u001b[32;20m\u001b[0m\n",
"\u001b[33;1m>>> Final answer:\u001b[0m\n",
"\u001b[32;20m\n",
"my_list = [0, 1, 2] # Renamed the list to avoid using the built-in name\n",
"\n",
"for i in range(len(my_list)): # Changed the range to be within the length of the list\n",
" print(my_list[i]) # Corrected the list access syntax\n",
"\u001b[0m\n"
]
}
],
"source": [
"import os\n",
"from openai import OpenAI\n",
"from smolagents.model import MessageRole, get_clean_message_list\n",
"\n",
"openai_role_conversions = {\n",
" MessageRole.TOOL_RESPONSE: \"user\",\n",
"}\n",
"\n",
"\n",
"class OpenAIModel:\n",
" def __init__(self, model_name=\"gpt-4o-2024-05-13\"):\n",
" self.model_name = model_name\n",
" self.client = OpenAI(\n",
" api_key=os.getenv(\"OPENAI_API_KEY\"),\n",
" )\n",
"\n",
" def __call__(self, messages, stop_sequences=[]):\n",
" # Get clean message list\n",
" messages = get_clean_message_list(\n",
" messages, role_conversions=openai_role_conversions\n",
" )\n",
"\n",
" # Get LLM output\n",
" response = self.client.chat.completions.create(\n",
" model=self.model_name,\n",
" messages=messages,\n",
" stop=stop_sequences,\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"\n",
"openai_engine = OpenAIModel()\n",
"agent = CodeAgent(model=openai_engine, tools=[])\n",
"\n",
"code = \"\"\"\n",
"list=[0, 1, 2]\n",
"\n",
"for i in range(4):\n",
" print(list(i))\n",
"\"\"\"\n",
"\n",
"final_answer = agent.run(\n",
" \"I have some code that creates a bug: please debug it and return the final code\",\n",
" code=code,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"my_list = [0, 1, 2] # Renamed the list to avoid using the built-in name\n",
"\n",
"for i in range(len(my_list)): # Changed the range to be within the length of the list\n",
" print(my_list[i]) # Corrected the list access syntax\n",
"\n"
]
}
],
"source": [
"print(final_answer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ➡️ 结论\n",
"\n",
"上述用例应该让你对我们智能体框架的可能性有了初步了解!\n",
"\n",
"想要了解更多高级用法,请阅读[文档](https://huggingface.co/docs/transformers/en/transformers_agents), 以及[此实验](https://github.com/aymeric-roucher/agent_reasoning_benchmark/blob/main/benchmark_gaia.ipynb),它让我们能够基于 Llama-3-70B 构建自己的智能体,并在非常困难的[GAIA 排行榜](https://huggingface.co/spaces/gaia-benchmark/leaderboard)上击败许多 GPT-4 智能体!\n",
"\n",
"欢迎所有反馈,这将帮助我们改进框架! 🚀"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "test2",
"language": "python",
"name": "test2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}