2-notebooks/4-frameworks/2-autogen-multi-agent-rag.ipynb:

{ "cells": [ { "cell_type": "markdown", "id": "title-cell", "metadata": {}, "source": [ "# šŸ‹ļø‍ā™‚ļø Multi-Agent RAG for Fitness & Health with Azure AI Foundry šŸ„‘\n", "\n", "Welcome to this fun, self-guided workshop where we'll build a multi-agent Retrieval-Augmented Generation (RAG) pipeline using AutoGen 0.4.7 with Azure AI Foundry. Our team of agents will collaborate to answer your fitness and health questions in an engaging way!" ] }, { "cell_type": "markdown", "id": "setup-markdown", "metadata": {}, "source": [ "## 1. Setup\n", "\n", "Let's import the necessary libraries and set up our model client using Azure AI Foundry. Make sure your environment variable `GITHUB_TOKEN` is set with your personal access token." ] }, { "cell_type": "code", "execution_count": null, "id": "setup-code", "metadata": {}, "outputs": [], "source": [ "import os\n", "import asyncio\n", "\n", "# Import AutoGen agents and required modules\n", "from autogen_agentchat.agents import AssistantAgent\n", "from autogen_agentchat.teams import RoundRobinGroupChat\n", "from autogen_agentchat.messages import TextMessage\n", "from autogen_core import CancellationToken\n", "\n", "# Import the Azure AI Foundry model client from AutoGen extensions\n", "from autogen_ext.models.azure import AzureAIChatCompletionClient\n", "from autogen_core.models import UserMessage\n", "from azure.core.credentials import AzureKeyCredential\n", "\n", "# Create the model client using Azure AI Foundry\n", "model_client = AzureAIChatCompletionClient(\n", " model=os.environ[\"MODEL_DEPLOYMENT_NAME\"],\n", " endpoint=\"https://models.inference.ai.azure.com\",\n", " credential=AzureKeyCredential(os.environ[\"GITHUB_TOKEN\"]),\n", " model_info={\n", " \"json_output\": False,\n", " \"function_calling\": False,\n", " \"vision\": False,\n", " \"family\": \"unknown\"\n", " }\n", ")\n", "\n", "print(\"āœ… Azure AI Foundry model client created successfully!\")" ] }, { "cell_type": "markdown", "id": "health-data-markdown", "metadata": {}, "source": [ "## 2. Create Sample Health Data and Retrieval Tool\n", "\n", "We'll define a small list of health tips and a simple retrieval function. This function simulates retrieving relevant health tips based on keywords in the user's query." 
] }, { "cell_type": "code", "execution_count": null, "id": "health-data-code", "metadata": {}, "outputs": [], "source": [ "# Define sample health tips\n", "health_tips = [\n", " {\"id\": \"tip1\", \"content\": \"Do a 10-minute HIIT workout to boost your metabolism.\", \"source\": \"Fitness Guru\"},\n", " {\"id\": \"tip2\", \"content\": \"Take a brisk 15-minute walk to clear your mind and improve circulation.\", \"source\": \"Health Coach\"},\n", " {\"id\": \"tip3\", \"content\": \"Stretch for 5 minutes every hour if you're sitting at a desk.\", \"source\": \"Wellness Expert\"},\n", " {\"id\": \"tip4\", \"content\": \"Incorporate strength training twice a week for overall fitness.\", \"source\": \"Personal Trainer\"},\n", " {\"id\": \"tip5\", \"content\": \"Drink water regularly to stay hydrated during workouts.\", \"source\": \"Nutritionist\"}\n", "]\n", "\n", "def retrieve_tips(query: str) -> str:\n", " \"\"\"Return health tips whose content contains keywords from the query.\"\"\"\n", " query_lower = query.lower()\n", " relevant = []\n", " for tip in health_tips:\n", " # Check if any word in the query is in the tip content\n", " if any(word in tip[\"content\"].lower() for word in query_lower.split()):\n", " relevant.append(f\"Source: {tip['source']} => {tip['content']}\")\n", " if not relevant:\n", " # If no tips match, return all tips (for demo purposes)\n", " relevant = [f\"Source: {tip['source']} => {tip['content']}\" for tip in health_tips]\n", " return \"\\n\".join(relevant)\n", "\n", "print(\"āœ… Sample health tips and retrieval tool created!\")" ] }, { "cell_type": "markdown", "id": "pipeline-markdown", "metadata": {}, "source": [ "## 3. Define Our Multi-Agent RAG Pipeline\n", "\n", "We will create two agents:\n", "\n", "1. **RetrieverAgent**: Uses the `retrieve_tips` tool to fetch relevant fitness and health tips based on the user's query.\n", "2. **ResponderAgent**: Crafts a fun, engaging response using the retrieved tips along with the user's query.\n", "\n", "These agents will be combined in a RoundRobinGroupChat to simulate collaborative problem-solving." ] }, { "cell_type": "code", "execution_count": null, "id": "pipeline-code", "metadata": {}, "outputs": [], "source": [ "# Define the RetrieverAgent with the retrieve_tips tool\n", "retriever = AssistantAgent(\n", " name=\"RetrieverAgent\",\n", " system_message=(\n", " \"You are a smart retrieval agent. Your task is to fetch relevant fitness and health tips based on the user's query. \"\n", " \"Use the provided tool 'retrieve_tips' to get the information.\"\n", " ),\n", " model_client=model_client,\n", " tools=[retrieve_tips], # Register the retrieval tool\n", " reflect_on_tool_use=True\n", ")\n", "\n", "# Define the ResponderAgent which uses the retrieved tips to craft a response\n", "responder = AssistantAgent(\n", " name=\"ResponderAgent\",\n", " system_message=(\n", " \"You are a friendly fitness coach. Use the health tips provided to craft a fun and engaging response to the user's question.\"\n", " ),\n", " model_client=model_client\n", ")\n", "\n", "# Combine the two agents in a RoundRobinGroupChat\n", "group_chat = RoundRobinGroupChat(\n", " agents=[retriever, responder],\n", " termination_condition=None, # For simplicity, we run a fixed number of turns\n", " max_turns=4 # Limit the conversation to 4 turns\n", ")\n", "\n", "print(\"āœ… Multi-agent RAG pipeline defined!\")" ] }, { "cell_type": "markdown", "id": "try-query-markdown", "metadata": {}, "source": [ "## 4. 
{ "cell_type": "markdown", "id": "try-query-markdown", "metadata": {}, "source": [ "## 4. Try a Query\n", "\n", "Let's test our multi-agent RAG system with a fun fitness query. For example:\n", "\n", "> **User Query:** _I'm very busy but want to stay fit. What quick exercises can I do?_" ] },
{ "cell_type": "code", "execution_count": null, "id": "try-query-code", "metadata": {}, "outputs": [], "source": [ "async def run_query():\n", "    user_query = \"I'm very busy but want to stay fit. What quick exercises can I do?\"\n", "\n", "    # Run the group chat to process the query\n", "    final_result = None\n", "    async for message in group_chat.run_stream(task=user_query):\n", "        # Print each agent message as it is produced (the final item is a TaskResult without 'content')\n", "        if hasattr(message, 'content'):\n", "            print(f\"{message.source}: {message.content}\")\n", "            final_result = message\n", "\n", "    print(\"\\nāœ… Final response:\", final_result.content if final_result else \"No response\")\n", "\n", "# The notebook kernel already runs an event loop, so await the coroutine directly\n", "# instead of calling asyncio.run(), which would raise a RuntimeError here.\n", "await run_query()" ] },
{ "cell_type": "markdown", "id": "conclusion-markdown", "metadata": {}, "source": [ "## 5. Conclusion\n", "\n", "In this notebook, we built a fun multi-agent Retrieval-Augmented Generation pipeline using AutoGen 0.4.7 with Azure AI Foundry, all around a fitness and health theme. We created a RetrieverAgent that fetches relevant health tips via a tool and a ResponderAgent that crafts an engaging answer to the user's query.\n", "\n", "Feel free to modify and expand this notebook to explore more advanced multi-agent collaborations. Happy coding and stay fit! šŸ’ŖšŸ„¦" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 5 }