1-introduction/3-quick_start.ipynb
{
"cells": [
{
"cell_type": "markdown",
"id": "c2e86a0d",
"metadata": {},
"source": [
"# Quick Start Guide - Azure AI Foundry\n",
"\n",
"This notebook provides a hands-on introduction to Azure AI Foundry. You'll learn how to:\n",
"1. Initialize the AI Project client\n",
"2. List available models\n",
"3. Create a simple chat completion request\n",
"4. Create a basic AI agent\n",
"5. Handle basic error scenarios\n",
"\n",
"## Prerequisites\n",
"- Completed environment setup from previous notebook\n",
"- Azure credentials configured"
]
},
{
"cell_type": "markdown",
"id": "f3b65a7d",
"metadata": {},
"source": [
"## Import Required Libraries and Setup\n",
"\n",
"In the next cell, we'll:\n",
"1. Import the necessary Azure SDK libraries for authentication and AI Projects\n",
"2. Import standard Python libraries for environment variables and JSON handling\n",
"3. Initialize Azure credentials using DefaultAzureCredential\n",
" - This will automatically use your logged-in Azure CLI credentials\n",
" - Alternatively, it can use other authentication methods like environment variables or managed identity\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b1a355de",
"metadata": {},
"outputs": [],
"source": [
"# Import required libraries\n",
"from azure.identity import DefaultAzureCredential\n",
"from azure.ai.projects import AIProjectClient\n",
"import os\n",
"import json\n",
"\n",
"# Initialize credentials\n",
"credential = DefaultAzureCredential()"
]
},
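{
"cell_type": "markdown",
"id": "0a1b2c3d",
"metadata": {},
"source": [
"> **Optional check:** If you want to confirm that `DefaultAzureCredential` can actually obtain a token before continuing, the short sketch below requests one for the Azure Resource Manager scope. The scope here is only an example; if the call fails, running `az login` usually resolves it.\n",
"\n",
"```python\n",
"# Optional sanity check (sketch): verify the credential can fetch a token.\n",
"# The ARM scope below is just an example scope for this check.\n",
"token = credential.get_token(\"https://management.azure.com/.default\")\n",
"print(\"✓ Credential OK, token expires at:\", token.expires_on)\n",
"```"
]
},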
{
"cell_type": "markdown",
"id": "bd18d4ef",
"metadata": {},
"source": [
"## Initialize AI Project Client\n",
"\n",
"> **Note:** Before proceeding, ensure you:\n",
"> 1. Copy your `.env.example` file to `.env` from the root directory\n",
"> 2. Update the project connection string in your `.env` file\n",
"> 3. Have a Hub and Project already provisioned in Azure AI Foundry\n",
"\n",
"You can find your project connection string in [Azure AI Foundry](https://ai.azure.com) under your project's settings:\n",
"\n",
"<img src=\"proj-conn-string.png\" alt=\"Project Connection String Location\" width=\"600\"/>\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "60e5ebd2",
"metadata": {},
"source": [
"## Creating the AI Project Client\n",
"\n",
"In the next cell, we'll create an AI Project client using the connection string from our `.env` file.\n",
"> **Note:** This example uses the synchronous client. For higher performance scenarios, you can also create an asynchronous client by importing `asyncio` and using the async methods from `AIProjectClient`.\n",
"\n",
"The client will be used to:\n",
"- Connect to your Azure AI Project using the connection string\n",
"- Authenticate using Azure credentials\n",
"- Enable making inference requests to your deployed models\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1b96006",
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"from pathlib import Path\n",
"\n",
"# Load environment variables\n",
"notebook_path = Path().absolute()\n",
"parent_dir = notebook_path.parent\n",
"load_dotenv(parent_dir / '.env')\n",
"\n",
"try:\n",
" client = AIProjectClient.from_connection_string(\n",
" conn_str=os.getenv(\"PROJECT_CONNECTION_STRING\"),\n",
" credential=credential\n",
" )\n",
" print(\"✓ Successfully initialized AIProjectClient\")\n",
"except Exception as e:\n",
" print(f\"× Error initializing client: {str(e)}\")"
]
},
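{
"cell_type": "markdown",
"id": "1f2e3d4c",
"metadata": {},
"source": [
"### Optional: asynchronous client\n",
"\n",
"As noted above, an asynchronous variant is available for higher-throughput scenarios. The sketch below assumes the async client and credential are exposed under the `aio` submodules of your installed `azure-ai-projects` and `azure-identity` packages; check your SDK version if the imports fail.\n",
"\n",
"```python\n",
"# Sketch only: async variant of the client setup (not executed in this notebook).\n",
"import asyncio\n",
"import os\n",
"\n",
"from azure.identity.aio import DefaultAzureCredential as AsyncDefaultAzureCredential\n",
"from azure.ai.projects.aio import AIProjectClient as AsyncAIProjectClient\n",
"\n",
"async def main():\n",
"    async with AsyncDefaultAzureCredential() as async_credential:\n",
"        async with AsyncAIProjectClient.from_connection_string(\n",
"            conn_str=os.getenv(\"PROJECT_CONNECTION_STRING\"),\n",
"            credential=async_credential,\n",
"        ) as async_client:\n",
"            print(\"✓ Async AIProjectClient initialized\")\n",
"\n",
"# In a notebook you can usually run:  await main()\n",
"# In a plain Python script use:       asyncio.run(main())\n",
"```"
]
},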
{
"cell_type": "markdown",
"id": "8d77e602",
"metadata": {},
"source": [
"## Create a Simple Completion\n",
"Let's try a basic completion request:\n",
"\n",
"Now that we have an authenticated client, let's use it to make a chat completion request.\n",
"The code below demonstrates how to:\n",
"1. Get a ChatCompletionsClient from the azure-ai-inference package\n",
"2. Use it to make a simple completion request\n",
"\n",
"We'll use the MODEL_DEPLOYMENT_NAME from our `.env` file, making it easy to switch between different\n",
"deployed models without changing code. This could be an Azure OpenAI model, Microsoft model, or other providers\n",
"that support chat completions.\n",
"\n",
"> Note: Make sure you have the azure-ai-inference package installed (from requirements.txt or as mentioned in [README.md](../README.md#-quick-start))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3774ed1e",
"metadata": {},
"outputs": [],
"source": [
"from azure.ai.inference.models import UserMessage\n",
"\n",
"model_deployment_name = os.getenv(\"MODEL_DEPLOYMENT_NAME\")\n",
"\n",
"try:\n",
" chat_client = client.inference.get_chat_completions_client()\n",
" response = chat_client.complete(\n",
" model=model_deployment_name, \n",
" messages=[UserMessage(content=\"How to be healthy in one sentence?\")]\n",
" )\n",
" print(response.choices[0].message.content)\n",
"except Exception as e:\n",
" print(f\"An error occurred: {str(e)}\")"
]
},
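{
"cell_type": "markdown",
"id": "9b8c7d6e",
"metadata": {},
"source": [
"### Optional: system prompt and streaming\n",
"\n",
"The same `chat_client` also accepts a system message to steer the model, and recent `azure-ai-inference` versions support streaming responses. The sketch below assumes your installed version accepts the `stream=True` flag; if it does not, drop that argument and read `response.choices[0].message.content` as in the cell above.\n",
"\n",
"```python\n",
"# Sketch: steer the model with a system message and stream the reply as it is generated.\n",
"from azure.ai.inference.models import SystemMessage, UserMessage\n",
"\n",
"stream = chat_client.complete(\n",
"    model=model_deployment_name,\n",
"    messages=[\n",
"        SystemMessage(content=\"You are a concise health coach. Answer in one sentence.\"),\n",
"        UserMessage(content=\"How can I sleep better?\"),\n",
"    ],\n",
"    stream=True,  # assumed to be supported by your azure-ai-inference version\n",
")\n",
"\n",
"for update in stream:\n",
"    # Each streamed update may carry a small piece of the reply in choices[0].delta.content\n",
"    if update.choices and update.choices[0].delta.content:\n",
"        print(update.choices[0].delta.content, end=\"\")\n",
"```"
]
},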
{
"cell_type": "markdown",
"id": "8d7864f9",
"metadata": {},
"source": [
"## Create a simple Agent\n",
"\n",
"Using AI Agent Service, we can create a simple agent to answer health related questions.\n",
"\n",
"Let's explore Azure AI Agent Service, a powerful tool for building intelligent agents.\n",
"\n",
"Azure AI Agent Service is a fully managed service that helps developers build, deploy, and scale AI agents\n",
"without managing infrastructure. It combines large language models with tools that allow agents to:\n",
"- Answer questions using RAG (Retrieval Augmented Generation)\n",
"- Perform actions through tool calling \n",
"- Automate complex workflows\n",
"\n",
"The code below demonstrates how to:\n",
"1. Create an agent with a code interpreter tool\n",
"2. Create a conversation thread\n",
"3. Send a message requesting BMI analysis \n",
"4. Process the request and get results\n",
"5. Save any generated visualizations\n",
"\n",
"The agent will use the model specified in our .env file (MODEL_DEPLOYMENT_NAME) and will have access\n",
"to a code interpreter tool for creating visualizations. This showcases how agents can combine\n",
"natural language understanding with computational capabilities.\n",
"\n",
"> The visualization will be saved as a PNG file in the same folder as this notebook.\n",
" \n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9afb12b2",
"metadata": {},
"outputs": [],
"source": [
"from azure.ai.projects.models import CodeInterpreterTool\n",
"\n",
"try:\n",
" # Initialize the Code Interpreter Tool\n",
" code_interpreter = CodeInterpreterTool()\n",
" \n",
" # Create an AI agent with the code interpreter tool\n",
" agent = client.agents.create_agent(\n",
" model=model_deployment_name,\n",
" name=\"bmi-calculator\",\n",
" instructions=(\n",
" \"You are a health analyst who calculates BMI using US metrics (pounds, feet/inches). \"\n",
" \"Use average US female measurements: 5'4\\\" (69 inches) and 130 pounds. \"\n",
" \"Create a visualization showing where this BMI falls on the scale.\"\n",
" ),\n",
" tools=code_interpreter.definitions,\n",
" tool_resources=code_interpreter.resources,\n",
" )\n",
" \n",
" # Create a new conversation thread\n",
" thread = client.agents.create_thread()\n",
" \n",
" # Create a message requesting BMI analysis and visualization\n",
" message = client.agents.create_message(\n",
" thread_id=thread.id,\n",
" role=\"user\",\n",
" content=(\n",
" \"Calculate BMI for an average US female (5'4\\\", 130 lbs). \"\n",
" \"Create a visualization showing where this BMI falls on the standard BMI scale from 15 to 35. \"\n",
" \"Include the standard BMI categories (Underweight, Normal, Overweight, Obese) in the visualization.\"\n",
" )\n",
" )\n",
" \n",
" # Process the request by creating and running a thread run\n",
" run = client.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)\n",
" \n",
" # Retrieve and save any generated visualizations\n",
" messages = client.agents.list_messages(thread_id=thread.id)\n",
" for image_content in messages.image_contents:\n",
" file_name = f\"bmi_analysis_{image_content.image_file.file_id}.png\"\n",
" client.agents.save_file(file_id=image_content.image_file.file_id, file_name=file_name)\n",
" print(f\"Analysis saved as: {file_name}\")\n",
" \n",
" # Print the analysis text from the assistant\n",
" print(f\"Messages: {messages}\")\n",
" if last_msg := messages.get_last_text_message_by_role(\"assistant\"):\n",
" print(f\"Analysis: {last_msg.text.value}\")\n",
" \n",
" # Cleanup by deleting the agent\n",
" client.agents.delete_agent(agent.id)\n",
" \n",
"except Exception as e:\n",
" print(f\"An error occurred: {str(e)}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}