2-notebooks/1-chat_completion/1-basic-chat-completion.ipynb

{ "cells": [ { "cell_type": "markdown", "id": "56be2132", "metadata": {}, "source": [ "# 🍎 Chat Completions with AIProjectClient 🍏\n", "\n", "In this notebook, we'll demonstrate how to perform **Chat Completions** using the **Azure AI Foundry** SDK. We'll combine **`azure-ai-projects`** and **`azure-ai-inference`** packages to:\n", "\n", "1. **Initialize** an `AIProjectClient`.\n", "2. **Obtain** a Chat Completions client to do direct LLM calls.\n", "3. **Use** a **prompt template** to add system context.\n", "4. **Send** user prompts in a health & fitness theme.\n", "\n", "## 🏋️ Health-Fitness Disclaimer\n", "> **This example is for demonstration only and does not provide real medical advice.** Always consult a professional for health or medical-related questions.\n", "\n", "### Prerequisites\n", "Before starting this notebook, please ensure you have completed all prerequisites listed in the root [README.md](../../README.md#-prerequisites).\n", "\n", "Let's get started! 🎉\n", "\n", "<img src=\"./seq-diagrams/1-chat.png\" width=\"30%\"/>\n" ] }, { "cell_type": "markdown", "id": "ce5fedcc", "metadata": {}, "source": [ "## 1. Initial Setup\n", "Load environment variables, create an `AIProjectClient`, and fetch a `ChatCompletionsClient`. 
We'll also define a **prompt template** to show how you might structure a system message.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "07dd1b4d", "metadata": {}, "outputs": [], "source": [ "import os\n", "from dotenv import load_dotenv\n", "from pathlib import Path\n", "from azure.identity import DefaultAzureCredential\n", "\n", "from azure.ai.projects import AIProjectClient\n", "from azure.ai.inference.models import UserMessage, SystemMessage # for chat messages\n", "\n", "# Load environment variables\n", "notebook_path = Path().absolute()\n", "parent_dir = notebook_path.parent\n", "load_dotenv(parent_dir / '.env')\n", "\n", "# Retrieve from environment\n", "connection_string = os.environ.get(\"PROJECT_CONNECTION_STRING\")\n", "model_deployment = os.environ.get(\"MODEL_DEPLOYMENT_NAME\")\n", "\n", "try:\n", " # Create the project client\n", " project_client = AIProjectClient.from_connection_string(\n", " credential=DefaultAzureCredential(),\n", " conn_str=connection_string,\n", " )\n", " print(\"✅ Successfully created AIProjectClient\")\n", "except Exception as e:\n", " print(\"❌ Error initializing client:\", e)\n" ] }, { "cell_type": "markdown", "id": "f03c9a87", "metadata": {}, "source": [ "### Prompt Template\n", "We'll define a quick **system** message that sets the context as a friendly, disclaimer-providing fitness assistant.\n", "\n", "```txt\n", "SYSTEM PROMPT (template):\n", "You are FitChat GPT, a helpful fitness assistant.\n", "Always remind users: I'm not a medical professional.\n", "Be friendly, provide general advice.\n", "...\n", "```\n", "\n", "We'll then pass user content as a **user** message.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ab052b5e", "metadata": {}, "outputs": [], "source": [ "# We'll define a function that runs chat completions with a system prompt & user prompt\n", "def chat_with_fitness_assistant(user_input: str):\n", " \"\"\"Use chat completions to get a response from our LLM, with system 
instructions.\"\"\"\n", " # Our system message template\n", " system_text = (\n", " \"You are FitChat GPT, a friendly fitness assistant.\\n\"\n", " \"Always remind users: I'm not a medical professional.\\n\"\n", " \"Answer with empathy and disclaimers.\"\n", " )\n", "\n", " # We'll open the chat completions client\n", " with project_client.inference.get_chat_completions_client() as chat_client:\n", " # Construct messages: system + user\n", " system_message = SystemMessage(content=system_text)\n", " user_message = UserMessage(content=user_input)\n", "\n", " # Send the request\n", " response = chat_client.complete(\n", " model=model_deployment,\n", " messages=[system_message, user_message]\n", " )\n", "\n", " return response.choices[0].message.content # simplest approach: get top choice's content\n", "\n", "print(\"Defined a helper function to do chat completions.\")" ] }, { "cell_type": "markdown", "id": "273d7bdd", "metadata": {}, "source": [ "## 2. Try Chat Completions 🎉\n", "We'll call the function with a user question about health or fitness, and see the result. Feel free to modify the question or run multiple times!\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ee675bd5", "metadata": {}, "outputs": [], "source": [ "user_question = \"How can I start a beginner workout routine at home?\"\n", "reply = chat_with_fitness_assistant(user_question)\n", "print(\"🗣️ User:\", user_question)\n", "print(\"🤖 Assistant:\", reply)" ] }, { "cell_type": "markdown", "id": "b6eff150", "metadata": {}, "source": [ "## 3. Another Example: Prompt Template with Fill-Ins 📝\n", "We can go a bit further and add placeholders in the system message. For instance, imagine we have a **userName** or **goal**. 
We'll show a minimal example.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "cfec1e22", "metadata": {}, "outputs": [], "source": [ "def chat_with_template(user_input: str, user_name: str, goal: str):\n", " # Construct a system template with placeholders\n", " system_template = (\n", " \"You are FitChat GPT, an AI personal trainer for {name}.\\n\"\n", " \"Your user wants to achieve: {goal}.\\n\"\n", " \"Remind them you're not a medical professional. Offer friendly advice.\"\n", " )\n", "\n", " # Fill in placeholders\n", " system_prompt = system_template.format(name=user_name, goal=goal)\n", "\n", " with project_client.inference.get_chat_completions_client() as chat_client:\n", " system_msg = SystemMessage(content=system_prompt)\n", " user_msg = UserMessage(content=user_input)\n", "\n", " response = chat_client.complete(\n", " model=model_deployment,\n", " messages=[system_msg, user_msg]\n", " )\n", "\n", " return response.choices[0].message.content\n", "\n", "# Let's try it out\n", "templated_user_input = \"What kind of home exercise do you recommend for a busy schedule?\"\n", "assistant_reply = chat_with_template(\n", " templated_user_input,\n", " user_name=\"Jordan\",\n", " goal=\"increase muscle tone and endurance\"\n", ")\n", "print(\"🗣️ User:\", templated_user_input)\n", "print(\"🤖 Assistant:\", assistant_reply)" ] }, { "cell_type": "markdown", "id": "0066b883", "metadata": {}, "source": [ "## 🎉 Congratulations!\n", "You've successfully performed **chat completions** with Azure AI Foundry's `AIProjectClient` and `azure-ai-inference`. You've also seen how to incorporate **prompt templates** to tailor your system instructions.\n", "\n", "#### Head to [2-embeddings.ipynb](2-embeddings.ipynb) for the next part of the workshop! 
🎯" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 5 }