2-notebooks/3-quality_attributes/1-Observability.ipynb

{ "cells": [ { "cell_type": "markdown", "id": "54f0b7d7", "metadata": {}, "source": [ "# 🍏 Observability & Tracing Demo with `azure-ai-projects` and `azure-ai-inference` 🍎\n", "\n", "Welcome to this **Health & Fitness**-themed notebook, where we'll explore how to set up **observability** and **tracing** for:\n", "\n", "1. **Basic LLM calls** using an `AIProjectClient`.\n", "2. **Multi-step** interactions using an **Agent** (such as a Health Resource Agent).\n", "3. **Tracing** your local usage in **console** (stdout) or via an **OTLP endpoint** (like **Prompty** or **Aspire**).\n", "4. Sending those **traces** to **Azure Monitor** (Application Insights) so you can view them in **Azure AI Foundry**.\n", "\n", "> **Disclaimer**: This is a fun demonstration of AI and observability! Any references to workouts, diets, or health routines in the code or prompts are purely for **educational** purposes. Always consult a professional for health advice.\n", "\n", "## Contents\n", "1. **Initialization**: Setting up environment, creating clients.\n", "2. **Basic LLM Call**: Quick demonstration of retrieving model completions.\n", "3. **Connections**: Listing project connections.\n", "4. **Observability & Tracing**\n", " - **Console / Local** tracing\n", " - **Prompty / Aspire**: piping traces to a local OTLP endpoint\n", " - **Azure Monitor** tracing: hooking up to Application Insights\n", " - **Verifying** your traces in Azure AI Foundry\n", "5. **Agent-based Example**:\n", " - Creating a simple \"Health Resource Agent\" referencing sample docs.\n", " - Multi-turn conversation with tracing.\n", " - Cleanup.\n", "\n", "<img src=\"./seq-diagrams/1-observability.png\" width=\"50%\"/>" ] }, { "cell_type": "markdown", "id": "0e13f9f3", "metadata": {}, "source": [ "## 1. Initialization & Setup\n", "**Prerequisites**:\n", "- A `.env` file containing `PROJECT_CONNECTION_STRING` (and optionally `MODEL_DEPLOYMENT_NAME`).\n", "- Roles/permissions in Azure AI Foundry that let you do inference & agent creation.\n", "- A local environment with `azure-ai-projects`, `azure-ai-inference`, `opentelemetry` packages installed.\n", "\n", "**What we do**:\n", "- Load environment variables.\n", "- Initialize `AIProjectClient`.\n", "- Check that we can talk to a model (like `gpt-4o`)." ] }, { "cell_type": "code", "execution_count": null, "id": "d1ccdace", "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "import time\n", "from pathlib import Path\n", "from dotenv import load_dotenv\n", "from azure.identity import DefaultAzureCredential\n", "from azure.ai.projects import AIProjectClient\n", "from azure.ai.inference.models import UserMessage, CompletionsFinishReason\n", "\n", "# Load environment variables\n", "notebook_path = Path().absolute()\n", "env_path = notebook_path.parent.parent / '.env' # Adjust path as needed\n", "load_dotenv(env_path)\n", "\n", "connection_string = os.environ.get(\"PROJECT_CONNECTION_STRING\")\n", "if not connection_string:\n", " raise ValueError(\"🚨 PROJECT_CONNECTION_STRING not set in .env.\")\n", "\n", "# Initialize AIProjectClient\n", "try:\n", " project_client = AIProjectClient.from_connection_string(\n", " credential=DefaultAzureCredential(),\n", " conn_str=connection_string\n", " )\n", " print(\"βœ… Successfully created AIProjectClient!\")\n", "except Exception as e:\n", " print(f\"❌ Error creating AIProjectClient: {e}\")" ] }, { "cell_type": "markdown", "id": "7e24461b", "metadata": {}, "source": [ "## 2. 
"## 2. Basic LLM Call\n", "We'll do a **quick** chat completion request to confirm everything is working. We'll ask a simple question: \"How many feet are in a mile?\"" ] }, { "cell_type": "code", "execution_count": null, "id": "d7fcdaba", "metadata": {}, "outputs": [], "source": [ "try:\n", "    # Create a ChatCompletions client\n", "    inference_client = project_client.inference.get_chat_completions_client()\n", "    # Default to \"gpt-4o\" if no env var is set\n", "    model_name = os.environ.get(\"MODEL_DEPLOYMENT_NAME\", \"gpt-4o\")\n", "\n", "    user_question = \"How many feet are in a mile?\"\n", "    response = inference_client.complete(\n", "        model=model_name,\n", "        messages=[UserMessage(content=user_question)]\n", "    )\n", "    print(\"\\n💡 Response:\")\n", "    print(response.choices[0].message.content)\n", "    print(\"\\nFinish reason:\", response.choices[0].finish_reason)\n", "\n", "except Exception as e:\n", "    print(\"❌ Could not complete the chat request:\", e)" ] }, { "cell_type": "markdown", "id": "8b83517e", "metadata": {}, "source": [ "## 3. List & Inspect Connections\n", "Check out the **connections** your project has: these might be Azure OpenAI or other resource attachments. We'll just list them here for demonstration." ] }, { "cell_type": "code", "execution_count": null, "id": "b70793c6", "metadata": {}, "outputs": [], "source": [ "from azure.ai.projects.models import ConnectionType\n", "\n", "all_conns = project_client.connections.list()\n", "print(f\"🔎 Found {len(all_conns)} total connections.\")\n", "for idx, c in enumerate(all_conns):\n", "    print(f\"{idx+1}) Name: {c.name}, Type: {c.connection_type}, Endpoint: {c.endpoint_url}\")\n", "\n", "# Filter for Azure OpenAI connections\n", "aoai_conns = project_client.connections.list(connection_type=ConnectionType.AZURE_OPEN_AI)\n", "print(f\"\\n🌀 Found {len(aoai_conns)} Azure OpenAI connections:\")\n", "for c in aoai_conns:\n", "    print(f\" -> {c.name}\")\n", "\n", "# Get default connection of type AZURE_AI_SERVICES\n", "default_conn = project_client.connections.get_default(\n", "    connection_type=ConnectionType.AZURE_AI_SERVICES,\n", "    include_credentials=False\n", ")\n", "if default_conn:\n", "    print(\"\\n⭐ Default Azure AI Services connection:\")\n", "    print(default_conn)\n", "else:\n", "    print(\"No default connection found for Azure AI Services.\")" ] }, { "cell_type": "markdown", "id": "bce0c8f7", "metadata": {}, "source": [ "# 4. Observability & Tracing\n", "\n", "We want to **collect telemetry** from our LLM calls, for example:\n", "- Timestamps of requests.\n", "- Latency.\n", "- Potential errors.\n", "- Optionally, the actual prompts & responses (if you enable content recording).\n", "\n", "We'll show how to set up:\n", "1. **Console** or local OTLP endpoint instrumentation.\n", "2. **Azure Monitor** instrumentation with Application Insights.\n", "3. **Viewing** your traces in Azure AI Foundry's portal.\n", "\n", "## 4.1 Local Console Debugging\n", "We'll install instrumentation packages and enable them. Then we'll do a quick chat call to see if logs appear in **stdout**.\n", "\n", "**Note**: If you want to see more advanced local dashboards, you can:\n", "- Use [Prompty](https://github.com/microsoft/prompty).\n", "- Use [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) to visualize your OTLP traces."
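, "\n", "\n", "If you'd rather see raw spans without any Azure helpers first, here is a minimal sketch of hand-rolled console tracing with the plain OpenTelemetry SDK (this assumes the `opentelemetry-sdk` package is installed, and is background rather than a required step, since the `project_client.telemetry.enable(...)` call below does the equivalent wiring for you):\n", "\n", "```python\n", "from opentelemetry import trace\n", "from opentelemetry.sdk.trace import TracerProvider\n", "from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor\n", "\n", "# Route every span produced in this process to stdout.\n", "provider = TracerProvider()\n", "provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))\n", "trace.set_tracer_provider(provider)\n", "\n", "# Spans from any instrumented library (or manual spans like this one) now print to the console.\n", "tracer = trace.get_tracer(\"demo\")\n", "with tracer.start_as_current_span(\"hello-span\"):\n", "    print(\"doing traced work...\")\n", "```"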
] }, { "cell_type": "code", "execution_count": null, "id": "16d366bb", "metadata": {}, "outputs": [], "source": [ "# You only need to install these once.\n", "!pip install opentelemetry-instrumentation-openai-v2 opentelemetry-exporter-otlp-proto-grpc" ] }, { "cell_type": "markdown", "id": "4767143a", "metadata": {}, "source": [ "### 4.1.1 Enable OpenTelemetry for Azure AI Inference\n", "We set environment variables to ensure:\n", "1. **Prompt content** is captured (optional!)\n", "2. The **Azure SDK** uses OpenTelemetry as its tracing implementation.\n", "3. We call `AIInferenceInstrumentor().instrument()` to patch and enable the instrumentation.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "0ef06776", "metadata": {}, "outputs": [], "source": [ "import os\n", "from azure.ai.inference.tracing import AIInferenceInstrumentor\n", "\n", "# (Optional) capture prompt & completion contents in traces\n", "os.environ[\"AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED\"] = \"true\" # or 'false'\n", "\n", "# Let the Azure SDK know we want to use OpenTelemetry\n", "os.environ[\"AZURE_SDK_TRACING_IMPLEMENTATION\"] = \"opentelemetry\"\n", "\n", "# Instrument the Azure AI Inference client library\n", "AIInferenceInstrumentor().instrument()\n", "print(\"βœ… Azure AI Inference instrumentation enabled.\")" ] }, { "cell_type": "markdown", "id": "480fbc30", "metadata": {}, "source": [ "### 4.1.2 Point Traces to Console or Local OTLP\n", "The simplest is to pipe them to **stdout**. If you want to send them to **Prompty** or **Aspire**, specify the local OTLP endpoint URL (usually `\"http://localhost:4317\"` or similar)." ] }, { "cell_type": "code", "execution_count": null, "id": "d202f67d", "metadata": {}, "outputs": [], "source": [ "project_client.telemetry.enable(destination=sys.stdout)\n", "# Or, to send to a local OTLP collector (Prompty/Aspire), do:\n", "# project_client.telemetry.enable(destination=\"http://localhost:4317\")\n", "\n", "try:\n", " local_client = project_client.inference.get_chat_completions_client()\n", " user_prompt = \"What's a simple 5-minute warmup routine?\"\n", " local_resp = local_client.complete(\n", " model=os.environ.get(\"MODEL_DEPLOYMENT_NAME\", \"gpt-4o\"),\n", " messages=[UserMessage(content=user_prompt)]\n", " )\n", " print(\"\\nπŸ€– Response:\", local_resp.choices[0].message.content)\n", "except Exception as exc:\n", " print(f\"❌ Error in local-tracing example: {exc}\")" ] }, { "cell_type": "markdown", "id": "d3c0fdd4", "metadata": {}, "source": [ "## 4.2 Azure Monitor Tracing (Application Insights)\n", "Now we'll set up tracing to **Application Insights**, which will forward your logs to the **Azure AI Foundry** **Tracing** page.\n", "\n", "**Steps**:\n", "1. In AI Foundry, go to your project’s **Tracing** tab, attach (or create) an **Application Insights** resource.\n", "2. In code, call `project_client.telemetry.get_connection_string()` to retrieve the instrumentation key.\n", "3. Use `azure.monitor.opentelemetry.configure_azure_monitor(...)` with that connection.\n", "4. 
"4. Make an inference call -> logs appear in the Foundry portal (and in Azure Monitor itself).\n" ] }, { "cell_type": "code", "execution_count": null, "id": "0207221c", "metadata": {}, "outputs": [], "source": [ "%pip install azure-monitor-opentelemetry" ] }, { "cell_type": "code", "execution_count": null, "id": "552014a7", "metadata": {}, "outputs": [], "source": [ "from azure.monitor.opentelemetry import configure_azure_monitor\n", "from azure.ai.inference.models import UserMessage\n", "\n", "app_insights_conn_str = project_client.telemetry.get_connection_string()\n", "if app_insights_conn_str:\n", "    print(\"🔧 Found App Insights connection string, configuring...\")\n", "    configure_azure_monitor(connection_string=app_insights_conn_str)\n", "    # Optionally add more instrumentation (e.g., for OpenAI or LangChain):\n", "    project_client.telemetry.enable()\n", "\n", "    # Let's do a test call that logs to AI Foundry's Tracing page\n", "    try:\n", "        with project_client.inference.get_chat_completions_client() as client:\n", "            prompt_msg = \"Any easy at-home cardio exercise recommendations?\"\n", "            response = client.complete(\n", "                model=os.environ.get(\"MODEL_DEPLOYMENT_NAME\", \"gpt-4o\"),\n", "                messages=[UserMessage(content=prompt_msg)]\n", "            )\n", "            print(\"\\n🤖 Response (logged to App Insights):\")\n", "            print(response.choices[0].message.content)\n", "    except Exception as e:\n", "        print(\"❌ Chat completions with Azure Monitor example failed:\", e)\n", "else:\n", "    print(\"No Application Insights connection string is configured in this project.\")" ] }, { "cell_type": "markdown", "id": "f4991833", "metadata": {}, "source": [ "## 4.3 Viewing Traces in Azure AI Foundry\n", "After running the above code:\n", "1. Go to your AI Foundry project.\n", "2. Click **Tracing** on the sidebar.\n", "3. You should see the logs from your calls.\n", "4. Filter, expand, or explore them as needed.\n", "\n", "Also, if you want more advanced dashboards, you can open your **Application Insights** resource from the Foundry. In the App Insights portal, you get additional features like **end-to-end transaction** details, query logs, etc.\n" ] }, { "cell_type": "markdown", "id": "31dbb932", "metadata": {}, "source": [ "# 5. Agent-based Example\n", "We'll now create a **Health Resource Agent** that references sample docs about recipes and guidelines, then demonstrate:\n", "1. Creating an Agent with instructions.\n", "2. Creating a conversation thread.\n", "3. Running multi-step queries with **observability** enabled.\n", "4. Optionally cleaning up resources at the end.\n", "\n", "> The agent approach is helpful when you want more sophisticated conversation flows or **tool usage** (like file search)." ] }, { "cell_type": "markdown", "id": "303ad934", "metadata": {}, "source": [ "## 5.1 Create Sample Files & Vector Store\n", "We'll create dummy `.md` files about recipes/guidelines, then push them into a **vector store** so our agent can do semantic search.\n", "\n", "(*This portion is a quick summary; see the file-search tutorial notebook if you need more details.*)" ] }, { "cell_type": "code", "execution_count": null, "id": "1e09113d", "metadata": {}, "outputs": [], "source": [ "from azure.ai.projects.models import FilePurpose\n", "\n", "def create_sample_files():\n", "    \"\"\"Create some local .md files with sample text.\"\"\"\n", "    recipes_md = (\n", "        \"# Healthy Recipes Database\\n\\n\"\n", "        \"## Gluten-Free Recipes\\n\"\n",
"        \"1. Quinoa Bowl\\n\"\n", "        \"   - Ingredients: quinoa, vegetables, olive oil\\n\"\n", "        \"   - Instructions: Cook quinoa, add vegetables\\n\\n\"\n", "        \"2. Rice Pasta\\n\"\n", "        \"   - Ingredients: rice pasta, mixed vegetables\\n\"\n", "        \"   - Instructions: Boil pasta, sauté vegetables\\n\\n\"\n", "        \"## Diabetic-Friendly Recipes\\n\"\n", "        \"1. Low-Carb Stir Fry\\n\"\n", "        \"   - Ingredients: chicken, vegetables, tamari sauce\\n\"\n", "        \"   - Instructions: Cook chicken, add vegetables\\n\\n\"\n", "        \"## Heart-Healthy Recipes\\n\"\n", "        \"1. Baked Salmon\\n\"\n", "        \"   - Ingredients: salmon, lemon, herbs\\n\"\n", "        \"   - Instructions: Season salmon, bake\\n\\n\"\n", "        \"2. Mediterranean Bowl\\n\"\n", "        \"   - Ingredients: chickpeas, vegetables, tahini\\n\"\n", "        \"   - Instructions: Combine ingredients\\n\"\n", "    )\n", "\n", "    guidelines_md = (\n", "        \"# Dietary Guidelines\\n\\n\"\n", "        \"## General Guidelines\\n\"\n", "        \"- Eat a variety of foods\\n\"\n", "        \"- Control portion sizes\\n\"\n", "        \"- Stay hydrated\\n\\n\"\n", "        \"## Special Diets\\n\"\n", "        \"1. Gluten-Free Diet\\n\"\n", "        \"   - Avoid wheat, barley, rye\\n\"\n", "        \"   - Focus on naturally gluten-free foods\\n\\n\"\n", "        \"2. Diabetic Diet\\n\"\n", "        \"   - Monitor carbohydrate intake\\n\"\n", "        \"   - Choose low glycemic foods\\n\\n\"\n", "        \"3. Heart-Healthy Diet\\n\"\n", "        \"   - Limit saturated fats\\n\"\n", "        \"   - Choose lean proteins\\n\"\n", "    )\n", "\n", "    with open(\"recipes.md\", \"w\", encoding=\"utf-8\") as f:\n", "        f.write(recipes_md)\n", "    with open(\"guidelines.md\", \"w\", encoding=\"utf-8\") as f:\n", "        f.write(guidelines_md)\n", "\n", "    print(\"📄 Created sample resource files: recipes.md, guidelines.md\")\n", "    return [\"recipes.md\", \"guidelines.md\"]\n", "\n", "sample_files = create_sample_files()\n", "\n", "def create_vector_store(files, store_name=\"my_health_resources\"):\n", "    try:\n", "        uploaded_ids = []\n", "        for fp in files:\n", "            upl = project_client.agents.upload_file_and_poll(\n", "                file_path=fp,\n", "                purpose=FilePurpose.AGENTS  # files used by agent tools need the AGENTS purpose\n", "            )\n", "            uploaded_ids.append(upl.id)\n", "            print(f\"✅ Uploaded: {fp} -> File ID: {upl.id}\")\n", "\n", "        # Create a vector store from these file IDs\n", "        vs = project_client.agents.create_vector_store_and_poll(\n", "            file_ids=uploaded_ids,\n", "            name=store_name\n", "        )\n", "        print(f\"🎉 Created vector store '{store_name}', ID: {vs.id}\")\n", "        return vs, uploaded_ids\n", "    except Exception as e:\n", "        print(f\"❌ Error creating vector store: {e}\")\n", "        return None, []\n", "\n", "vector_store, file_ids = None, []\n", "if sample_files:\n", "    vector_store, file_ids = create_vector_store(sample_files, store_name=\"health_resources_example\")" ] }, { "cell_type": "markdown", "id": "145eb186", "metadata": {}, "source": [ "## 5.2 Create a Health Resource Agent\n", "We'll create a **FileSearchTool** referencing the vector store, then create an agent whose instructions say it should:\n", "1. Provide disclaimers.\n", "2. Offer general nutrition or recipe tips.\n", "3. Cite sources if possible.\n",
"4. Encourage professional consultation for deeper medical advice.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "7f604175", "metadata": {}, "outputs": [], "source": [ "from azure.ai.projects.models import FileSearchTool\n", "\n", "def create_health_agent(vs_id):\n", "    try:\n", "        # The tool references our vector store so the agent can search it\n", "        file_search_tool = FileSearchTool(vector_store_ids=[vs_id])\n", "\n", "        instructions = \"\"\"\n", "        You are a health resource advisor with access to dietary and recipe files.\n", "        You:\n", "        1. Always present disclaimers (you're not a medical professional).\n", "        2. Provide references to files when possible.\n", "        3. Focus on general nutrition or recipe tips.\n", "        4. Encourage professional consultation for more detailed advice.\n", "        \"\"\"\n", "\n", "        agent = project_client.agents.create_agent(\n", "            model=os.environ.get(\"MODEL_DEPLOYMENT_NAME\", \"gpt-4o\"),\n", "            name=\"health-search-agent\",\n", "            instructions=instructions,\n", "            tools=file_search_tool.definitions,\n", "            tool_resources=file_search_tool.resources\n", "        )\n", "        print(f\"🎉 Created agent '{agent.name}' with ID: {agent.id}\")\n", "        return agent\n", "    except Exception as e:\n", "        print(f\"❌ Error creating health agent: {e}\")\n", "        return None\n", "\n", "health_agent = None\n", "if vector_store:\n", "    health_agent = create_health_agent(vector_store.id)" ] }, { "cell_type": "markdown", "id": "1f6995a6", "metadata": {}, "source": [ "## 5.3 Using the Agent\n", "Let's create a new conversation **thread** and ask the agent some questions. We'll rely on the **observability** settings we already configured, so each step is traced.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "86e5b4f9", "metadata": {}, "outputs": [], "source": [ "def create_thread():\n", "    try:\n", "        thread = project_client.agents.create_thread()\n", "        print(f\"📝 Created new thread, ID: {thread.id}\")\n", "        return thread\n", "    except Exception as e:\n", "        print(f\"❌ Could not create thread: {e}\")\n", "        return None\n", "\n", "def ask_question(thread_id, agent_id, user_question):\n", "    try:\n", "        # 1) Add the user message to the thread\n", "        project_client.agents.create_message(\n", "            thread_id=thread_id,\n", "            role=\"user\",\n", "            content=user_question\n", "        )\n", "        print(f\"User asked: '{user_question}'\")\n", "        # 2) Create & process a run\n", "        run = project_client.agents.create_and_process_run(\n", "            thread_id=thread_id,\n", "            agent_id=agent_id\n", "        )\n", "        print(f\"Run finished with status: {run.status}\")\n", "        if run.last_error:\n", "            print(\"Error details:\", run.last_error)\n", "        return run\n", "    except Exception as e:\n", "        print(f\"❌ Error asking question: {e}\")\n", "        return None\n", "\n", "if health_agent:\n", "    thread = create_thread()\n", "    if thread:\n", "        # Let's ask a few sample questions\n", "        queries = [\n", "            \"Could you suggest a gluten-free lunch recipe?\",\n", "            \"Show me some heart-healthy meal ideas.\",\n", "            \"What guidelines do you have for someone with diabetes?\"\n", "        ]\n", "        for q in queries:\n", "            ask_question(thread.id, health_agent.id, q)\n" ] }, { "cell_type": "markdown", "id": "17c61d8d", "metadata": {}, "source": [ "### 5.3.1 Viewing the conversation\n", "We can retrieve the conversation messages to see how the agent responded, check whether it cited file passages, and so on."
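, "\n", "\n", "Because tracing is already wired up, you can also group this read-back under a custom **parent span** so the SDK calls nest together in the trace tree. Here's a minimal sketch using the standard OpenTelemetry API (the span and attribute names are illustrative, not part of the SDK), to run after the cell below:\n", "\n", "```python\n", "from opentelemetry import trace\n", "\n", "tracer = trace.get_tracer(__name__)\n", "\n", "# Child spans emitted by the calls inside display_thread nest under this one.\n", "with tracer.start_as_current_span(\"review-health-thread\") as span:\n", "    span.set_attribute(\"demo.thread_id\", str(thread.id))\n", "    display_thread(thread.id)\n", "```"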
] }, { "cell_type": "code", "execution_count": null, "id": "a1c57935", "metadata": {}, "outputs": [], "source": [ "def display_thread(thread_id):\n", " try:\n", " messages = project_client.agents.list_messages(thread_id=thread_id)\n", " print(\"\\nπŸ—£οΈ Conversation:\")\n", " for m in reversed(messages.data):\n", " if m.content:\n", " last_content = m.content[-1]\n", " if hasattr(last_content, \"text\"):\n", " print(f\"[{m.role.upper()}]: {last_content.text.value}\\n\")\n", "\n", " print(\"\\nπŸ“Ž Checking for citations...\")\n", " for c in messages.file_citation_annotations:\n", " print(f\"- Citation snippet: '{c.text}' from file ID: {c.file_citation['file_id']}\")\n", " except Exception as e:\n", " print(f\"❌ Could not display thread: {e}\")\n", "\n", "# If we created a thread above, let's read it\n", "if health_agent and thread:\n", " display_thread(thread.id)" ] }, { "cell_type": "markdown", "id": "e7420c39", "metadata": {}, "source": [ "# 6. Cleanup\n", "If desired, we can remove the vector store, files, and agent to keep things tidy. (In a real solution, you might keep them around.)" ] }, { "cell_type": "code", "execution_count": null, "id": "f473cddc", "metadata": {}, "outputs": [], "source": [ "def cleanup_resources():\n", " try:\n", " if 'vector_store' in globals() and vector_store:\n", " project_client.agents.delete_vector_store(vector_store.id)\n", " print(\"πŸ—‘οΈ Deleted vector store.\")\n", "\n", " if 'file_ids' in globals() and file_ids:\n", " for fid in file_ids:\n", " project_client.agents.delete_file(fid)\n", " print(\"πŸ—‘οΈ Deleted uploaded files.\")\n", "\n", " if 'health_agent' in globals() and health_agent:\n", " project_client.agents.delete_agent(health_agent.id)\n", " print(\"πŸ—‘οΈ Deleted health agent.\")\n", "\n", " if 'sample_files' in globals() and sample_files:\n", " for sf in sample_files:\n", " if os.path.exists(sf):\n", " os.remove(sf)\n", " print(\"πŸ—‘οΈ Deleted local sample files.\")\n", " except Exception as e:\n", " print(f\"❌ Error cleaning up: {e}\")\n", "\n", "\n", "cleanup_resources()" ] }, { "cell_type": "markdown", "id": "4956d0ec", "metadata": {}, "source": [ "# πŸŽ‰ Wrap-Up\n", "We've demonstrated:\n", "1. **Basic LLM calls** with `AIProjectClient`.\n", "2. **Listing connections** in your Azure AI Foundry project.\n", "3. **Observability & tracing** in both local (console, OTLP endpoint) and cloud (App Insights) contexts.\n", "4. A quick **Agent** scenario that uses a vector store for searching sample docs.\n", "\n", "## Next Steps\n", "- Check the **Tracing** tab in your Azure AI Foundry portal to see the logs.\n", "- Explore advanced queries in Application Insights.\n", "- Use [Prompty](https://github.com/microsoft/prompty) or [Aspire](https://learn.microsoft.com/dotnet/aspire/) for local telemetry dashboards.\n", "- Incorporate this approach into your **production** GenAI pipelines!\n", "\n", "> πŸ‹οΈ **Health Reminder**: The LLM's suggestions are for demonstration only. For real health decisions, consult a professional.\n", "\n", "Happy Observing & Tracing! πŸŽ‰" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "version": "3.11.11" }, "name": "Observability_and_Tracing_Comprehensive" }, "nbformat": 4, "nbformat_minor": 5 }