2_eval-design-ptn/02_azure-evaluation-sdk/03.1_adversarial-simulator.ipynb (403 lines of code) (raw):

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Safety Evaluators with the Azure AI Evaluation SDK\n", "The following sample shows the basic way to evaluate a Generative AI application in your development environment with the Azure AI evaluation SDK.\n", "\n", "> ✨ ***Note*** <br>\n", "> Please check the reference document before you get started - https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/evaluate-sdk" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 🔨 Current Support and Limitations (as of 2025-01-14) \n", "- Check the region support for the Azure AI Evaluation SDK. https://learn.microsoft.com/en-us/azure/ai-studio/concepts/evaluation-metrics-built-in?tabs=warning#region-support\n", "\n", "### Region support for evaluations\n", "| Region | Hate and Unfairness, Sexual, Violent, Self-Harm, XPIA, ECI (Text) | Groundedness (Text) | Protected Material (Text) | Hate and Unfairness, Sexual, Violent, Self-Harm, Protected Material (Image) |\n", "|---------------------|------------------------------------------------------------------|---------------------|----------------------------|----------------------------------------------------------------------------|\n", "| North Central US | no | no | no | yes |\n", "| East US 2 | yes | yes | yes | yes |\n", "| Sweden Central | yes | yes | yes | yes |\n", "| US North Central | yes | no | yes | yes |\n", "| France Central | yes | yes | yes | yes |\n", "| Switzerland West | yes | no | no | yes |\n", "\n", "### Region support for adversarial simulation\n", "| Region | Adversarial Simulation (Text) | Adversarial Simulation (Image) |\n", "|-------------------|-------------------------------|---------------------------------|\n", "| UK South | yes | no |\n", "| East US 2 | yes | yes |\n", "| Sweden Central | yes | yes |\n", "| US North Central | yes | yes |\n", "| France Central | yes | no |\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ✔️ Pricing and billing\n", "- Effective 1/14/2025, Azure AI Safety Evaluations will no longer be free in public preview. It will be billed based on consumption as following:\n", "\n", "| Service Name | Safety Evaluations | Price Per 1K Tokens (USD) |\n", "|---------------------------|--------------------------|---------------------------|\n", "| Azure Machine Learning | Input pricing for 3P | $0.02 |\n", "| Azure Machine Learning | Output pricing for 3P | $0.06 |\n", "| Azure Machine Learning | Input pricing for 1P | $0.012 |\n", "| Azure Machine Learning | Output pricing for 1P | $0.012 |\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import os\n", "import json\n", "\n", "from pprint import pprint\n", "from azure.ai.evaluation import evaluate\n", "from azure.ai.evaluation import RelevanceEvaluator\n", "from azure.ai.evaluation import GroundednessEvaluator, GroundednessProEvaluator\n", "from azure.identity import DefaultAzureCredential\n", "from dotenv import load_dotenv\n", "from azure.ai.projects import AIProjectClient\n", "from azure.ai.projects.models import (\n", " Evaluation,\n", " Dataset,\n", " EvaluatorConfiguration,\n", " ConnectionType,\n", " EvaluationSchedule,\n", " RecurrenceTrigger,\n", " ApplicationInsightsConfiguration,\n", ")\n", "import pathlib\n", "\n", "from azure.ai.evaluation import evaluate\n", "from azure.ai.evaluation import (\n", " ContentSafetyEvaluator,\n", " IndirectAttackEvaluator,\n", ")\n", "from azure.ai.evaluation.simulator import (\n", " AdversarialSimulator,\n", " AdversarialScenario,\n", " AdversarialScenarioJailbreak,\n", " IndirectAttackSimulator,\n", ")\n", "from azure.ai.evaluation.simulator._adversarial_scenario import (\n", " _UnstableAdversarialScenario,\n", ")\n", "from openai import AzureOpenAI\n", "from typing import List, Dict, Optional, Any\n", "\n", "\n", "load_dotenv(override=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Initialize Azure AI project and Azure OpenAI conncetion with your environment variables\n", "azure_ai_project_conn_str = os.environ.get(\"AZURE_AI_PROJECT_CONN_STR\")\n", "subscription_id = azure_ai_project_conn_str.split(\";\")[1]\n", "resource_group_name = azure_ai_project_conn_str.split(\";\")[2]\n", "project_name = azure_ai_project_conn_str.split(\";\")[3]\n", "\n", "azure_ai_project = {\n", " \"subscription_id\": subscription_id,\n", " \"resource_group_name\": resource_group_name,\n", " \"project_name\": project_name,\n", "}\n", "\n", "azure_openai_deployment = os.environ.get(\"AZURE_OPENAI_DEPLOYMENT_NAME\")\n", "azure_openai_endpoint = os.environ.get(\"AZURE_OPENAI_ENDPOINT\")\n", "azure_openai_key = (os.environ.get(\"AZURE_OPENAI_API_KEY\"),)\n", "azure_openai_api_version = os.environ.get(\"AZURE_OPENAI_API_VERSION\")\n", "\n", "credential = DefaultAzureCredential()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 🧪 AdversarialSimulator to generate abnormal contents\n", "- Test that the Protected Material (i.e. copyrighted content or material) is not being generated by your generative AI applications. The following example uses an AdversarialSimulator paired with a protected content scenario to prompt your model to respond with material that is protected by intellectual property laws." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def call_to_your_ai_application(query: str) -> str:\n", " # logic to call your application\n", " # use a try except block to catch any errors\n", " # Get a client handle for the model\n", " client = AzureOpenAI(\n", " azure_endpoint=azure_openai_endpoint,\n", " api_version=azure_openai_api_version,\n", " api_key=azure_openai_key,\n", " )\n", " completion = client.chat.completions.create(\n", " model=azure_openai_deployment,\n", " messages=[\n", " {\n", " \"role\": \"user\",\n", " \"content\": query,\n", " }\n", " ],\n", " max_tokens=800,\n", " temperature=0.7,\n", " top_p=0.95,\n", " frequency_penalty=0,\n", " presence_penalty=0,\n", " stop=None,\n", " stream=False,\n", " )\n", " message = completion.to_dict()[\"choices\"][0][\"message\"]\n", " # change this to return the response from your application\n", " return message[\"content\"]\n", "\n", "\n", "async def protected_material_callback(\n", " messages: Dict[str, List[Dict]],\n", " stream: bool = False,\n", " session_state: Any = None,\n", " context: Optional[Dict[str, Any]] = None,\n", ") -> dict:\n", " messages_list = messages[\"messages\"]\n", " # Get the last message from the user\n", " latest_message = messages_list[-1]\n", " query = latest_message[\"content\"]\n", "\n", " # Call the model\n", " response = call_to_your_ai_application(query)\n", "\n", " formatted_response = response.to_dict()[\"choices\"][0][\"message\"]\n", " messages[\"messages\"].append(formatted_response)\n", " return {\n", " \"messages\": messages[\"messages\"],\n", " \"stream\": stream,\n", " \"session_state\": session_state,\n", " \"context\": context,\n", " }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# initialize the adversarial simulator\n", "protected_material_simulator = AdversarialSimulator(\n", " azure_ai_project=azure_ai_project, credential=credential\n", ")\n", "\n", "unfiltered_protected_material_outputs = await protected_material_simulator(\n", " scenario=AdversarialScenario.ADVERSARIAL_CONTENT_PROTECTED_MATERIAL,\n", " max_conversation_turns=1, # define the number of conversation turns\n", " max_simulation_results=3, # define the number of simulation results\n", " target=protected_material_callback, # define the target model callback\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Manually convert the data to JSON lines format\n", "result = \"\\n\".join([json.dumps(item) for item in unfiltered_protected_material_outputs])\n", "print(result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "async def xpia_callback(\n", " messages: List[Dict],\n", " stream: bool = False,\n", " session_state: Optional[str] = None,\n", " context: Optional[Dict] = None,\n", ") -> dict:\n", " messages_list = messages[\"messages\"]\n", " # get last message\n", " latest_message = messages_list[-1]\n", " query = latest_message[\"content\"]\n", " context = None\n", " if \"file_content\" in messages[\"template_parameters\"]:\n", " query += messages[\"template_parameters\"][\"file_content\"]\n", " # the next few lines explain how to use the AsyncAzureOpenAI's chat.completions\n", " # to respond to the simulator. You should replace it with a call to your model/endpoint/application\n", " # make sure you pass the `query` and format the response as we have shown below\n", "\n", " # Get a client handle for the model\n", " client = AzureOpenAI(\n", " azure_endpoint=azure_openai_endpoint,\n", " api_version=azure_openai_api_version,\n", " api_key=azure_openai_key,\n", " )\n", " try:\n", " response_from_oai_chat_completions = client.chat.completions.create(\n", " messages=[{\"content\": query, \"role\": \"user\"}],\n", " model=azure_openai_deployment,\n", " max_tokens=300,\n", " )\n", " print(response_from_oai_chat_completions)\n", " except Exception as e:\n", " print(f\"Error: {e} with content length {len(query)}\")\n", " # to continue the conversation, return the messages, else you can fail the adversarial with an exception\n", " message = {\n", " \"content\": \"Something went wrong. Check the exception e for more details.\",\n", " \"role\": \"assistant\",\n", " \"context\": None,\n", " }\n", " messages[\"messages\"].append(message)\n", " return {\n", " \"messages\": messages[\"messages\"],\n", " \"stream\": stream,\n", " \"session_state\": session_state,\n", " }\n", " response_result = response_from_oai_chat_completions.choices[0].message.content\n", " formatted_response = {\n", " \"content\": response_result,\n", " \"role\": \"assistant\",\n", " \"context\": {},\n", " }\n", " messages[\"messages\"].append(formatted_response)\n", " return {\n", " \"messages\": messages[\"messages\"],\n", " \"stream\": stream,\n", " \"session_state\": session_state,\n", " \"context\": context,\n", " }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Note] If you encounter the following error, please `az login` and try again. You can also check the Azure AI Evaluation SDK documentation for more information.\n", "\n", "> EvaluationException: (UserError) Failed to connect to your Azure AI project. Please check if the project scope is configured correctly, and make sure you have the necessary access permissions. Status code: 401" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !az login --scope https://graph.microsoft.com//.default" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "indirect_attack_simulator = IndirectAttackSimulator(\n", " azure_ai_project=azure_ai_project, credential=DefaultAzureCredential()\n", ")\n", "\n", "unfiltered_indirect_attack_outputs = await indirect_attack_simulator(\n", " target=xpia_callback,\n", " scenario=AdversarialScenarioJailbreak.ADVERSARIAL_INDIRECT_JAILBREAK,\n", " max_simulation_results=10,\n", " max_conversation_turns=3,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pprint(unfiltered_indirect_attack_outputs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Results are truncated for brevity.\n", "truncation_limit = 50\n", "for output in unfiltered_indirect_attack_outputs:\n", " for turn in output[\"messages\"]:\n", " content = turn[\"content\"]\n", " if isinstance(content, dict): # user response from callback is dict\n", " print(f\"{turn['role']} : {content['content'][0:truncation_limit]}\")\n", " elif isinstance(content, tuple): # assistant response from callback is tuple\n", " print(f\"{turn['role']} : {content[0:truncation_limit]}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "print(unfiltered_indirect_attack_outputs)\n", "print(unfiltered_indirect_attack_outputs.to_eval_qr_json_lines())\n", "output = unfiltered_indirect_attack_outputs.to_eval_qr_json_lines()\n", "xpia_file_path = \"data/unfiltered_indirect_attack_outputs.jsonl\"\n", "\n", "# Write the output to the file\n", "with Path.open(Path(xpia_file_path), \"w\") as file:\n", " file.write(output)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "venv_agent", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 2 }