
{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "7QC8yC7SoSMn" }, "source": [ "# **LLM Serving with Apigee**\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/apigee-samples/blob/main/llm-circuit-breaking/llm_circuit_breaking.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\\\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fapigee-samples%2Fmain%2Fllm-circuit-breaking%2Fllm_circuit_breaking.ipynb\">\n", " <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td> \n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/apigee-samples/main/llm-circuit-breaking/llm_circuit_breaking.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-circuit-breaking/llm_circuit_breaking.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</table>\n", "<br />\n", "<br />\n", "<br />\n", "\n", "# Circuit Breaking Sample\n", "\n", "Circuit breaking with Apigee offers significant benefits for serving Large Language Models (LLMs) in Retrieval Augmented Generation (RAG) applications, particularly in preventing the dreaded `429` HTTP errors that arise from exceeding LLM endpoint quotas. By placing Apigee between the RAG application and LLM endpoints, users gain a robust mechanism for managing traffic distribution and graceful failure handling.\n", "\n", "Imagine a scenario where multiple tenants, each with their own LLM endpoints and associated capacity limits, are accessed by a single RAG application. Without circuit breaking, a surge in traffic to a particular tenant's LLM endpoint could trigger a `429` error, disrupting the entire RAG application's functionality. Apigee acts as a traffic cop, monitoring the health of each tenant's endpoint and implementing a circuit-breaking strategy to prevent cascading failures.\n", "\n", "To further enhance resilience, users can create priority pools, grouping together LLM endpoints with similar capabilities and quota limitations. This allows Apigee to distribute traffic evenly within a pool, effectively aggregating the individual endpoint quotas and ensuring that the combined capacity can handle the load.\n", "\n", "![image](https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-circuit-breaking/images/llm-circuit-breaking.png?raw=1)\n", "\n", "\n", "# Circuit Breaking Benefits\n", "\n", "1. 
"\n", "![image](https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-circuit-breaking/images/llm-circuit-breaking.png?raw=1)\n", "\n", "\n",
"# Circuit Breaking Benefits\n", "\n",
"1. **Improved fault tolerance**: The multi-pool architecture, coupled with circuit breaking, provides inherent fault tolerance, ensuring that the RAG application remains operational even if one or more LLM endpoints fail or experience outages.\n",
"2. **Data-driven capacity planning**: Circuit breaking provides valuable insights into endpoint performance, allowing you to monitor and adjust capacity allocations based on actual traffic patterns and usage. This enables informed capacity planning and avoids unnecessary overprovisioning.\n",
"3. **Multitenancy**: Apigee provides a unified platform for managing and routing traffic to different LLM tenants, simplifying integration and reducing development effort.\n",
"4. **Centralized monitoring and analytics**: Apigee offers comprehensive monitoring and analytics capabilities, allowing for real-time insights into LLM endpoint performance, quota usage, and failover events. This enables proactive identification and resolution of issues, enhancing operational efficiency.\n", "\n", "\n",
"# How does it work?\n", "\n",
"1. Apigee receives a request and checks the primary pool's status. If the pool is open, the request is routed to the primary pool; if it's closed, the request is routed to the secondary pool.\n",
"2. If the request to the primary pool fails (any status code greater than `399`, such as `429`), the request fails over to the secondary pool and the circuit breaker's error count is incremented.\n",
"3. Once a maximum of 2 errors has been detected, the primary pool is taken out of rotation and all traffic is sent to the secondary pool.\n",
"4. The primary pool is returned to rotation after a cooldown period of 2 minutes, as sketched below.\n",
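"\n", "The following is a minimal Python sketch of that failover logic, for illustration only. The sample itself implements the breaker with Apigee policies; the `call_pool` helper and pool names are stand-ins, while the 2-error threshold and 2-minute cooldown match the steps above:\n", "\n",
"```python\n",
"import time\n",
"\n",
"FAILURE_THRESHOLD = 2   # errors tolerated before the primary pool leaves rotation\n",
"COOLDOWN_SECONDS = 120  # cooldown before the primary pool rejoins rotation\n",
"\n",
"def call_pool(pool, request):\n",
"    \"\"\"Stand-in for the real backend call; returns (status_code, body).\"\"\"\n",
"    return 200, f\"response from {pool}\"\n",
"\n",
"error_count = 0\n",
"tripped_at = None  # when the breaker tripped; None means primary is in rotation\n",
"\n",
"def route(request):\n",
"    global error_count, tripped_at\n",
"    # Step 4: after the cooldown, return the primary pool to rotation.\n",
"    if tripped_at is not None and time.monotonic() - tripped_at >= COOLDOWN_SECONDS:\n",
"        tripped_at, error_count = None, 0\n",
"    # Step 1: prefer the primary pool while it is in rotation.\n",
"    if tripped_at is None:\n",
"        status, body = call_pool(\"primary\", request)\n",
"        if status < 400:\n",
"            return body\n",
"        # Step 2: count the failure (e.g. a 429) and fall through to the secondary pool.\n",
"        error_count += 1\n",
"        # Step 3: two errors take the primary pool out of rotation.\n",
"        if error_count >= FAILURE_THRESHOLD:\n",
"            tripped_at = time.monotonic()\n",
"    return call_pool(\"secondary\", request)[1]\n",
"```\n",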
"\n", "# Setup\n", "\n", "Use the following GCP Cloud Shell tutorial. Follow the instructions to deploy the sample.\n", "\n", "[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://ssh.cloud.google.com/cloudshell/open?cloudshell_git_repo=https://github.com/GoogleCloudPlatform/apigee-samples&cloudshell_git_branch=main&cloudshell_workspace=.&cloudshell_tutorial=llm-circuit-breaking/docs/cloudshell-tutorial.md)\n", "\n", "# Test Sample\n", "\n", "## Install dependencies\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 533 }, "id": "9AuXsoJDZPMs", "outputId": "ebb90f3a-178a-4429-85e7-3079fd680fa4" }, "outputs": [], "source": [ "!pip install -Uq google-cloud-tasks" ] },
{ "cell_type": "markdown", "metadata": { "id": "LpGncHDpentW" }, "source": [ "## Authenticate your notebook environment (Colab only)\n", "\n", "If you are running this notebook on Google Colab, run the following cell to authenticate your environment. This step is not required if you are using Vertex AI Workbench or Colab Enterprise." ] },
{ "cell_type": "code", "execution_count": 1, "metadata": { "id": "8layJrBOZP4-" }, "outputs": [], "source": [ "import sys\n", "\n", "# Additional authentication is required for Google Colab\n", "if \"google.colab\" in sys.modules:\n", "    # Authenticate user to Google Cloud\n", "    from google.colab import auth\n", "    auth.authenticate_user()" ] },
{ "cell_type": "markdown", "metadata": { "id": "-4eVYn8frc5i" }, "source": [ "## Initialize notebook variables\n", "\n", "* **PROJECT_ID**: The default GCP project to use when making Vertex API calls.\n", "* **LOCATION**: The default location to use when making Vertex API calls.\n", "* **API_ENDPOINT**: The desired API endpoint, e.g. https://apigee.iloveapimanagement.com/circuit-breaking\n", "* **TASK_QUEUE**: After deploying the sample you'll get a task queue ID. By default this value should be `ai-queue`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "kj-10KmnZSVe" }, "outputs": [], "source": [ "from google.auth import default\n", "from google.auth.transport.requests import Request\n", "\n", "# Define sample information\n", "PROJECT_ID = \"\"  # @param {type:\"string\"}\n", "LOCATION = \"\"  # @param {type:\"string\"}\n", "API_ENDPOINT = \"https://REPLACE_WITH_APIGEE_HOST/v1/samples/llm-circuit-breaking\"  # @param {type:\"string\"}\n", "TASK_QUEUE = \"ai-queue\"  # @param {type:\"string\"}\n", "SCOPES = ['https://www.googleapis.com/auth/cloud-platform']\n", "MODEL = \"gemini-1.5-pro\"\n", "GEMINI_SUFFIX = \"/v1/projects/{project}/locations/{location}/publishers/google/models/{model}:streamGenerateContent\".format(project=PROJECT_ID, location=LOCATION, model=MODEL)\n", "LLM_REQUEST_URL = API_ENDPOINT + GEMINI_SUFFIX\n", "\n", "# Obtain an access token for the authenticated principal\n", "credentials, project_id = default(scopes=SCOPES, quota_project_id=PROJECT_ID)\n", "credentials.refresh(Request())\n", "access_token = credentials.token\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "S478piq-SGLc" }, "source": [ "## Test Circuit Breaking\n", "\n", "The following cell executes a test scenario that exceeds the total Gemini quota (see [model limits](https://cloud.google.com/vertex-ai/generative-ai/docs/quotas#quotas_by_region_and_model)) for a **primary** GCP project. As soon as the project quota is reached, a secondary target will serve traffic without returning `429` errors to the consumer." ] },
{ "cell_type": "code", "execution_count": 3, "metadata": { "id": "87i610BzZns_" }, "outputs": [], "source": [ "from google.cloud import tasks_v2\n", "from google.protobuf import duration_pb2\n", "from typing import Dict, Optional\n", "import json\n", "\n", "prompts = [\"Why is the sky blue?\",\n", "           \"What makes the sky blue?\",\n", "           \"Why does the sky look blue?\",\n", "           \"Can you explain why the sky is blue?\",\n", "           \"The sky is blue, why is that?\",\n", "           \"Why is the sky blue?\"]\n", "\n",
"def create_http_task(\n", "    project: str,\n", "    location: str,\n", "    queue: str,\n", "    url: str,\n", "    json_payload: Dict,\n", "    scheduled_seconds_from_now: Optional[int] = None,\n", "    task_id: Optional[str] = None,\n", "    deadline_in_seconds: Optional[int] = None,\n", ") -> tasks_v2.Task:\n", "    \"\"\"Create a Cloud Tasks HTTP task that POSTs the payload to the given URL.\"\"\"\n", "    client = tasks_v2.CloudTasksClient()\n", "    task = tasks_v2.Task(\n", "        http_request=tasks_v2.HttpRequest(\n", "            http_method=tasks_v2.HttpMethod.POST,\n", "            url=url,\n", "            headers={\"Content-type\": \"application/json\",\n", "                     \"Authorization\": f\"Bearer {access_token}\"},\n", "            body=json.dumps(json_payload).encode(),\n", "        ),\n", "        name=(\n", "            client.task_path(project, location, queue, task_id)\n", "            if task_id is not None\n", "            else None\n", "        ),\n", "    )\n", "    # Allow each dispatched request up to 120 seconds before it times out\n", "    duration = duration_pb2.Duration()\n", "    duration.FromSeconds(120)\n", "    task.dispatch_deadline = duration\n", "    return client.create_task(\n", "        tasks_v2.CreateTaskRequest(\n", "            parent=client.queue_path(project, location, queue),\n", "            task=task,\n", "        )\n", "    )\n", "\n",
"def invoke_model(prompt):\n", "    request = {\"contents\": [{\"role\": \"user\", \"parts\": [{\"text\": prompt}]}], \"generationConfig\": {}}\n", "    create_http_task(PROJECT_ID, LOCATION, TASK_QUEUE, LLM_REQUEST_URL, request)\n", "\n", "# Enqueue 90 requests (15 rounds of 6 prompts) to drive traffic past the quota\n", "for n in range(15):\n", "    for prompt in prompts:\n", "        invoke_model(prompt)\n" ] },
{ "cell_type": "markdown",
"metadata": {}, "source": [ "### Analyze target pool Gemini quotas\n", "\n", "This sample also creates am LLM Target analytics report that allows you to:\n", "\n", "* Understand usage patterns: See how often the Gemini quota is being reached.\n", "* Optimize token management Make informed decisions about quota usage and ajust pte-allocated quota.\n", "* Plan for scalability: Forecast future demand and ensure resource availability.\n", "\n", "To use this dashboard, from the Apigee console navigate to `Custom Reports` > `LLM Target Report`. You'll be able to drill down into token metrics that represent LLM traffic. \n", "\n", "**NOTE**: It might take a few mins for the report to show some data\n", "\n", "See sample below:\n", "\n", "![image](https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-circuit-breaking/images/circuit-breaking-report.png?raw=1)\n" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }