llm-token-limits/llm_token_limits.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "7QC8yC7SoSMn" }, "source": [ "# **LLM Serving with Apigee**\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/apigee-samples/blob/main/llm-token-limits/llm_token_limits.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fapigee-samples%2Fmain%2Fllm-token-limits%2Fllm_token_limits.ipynb\">\n", " <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/apigee-samples/main/llm-token-limits/llm_token_limits.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-token-limits/llm_token_limits.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</table>\n", "<br />\n", "<br />\n", "<br />\n", "\n", "# Token Limits Sample\n", "\n", "Every interaction with an LLM consumes tokens; LLM token management therefore plays a crucial role in maintaining platform-level control and visibility over token consumption across LLM providers and consumers.\n", "\n", "Apigee's API Products, when applied to token consumption, allow you to effectively manage token usage by setting limits on the number of tokens consumed per LLM consumer. The quota policy leverages the token usage metrics reported by the LLM, enabling real-time monitoring and enforcement of these limits.\n", "\n", "![architecture](https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-token-limits/images/ai-product.png?raw=1)\n", "\n", "\n", "# Benefits of Token Limits with AI Products\n", "\n", "Creating Product tiers within Apigee allows for differentiated token quotas at each consumer tier. This enables you to:\n", "\n", "* **Control resource allocation**: Prioritize resources for high-priority consumers by allocating higher token quotas to their tiers. This also helps manage platform-wide token budgets across multiple LLM providers.\n", "* **Tiered AI products**: By using product tiers with granular token quotas, Apigee effectively manages LLM consumption and empowers AI platform teams to control costs and provide a multi-tenant platform experience.\n", "\n", "# How does it work?\n", "\n", "1. A prompt request is received by an Apigee proxy.\n", "2. Apigee identifies the consumer Application and verifies that the AI Product token quota has not been exceeded.\n", "3. Apigee extracts token counts and adds them to the quota counter.\n", "4. Apigee captures token counts as metrics for Analytics.\n",
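 "\n", "The Python sketch below is a simplified illustration of the accounting performed in steps 2 and 3. It is plain Python, not the actual Apigee policy: the names mirror the `Q-TokenQuota` policy flow variables referenced later in this notebook (`allowed.count`, `used.count`, `available.count`), and everything else is illustrative.\n", "\n", "```python\n", "import time\n", "\n", "class TokenQuota:\n", "    \"\"\"Simplified model of a per-consumer token quota over a 5-minute interval.\"\"\"\n", "\n", "    def __init__(self, allowed_count, interval_seconds=300):\n", "        self.allowed_count = allowed_count  # e.g. 2000 (Bronze) or 5000 (Silver)\n", "        self.interval_seconds = interval_seconds\n", "        self.used_count = 0\n", "        self.window_start = time.monotonic()\n", "\n", "    def _maybe_reset(self):\n", "        # Start a fresh counter once the 5-minute interval has elapsed.\n", "        if time.monotonic() - self.window_start >= self.interval_seconds:\n", "            self.used_count = 0\n", "            self.window_start = time.monotonic()\n", "\n", "    def is_exceeded(self):\n", "        \"\"\"Step 2: reject the request (HTTP 429) if the tier's allowance is used up.\"\"\"\n", "        self._maybe_reset()\n", "        return self.used_count >= self.allowed_count\n", "\n", "    def add(self, token_count):\n", "        \"\"\"Step 3: add the token counts reported by the LLM to the quota counter.\"\"\"\n", "        self._maybe_reset()\n", "        self.used_count += token_count\n", "\n", "    @property\n", "    def available_count(self):\n", "        return max(self.allowed_count - self.used_count, 0)\n", "```\n",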
 "\n", "# Setup\n", "\n", "Use the following Google Cloud Shell tutorial and follow the instructions to deploy the sample.\n", "\n", "[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://ssh.cloud.google.com/cloudshell/open?cloudshell_git_repo=https://github.com/GoogleCloudPlatform/apigee-samples&cloudshell_git_branch=main&cloudshell_workspace=.&cloudshell_tutorial=llm-token-limits/docs/cloudshell-tutorial.md)\n", "\n", "# Test Sample\n", "\n", "## Install dependencies\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "9AuXsoJDZPMs", "outputId": "3ca485a7-b062-42bb-c388-df9e57d66501" }, "outputs": [], "source": [ "!pip install -Uq langchain==0.3.18\n", "!pip install -Uq langchain-google-vertexai==2.0.12\n", "!pip install -Uq google-cloud-aiplatform" ] }, { "cell_type": "markdown", "metadata": { "id": "LpGncHDpentW" }, "source": [ "## Authenticate your notebook environment (Colab only)\n", "If you are running this notebook on Google Colab, run the following cell to authenticate your environment. This step is not required if you are using Vertex AI Workbench or Colab Enterprise." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "id": "8layJrBOZP4-" }, "outputs": [], "source": [ "import sys\n", "\n", "# Additional authentication is required for Google Colab\n", "if \"google.colab\" in sys.modules:\n", "    # Authenticate user to Google Cloud\n", "    from google.colab import auth\n", "\n", "    auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "-4eVYn8frc5i" }, "source": [ "## Initialize notebook variables\n", "\n", "* **PROJECT_ID**: The default Google Cloud project to use when making Vertex AI API calls.\n", "* **LOCATION**: The default region to use when making API calls.\n", "* **API_ENDPOINT**: The Apigee API endpoint for this sample, e.g. `https://REPLACE_WITH_APIGEE_HOST/v1/samples/llm-token-limits` with your Apigee hostname.\n", "* **API_KEY**: After deploying the sample you'll get two API keys: **Bronze Key** and **Silver Key**. First, set the value of your **Bronze Key**." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kj-10KmnZSVe" }, "outputs": [], "source": [ "from langchain_google_vertexai import VertexAI\n", "\n", "# Define project information\n", "PROJECT_ID = \"\" # @param {type:\"string\"}\n", "LOCATION = \"\" # @param {type:\"string\"}\n", "API_ENDPOINT = \"https://REPLACE_WITH_APIGEE_HOST/v1/samples/llm-token-limits\" # @param {type:\"string\"}\n", "API_KEY = \"\" # @param {type:\"string\"}\n", "MODEL = \"gemini-2.0-flash\"\n", "\n", "# Initialize the LangChain Vertex AI client, routing REST calls through the\n", "# Apigee proxy and passing the consumer API key in the x-apikey header\n", "model = VertexAI(\n", "    project=PROJECT_ID,\n", "    location=LOCATION,\n", "    api_endpoint=API_ENDPOINT,\n", "    api_transport=\"rest\",\n", "    streaming=True,\n", "    model_name=MODEL,\n", "    additional_headers={\"x-apikey\": API_KEY})" ] },
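 { "cell_type": "markdown", "metadata": {}, "source": [ "Optionally, run the quick check below (an addition to this walkthrough, not part of the deployed sample) to confirm that requests reach the model through the Apigee proxy before starting the quota scenarios. Even a short prompt consumes a few tokens from your tier's quota.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Quick connectivity check through the Apigee proxy (consumes a small number of tokens)\n", "print(model.invoke(\"In one short sentence, say hello.\"))" ] },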
 { "cell_type": "markdown", "metadata": { "id": "S478piq-SGLc" }, "source": [ "## Test tiered AI products\n", "\n", "Apigee allows you to create a tiered product strategy with different API access levels (e.g., Bronze, Silver, Gold) to cater to diverse user needs and limits. During the [Setup](#setup) stage you deployed two AI Product tiers for testing purposes.\n", "\n", "* **Bronze AI Product**\n", "\n", "This product enforces a 2000 token limit every 5 minutes. To test this limit, follow the steps below.\n", "\n", " 1. Set the `API_KEY` value using your **Bronze Key** in the previous [step](#initialize-notebook-variables).\n", " 2. Start a debug session on the **llm-token-limits-v1** proxy that was deployed during the [Setup](#setup) stage.\n", " 3. Run the 2000 tokens every 5 minutes [scenario](#2000-tokens-every-5-minutes).\n", " 4. Observe `HTTP 200` success codes in the debug session and explore the `Q-TokenQuota` policy flow variables `allowed.count`, `used.count` and `available.count`.\n", " 5. Run the 5000 tokens every 5 minutes [scenario](#5000-tokens-every-5-minutes).\n", " 6. Observe `HTTP 429` error codes in the debug session and explore the `Q-TokenQuota` policy flow variables `allowed.count`, `used.count`, `available.count` and `exceed.count`.\n", "\n", "* **Silver AI Product**\n", "\n", "This product enforces a 5000 token limit every 5 minutes. To test this limit, follow the steps below.\n", "\n", " 1. Set the `API_KEY` value using your **Silver Key** in the previous [step](#initialize-notebook-variables).\n", " 2. Start a debug session on the **llm-token-limits-v1** proxy that was deployed during the [Setup](#setup) stage.\n", " 3. Run the 5000 tokens every 5 minutes [scenario](#5000-tokens-every-5-minutes).\n", " 4. Observe `HTTP 200` success codes in the debug session and explore the `Q-TokenQuota` policy flow variables `allowed.count`, `used.count` and `available.count`.\n", "\n", "## Tokens Consumption Analytics\n", "\n", "This sample also creates a Tokens Consumption analytics dashboard that allows you to:\n", "\n", "* Understand usage patterns: See how many tokens are being consumed and by which Developer App.\n", "* Optimize token management: Make informed decisions about token usage and adjust your tiered limits.\n", "* Plan for scalability: Forecast future demand and ensure resource availability.\n", "\n", "To use this dashboard, navigate to `Custom Reports` > `Tokens Consumption Report` in the Apigee console. You'll be able to drill down into token metrics that represent consumption by Developer Apps and Products. See the sample below:\n", "\n", "![image](https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-token-limits/images/token-counts.png?raw=1)" ] },
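 { "cell_type": "markdown", "metadata": {}, "source": [ "When a tier's token quota is exhausted, Apigee rejects the request with `HTTP 429` and the client raises an exception. The optional helper below is an addition to this walkthrough: it wraps `model.invoke` so the scenario cells can report a quota rejection and keep going instead of stopping at the first error. The exact exception class surfaced by the LangChain / Vertex AI client for a 429 can vary, so the helper deliberately catches broadly.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def invoke_safely(prompt, model_to_invoke=model):\n", "    \"\"\"Invoke the model, reporting quota rejections (HTTP 429) instead of raising.\"\"\"\n", "    try:\n", "        return model_to_invoke.invoke(prompt)\n", "    except Exception as e:  # a 429 may surface as ResourceExhausted or another client error\n", "        print(f\"Request rejected (token quota likely exceeded): {e}\")\n", "        return None" ] },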
 { "cell_type": "markdown", "metadata": { "id": "NmRI6i4uhnJ4" }, "source": [ "# 2000 tokens every 5 minutes\n", "\n", "This scenario demonstrates a basic interaction with a language model: the code asks the same question, \"Why is the sky blue?\", phrased in five different ways. After running the scenario **only once**, expect the following behavior:\n", "\n", "* If using the **Bronze Key**, the final token count (sum of tokens from prompts and response candidates) shouldn't exceed the Bronze AI Product token limit of 2000 tokens every 5 minutes.\n", "* If using the **Silver Key**, the final token count (sum of tokens from prompts and response candidates) shouldn't exceed the Silver AI Product token limit of 5000 tokens every 5 minutes.\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "87i610BzZns_" }, "outputs": [], "source": [ "prompts = [\"Why is the sky blue?\",\n", "           \"What makes the sky blue?\",\n", "           \"Why is the sky colored blue?\",\n", "           \"Can you explain why the sky is blue?\",\n", "           \"The sky is blue, why is that?\"]\n", "\n", "def invoke_model(prompt, model_to_invoke=model):\n", "    model_to_invoke.invoke(prompt)\n", "\n", "for prompt in prompts:\n", "    invoke_model(prompt)" ] }, { "cell_type": "markdown", "metadata": { "id": "ZjW8UBEnZg9U" }, "source": [ "# 5000 tokens every 5 minutes\n", "\n", "This scenario also asks the language model the same question, \"Why is the sky blue?\", but the prompts are phrased to elicit very long response candidates (high token counts). After running the scenario **only once**, expect the following behavior:\n", "\n", "* If using the **Bronze Key**, the final token count (sum of tokens from prompts and response candidates) **should exceed** the Bronze AI Product token limit of 2000 tokens every 5 minutes. Expect `HTTP 429` error messages in the notebook, also visible in Apigee's debug session.\n", "* If using the **Silver Key**, the final token count (sum of tokens from prompts and response candidates) shouldn't exceed the Silver AI Product token limit of 5000 tokens every 5 minutes." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "tUPXF1mXZjUz" }, "outputs": [], "source": [ "prompts = [\"Why is the sky blue? Provide a very long and detailed explanation.\",\n", "           \"Furnish an exhaustive and long explanation (as long as a science magazine article) for the phenomenon of the blue sky.\",\n", "           \"Can you give me a really in-depth explanation, as long as a book chapter, of why the sky is blue?\",\n", "           \"Give me a super detailed and very extensive explanation (as long as the yellow pages) of why the sky is blue.\",\n", "           \"Can you tell me all about why the sky is blue, and make sure it's longer than a novel?\"]\n", "\n", "def invoke_model(prompt, model_to_invoke=model):\n", "    model_to_invoke.invoke(prompt)\n", "\n", "for prompt in prompts:\n", "    invoke_model(prompt)" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }