{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "7QC8yC7SoSMn"
},
"source": [
"# **LLM Serving with Apigee**\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/apigee-samples/blob/main/llm-logging/llm_logging_v1.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fapigee-samples%2Fmain%2Fllm-logging%2Fllm_logging_v1.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/apigee-samples/main/llm-logging/llm_logging_v1.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/apigee-samples/blob/main/llm-logging/llm_logging_v1.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"<br />\n",
"<br />\n",
"<br />\n",
"\n",
"# Logging Sample\n",
"\n",
"Logging both prompts and candidate responses from large language models (LLMs) allows for detailed analysis and improvement of the model's performance over time. By examining past interactions, AI practitioners can identify patterns that lead to refinements in the training data or model architecture. Furthermore, by examining the prompts, security teams can detect malicious intent, such as attempts to extract sensitive information or generate harmful content.\n",
"\n",
"Additionally, logging the generated candidates provides insights into the LLM's behavior and helps identify any biases or vulnerabilities in the model itself. This information can then be used to improve security measures, fine-tune the model, and mitigate potential risks associated with LLM usage.\n",
"\n",
"\n",
"\n",
"\n",
"# Benefits of Logging with Apigee and Google Cloud Logging\n",
"\n",
"* **Seamless logging**: Effortlessly capture prompts, candidate responses, and metadata without complex coding.\n",
"* **Scalable and secure**: Leverage Google Cloud's infrastructure for reliable and secure log management.\n",
"\n",
"# How does it work?\n",
"\n",
"1. A prompt request is received by an Apigee proxy.\n",
"2. Apigee extracts the prompt and candidate responses.\n",
"3. Apigee logs the prompt and candidate responses to Cloud Logging.\n",
"\n",
"# Setup\n",
"\n",
"Use the following Google Cloud Shell tutorial and follow the instructions to deploy the sample.\n",
"\n",
"[](https://ssh.cloud.google.com/cloudshell/open?cloudshell_git_repo=https://github.com/GoogleCloudPlatform/apigee-samples&cloudshell_git_branch=main&cloudshell_workspace=.&cloudshell_tutorial=llm-logging/docs/cloudshell-tutorial.md)\n",
"\n",
"# Test Sample\n",
"\n",
"## Install dependencies\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9AuXsoJDZPMs"
},
"outputs": [],
"source": [
"!pip install -Uq google-cloud-aiplatform\n",
"!pip install -Uq google-genai\n",
"!pip install -Uq langchain==0.3.18\n",
"!pip install -Uq langchain-google-vertexai==2.0.12"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LpGncHDpentW"
},
"source": [
"## Authenticate your notebook environment (Colab only)\n",
"If you are running this notebook on Google Colab, run the following cell to authenticate your environment. This step is not required if you are using Vertex AI Workbench or Colab Enterprise."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"id": "8layJrBOZP4-"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"# Additional authentication is required for Google Colab\n",
"if \"google.colab\" in sys.modules:\n",
" # Authenticate user to Google Cloud\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-4eVYn8frc5i"
},
"source": [
"## Initialize notebook variables\n",
"\n",
"* **PROJECT_ID**: The default GCP project to use when making Vertex AI API calls.\n",
"* **LOCATION**: The default location to use when making API calls.\n",
"* **API_ENDPOINT**: Desired API endpoint, e.g. https://apigee.iloveapimanagement.com/generate\n",
"* **MODEL**: The Gemini model to invoke through the proxy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PROJECT_ID = \"\" # @param {type:\"string\"}\n",
"LOCATION = \"\" # @param {type:\"string\"}\n",
"API_ENDPOINT = \"https://APIGEE_HOST/v1/samples/llm-logging\" # @param {type:\"string\"}\n",
"MODEL = \"gemini-2.0-flash\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S478piq-SGLc"
},
"source": [
"## Test Sample\n",
"\n",
"Apigee allows you to seamlessly send logs to Cloud Logging using native integration with the [Message Logging](https://cloud.google.com/apigee/docs/api-platform/reference/policies/message-logging-policy#cloudloggingelement) policy. This sample also includes a message chunking solution that allows logging very long messages (e.g. the 1M-token context supported by Gemini) and connecting the chunks together using a unique message identifier.\n",
"\n",
"With the following cells, you'll be able to invoke an LLM, and both the prompt and the candidate response will be logged in Cloud Logging."
]
},
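{
"cell_type": "markdown",
"metadata": {},
"source": [
"To illustrate the chunking idea, here is a minimal, hypothetical Python sketch (it is not the actual Apigee policy logic): a long message is split into fixed-size chunks that all carry the same generated message identifier, so the corresponding log entries can be correlated and reassembled later.\n",
"\n",
"```python\n",
"import uuid\n",
"\n",
"def chunk_message(text, chunk_size=100):\n",
"    # Tag every chunk with the same id so log entries can be joined later\n",
"    message_id = str(uuid.uuid4())\n",
"    return [\n",
"        {'message_id': message_id, 'index': i // chunk_size, 'chunk': text[i:i + chunk_size]}\n",
"        for i in range(0, len(text), chunk_size)\n",
"    ]\n",
"\n",
"def reassemble(chunks):\n",
"    # Sort by index and concatenate to recover the original message\n",
"    return ''.join(c['chunk'] for c in sorted(chunks, key=lambda c: c['index']))\n",
"```"
]
},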
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using GenAI SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google import genai\n",
"from google.genai import types\n",
"\n",
"client = genai.Client(\n",
" vertexai=True,\n",
" project=PROJECT_ID,\n",
" location=LOCATION,\n",
" http_options=types.HttpOptions(api_version='v1', base_url=API_ENDPOINT)\n",
")\n",
"\n",
"response = client.models.generate_content(\n",
" model=MODEL, contents='Why is the earth round?'\n",
")\n",
"print(response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using LangChain SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "87i610BzZns_"
},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAI\n",
"\n",
"# Initialize LangChain\n",
"model = VertexAI(\n",
" project=PROJECT_ID,\n",
" location=LOCATION,\n",
" api_endpoint=API_ENDPOINT,\n",
" api_transport=\"rest\",\n",
" streaming=False,\n",
" model_name=MODEL)\n",
"prompts = [\"Provide an explanation about the benefits of using sunscreen. Make sure to make it as long as a novel.\"]\n",
"\n",
"for prompt in prompts:\n",
" print(model.invoke(prompt))"
]
},
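{
"cell_type": "markdown",
"metadata": {},
"source": [
"Log entries can also be fetched programmatically. The following is a minimal sketch, assuming the `google-cloud-logging` package is installed and application-default credentials are configured; the `apigee_log_filter` helper is illustrative and simply mirrors the Logs Explorer query used in the next section. The client calls are left commented out so the cell is safe to run without credentials.\n",
"\n",
"```python\n",
"# Requires: pip install google-cloud-logging (and application-default credentials)\n",
"# from google.cloud import logging as cloud_logging\n",
"\n",
"def apigee_log_filter(project_id):\n",
"    # Same filter string as the Logs Explorer query in the next section\n",
"    return f'logName=\"projects/{project_id}/logs/apigee\"'\n",
"\n",
"# Uncomment to fetch the most recent Apigee log entries:\n",
"# client = cloud_logging.Client(project=PROJECT_ID)\n",
"# for entry in client.list_entries(filter_=apigee_log_filter(PROJECT_ID), max_results=5):\n",
"#     print(entry.payload)\n",
"```"
]
},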
{
"cell_type": "markdown",
"metadata": {
"id": "ZjW8UBEnZg9U"
},
"source": [
"## Explore and Analyze Logs with Cloud Logging\n",
"\n",
"1. Navigate to the Cloud Logging Explorer by [**clicking here**](https://console.cloud.google.com/logs/query).\n",
"\n",
"2. Set the query filter. Make sure to replace the `PROJECT_ID` with the Apigee project ID:\n",
"\n",
" ```\n",
" logName=\"projects/PROJECT_ID/logs/apigee\"\n",
" ```\n",
"3. Run the query and explore the logs. See example below:\n",
"\n",
" \n"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}