{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Evaluating Vertex RAG Engine Generation with Vertex AI Python SDK for Gen AI Evaluation Service\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Frag-engine%2Frag_engine_eval_service_sdk.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\">\n",
" <img width=\"32px\" src=\"https://upload.wikimedia.org/wikipedia/commons/9/91/Octicons-mark-github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"\n",
"<div style=\"clear: both;\"></div>\n",
"\n",
"<b>Share to:</b>\n",
"\n",
"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/53/X_logo_2023_original.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/rag-engine/rag_engine_eval_service_sdk.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "84f0f73a0f76"
},
"source": [
"| | |\n",
"|-|-|\n",
"| Author(s) | [Noa Ben-Efraim](https://github.com/noabenefraim/) |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates how to evaluate the performance of a Retrieval Augmented Generation (RAG) engine built with Vertex AI using the Vertex AI Python SDK for Gen AI Evaluation Service. By focusing on a practical example using \"Alice in Wonderland\" as our knowledge base, we'll walk through the process of creating an evaluation dataset and applying custom metrics to assess the quality of generated responses.\n",
"\n",
"Specifically, this notebook will guide you through:\n",
"\n",
"* **Setting up a RAG Corpus:** Creating and populating a RAG corpus with a PDF document.\n",
"* **Generating Grounded Responses:** Using the Vertex AI Gemini model to produce responses based on retrieved contexts.\n",
"* **Creating an Evaluation Dataset:** Constructing a dataset with prompts, retrieved contexts, and generated responses.\n",
"* **Defining Custom Evaluation Metrics:** Implementing a custom metric to assess the accuracy, completeness, and groundedness of the generated responses.\n",
"* **Running Evaluation Tasks:** Utilizing the Vertex AI Gen AI Evaluation Service to evaluate the RAG engine's performance.\n",
"* **Analyzing Evaluation Results:** Visualizing and interpreting the evaluation results using the provided SDK tools."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "61RBz8LLbxCR"
},
"source": [
"## Get started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "No17Cw5hgx12"
},
"source": [
"### Install Google Gen AI SDK and other required packages\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tFy3H3aPgx12"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-genai google-cloud-aiplatform[evaluation] vertexai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R5Xep4W9lq-Z"
},
"source": [
"### Restart runtime\n",
"\n",
"To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which restarts the current kernel.\n",
"\n",
"The restart might take a minute or longer. After it's restarted, continue to the next step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XRvKdaPDTznN"
},
"outputs": [],
"source": [
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SbmM4z7FOBpM"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. In Colab or Colab Enterprise, you might see an error message that says \"Your session crashed for an unknown reason.\" This is expected. Wait until it's finished before continuing to the next step. ⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dmWOrTJ3gx13"
},
"source": [
"### Authenticate your notebook environment (Colab only)\n",
"\n",
"If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NyKGtVQjgx13"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DF4l8DTdWgPY"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
"\n",
"Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Nqwi-5ufWp_B"
},
"outputs": [],
"source": [
"# Use the environment variable if the user doesn't provide Project ID.\n",
"import os\n",
"\n",
"from google import genai\n",
"import vertexai\n",
"\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
"\n",
"client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)\n",
"vertexai.init(project=PROJECT_ID, location=LOCATION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5303c05f7aa6"
},
"source": [
"### Import libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6fc324893334"
},
"outputs": [],
"source": [
"from google.genai.types import GenerateContentConfig, Retrieval, Tool, VertexRagStore\n",
"import pandas as pd\n",
"from tqdm import tqdm\n",
"from vertexai import rag\n",
"from vertexai.evaluation import (\n",
" EvalTask,\n",
" PointwiseMetric,\n",
" PointwiseMetricPromptTemplate,\n",
" notebook_utils,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e43229f3ad4f"
},
"source": [
"### Load model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cf93d5f0ce00"
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.0-flash-001\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a45112933981"
},
"source": [
"### Create `RAGCorpus`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1b1ba2900db9"
},
"outputs": [],
"source": [
"# Currently supports Google first-party embedding models\n",
"EMBEDDING_MODEL = \"publishers/google/models/text-embedding-005\" # @param {type:\"string\", isTemplate: true}\n",
"\n",
"rag_corpus = rag.create_corpus(\n",
" display_name=\"rag-eval-corpus\",\n",
" description=\"A test corpus for generation evaluation\",\n",
" backend_config=rag.RagVectorDbConfig(\n",
" rag_embedding_model_config=rag.RagEmbeddingModelConfig(\n",
" vertex_prediction_endpoint=rag.VertexPredictionEndpoint(\n",
" publisher_model=EMBEDDING_MODEL\n",
" )\n",
" )\n",
" ),\n",
")\n",
"\n",
"# Get the rag corpus you just created\n",
"rag.get_corpus(rag_corpus.name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5a1cbd3d0f96"
},
"source": [
"### Import files from Google Cloud Storage into `RagCorpus` (configure chunk size, chunk overlap etc as desired)\n",
"\n",
"For this step you will need to create a GCS bucket, and then copy over the data from the public GCS bucket. Remember to grant \"Viewer\" access to the \"Vertex RAG Data Service Agent\" (with the format of service-{project_number}@gcp-sa-vertex-rag.iam.gserviceaccount.com) for your Google Cloud Storage bucket.\n",
"\n",
"For this example, we'll use a dataset that comprises the full texts of five classic children's literature books: \"The Wizard of Oz,\" \"Gulliver's Travels,\" \"Peter Pan,\" \"Alice's Adventures in Wonderland,\" and \"Through the Looking-Glass.\" This collection provides a rich corpus for exploring themes, characters, and settings across these iconic stories.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "20d44cdad6ab"
},
"source": [
"##### Copy data from public GCS bucket"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ca8381cc8a9b"
},
"outputs": [],
"source": [
"CURRENT_BUCKET_PATH = \"gs://\" # @param {type:\"string\"},\n",
"\n",
"PUBLIC_DATA_PATH = (\n",
" \"gs://github-repo/generative-ai/gemini/rag-engine/rag_engine_eval_service/\"\n",
")\n",
"\n",
"!gsutil -m rsync -r -d $PUBLIC_DATA_PATH $CURRENT_BUCKET_PATH"
]
},
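{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Grant the Vertex RAG Data Service Agent access to your bucket\n",
"\n",
"Before importing, the service agent needs read access to your bucket. The cell below is a minimal sketch of that grant, assuming `gcloud` and `gsutil` are available in this runtime and that your account can modify the bucket's IAM policy; you can also grant the role from the Cloud Console instead."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Look up the project number, then build the RAG service agent's address.\n",
"PROJECT_NUMBER = !gcloud projects describe {PROJECT_ID} --format=\"value(projectNumber)\"\n",
"RAG_SERVICE_AGENT = (\n",
"    f\"service-{PROJECT_NUMBER[0]}@gcp-sa-vertex-rag.iam.gserviceaccount.com\"\n",
")\n",
"\n",
"# Grant object read access on the bucket populated above.\n",
"!gsutil iam ch serviceAccount:{RAG_SERVICE_AGENT}:roles/storage.objectViewer {CURRENT_BUCKET_PATH}"
]
},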
{
"cell_type": "markdown",
"metadata": {
"id": "b84020847859"
},
"source": [
"##### Import dataset into `RagCorpus`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8f10ecc2c035"
},
"outputs": [],
"source": [
"transformation_config = rag.TransformationConfig(\n",
" chunking_config=rag.ChunkingConfig(\n",
" chunk_size=512,\n",
" chunk_overlap=100,\n",
" ),\n",
")\n",
"\n",
"rag.import_files(\n",
" corpus_name=rag_corpus.name,\n",
" paths=[CURRENT_BUCKET_PATH],\n",
" transformation_config=transformation_config, # Optional\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "609cdfd973c6"
},
"outputs": [],
"source": [
"# List the files in the rag corpus\n",
"rag.list_files(rag_corpus.name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dbb9974409ca"
},
"source": [
"### Create RAG Retrieval Tool"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d767af7c494e"
},
"outputs": [],
"source": [
"# Create a tool for the RAG Corpus\n",
"rag_retrieval_tool = Tool(\n",
" retrieval=Retrieval(\n",
" vertex_rag_store=VertexRagStore(\n",
" rag_corpora=[rag_corpus.name],\n",
" similarity_top_k=10,\n",
" vector_distance_threshold=0.5,\n",
" )\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ce3f57ffd049"
},
"outputs": [],
"source": [
"def get_generated_response(prompt: str) -> str:\n",
" \"\"\"\n",
" Generates a grounded response using a language model and retrieved context.\n",
"\n",
" Args:\n",
" prompt: The input prompt for the language model.\n",
"\n",
" Returns:\n",
" The generated text response.\n",
" \"\"\"\n",
" response = client.models.generate_content(\n",
" model=MODEL_ID,\n",
" contents=prompt,\n",
" config=GenerateContentConfig(tools=[rag_retrieval_tool]),\n",
" )\n",
"\n",
" return response.text"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bc928efd7e38"
},
"outputs": [],
"source": [
"def get_retrieved_contexts(prompt: str) -> str:\n",
" \"\"\"\n",
" Retrieves relevant contexts based on a given prompt using a RAG system.\n",
"\n",
" Args:\n",
" prompt: The input prompt for context retrieval.\n",
"\n",
" Returns:\n",
" A concatenated string of retrieved context texts, with newlines removed.\n",
" \"\"\"\n",
"\n",
" rag_filter = rag.utils.resources.Filter(vector_distance_threshold=0.5)\n",
"\n",
" retrieval_config = rag.RagRetrievalConfig(top_k=5, filter=rag_filter)\n",
"\n",
" response = rag.retrieval_query(\n",
" rag_resources=[\n",
" rag.RagResource(\n",
" rag_corpus=rag_corpus.name,\n",
" # Optional: supply IDs from `rag.list_files()`.\n",
" # rag_file_ids=[\"rag-file-1\", \"rag-file-2\", ...],\n",
" )\n",
" ],\n",
" text=prompt,\n",
" rag_retrieval_config=retrieval_config,\n",
" )\n",
" context = \" \".join(\n",
" [context.text for context in response.contexts.contexts]\n",
" ).replace(\"\\n\", \"\")\n",
" return context"
]
},
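{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before building the full dataset, it's worth a quick smoke test of both helpers on a single prompt. This is an optional sanity check; the prompt below is just an illustrative example, and it assumes the file import above has completed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Smoke test: retrieve contexts and generate a grounded answer for one prompt.\n",
"test_prompt = \"Who attends the tea party with Alice?\"\n",
"\n",
"print(get_retrieved_contexts(test_prompt)[:500])  # first 500 characters\n",
"print(get_generated_response(test_prompt))"
]
},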
{
"cell_type": "markdown",
"metadata": {
"id": "8a904150ee7d"
},
"source": [
"### Create Evaluation Dataset\n",
"\n",
"Now we are prepared to create the evaluation dataset. The dataset will include:\n",
"\n",
"+ Prompt: What the user is asking the RAG engine. The prompts will be a mix of inter-document and intra-document analysis.\n",
"+ Retrieved Context: The top k retrieved context from Vertex RAG Engine\n",
"+ Generated Response: The LLM generated responses grounded in the retrieved context."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "00ffc51df8b6"
},
"outputs": [],
"source": [
"prompts = [\n",
" \"Compare and contrast the behaviors of the Mad Hatter and the March Hare during the tea party.\",\n",
" \"What happened during Alice's croquet game with the Queen of Hearts?\",\n",
" \"How did the Mad Hatter and March Hare act at the tea party?\",\n",
" \"What was special about the cakes Alice ate?\",\n",
" \"What happened when Gulliver first arrived in Lilliput?\",\n",
" \"What was Captain Hook's main goal in Neverland?\",\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "531d827a83fd"
},
"outputs": [],
"source": [
"retrieved_context = []\n",
"generated_response = []\n",
"for prompt in tqdm(prompts):\n",
" retrieved_context.append(get_retrieved_contexts(prompt))\n",
" generated_response.append(get_generated_response(prompt))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0d2d7a3f0b76"
},
"outputs": [],
"source": [
"eval_dataset = pd.DataFrame(\n",
" {\n",
" \"prompt\": prompts,\n",
" \"retrieved_context\": retrieved_context,\n",
" \"response\": generated_response,\n",
" }\n",
")\n",
"\n",
"eval_dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ad1d81adc1cd"
},
"source": [
"## Use Gen AI Evaluation Service SDK\n",
"\n",
"Before diving into the evaluation process, we've set up the necessary components: a RAG corpus containing our document, a retrieval tool, and functions to generate grounded responses and retrieve relevant contexts. We've also compiled an evaluation dataset with a set of questions, the corresponding retrieved contexts, and the model's responses.\n",
"\n",
"This dataset will serve as the foundation for our evaluation. We'll now leverage the Vertex AI Gen AI Evaluation Service SDK to define and apply custom metrics, allowing us to quantitatively assess the RAG engine's performance. The Gen AI Evaluation Service provides a robust framework for creating and running evaluation tasks, enabling us to gain valuable insights into the quality of our generated responses."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c056fb58f6e1"
},
"outputs": [],
"source": [
"custom_question_answering_correctness = PointwiseMetric(\n",
" metric=\"custom_question_answering_correctness\",\n",
" metric_prompt_template=PointwiseMetricPromptTemplate(\n",
" criteria={\n",
" \"accuracy\": (\n",
" \"The response provides completely accurate information, consistent with the retrieved context, with no errors or omissions.\"\n",
" ),\n",
" \"completeness\": (\n",
" \"The response answers all parts of the question fully, utilizing the information available in the retrieved context.\"\n",
" ),\n",
" \"groundedness\": (\n",
" \"The response uses only the information provided in the retrieved context and does not introduce any external information or hallucinations.\"\n",
" ),\n",
" },\n",
" rating_rubric={\n",
" \"5\": \"(Very good). The answer is completely accurate, complete, concise, grounded in the retrieved context, and follows all instructions.\",\n",
" \"4\": \"(Good). The answer is mostly accurate, complete, and grounded in the retrieved context, with minor issues in conciseness or instruction following.\",\n",
" \"3\": \"(Ok). The answer is partially accurate and complete but may have some inaccuracies, omissions, or significant issues with conciseness, groundedness, or instruction following, based on the retrieved context.\",\n",
" \"2\": \"(Bad). The answer contains significant inaccuracies, is largely incomplete, or fails to follow key instructions, considering the information available in the retrieved context.\",\n",
" \"1\": \"(Very bad). The answer is completely inaccurate, irrelevant, or fails to address the question in any meaningful way, based on the retrieved context.\",\n",
" },\n",
" input_variables=[\"prompt\", \"retrieved_context\"],\n",
" ),\n",
")\n",
"\n",
"# Display the serialized metric prompt template\n",
"print(custom_question_answering_correctness.metric_prompt_template)"
]
},
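{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SDK also ships prebuilt metric prompt templates that can be passed to an `EvalTask` alongside (or instead of) the custom metric above. The cell below is a sketch that assumes `MetricPromptTemplateExamples` from `vertexai.evaluation` and the example metric name `\"groundedness\"`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from vertexai.evaluation import MetricPromptTemplateExamples\n",
"\n",
"# Inspect the prebuilt groundedness metric prompt template.\n",
"print(MetricPromptTemplateExamples.get_prompt_template(\"groundedness\"))"
]
},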
{
"cell_type": "markdown",
"metadata": {
"id": "bc6cd023895a"
},
"source": [
"### Run Eval Task\n",
"\n",
"The Gen AI Evaluation SDK has many useful utilities to graph, summarize, and explain the evaluation results. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bbbe1f16e57e"
},
"outputs": [],
"source": [
"# Run evaluation using the custom_text_quality metric\n",
"eval_task = EvalTask(\n",
" dataset=eval_dataset,\n",
" metrics=[custom_question_answering_correctness],\n",
" experiment=\"test\",\n",
")\n",
"eval_result = eval_task.evaluate()"
]
},
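{
"cell_type": "markdown",
"metadata": {},
"source": [
"Beyond the display helpers below, you can inspect the results directly; as a sketch (assuming the SDK's `EvalResult` fields), `summary_metrics` holds the aggregate scores and `metrics_table` holds the per-row scores and explanations as a DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Aggregate scores across all rows (e.g. row count, mean, standard deviation).\n",
"print(eval_result.summary_metrics)\n",
"\n",
"# Per-row scores and rater explanations.\n",
"eval_result.metrics_table.head()"
]
},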
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0e23de09f2b8"
},
"outputs": [],
"source": [
"notebook_utils.display_eval_result(eval_result=eval_result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2fb80150675a"
},
"outputs": [],
"source": [
"# Example for graphing\n",
"notebook_utils.display_radar_plot(\n",
" eval_results_with_title=[(\"Question answering correctness\", eval_result)],\n",
" metrics=[\"custom_question_answering_correctness\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9a40053a479e"
},
"outputs": [],
"source": [
"# Displaying explanations for one row.\n",
"notebook_utils.display_explanations(eval_result=eval_result, num=1)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2a4e033321ad"
},
"source": [
"## Cleaning up\n",
"\n",
"Delete ExperimentRun created by the evaluation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "113ab9d8887a"
},
"outputs": [],
"source": [
"aiplatform.ExperimentRun(\n",
" run_name=eval_result.metadata[\"experiment_run\"],\n",
" experiment=eval_result.metadata[\"experiment\"],\n",
").delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "db1bfe14c505"
},
"outputs": [],
"source": [
"rag.delete_corpus(rag_corpus.name)"
]
}
],
"metadata": {
"colab": {
"name": "rag_engine_eval_service_sdk.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}