{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "i3oNB_qC4X2Y"
},
"outputs": [],
"source": [
"# Copyright 2024 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c4-kxwz23nzr"
},
"source": [
"# Supervised Fine Tuning with Gemini 2.0 Flash for Article Summarization\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Ftuning%2Fsft_gemini_summarization.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/tuning/sft_gemini_summarization.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"\n",
"<div style=\"clear: both;\"></div>\n",
"\n",
"<b>Share to:</b>\n",
"\n",
"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/sft_gemini_summarization.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a> "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pO98gUu-4eTJ"
},
"source": [
"| Author(s) |\n",
"| --- |\n",
"| [Deepak Moonat](https://github.com/dmoonat) |\n",
"| [Safiuddin Khaja](https://github.com/Safikh) |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PN72vQp6DWck"
},
"source": [
"## Overview\n",
"\n",
"**Gemini** is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the various Gemini models, such as Gemini 2.0 Pro/Flash, Gemini 2.0/Flash, Gemini/Flash and more.\n",
"\n",
"This notebook demonstrates how to fine-tune the Gemini 2.0 Flash generative model using the Vertex AI Supervised Tuning feature. Supervised Tuning allows you to use your own training data to further refine the base model's capabilities towards your specific tasks.\n",
"\n",
"Supervised Tuning uses labeled examples to tune a model. Each example demonstrates the output you want from your text model during inference.\n",
"\n",
"First, ensure your training data is of high quality, well-labeled, and directly relevant to the target task. This is crucial as low-quality data can adversely affect the performance and introduce bias in the fine-tuned model.\n",
"- Training: Experiment with different configurations to optimize the model's performance on the target task.\n",
"- Evaluation:\n",
" - Metric: Choose appropriate evaluation metrics that accurately reflect the success of the fine-tuned model for your specific task\n",
" - Evaluation Set: Use a separate set of data to evaluate the model's performance\n",
"\n",
"\n",
"Refer to public [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-supervised-tuning) for more details.\n",
"\n",
"\n",
"<hr/>\n",
"\n",
"Before running this notebook, ensure you have:\n",
"\n",
"- A Google Cloud project: Provide your project ID in the `PROJECT_ID` variable.\n",
"\n",
"- Authenticated your Colab environment: Run the authentication code block at the beginning.\n",
"\n",
"- Prepared training data (Test with your own data or use the one in the notebook): Data should be formatted in JSONL with prompts and corresponding completions."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "77ppk4eke7G4"
},
"source": [
"### Objective\n",
"\n",
"In this tutorial, you will learn how to use `Vertex AI` to tune a `Gemini 2.0 Flash` model.\n",
"\n",
"\n",
"This tutorial uses the following Google Cloud ML services:\n",
"\n",
"- `Vertex AI`\n",
"\n",
"\n",
"The steps performed include:\n",
"\n",
"- Prepare and load the dataset\n",
"- Load the `gemini-2.0-flash-001` model\n",
"- Evaluate the model before tuning\n",
"- Tune the model.\n",
" - This will automatically create a Vertex AI endpoint and deploy the model to it\n",
"- Make a prediction using tuned model\n",
"- Evaluate the model after tuning"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PVRaH5wqfIy3"
},
"source": [
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sfQYl84Cu_xL"
},
"source": [
"## Wikilingua Dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SxAXgV2FvBPz"
},
"source": [
"The dataset includes article and summary pairs from WikiHow. It consists of article-summary pairs in multiple languages. Refer to the following [github repository](https://github.com/esdurmus/Wikilingua) for more details.\n",
"\n",
"For this notebook, we have picked `english` language dataset."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KTHBNpb-BBdc"
},
"source": [
"### Dataset Citation\n",
"\n",
"```\n",
"@inproceedings{ladhak-wiki-2020,\n",
" title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},\n",
" author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},\n",
" booktitle={Findings of EMNLP, 2020},\n",
" year={2020}\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "llEFILYz2aye"
},
"source": [
"## Getting Started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oo2rh4cC2e1r"
},
"source": [
"### Install Gen AI SDK and other required packages"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0d8bad6906dc"
},
"source": [
"The new Google Gen AI SDK provides a unified interface to Gemini through both the Gemini Developer API and the Gemini API on Vertex AI. With a few exceptions, code that runs on one platform will run on both. This means that you can prototype an application using the Developer API and then migrate the application to Vertex AI without rewriting your code.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "l_ok3vdw2cyf"
},
"outputs": [],
"source": [
"%pip install --upgrade --user --quiet google-genai google-cloud-aiplatform rouge_score plotly jsonlines"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6in46hzz3At9"
},
"source": [
"### Restart runtime (Colab only)\n",
"\n",
"To use the newly installed packages, you must restart the runtime on Google Colab.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "haJlKcSY3EsE"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iTOjupCM3TDb"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. Please wait until it is finished before continuing to the next step. ⚠️</b>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F0HMlz-MD9Yt"
},
"source": [
"## Step0: Authenticate your notebook environment (Colab only)\n",
"\n",
"If you are running this notebook on Google Colab, run the cell below to authenticate your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "86VNaqlgD9rK"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yKRPFNzWJLVY"
},
"source": [
"- If you are running this notebook in a local development environment:\n",
" - Install the [Google Cloud SDK](https://cloud.google.com/sdk).\n",
" - Obtain authentication credentials. Create local credentials by running the following command and following the oauth2 flow (read more about the command [here](https://cloud.google.com/sdk/gcloud/reference/beta/auth/application-default/login)):\n",
"\n",
" ```bash\n",
" gcloud auth application-default login\n",
" ```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "k8CI-TcqD06L"
},
"source": [
"## Step1: Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rerpHL_eEG8D"
},
"outputs": [],
"source": [
"import time\n",
"\n",
"from google import genai\n",
"\n",
"# For extracting vertex experiment details.\n",
"from google.cloud import aiplatform\n",
"from google.cloud.aiplatform.metadata import context\n",
"from google.cloud.aiplatform.metadata import utils as metadata_utils\n",
"from google.genai import types\n",
"\n",
"# For data handling.\n",
"import jsonlines\n",
"import pandas as pd\n",
"\n",
"# For visualization.\n",
"import plotly.graph_objects as go\n",
"from plotly.subplots import make_subplots\n",
"\n",
"# For evaluation metric computation.\n",
"from rouge_score import rouge_scorer\n",
"from tqdm import tqdm\n",
"\n",
"# For fine tuning Gemini model.\n",
"import vertexai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FBY9nK3qEJLk"
},
"source": [
"## Step2: Set Google Cloud project information and initialize Vertex AI and Gen AI SDK\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
"\n",
"Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VpzmI1K61Tn2"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[YOUR_PROJECT_ID]\" # @param {type:\"string\"}\n",
"REGION = \"us-central1\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7MZmIZvlQUhy"
},
"outputs": [],
"source": [
"vertexai.init(project=PROJECT_ID, location=REGION)\n",
"\n",
"client = genai.Client(vertexai=True, project=PROJECT_ID, location=REGION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JUEloBlsCPFr"
},
"source": [
"## Step3: Create Dataset in correct format\n",
"\n",
"The dataset used to tune a foundation model needs to include examples that align with the task that you want the model to perform. Structure your training dataset in a text-to-text format. Each record, or row, in the dataset contains the input text (also referred to as the prompt) which is paired with its expected output from the model. Supervised tuning uses the dataset to teach the model to mimic a behavior, or task, you need by giving it hundreds of examples that illustrate that behavior.\n",
"\n",
"Your dataset size depends on the task, and follows the recommendation mentioned in the `Overview` section. The more examples you provide in your dataset, the better the results.\n",
"\n",
"### Dataset format\n",
"\n",
"Training data should be structured within a JSONL file located at a Google Cloud Storage (GCS) URI. Each line (or row) of the JSONL file must adhere to a specific schema: It should contain a `contents` array, with objects inside defining a `role` (either \"user\" for user input or \"model\" for model output) and `parts`, containing the input data. For example, a valid data row would look like this:\n",
"\n",
"\n",
"```\n",
"{\n",
" \"contents\":[\n",
" {\n",
" \"role\":\"user\", # This indicate input content\n",
" \"parts\":[\n",
" {\n",
" \"text\":\"How are you?\"\n",
" }\n",
" ]\n",
" },\n",
" {\n",
" \"role\":\"model\", # This indicate target content\n",
" \"parts\":[ # text only\n",
" {\n",
" \"text\":\"I am good, thank you!\"\n",
" }\n",
" ]\n",
" }\n",
" # ... repeat \"user\", \"model\" for multi turns.\n",
" ]\n",
"}\n",
"```\n",
"\n",
"\n",
"Refer to the public [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-supervised-tuning-prepare#about-datasets) for more details."
]
},
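{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below shows one way to build and sanity-check a record in this schema before writing it to JSONL. The `make_record` helper is hypothetical, introduced only for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"def make_record(user_text: str, model_text: str) -> dict:\n",
"    \"\"\"Hypothetical helper: build one tuning record in the `contents` schema.\"\"\"\n",
"    return {\n",
"        \"contents\": [\n",
"            {\"role\": \"user\", \"parts\": [{\"text\": user_text}]},\n",
"            {\"role\": \"model\", \"parts\": [{\"text\": model_text}]},\n",
"        ]\n",
"    }\n",
"\n",
"\n",
"record = make_record(\"How are you?\", \"I am good, thank you!\")\n",
"# Basic structural check: one user turn followed by one model turn.\n",
"assert [c[\"role\"] for c in record[\"contents\"]] == [\"user\", \"model\"]\n",
"print(json.dumps(record, indent=2))"
]
},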
{
"cell_type": "markdown",
"metadata": {
"id": "TglfI3tQr2oZ"
},
"source": [
"To run a tuning job, you need to upload one or more datasets to a Cloud Storage bucket. You can either create a new Cloud Storage bucket or use an existing one to store dataset files. The region of the bucket doesn't matter, but we recommend that you use a bucket that's in the same Google Cloud project where you plan to tune your model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OsNdYgnaITuz"
},
"source": [
"### Step3 [a]: Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WveeKANmITK5"
},
"outputs": [],
"source": [
"# Provide a bucket name\n",
"BUCKET_NAME = \"[YOUR_BUCKET_NAME]\" # @param {type:\"string\"}\n",
"BUCKET_URI = f\"gs://{BUCKET_NAME}\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lBSGTEiyJfSR"
},
"source": [
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GQSJ3LJkJhLm"
},
"outputs": [],
"source": [
"! gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7YGurtXHJy_y"
},
"source": [
"### Step3 [b]: Upload tuning data to Cloud Storage"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ip8rErN2r3ah"
},
"source": [
"- Data used in this notebook is present in the public Google Cloud Storage(GCS) bucket.\n",
"- It's in Gemini finetuning dataset format"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aV_GZg_LaNmV"
},
"outputs": [],
"source": [
"!gsutil ls gs://github-repo/generative-ai/gemini/tuning/summarization/wikilingua"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b-EQ4FcExIfp"
},
"outputs": [],
"source": [
"!gsutil cp gs://github-repo/generative-ai/gemini/tuning/summarization/wikilingua/* ."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6wmBkAUoyzdJ"
},
"source": [
"#### Convert Gemini tuning dataset to Gemini 2.0 tuning dataset format"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7bhtxKbha_wS"
},
"outputs": [],
"source": [
"def save_jsonlines(file, instances):\n",
" \"\"\"\n",
" Saves a list of json instances to a jsonlines file.\n",
" \"\"\"\n",
" with jsonlines.open(file, mode=\"w\") as writer:\n",
" writer.write_all(instances)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-3xbpRnSyxH9"
},
"outputs": [],
"source": [
"def create_tuning_samples(file_path):\n",
" \"\"\"\n",
" Creates tuning samples from a file.\n",
" \"\"\"\n",
" with jsonlines.open(file_path) as reader:\n",
" instances = []\n",
" for obj in reader:\n",
" instance = []\n",
" for content in obj[\"messages\"]:\n",
" instance.append(\n",
" {\"role\": content[\"role\"], \"parts\": [{\"text\": content[\"content\"]}]}\n",
" )\n",
" instances.append({\"contents\": instance})\n",
" return instances"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "orHUpTagyw-z"
},
"outputs": [],
"source": [
"train_file = \"sft_train_samples.jsonl\"\n",
"train_instances = create_tuning_samples(train_file)\n",
"len(train_instances)"
]
},
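{
"cell_type": "markdown",
"metadata": {},
"source": [
"It can help to eyeball one converted record to confirm the `contents` structure before saving and uploading. The quick check below only reads from the `train_instances` list created above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the first converted training instance: roles and a text snippet.\n",
"first_instance = train_instances[0]\n",
"for content in first_instance[\"contents\"]:\n",
"    snippet = content[\"parts\"][0][\"text\"][:80].replace(\"\\n\", \" \")\n",
"    print(f\"{content['role']}: {snippet}...\")"
]
},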
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "U3m05gAM1mlp"
},
"outputs": [],
"source": [
"# save the training instances to jsonl file\n",
"save_jsonlines(train_file, train_instances)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xZICW6uU1T3Z"
},
"outputs": [],
"source": [
"val_file = \"sft_val_samples.jsonl\"\n",
"val_instances = create_tuning_samples(val_file)\n",
"len(val_instances)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "n1s7adZH1CCe"
},
"outputs": [],
"source": [
"# save the validation instances to jsonl file\n",
"save_jsonlines(val_file, val_instances)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AVL2gfP-J5SL"
},
"outputs": [],
"source": [
"# Copy the tuning and evaluation data to your bucket.\n",
"!gsutil cp {train_file} {BUCKET_URI}/train/\n",
"!gsutil cp {val_file} {BUCKET_URI}/val/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kV5X6_DsIXPm"
},
"source": [
"### Step3 [c]: Test dataset\n",
"\n",
"- It contains document text(`input_text`) and corresponding reference summary(`output_text`), which will be compared with the model generated summary"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wtxPI3GPIckU"
},
"outputs": [],
"source": [
"# Load the test dataset using pandas as it's in the csv format.\n",
"testing_data_path = \"sft_test_samples.csv\"\n",
"test_data = pd.read_csv(testing_data_path)\n",
"test_data.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bRBtYfN_PPaP"
},
"outputs": [],
"source": [
"test_data.loc[0, \"input_text\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tTt7qjSeSHRW"
},
"outputs": [],
"source": [
"# Article summary stats\n",
"stats = test_data[\"output_text\"].apply(len).describe()\n",
"stats"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WKptd49cdjSi"
},
"outputs": [],
"source": [
"print(f\"Total `{stats['count']}` test records\")\n",
"print(f\"Average length is `{stats['mean']}` and max is `{stats['max']}` characters\")\n",
"print(\"\\nConsidering 1 token = 4 chars\")\n",
"\n",
"# Get ceil value of the tokens required.\n",
"tokens = (stats[\"max\"] / 4).__ceil__()\n",
"print(\n",
" f\"\\nSet max_token_length = stats['max']/4 = {stats['max']/4} ~ {tokens} characters\"\n",
")\n",
"print(f\"\\nLet's keep output tokens upto `{tokens}`\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "idM1p_UNvA7w"
},
"outputs": [],
"source": [
"# Maximum number of tokens that can be generated in the response by the LLM.\n",
"# Experiment with this number to get optimal output.\n",
"max_output_tokens = tokens"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DhjmRffOOPAS"
},
"source": [
"## Step4: Initailize model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UhhD1VWDsLat"
},
"source": [
"The following Gemini text model support supervised tuning:\n",
"\n",
"* `gemini-2.0-flash-001`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jL-zRl5_OVZW"
},
"outputs": [],
"source": [
"base_model = \"gemini-2.0-flash-001\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ieJe8yGlOtFD"
},
"source": [
"## Step5: Test the Gemini model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F8DFUzRnHMi8"
},
"source": [
"### Generation config\n",
"\n",
"- Each call that you send to a model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values\n",
"- <strong>Experiment</strong> with different parameter values to get the best values for the task\n",
"\n",
"Refer to the following [link](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/adjust-parameter-values) for understanding different parameters"
]
},
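{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a fuller config might set several of these knobs at once. The values below are illustrative assumptions, not tuned recommendations; the `config` actually used in this notebook is defined later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative generation config; experiment with these values for your task.\n",
"example_config = {\n",
"    \"temperature\": 0.2,  # Lower values make output more deterministic.\n",
"    \"top_p\": 0.95,  # Nucleus sampling cutoff.\n",
"    \"max_output_tokens\": 256,  # Cap on the number of generated tokens.\n",
"}\n",
"example_config"
]
},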
{
"cell_type": "markdown",
"metadata": {
"id": "6hbaeT8AcniS"
},
"source": [
"**Prompt** is a natural language request submitted to a language model to receive a response back\n",
"\n",
"Some best practices include\n",
" - Clearly communicate what content or information is most important\n",
" - Structure the prompt:\n",
" - Defining the role if using one. For example, You are an experienced UX designer at a top tech company\n",
" - Include context and input data\n",
" - Provide the instructions to the model\n",
" - Add example(s) if you are using them\n",
"\n",
"Refer to the following [link](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/prompt-design-strategies) for prompt design strategies."
]
},
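{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration of these practices, a structured summarization prompt might look like the following sketch. The role and instruction wording here are example assumptions, not part of the Wikilingua data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical structured prompt template: role, input data, then instruction.\n",
"structured_prompt_template = \"\"\"You are an experienced editor who writes concise summaries.\n",
"\n",
"Article:\n",
"{article}\n",
"\n",
"Provide a summary of the article in two or three sentences:\"\"\"\n",
"\n",
"print(structured_prompt_template.format(article=\"<article text goes here>\"))"
]
},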
{
"cell_type": "markdown",
"metadata": {
"id": "hZUcvQr0rAWA"
},
"source": [
"Wikilingua data contains the following task prompt at the end of the article, `Provide a summary of the article in two or three sentences:`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iu6OuIhFOv4C"
},
"outputs": [],
"source": [
"test_doc = test_data.loc[0, \"input_text\"]\n",
"\n",
"prompt = f\"\"\"\n",
"{test_doc}\n",
"\"\"\"\n",
"\n",
"config = {\n",
" \"temperature\": 0.1,\n",
" \"max_output_tokens\": max_output_tokens,\n",
"}\n",
"\n",
"response = client.models.generate_content(\n",
" model=base_model,\n",
" contents=prompt,\n",
" config=config,\n",
").text\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8YvlMfmIQqK8"
},
"outputs": [],
"source": [
"# Ground truth\n",
"test_data.loc[0, \"output_text\"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jGPUKZlcP69-"
},
"source": [
"## Step6: Evaluation before model tuning"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1yayTQdd9oE5"
},
"source": [
"- Evaluate the Gemini model on the test dataset before tuning it on the training dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "610J64SpQ5TE"
},
"outputs": [],
"source": [
"# Convert the pandas dataframe to records (list of dictionaries).\n",
"corpus = test_data.to_dict(orient=\"records\")\n",
"# Check number of records.\n",
"len(corpus)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KkKldH90MY4v"
},
"source": [
"### Evaluation metric\n",
"\n",
"The type of metrics used for evaluation depends on the task that you are evaluating. The following table shows the supported tasks and the metrics used to evaluate each task:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "t6oLtUEWMHVu"
},
"source": [
"| Task | Metric(s) |\n",
"|-----------------|---------------------------------|\n",
"| Classification | Micro-F1, Macro-F1, Per class F1 |\n",
"| Summarization | ROUGE-L |\n",
"| Question Answering | Exact Match |\n",
"| Text Generation | BLEU, ROUGE-L |\n",
"\n",
"\n",
"<br/>\n",
"\n",
"Refer to this [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluate-models) for metric based evaluation."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iKNk3zG4CNSS"
},
"source": [
"- **Recall-Oriented Understudy for Gisting Evaluation (ROUGE)**: A metric used to evaluate the quality of automatic summaries of text. It works by comparing a generated summary to a set of reference summaries created by humans."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5XP9VOaTd3z8"
},
"source": [
"Now you can take the candidate and reference to evaluate the performance. In this case, ROUGE will give you:\n",
"\n",
"- `rouge-1`, which measures unigram overlap\n",
"- `rouge-2`, which measures bigram overlap\n",
"- `rouge-l`, which measures the longest common subsequence"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "I2CrcvvzFfBL"
},
"source": [
"#### *Recall vs. Precision*\n",
"\n",
"**Recall**, meaning it prioritizes how much of the information in the reference summaries is captured in the generated summary.\n",
"\n",
"**Precision**, which measures how much of the generated summary is relevant to the original text."
]
},
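{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of these definitions, the minimal sketch below scores a toy candidate against a toy reference; the full evaluation loop over the test corpus comes later in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy example: precision = overlap / candidate length, recall = overlap / reference length.\n",
"toy_scorer = rouge_scorer.RougeScorer([\"rouge1\", \"rougeL\"], use_stemmer=True)\n",
"toy_scores = toy_scorer.score(\n",
"    target=\"the cat sat on the mat\",  # Reference summary.\n",
"    prediction=\"the cat sat\",  # Generated (candidate) summary.\n",
")\n",
"for name, score in toy_scores.items():\n",
"    print(f\"{name}: precision={score.precision:.2f}, recall={score.recall:.2f}\")"
]
},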
{
"cell_type": "markdown",
"metadata": {
"id": "uU6K4YdaGGyp"
},
"source": [
"<strong>Alternate Evaluation method</strong>: Check out the [AutoSxS](https://cloud.google.com/vertex-ai/generative-ai/docs/models/side-by-side-eval) evaluation for automatic evaluation of the task.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "p3YZAOZcQWtW"
},
"outputs": [],
"source": [
"# Create rouge_scorer object for evaluation\n",
"scorer = rouge_scorer.RougeScorer([\"rouge1\", \"rouge2\", \"rougeL\"], use_stemmer=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "N8Y06N_b_EP5"
},
"outputs": [],
"source": [
"def run_evaluation(model, corpus: list[dict]) -> pd.DataFrame:\n",
" \"\"\"Runs evaluation for the given model and data.\n",
"\n",
" Args:\n",
" model: The generation model.\n",
" corpus: The test data.\n",
"\n",
" Returns:\n",
" A pandas DataFrame containing the evaluation results.\n",
" \"\"\"\n",
" records = []\n",
" for item in tqdm(corpus):\n",
" document = item.get(\"input_text\")\n",
" summary = item.get(\"output_text\")\n",
"\n",
" # Catch any exception that occur during model evaluation.\n",
" try:\n",
" response = client.models.generate_content(\n",
" model=model,\n",
" contents=document,\n",
" config=config,\n",
" )\n",
"\n",
" # Check if response is generated by the model, if response is empty then continue to next item.\n",
" if not (\n",
" response\n",
" and response.candidates\n",
" and response.candidates[0].content.parts\n",
" ):\n",
" print(\n",
" f\"\\nModel has blocked the response for the document.\\n Response: {response}\\n Document: {document}\"\n",
" )\n",
" continue\n",
"\n",
" # Calculates the ROUGE score for a given reference and generated summary.\n",
" scores = scorer.score(target=summary, prediction=response.text)\n",
"\n",
" # Append the results to the records list\n",
" records.append(\n",
" {\n",
" \"document\": document,\n",
" \"summary\": summary,\n",
" \"generated_summary\": response.text,\n",
" \"scores\": scores,\n",
" \"rouge1_precision\": scores.get(\"rouge1\").precision,\n",
" \"rouge1_recall\": scores.get(\"rouge1\").recall,\n",
" \"rouge1_fmeasure\": scores.get(\"rouge1\").fmeasure,\n",
" \"rouge2_precision\": scores.get(\"rouge2\").precision,\n",
" \"rouge2_recall\": scores.get(\"rouge2\").recall,\n",
" \"rouge2_fmeasure\": scores.get(\"rouge2\").fmeasure,\n",
" \"rougeL_precision\": scores.get(\"rougeL\").precision,\n",
" \"rougeL_recall\": scores.get(\"rougeL\").recall,\n",
" \"rougeL_fmeasure\": scores.get(\"rougeL\").fmeasure,\n",
" }\n",
" )\n",
" except AttributeError as attr_err:\n",
" print(\"Attribute Error:\", attr_err)\n",
" continue\n",
" except Exception as err:\n",
" print(\"Error:\", err)\n",
" continue\n",
" return pd.DataFrame(records)"
]
},
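{
"cell_type": "markdown",
"metadata": {},
"source": [
"Long evaluation loops can occasionally hit transient errors or rate limits. A simple retry wrapper like the sketch below is one way to harden the calls; `generate_with_retry` is a hypothetical helper and is not used by `run_evaluation` above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def generate_with_retry(model: str, contents: str, config: dict, retries: int = 3):\n",
"    \"\"\"Hypothetical helper: retry generate_content with linear backoff.\"\"\"\n",
"    for attempt in range(retries):\n",
"        try:\n",
"            return client.models.generate_content(\n",
"                model=model, contents=contents, config=config\n",
"            )\n",
"        except Exception as err:\n",
"            if attempt == retries - 1:\n",
"                raise\n",
"            print(f\"Retrying after error: {err}\")\n",
"            time.sleep(5 * (attempt + 1))"
]
},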
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4afTSo5cpM73"
},
"outputs": [],
"source": [
"# Batch of test data.\n",
"corpus_batch = corpus[:100]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oM10zigp7kTZ"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ It will take ~2 mins for the evaluation run on the provided batch. ⚠️</b>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aO8JhIg1pYkE"
},
"outputs": [],
"source": [
"# Run evaluation using loaded model and test data corpus\n",
"evaluation_df = run_evaluation(base_model, corpus_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "H4VUFeb9tRBP"
},
"outputs": [],
"source": [
"evaluation_df.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "18m75w6m0R10"
},
"outputs": [],
"source": [
"evaluation_df_stats = evaluation_df.dropna().describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dYokPWqdUIMv"
},
"outputs": [],
"source": [
"# Statistics of the evaluation dataframe.\n",
"evaluation_df_stats"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4fdVjY_JWcmq"
},
"outputs": [],
"source": [
"print(\"Mean rougeL_precision is\", evaluation_df_stats.rougeL_precision[\"mean\"])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IgMb3E0YEqL2"
},
"source": [
"## Step7: Fine-tune the Model\n",
"\n",
" - `source_model`: Specifies the base Gemini model version you want to fine-tune.\n",
" - `train_dataset`: Path to your training data in JSONL format.\n",
"\n",
" *Optional parameters*\n",
" - `validation_dataset`: If provided, this data is used to evaluate the model during tuning.\n",
" - `tuned_model_display_name`: Display name for the tuned model.\n",
" - `epochs`: The number of training epochs to run.\n",
" - `learning_rate_multiplier`: A value to scale the learning rate during training.\n",
" - `adapter_size` : Gemini 2.0 Pro supports Adapter length [1, 2, 4, 8], default value is 4."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4e81137766c6"
},
"source": [
"**Note: The default hyperparameter settings are optimized for optimal performance based on rigorous testing and are recommended for initial use. Users may customize these parameters to address specific performance requirements.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vQM2vDBZ27b_"
},
"outputs": [],
"source": [
"tuned_model_display_name = \"[DISPLAY NAME FOR TUNED MODEL]\" # @param {type:\"string\"}\n",
"\n",
"training_dataset = {\n",
" \"gcs_uri\": f\"{BUCKET_URI}/train/sft_train_samples.jsonl\",\n",
"}\n",
"\n",
"validation_dataset = types.TuningValidationDataset(\n",
" gcs_uri=f\"{BUCKET_URI}/val/sft_val_samples.jsonl\"\n",
")\n",
"\n",
"# Tune a model using `tune` method.\n",
"sft_tuning_job = client.tunings.tune(\n",
" base_model=base_model,\n",
" training_dataset=training_dataset,\n",
" config=types.CreateTuningJobConfig(\n",
" tuned_model_display_name=tuned_model_display_name,\n",
" validation_dataset=validation_dataset,\n",
" ),\n",
")"
]
},
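{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you start several jobs, you can enumerate the recent tuning jobs in this project and region. This sketch assumes the SDK exposes a `client.tunings.list()` pager; adjust if your SDK version differs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List recent tuning jobs and their states.\n",
"for job in client.tunings.list():\n",
"    print(job.name, job.state)"
]
},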
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yLlAgVjCNqXg"
},
"outputs": [],
"source": [
"# Get the tuning job info.\n",
"tuning_job = client.tunings.get(name=sft_tuning_job.name)\n",
"tuning_job"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "22QZ035C8GJ3"
},
"source": [
"**Note: Tuning time depends on several factors, such as training data size, number of epochs, learning rate multiplier, etc.**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NN1KX-_WyKeu"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ It will take ~15 mins for the model tuning job to complete on the provided dataset and set configurations/hyperparameters. ⚠️</b>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2ma3P6tZ6suI"
},
"outputs": [],
"source": [
"%%time\n",
"# Wait for job completion\n",
"\n",
"running_states = [\n",
" \"JOB_STATE_PENDING\",\n",
" \"JOB_STATE_RUNNING\",\n",
"]\n",
"\n",
"while tuning_job.state.name in running_states:\n",
" print(\".\", end=\"\")\n",
" tuning_job = client.tunings.get(name=tuning_job.name)\n",
" time.sleep(10)\n",
"print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e1O1xCBS6spi"
},
"outputs": [],
"source": [
"tuned_model = tuning_job.tuned_model.endpoint\n",
"experiment_name = tuning_job.experiment\n",
"\n",
"print(\"Tuned model experiment\", experiment_name)\n",
"print(\"Tuned model endpoint resource name:\", tuned_model)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8DzlWWKpbGcu"
},
"source": [
"### Step7 [a]: Tuning and evaluation metrics"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "psRbCfzwWz_g"
},
"source": [
"#### Model tuning metrics\n",
"\n",
"- `/train_total_loss`: Loss for the tuning dataset at a training step.\n",
"- `/train_fraction_of_correct_next_step_preds`: The token accuracy at a training step. A single prediction consists of a sequence of tokens. This metric measures the accuracy of the predicted tokens when compared to the ground truth in the tuning dataset.\n",
"- `/train_num_predictions`: Number of predicted tokens at a training step\n",
"\n",
"#### Model evaluation metrics:\n",
"\n",
"- `/eval_total_loss`: Loss for the evaluation dataset at an evaluation step.\n",
"- `/eval_fraction_of_correct_next_step_preds`: The token accuracy at an evaluation step. A single prediction consists of a sequence of tokens. This metric measures the accuracy of the predicted tokens when compared to the ground truth in the evaluation dataset.\n",
"- `/eval_num_predictions`: Number of predicted tokens at an evaluation step.\n",
"\n",
"The metrics visualizations are available after the model tuning job completes. If you don't specify a validation dataset when you create the tuning job, only the visualizations for the tuning metrics are available.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5J1LP3nCbNlg"
},
"outputs": [],
"source": [
"# Locate Vertex AI Experiment and Vertex AI Experiment Run\n",
"experiment = aiplatform.Experiment(experiment_name=experiment_name)\n",
"filter_str = metadata_utils._make_filter_string(\n",
" schema_title=\"system.ExperimentRun\",\n",
" parent_contexts=[experiment.resource_name],\n",
")\n",
"experiment_run = context.Context.list(filter_str)[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "htBrcQY1bPyh"
},
"outputs": [],
"source": [
"# Read data from Tensorboard\n",
"tensorboard_run_name = f\"{experiment.get_backing_tensorboard_resource().resource_name}/experiments/{experiment.name}/runs/{experiment_run.name.replace(experiment.name, '')[1:]}\"\n",
"tensorboard_run = aiplatform.TensorboardRun(tensorboard_run_name)\n",
"metrics = tensorboard_run.read_time_series_data()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uRZ-UZXcbYj5"
},
"outputs": [],
"source": [
"def get_metrics(metric: str = \"/train_total_loss\"):\n",
" \"\"\"\n",
" Get metrics from Tensorboard.\n",
"\n",
" Args:\n",
" metric: metric name, eg. /train_total_loss or /eval_total_loss.\n",
" Returns:\n",
" steps: list of steps.\n",
" steps_loss: list of loss values.\n",
" \"\"\"\n",
" loss_values = metrics[metric].values\n",
" steps_loss = []\n",
" steps = []\n",
" for loss in loss_values:\n",
" steps_loss.append(loss.scalar.value)\n",
" steps.append(loss.step)\n",
" return steps, steps_loss"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ImR4doLZblaH"
},
"outputs": [],
"source": [
"# Get Train and Eval Loss\n",
"train_loss = get_metrics(metric=\"/train_total_loss\")\n",
"eval_loss = get_metrics(metric=\"/eval_total_loss\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NuN-m1Ikbn15"
},
"source": [
"### Step7 [b]: Plot the metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1KWWkVR5jQkA"
},
"outputs": [],
"source": [
"# Plot the train and eval loss metrics using Plotly python library\n",
"\n",
"fig = make_subplots(\n",
" rows=1, cols=2, shared_xaxes=True, subplot_titles=(\"Train Loss\", \"Eval Loss\")\n",
")\n",
"\n",
"# Add traces\n",
"fig.add_trace(\n",
" go.Scatter(x=train_loss[0], y=train_loss[1], name=\"Train Loss\", mode=\"lines\"),\n",
" row=1,\n",
" col=1,\n",
")\n",
"fig.add_trace(\n",
" go.Scatter(x=eval_loss[0], y=eval_loss[1], name=\"Eval Loss\", mode=\"lines\"),\n",
" row=1,\n",
" col=2,\n",
")\n",
"\n",
"# Add figure title\n",
"fig.update_layout(title=\"Train and Eval Loss\", xaxis_title=\"Steps\", yaxis_title=\"Loss\")\n",
"\n",
"# Set x-axis title\n",
"fig.update_xaxes(title_text=\"Steps\")\n",
"\n",
"# Set y-axes titles\n",
"fig.update_yaxes(title_text=\"Loss\")\n",
"\n",
"# Show plot\n",
"fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KY-eiVk0FI-M"
},
"source": [
"## Step8: Load the Tuned Model\n",
"\n",
" - Load the fine-tuned model using `GenerativeModel` class with the tuning job model endpoint name.\n",
"\n",
" - Test the tuned model with the following prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GiJ831VMDQNy"
},
"outputs": [],
"source": [
"prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "65SYYpaNT4QR"
},
"outputs": [],
"source": [
"if True:\n",
" # Test with the loaded model.\n",
" print(\"***Testing***\")\n",
" print(\n",
" client.models.generate_content(\n",
" model=tuned_model, contents=prompt, config=config\n",
" ).text\n",
" )\n",
"else:\n",
" print(\"State:\", tuning_job.state.name.state)\n",
" print(\"Error:\", tuning_job.state.name.error)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d54ce2b88af3"
},
"source": [
"- We can clearly see the difference between summary generated pre and post tuning, as tuned summary is more inline with the ground truth format (**Note**: Pre and Post outputs, might vary based on the set parameters.)\n",
"\n",
" - *Pre*: `This article describes a method for applying lotion to your back using your forearms as applicators. By squeezing lotion onto your forearms and then reaching behind your back, you can use a windshield wiper motion to spread the lotion across your back. The method acknowledges potential limitations for those with shoulder pain or limited flexibility.`\n",
" - *Post*: `Squeeze a line of lotion on your forearm. Reach behind you and rub your back.`\n",
" - *Ground Truth*:` Squeeze a line of lotion onto the tops of both forearms and the backs of your hands. Place your arms behind your back. Move your arms in a windshield wiper motion.`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oYsIpFakU4CC"
},
"source": [
"## Step9: Evaluation post model tuning"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mwlCcKPZ62Of"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ It will take ~5 mins for the evaluation on the provided batch. ⚠️</b>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KrBk1amTU3r2"
},
"outputs": [],
"source": [
"# run evaluation\n",
"evaluation_df_post_tuning = run_evaluation(tuned_model, corpus_batch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ONnlEkSex-iO"
},
"outputs": [],
"source": [
"evaluation_df_post_tuning.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xDJrlD8O0B4d"
},
"outputs": [],
"source": [
"evaluation_df_post_tuning_stats = evaluation_df_post_tuning.dropna().describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c24-mE12y4Nm"
},
"outputs": [],
"source": [
"# Statistics of the evaluation dataframe post model tuning.\n",
"evaluation_df_post_tuning_stats"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9VU-8Ql2bqlo"
},
"outputs": [],
"source": [
"print(\n",
" \"Mean rougeL_precision is\", evaluation_df_post_tuning_stats.rougeL_precision[\"mean\"]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4Q8hN7SE08-X"
},
"source": [
"#### Improvement"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "j0ctGzdnznYO"
},
"outputs": [],
"source": [
"improvement = round(\n",
" (\n",
" (\n",
" evaluation_df_post_tuning_stats.rougeL_precision[\"mean\"]\n",
" - evaluation_df_stats.rougeL_precision[\"mean\"]\n",
" )\n",
" / evaluation_df_stats.rougeL_precision[\"mean\"]\n",
" )\n",
" * 100,\n",
" 2,\n",
")\n",
"print(\n",
" f\"Model tuning has improved the rougeL_precision by {improvement}% (result might differ based on each tuning iteration)\"\n",
")"
]
},
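{
"cell_type": "markdown",
"metadata": {},
"source": [
"To compare more than one metric at a glance, you can also place the pre- and post-tuning means side by side. This small convenience sketch only rearranges the two `describe()` tables computed above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Side-by-side comparison of mean ROUGE metrics before and after tuning.\n",
"metric_cols = [c for c in evaluation_df_stats.columns if c.startswith(\"rouge\")]\n",
"comparison = pd.DataFrame(\n",
"    {\n",
"        \"before_tuning\": evaluation_df_stats.loc[\"mean\", metric_cols],\n",
"        \"after_tuning\": evaluation_df_post_tuning_stats.loc[\"mean\", metric_cols],\n",
"    }\n",
")\n",
"comparison[\"delta\"] = comparison[\"after_tuning\"] - comparison[\"before_tuning\"]\n",
"comparison"
]
},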
{
"cell_type": "markdown",
"metadata": {
"id": "LQkpAMnpw-jH"
},
"source": [
"## Conclusion"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Esra6YPgxBiV"
},
"source": [
"Performance could be further improved:\n",
"- By adding more training samples. In general, improve your training data quality and/or quantity towards getting a more diverse and comprehensive dataset for your task\n",
"- By tuning the hyperparameters, such as epochs and learning rate multiplier\n",
" - To find the optimal number of epochs for your dataset, we recommend experimenting with different values. While increasing epochs can lead to better performance, it's important to be mindful of overfitting, especially with smaller datasets. If you see signs of overfitting, reducing the number of epochs can help mitigate the issue\n",
"- You may try different prompt structures/formats and opt for the one with better performance"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3e6fd3649040"
},
"source": [
"## Cleaning up"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5528064b2cdf"
},
"source": [
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial.\n",
"\n",
"Refer to this [instructions](https://cloud.google.com/vertex-ai/docs/tutorials/image-classification-custom/cleanup#delete_resources) to delete the resources from console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c4dd0f5d2a21"
},
"outputs": [],
"source": [
"# Delete Experiment.\n",
"delete_experiments = True\n",
"if delete_experiments:\n",
" experiments_list = aiplatform.Experiment.list()\n",
" for experiment in experiments_list:\n",
" if experiment.resource_name == experiment_name:\n",
" print(experiment.resource_name)\n",
" experiment.delete()\n",
" break\n",
"\n",
"print(\"***\" * 10)\n",
"\n",
"# Delete Endpoint.\n",
"delete_endpoint = True\n",
"# If force is set to True, all deployed models on this\n",
"# Endpoint will be first undeployed.\n",
"if delete_endpoint:\n",
" for endpoint in aiplatform.Endpoint.list():\n",
" if endpoint.resource_name == tuned_model:\n",
" print(endpoint.resource_name)\n",
" endpoint.delete(force=True)\n",
" break\n",
"\n",
"print(\"***\" * 10)\n",
"\n",
"# Delete Cloud Storage Bucket.\n",
"delete_bucket = True\n",
"if delete_bucket:\n",
" ! gsutil -m rm -r $BUCKET_URI"
]
}
],
"metadata": {
"colab": {
"name": "sft_gemini_summarization.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}