{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OsXAs2gcIpbC"
},
"outputs": [],
"source": [
"# Copyright 2024 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QN61Ug4hLby5"
},
"source": [
" # Compare Generative AI Models | Gen AI Evaluation SDK Tutorial\n",
"\n",
" <table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fevaluation%2Fcompare_generative_ai_models.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/evaluation/compare_generative_ai_models.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"\n",
"<div style=\"clear: both;\"></div>\n",
"\n",
"<b>Share to:</b>\n",
"\n",
"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a> "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "G8AQH4hNLby6"
},
"source": [
"| | |\n",
"|-|-|\n",
"| Author(s) | [Jason Dai](https://github.com/jsondai) [Bo Zheng](https://github.com/coolalexzb) |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7Kt6bwP9Lby6"
},
"source": [
"## Overview\n",
"\n",
"In this tutorial, you will learn how to use the *Vertex AI Python SDK for Gen AI Evaluation Service* to score and evaluate different generative AI models on a specific evaluation task, `EvalTask`. Then visualize and compare the evaluation results for the `EvalTask` to select a generative model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PwvmywfkLby6"
},
"source": [
"## Getting Started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KZIpM3ASLby6"
},
"source": [
"### Install Vertex AI Python SDK for Gen AI Evaluation Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bR0rvHA3Lby6"
},
"outputs": [],
"source": [
"%pip install --upgrade --user --quiet google-cloud-aiplatform[evaluation] plotly"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SY6KLk8cLby6"
},
"source": [
"### Restart runtime\n",
"To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which restarts the current kernel.\n",
"\n",
"The restart might take a minute or longer. After it's restarted, continue to the next step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pVN_5UyRLby6"
},
"outputs": [],
"source": [
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "imFE2o-3Lby6"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uyY5AU-fLby7"
},
"source": [
"### Authenticate your notebook environment (Colab only)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VRFZFC6OLby7"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YSMfLP1TLby7"
},
"source": [
"### Set Google Cloud project information and initialize Vertex AI SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "47MSzXn-Lby7"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
"EXPERIMENT_NAME = \"eval-sdk-model-selection\" # @param {type:\"string\"}\n",
"\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" raise ValueError(\"Please set your PROJECT_ID\")\n",
"\n",
"import vertexai\n",
"\n",
"vertexai.init(project=PROJECT_ID, location=LOCATION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dvhI92xhQTzk"
},
"source": [
"### Import libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qP4ihOCkEBje"
},
"outputs": [],
"source": [
"import inspect\n",
"\n",
"import pandas as pd\n",
"from vertexai.evaluation import EvalTask, MetricPromptTemplateExamples, PairwiseMetric\n",
"from vertexai.generative_models import GenerativeModel, HarmBlockThreshold, HarmCategory\n",
"from vertexai.preview.evaluation import notebook_utils"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AIl7R_jBUsaC"
},
"source": [
"## Compare and Select Generative Models\n",
"\n",
"You can define an `EvalTask` with pointwise metrics and an evaluation dataset, and then conduct a controlled experiment by running the evaluation multiple times with the same setup, but each run utilizing a different model. This allows you to isolate the impact of the model on the results, and ensuring consistent conditions for comparison."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Qvt4ydMMQiMP"
},
"source": [
"### Define a Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hUGFuYawEJYX"
},
"outputs": [],
"source": [
"instruction = \"Summarize the following article\"\n",
"\n",
"context = [\n",
" \"To make a classic spaghetti carbonara, start by bringing a large pot of salted water to a boil. While the water is heating up, cook pancetta or guanciale in a skillet with olive oil over medium heat until it's crispy and golden brown. Once the pancetta is done, remove it from the skillet and set it aside. In the same skillet, whisk together eggs, grated Parmesan cheese, and black pepper to make the sauce. When the pasta is cooked al dente, drain it and immediately toss it in the skillet with the egg mixture, adding a splash of the pasta cooking water to create a creamy sauce.\",\n",
" \"Preparing a perfect risotto requires patience and attention to detail. Begin by heating butter in a large, heavy-bottomed pot over medium heat. Add finely chopped onions and minced garlic to the pot, and cook until they're soft and translucent, about 5 minutes. Next, add Arborio rice to the pot and cook, stirring constantly, until the grains are coated with the butter and begin to toast slightly. Pour in a splash of white wine and cook until it's absorbed. From there, gradually add hot chicken or vegetable broth to the rice, stirring frequently, until the risotto is creamy and the rice is tender with a slight bite.\",\n",
" \"For a flavorful grilled steak, start by choosing a well-marbled cut of beef like ribeye or New York strip. Season the steak generously with kosher salt and freshly ground black pepper on both sides, pressing the seasoning into the meat. Preheat a grill to high heat and brush the grates with oil to prevent sticking. Place the seasoned steak on the grill and cook for about 4-5 minutes on each side for medium-rare, or adjust the cooking time to your desired level of doneness. Let the steak rest for a few minutes before slicing against the grain and serving.\",\n",
" \"Creating a creamy homemade tomato soup is a comforting and simple process. Begin by heating olive oil in a large pot over medium heat. Add diced onions and minced garlic to the pot and cook until they're soft and fragrant. Next, add chopped fresh tomatoes, chicken or vegetable broth, and a sprig of fresh basil to the pot. Simmer the soup for about 20-30 minutes, or until the tomatoes are tender and falling apart. Remove the basil sprig and use an immersion blender to puree the soup until smooth. Season with salt and pepper to taste before serving.\",\n",
" \"To bake a decadent chocolate cake from scratch, start by preheating your oven to 350°F (175°C) and greasing and flouring two 9-inch round cake pans. In a large mixing bowl, cream together softened butter and granulated sugar until light and fluffy. Beat in eggs one at a time, making sure each egg is fully incorporated before adding the next. In a separate bowl, sift together all-purpose flour, cocoa powder, baking powder, baking soda, and salt. Divide the batter evenly between the prepared cake pans and bake for 25-30 minutes, or until a toothpick inserted into the center comes out clean.\",\n",
"]\n",
"\n",
"reference = [\n",
" \"The process of making spaghetti carbonara involves boiling pasta, crisping pancetta or guanciale, whisking together eggs and Parmesan cheese, and tossing everything together to create a creamy sauce.\",\n",
" \"Preparing risotto entails sautéing onions and garlic, toasting Arborio rice, adding wine and broth gradually, and stirring until creamy and tender.\",\n",
" \"Grilling a flavorful steak involves seasoning generously, preheating the grill, cooking to desired doneness, and letting it rest before slicing.\",\n",
" \"Creating homemade tomato soup includes sautéing onions and garlic, simmering with tomatoes and broth, pureeing until smooth, and seasoning to taste.\",\n",
" \"Baking a decadent chocolate cake requires creaming butter and sugar, beating in eggs and alternating dry ingredients with buttermilk before baking until done.\",\n",
"]\n",
"\n",
"\n",
"eval_dataset = pd.DataFrame(\n",
" {\n",
" \"context\": context,\n",
" \"instruction\": [instruction] * len(context),\n",
" \"reference\": reference,\n",
" }\n",
")"
]
},
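{
"cell_type": "markdown",
"metadata": {},
"source": [
"Preview the dataset to confirm that the `context`, `instruction`, and `reference` columns are populated as expected:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preview the first rows of the evaluation dataset.\n",
"eval_dataset.head()"
]
},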
{
"cell_type": "markdown",
"metadata": {
"id": "FJa2mCBQJnnj"
},
"source": [
"### Create an Evaluation Task"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "U5P8TDQYHXZs"
},
"source": [
"#### Documentation\n",
"\n",
"Documentation and example usages for the `EvalTask`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Szik-lQXosuz"
},
"outputs": [],
"source": [
"print(f\"{EvalTask.__name__}:\\n{inspect.getdoc(EvalTask)}\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bXtJprVEG_v6"
},
"source": [
"#### Define EvalTask & Experiment\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8S9aPoVscp1A"
},
"outputs": [],
"source": [
"summarization_eval_task = EvalTask(\n",
" dataset=eval_dataset,\n",
" metrics=[\n",
" MetricPromptTemplateExamples.Pointwise.TEXT_QUALITY,\n",
" MetricPromptTemplateExamples.Pointwise.FLUENCY,\n",
" MetricPromptTemplateExamples.Pointwise.SAFETY,\n",
" MetricPromptTemplateExamples.Pointwise.VERBOSITY,\n",
" ],\n",
" experiment=EXPERIMENT_NAME,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FiFc4h6CR9EM"
},
"source": [
"### Compare Gemini-2.0-Flash to Gemini-2.0-Flash-Lite\n",
"\n",
"Evaluate **Gemini-2.0-Flash** model and **Gemini-2.0-Flash-Lite** model on the text summarization task defined above using Gen AI Eval Service SDK"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dlK2Ki8dEQMs"
},
"source": [
"#### Model settings\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7hHwD6DLaEVp"
},
"outputs": [],
"source": [
"generation_config = {\n",
" \"max_output_tokens\": 128,\n",
" \"temperature\": 0.4,\n",
"}\n",
"\n",
"safety_settings = {\n",
" HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,\n",
"}\n",
"\n",
"gemini_model_20 = GenerativeModel(\n",
" \"gemini-2.0-flash\",\n",
" generation_config=generation_config,\n",
" safety_settings=safety_settings,\n",
")\n",
"\n",
"gemini_model_15 = GenerativeModel(\n",
" \"gemini-2.0-flash-lite\",\n",
" generation_config=generation_config,\n",
" safety_settings=safety_settings,\n",
")\n",
"\n",
"models = {\n",
" \"gemini-2.0-flash-lite\": gemini_model_15,\n",
" \"gemini-2.0-flash\": gemini_model_20,\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BwsdE2QmHafi"
},
"source": [
"#### Running evaluation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KfG9JG9VHaNw"
},
"outputs": [],
"source": [
"## Design text prompt template\n",
"# For more task-specific prompt guidance, see https://cloud.google.com/vertex-ai/generative-ai/docs/text/text-prompts.\n",
"prompt_template = \"{instruction}. Article: {context}. Summary:\"\n",
"\n",
"eval_results = []\n",
"run_id = notebook_utils.generate_uuid(8)\n",
"\n",
"for model_name, model in models.items():\n",
" experiment_run_name = f\"eval-{model_name}-{run_id}\"\n",
"\n",
" eval_result = summarization_eval_task.evaluate(\n",
" model=model,\n",
" prompt_template=prompt_template,\n",
" experiment_run_name=experiment_run_name,\n",
" )\n",
"\n",
" eval_results.append((f\"Model {model_name}\", eval_result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_UlrmtlOKsv8"
},
"outputs": [],
"source": [
"for title, eval_result in eval_results:\n",
" notebook_utils.display_eval_result(title=title, eval_result=eval_result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "55FWSjhqSCZk"
},
"outputs": [],
"source": [
"for title, eval_result in eval_results:\n",
" notebook_utils.display_explanations(eval_result, metrics=[\"fluency\"])"
]
},
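{
"cell_type": "markdown",
"metadata": {},
"source": [
"Besides the rendered views above, each `EvalResult` carries its per-example scores in a `metrics_table` DataFrame that you can inspect or export directly. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import display\n",
"\n",
"# Inspect the per-example metrics table for each model's run.\n",
"for title, eval_result in eval_results:\n",
" print(title)\n",
" display(eval_result.metrics_table.head())"
]
},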
{
"cell_type": "markdown",
"metadata": {
"id": "zmMqFO4icFxq"
},
"source": [
"### Compare Eval Results\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9wsLHwXbCvAw"
},
"outputs": [],
"source": [
"notebook_utils.display_radar_plot(\n",
" eval_results,\n",
" metrics=[\"fluency\", \"text_quality\", \"safety\", \"verbosity\"],\n",
")"
]
},
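{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a complement to the radar plot, you can tabulate each run's `summary_metrics` side by side for a direct numeric comparison. A minimal sketch using pandas:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One column per model, one row per summary metric.\n",
"summary_df = pd.DataFrame(\n",
" {title: eval_result.summary_metrics for title, eval_result in eval_results}\n",
")\n",
"summary_df"
]
},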
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sR697aXj-Di1"
},
"outputs": [],
"source": [
"summarization_eval_task.display_runs()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6381b31a9b8a"
},
"source": [
"## Compare Two Models Side-by-side (SxS)\n",
"\n",
"To directly compare two models, you can define a `PairwiseMetric` within an `EvalTask` run. This approach allows for a head-to-head assessment of the models' performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f4da6941c3b3"
},
"outputs": [],
"source": [
"metric_name = \"pairwise_text_quality\"\n",
"\n",
"pairwise_text_quality_result = EvalTask(\n",
" dataset=eval_dataset,\n",
" metrics=[\n",
" PairwiseMetric(\n",
" metric=metric_name,\n",
" metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template(\n",
" metric_name\n",
" ),\n",
" # Baseline model for pairwise comparison\n",
" baseline_model=GenerativeModel(\n",
" \"gemini-2.0-flash\",\n",
" generation_config=generation_config,\n",
" safety_settings=safety_settings,\n",
" ),\n",
" ),\n",
" ],\n",
" experiment=EXPERIMENT_NAME,\n",
").evaluate(\n",
" # Specify candidate model for pairwise comparison\n",
" model=GenerativeModel(\n",
" \"gemini-2.0-flash\",\n",
" generation_config=generation_config,\n",
" safety_settings=safety_settings,\n",
" ),\n",
" prompt_template=prompt_template,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c649d9461e82"
},
"outputs": [],
"source": [
"notebook_utils.display_eval_result(\n",
" title=\"Side-by-side Eval Results\", eval_result=pairwise_text_quality_result\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ff0723f91b79"
},
"outputs": [],
"source": [
"notebook_utils.display_explanations(\n",
" pairwise_text_quality_result, metrics=[metric_name], num=1\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "25a2ba32223d"
},
"outputs": [],
"source": [
"candidate_model_win_rate = round(\n",
" pairwise_text_quality_result.summary_metrics[\n",
" f\"{metric_name}/candidate_model_win_rate\"\n",
" ]\n",
" * 100\n",
")\n",
"print(\n",
" f\"Win rate: Autorater prefers Candidate Model over Baseline Model {candidate_model_win_rate}% of time.\"\n",
")"
]
}
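,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pairwise summary metrics also report a win rate for the baseline model; assuming the SDK exposes it under the same key convention as `candidate_model_win_rate`, you can print it the same way:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Baseline model win rate; assumes the `baseline_model_win_rate` key follows\n",
"# the same naming convention as `candidate_model_win_rate` above.\n",
"baseline_model_win_rate = round(\n",
" pairwise_text_quality_result.summary_metrics[\n",
" f\"{metric_name}/baseline_model_win_rate\"\n",
" ]\n",
" * 100\n",
")\n",
"print(\n",
" f\"Win rate: Autorater prefers Baseline Model over Candidate Model {baseline_model_win_rate}% of the time.\"\n",
")"
]
}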
],
"metadata": {
"colab": {
"name": "compare_generative_ai_models.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}