{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ur8xi4C7S06n" }, "outputs": [], "source": [ "# Copyright 2025 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "JAPoU8Sm5E6e" }, "source": [ "# Evaluate videos with Gecko\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/tree/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fevaluation%2Fevaluate_videos_with_gecko.ipynb\">\n", " <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td> \n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/tree/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</table>\n", "\n", "<div style=\"clear: both;\"></div>\n", "\n", "<b>Share to:</b>\n", "\n", "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n", "</a>\n", "\n", "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n", "</a>\n", "\n", "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" 
src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n", "</a>\n", "\n", "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n", "</a>\n", "\n", "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_videos_with_gecko.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n", "</a>" ] }, { "cell_type": "markdown", "metadata": { "id": "UBAgFfnqvxy0" }, "source": [ " | | | | |\n", " |-|-|-|-|\n", " |Author(s): | [Greg Breard](https://github.com/gregbreard) | Anant Nawalgaria | Olivia Wiles |" ] }, { "cell_type": "markdown", "metadata": { "id": "tvgnzT1CKxrO" }, "source": [ "## Overview\n", "\n", "This CoLAB shows how to leverage the Vertex AI evaluation service in order to run [Gecko](https://arxiv.org/abs/2404.16820).\n", "\n", "As with a more standard rubric approach, Gecko proceeds in two stages: a rubric generation step followed by a validator step. The key difference is that the rubric is generated based on the prompt.\n", "This allows for a more fine-grained metric that can be customized to prompts with differing challenges.\n", "\n", "In more detail, Gecko proceeds as follows, with two key steps: the QA generation step (ie the rubric generation step) and then the VQA step (ie the validator step).\n", "\n", "## The rubric generation step\n", "Given a prompt, such as `A teddy bear riding a skateboard`, we prompt the Gemini model to generate a set of questions, answer choices and corresponding ground truth (GT) answer. The question is also tagged with a question type. Depending on the prompt, these questions can either be `yes`/`no` questions or multiple choice ones.\n", "\n", "`A teddy bear riding a skateboard` -->\n", "\n", "- `Q1: Is there a teddy bear? Choices: [yes, no]. GT Answer: yes. Tag: Object.`\n", "- `Q2: Is there a skateboard? Choices: [yes, no]. GT Answer: yes. Tag: Object.`\n", "- `Q3: Is the teddy bear riding a skateboard? Choices: [yes, no]. GT Answer: yes. Tag: Action.`\n", "\n", "## The validator step\n", "Given a generated image and the questions above, we query the Gemini model for each question to give an answer. We then check if it matches the GT answer, with a result of 1 if it matches and 0 if it does not. We aggregate these results to give a final overall score, which can be broken down into scores per question. We can also aggregate scores based on tags.\n", "\n", "For example, imagine we have a generated image `<image1>` which includes a teddy bear but no skateboard, and Gemini outputs the following results:\n", "\n", "- `<image1> Is there a teddy bear? GT Answer: yes. Result: 1.`\n", "- `<image1> Is there a skateboard? GT Answer: no. Result: 0.`\n", "- `<image1> Is the teddy bear riding a skateboard? GT Answer: no. Result: 0.`\n", "\n", "The final score will be `0.33` with a score of `0.5` for the question tag and `0.0` for the action tag.\n", "\n", "## Further exploration\n", "We provide two prompts, engineered for video and image generation tasks. 
Below, we show how to run Gecko for the video modality on a set of generations.\n", "\n", "However, these prompts can be modified and changed as suits a developer's needs. The quality can be analysed by exploring what questions are generated as well as the reliability of the validator step. Questions can also be manually added as desired for an application.\n", "\n", "## Steps\n", "\n", "1. Set up the environment.\n", "2. Define helper functions, prompt templates, and metric.\n", "3. Prepare the dataset for evaluation.\n", "4. Run the evaluation (including model inference).\n", "\n", "## Costs\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "- Vertex AI\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "61RBz8LLbxCR" }, "source": [ "# Get started" ] }, { "cell_type": "markdown", "metadata": { "id": "No17Cw5hgx12" }, "source": [ "### Install Vertex AI SDK for Python and other required packages\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tFy3H3aPgx12" }, "outputs": [], "source": [ "%pip install --upgrade --quiet google-cloud-aiplatform" ] }, { "cell_type": "markdown", "metadata": { "id": "R5Xep4W9lq-Z" }, "source": [ "### Restart runtime (Colab only)\n", "\n", "To use the newly installed packages, you must restart the runtime on Google Colab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XRvKdaPDTznN" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " import IPython\n", "\n", " app = IPython.Application.instance()\n", " app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "SbmM4z7FOBpM" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n", "</div>\n" ] }, { "cell_type": "markdown", "metadata": { "id": "dmWOrTJ3gx13" }, "source": [ "### Authenticate your notebook environment (Colab only)\n", "\n", "Authenticate your environment on Google Colab.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NyKGtVQjgx13" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " from google.colab import auth\n", "\n", " auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "DF4l8DTdWgPY" }, "source": [ "### Set Google Cloud project information and initialize Vertex AI SDK for Python\n", "\n", "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com). Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Nqwi-5ufWp_B" }, "outputs": [], "source": [ "PROJECT_ID = \"your-project-id\" # @param {type:\"string\"}\n", "LOCATION = \"us-central1\" # @param {type:\"string\"}\n", "\n", "\n", "import vertexai\n", "\n", "vertexai.init(project=PROJECT_ID, location=LOCATION)" ] }, { "cell_type": "markdown", "metadata": { "id": "EdvJRUWRNGHE" }, "source": [ "## Import libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "R0peQPa48oTg" }, "outputs": [], "source": [ "import pandas as pd\n", "from vertexai.preview.evaluation import (\n", " CustomOutputConfig,\n", " EvalTask,\n", " PointwiseMetric,\n", " RubricBasedMetric,\n", " RubricGenerationConfig,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "e-TDamAG9mkM" }, "source": [ "# Set up eval metrics for Gecko" ] }, { "cell_type": "markdown", "metadata": { "id": "SR2qRUm29otX" }, "source": [ "## Helper functions" ] }, { "cell_type": "markdown", "metadata": { "id": "1pjVIACIMYne" }, "source": [ "The outputs supported by Gecko are more sophisticated than the default outputs of predefined rubric based metrics. To handle this, custom parsing logic is required.\n", "\n", "The following code block defines 2 classes: `QARecord` and `QAResult`. The `QARecord` represents the questions created during rubric generation. The `QAResult` extends the `QARecord` with a result field that is populated after validation.\n", "\n", "There are also two parsing methods. The `parse_json_to_qa_records` method converts the text output of rubric generation to `QARecords` and the `parse_rubric_results` method extracts the answers from the validation step. These are passed into the metric definition and parsing is handled automatically during the generation and validation steps.\n", "\n", "Finally, the `compute_scores` method compares the `QARecord`s and rubric results to calculate a per row score and appends `QAResult`s and scores to the dataset.\n", "\n", "In addition, there are pretty printing methods provided to present the output in a human readable format." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GfLmFxkP-Kvr" }, "outputs": [], "source": [ "from collections.abc import Sequence\n", "from dataclasses import dataclass, field\n", "import json\n", "import re\n", "from typing import Any\n", "\n", "import numpy as np\n", "\n", "_QUESTION_REGEX = re.compile(r\"Question:(.*?)Verdict:\", re.DOTALL)\n", "_VERDICT_REGEX = re.compile(\"Verdict:(.*)\")\n", "_QUESTION_BLOCK_REGEX = re.compile(\"<question>(.*?)</question>\", re.DOTALL)\n", "_TABLE_STYLE = [\n", " {\n", " \"selector\": \"th\",\n", " \"props\": [\n", " (\"background-color\", \"#f2f2f2\"),\n", " (\"border\", \"1px solid gray\"),\n", " (\"color\", \"black\"),\n", " (\"font-size\", \"11pt\"),\n", " (\"text-align\", \"center\"),\n", " (\"word-break\", \"break-all\"),\n", " ],\n", " },\n", " {\"selector\": \"tr:nth-child(even)\", \"props\": [(\"background-color\", \"#f9f9f9\")]},\n", " {\"selector\": \"tr:nth-child(odd)\", \"props\": [(\"background-color\", \"white\")]},\n", " {\"selector\": \"tr:hover\", \"props\": [(\"background-color\", \"#94e6ff\")]},\n", " {\"selector\": \"td:hover\", \"props\": [(\"background-color\", \"#ffffb3\")]},\n", "]\n", "\n", "\n", "@dataclass(kw_only=True, frozen=True)\n", "class QARecord:\n", " \"\"\"A basic QA Record for storing question-answer pairs.\n", "\n", " Attributes:\n", " question: Question text.\n", " question_type: Category of question.\n", " gt_answer: Ground-truth answer to the question.\n", " answer_choices: Possible answers for multiple choice questions.\n", " justification: How the question relates to the prompt.\n", " \"\"\"\n", "\n", " question: str = \"\"\n", " question_type: str = \"\"\n", " gt_answer: str = \"\"\n", " answer_choices: Sequence[str] = field(default_factory=list)\n", " justification: str = \"\"\n", "\n", "\n", "class QAResult(QARecord):\n", " \"\"\"A basic QA Result for storing question-answer results.\n", "\n", " Attributes:\n", " result: The result of answering the question.\n", " \"\"\"\n", "\n", " result: str = \"\"\n", "\n", " def __init__(self, qa_record: QARecord, result: str):\n", " super().__init__(\n", " question=qa_record.question,\n", " gt_answer=qa_record.gt_answer,\n", " answer_choices=qa_record.answer_choices,\n", " justification=qa_record.justification,\n", " )\n", " self.result = result\n", "\n", "\n", "def parse_json_to_qa_records(json_response: str) -> dict[str, Any]:\n", " \"\"\"\n", " Parse the JSON response and convert it to a questions and QARecords.\n", "\n", " Args:\n", " json_response: JSON string containing the QA data.\n", "\n", " Returns:\n", " Dict with keywords, questions, and QARecord objects.\n", "\n", " Raises:\n", " json.JSONDecodeError: If JSON parsing fails\n", " KeyError: If expected keys are missing from the JSON structure\n", " \"\"\"\n", " json_response = re.sub(\n", " r\"(.*```json|```.*)\",\n", " \"\",\n", " json_response.strip(),\n", " )\n", " try:\n", " # Parse JSON string to Python object\n", " data = json.loads(json_response)\n", " qa_records = []\n", "\n", " # Process each QA pair in the QAs array\n", " rubrics = []\n", " for qa in data[\"qas\"]:\n", " record = QARecord(\n", " question=qa[\"question\"],\n", " gt_answer=qa[\"answer\"],\n", " answer_choices=qa[\"choices\"],\n", " justification=qa[\"justification\"],\n", " )\n", " qa_records.append(record)\n", " rubrics.append(\n", " f\"<question>{record.question}<choices>{','.join(record.answer_choices)}\"\n", " )\n", " return {\n", " \"questions\": \"\\n\".join(rubrics),\n", " \"keywords\": 
data[\"keywords\"],\n", " \"qa_records\": qa_records,\n", " }\n", " except json.JSONDecodeError as e:\n", " return {\n", " \"questions\": f\"Error decoding JSON response: {str(e)}\",\n", " \"keywords\": \"\",\n", " \"qa_records\": json_response,\n", " }\n", " except KeyError as e:\n", " return {\n", " \"questions\": f\"Missing required key in JSON structure: {str(e)}\",\n", " \"keywords\": \"\",\n", " \"qa_records\": json_response,\n", " }\n", "\n", "\n", "def parse_rubric_results(results: list[str]) -> dict[str, Any]:\n", " \"\"\"Parses the rubric results from the rubric validator response.\"\"\"\n", " rubric_results = {}\n", " for result in results:\n", " rubric_verdicts = _parse_question_blocks(result)\n", " for rubric, verdict in rubric_verdicts:\n", " rubric_results[rubric.lower()] = verdict.lower()\n", " return {\"rubric_results\": rubric_results}\n", "\n", "\n", "def _parse_question_blocks(txt: str) -> list[tuple[str, bool]]:\n", " \"\"\"Parses the question blocks from the rubric validator response.\"\"\"\n", " responses = []\n", " question_blocks = _QUESTION_BLOCK_REGEX.findall(txt)\n", " if not question_blocks:\n", " question_blocks = [txt]\n", " for block in question_blocks:\n", " q = _parse_question(block)\n", " v = _parse_verdict(block)\n", " if q is not None and v is not None:\n", " responses.append((q, v))\n", " return responses\n", "\n", "\n", "def _parse_question(txt: str):\n", " \"\"\"Parses the question from the rubric validator response.\"\"\"\n", " if not isinstance(txt, str) or not txt:\n", " return None\n", " try:\n", " txt = txt.split(\"Verdict:\")[0]\n", " if \"Question:\" in txt:\n", " return txt.split(\"Question:\")[-1].strip()\n", " if question := _QUESTION_REGEX.findall(txt):\n", " return question[0].strip()\n", " except Exception as e:\n", " print(f\"Failed to parse question: {str(e)}\")\n", " return None\n", "\n", "\n", "def _parse_verdict(txt: str):\n", " \"\"\"Parses the verdict from the rubric validator response.\"\"\"\n", " if not isinstance(txt, str) or not txt:\n", " return None\n", " try:\n", " if verdict := _VERDICT_REGEX.findall(txt):\n", " verdict = verdict[0].strip()\n", " return verdict\n", " except Exception as e:\n", " print(f\"Failed to parse question: {str(e)}\")\n", " return None\n", "\n", "\n", "def compute_scores(df: \"pd.DataFrame\") -> \"pd.DataFrame\":\n", " \"\"\"Computes scores for each row based on QA results.\"\"\"\n", " qa_results = []\n", " final_scores = []\n", " for idx, row in df.iterrows():\n", " rubric_results = {}\n", " for key in row.keys():\n", " if \"rubric_results\" in key:\n", " rubric_results = row[key]\n", " scores = []\n", " results = []\n", " for qa in row[\"qa_records\"]:\n", " q = qa.question.lower()\n", " if q in rubric_results:\n", " if qa.gt_answer.lower() in rubric_results[q]:\n", " results.append(QAResult(qa, f\"{qa.gt_answer} ✓\"))\n", " scores.append(1)\n", " else:\n", " results.append(QAResult(qa, f\"{rubric_results[q]} 🗴\"))\n", " scores.append(0)\n", " else:\n", " results.append(QAResult(qa, \"no result\"))\n", " scores.append(0)\n", " qa_results.append(results)\n", " final_scores.append(np.mean(scores))\n", " df_with_score = df.assign(qa_results=qa_results, final_score=final_scores)\n", " return df_with_score\n", "\n", "\n", "def pretty_print_qa_records_df(\n", " df: \"pd.DataFrame\", hide_columns: list[str]\n", ") -> \"pd.Styler\":\n", " \"\"\"Prints QA records data frame as stylized HTML table.\"\"\"\n", " styled_df = df.copy()\n", " for col in df.columns:\n", " if (\n", " isinstance(df[col][0], 
list)\n", " and df[col][0]\n", " and isinstance(df[col][0][0], QARecord)\n", " ):\n", " styled_df[col] = styled_df[col].apply(\n", " lambda x: _qa_records_to_html_table(x)\n", " )\n", " styles = _TABLE_STYLE.copy()\n", " styles.append(\n", " {\n", " \"selector\": \"td\",\n", " \"props\": [\n", " (\"border\", \"1px solid gray\"),\n", " (\"color\", \"black\"),\n", " (\"min-width\", \"100px\"),\n", " (\"text-align\", \"center\"),\n", " ],\n", " }\n", " )\n", " return (\n", " styled_df.style.hide(axis=\"index\")\n", " .hide(subset=hide_columns, axis=1)\n", " .set_table_styles(styles)\n", " )\n", "\n", "\n", "def pretty_print_result_df(df: \"pd.DataFrame\", hide_columns: list[str]) -> \"pd.Styler\":\n", " \"\"\"Prints results data frame as stylized HTML table.\"\"\"\n", " styled_df = df.copy()\n", " for col in df.columns:\n", " if (\n", " isinstance(df[col][0], list)\n", " and df[col][0]\n", " and isinstance(df[col][0][0], QARecord)\n", " ):\n", " styled_df[col] = styled_df[col].apply(\n", " lambda x: _qa_records_to_html_table(x)\n", " )\n", " styles = _TABLE_STYLE.copy()\n", " styles.append(\n", " {\n", " \"selector\": \"td\",\n", " \"props\": [\n", " (\"border\", \"1px solid gray\"),\n", " (\"color\", \"black\"),\n", " (\"min-width\", \"120px\"),\n", " (\"text-align\", \"center\"),\n", " ],\n", " }\n", " )\n", " return (\n", " styled_df.style.hide(axis=\"index\")\n", " .hide(subset=hide_columns, axis=1)\n", " .format({\"final_score\": \"{:,.1f}\"})\n", " .set_table_styles(styles)\n", " )\n", "\n", "\n", "def _qa_records_to_html_table(data: list[QARecord]) -> str:\n", " \"\"\"Converts a list to an HTML table.\"\"\"\n", " if not data:\n", " return \"<i>No data to display.</i>\"\n", " html_table = \"<table style='border-collapse: collapse'><thead><tr>\"\n", " # Extract headers from the first element.\n", " keys = [\"question\", \"answer_choices\", \"gt_answer\"]\n", " if isinstance(data[0], QAResult):\n", " keys.append(\"result\")\n", " else:\n", " keys.append(\"justification\")\n", " for key in keys:\n", " html_table += f\"<th>{key}</th>\"\n", " html_table += \"</tr></thead><tbody>\"\n", " # Add rows\n", " for item in data:\n", " html_table += \"<tr>\"\n", " for key in keys:\n", " html_table += f\"<td>{item.__dict__[key]}</td>\"\n", " html_table += \"</tr>\"\n", " html_table += \"</tbody></table>\"\n", " return html_table" ] }, { "cell_type": "markdown", "metadata": { "id": "tLn3FteP9ttL" }, "source": [ "## Prompt Templates" ] }, { "cell_type": "markdown", "metadata": { "id": "k6bau3qOFInw" }, "source": [ "This cell defines the prompt templates that will be used for evaluation. The `RUBRIC_GENERATION_PROMPT` is used to generate questions relevant to the user input. The `RUBRIC_VALIDATOR_PROMPT` is then used to answer the questions for a generated video." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5--M9Kcz-LFt" }, "outputs": [], "source": [ "RUBRIC_GENERATION_PROMPT = \"\"\"Given a video description and the groundable words\n", "in it, generate multiple-choice questions that verify if the video description\n", "is correct.\n", "\n", "The goal is to ask questions about entities, objects, attributes, actions, colors,\n", "spatial relations, temporal relations, styles and scenes, when these are present\n", "in the description.\n", "\n", "Make sure that all options are substantially different from each other and only\n", "one option can be the correct one based on the description. 
Do not include other\n", "parts of the description as an incorrect option.\n", "\n", "Justify why the other options cannot be true based on the description and\n", "question. Also, make sure that the question cannot be answered correctly only\n", "based on common sense and without reading the description.\n", "\n", "Each generated question should be independent of the other ones, and it should be\n", "able to be understood without knowing the other questions; avoid referring to\n", "entities/objects/places from previous questions.\n", "\n", "Finally, avoid asking very general questions, such as 'What is in the video?',\n", "or 'Name a character in the video'.\n", "\n", "Generate the multiple-choice questions in the exact same format as the examples\n", "that follow. Do not add asterisks, white spaces, or any other reformatting and\n", "explanation that deviate from the formatting of the following examples.\n", "\n", "**Important**: There should be one and only one question-answer pair per keyword.\n", "**Important**: The answer value MUST BE only one of the following letters: a, b, c, or d, and it MUST ALWAYS be lowercase!\n", "\n", "\n", "Given a \"description\", you must respond using this format:\n", "{\n", " \"keywords\": \"Your {1}[itemized] {2}[keywords]\",\n", " \"qas\": [\n", " The list of QAs in the format \"{\n", " \"question_id\": i,\n", " \"question\": \"the question\",\n", " \"choices\": [\"a) option 1\", \"b) option 2\", \"c) option 3\", \"d) option 4\"],\n", " \"justification\": \"why is this about the keyword\",\n", " \"answer\": \"the identifier of the right answer (i.e. a, b, c, or d)\",\n", " }\",\n", " ]\n", "}\n", "\n", "===\n", "Some examples are below.\n", "\n", "Description:\n", "\n", "Close up of grapes on a rotating table.\n", "Answer:\n", "{\n", " \"keywords\": \"{1}[Close up, style, 1.0] of {2}[grapes, object, 1.0] {3}[on a {4}[rotating, action, 1.0] {5}[table, spatial relation, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1, \"question\": \"How is the object in the video shot by the camera?\", \"choices\": [\"a) long shot\", \"b) close up\", \"c) glimpse\", \"d) slow motion\"],\n", " \"justification\": \"The grapes, which are the main object displayed in the video ({2}), are presented with a close up ({1}). Given this, none of the other options can be correct as they are the opposite or contradict the description.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 2, \"question\": \"What is the object that the camera focuses on during the video?\", \"choices\": [\"a) table\", \"b) pears\", \"c) blackberries\", \"d) grapes\"],\n", " \"justification\": \"the close up is happening on the grapes ({2}). A table is also present in the video ({5}) but it is not the main focus (close up) of the video. Pears and blackberries are not present in the video.\",\n", " \"answer\": \"d\"\n", " },\n", " {\n", " \"question_id\": 3, \"question\": \"Where are the grapes placed in the video?\", \"choices\": [\"a) table\", \"b) chair\", \"c) bowl\", \"d) plate\"],\n", " \"justification\": \"the grapes are placed on a table ({3}). 
Chair is not correct, but it is similar furniture to table and could be found next to it, and bowl and plate are reasonable answers for placing grapes but not true here based on the description.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 4, \"question\": \"What movement does the table in the video follow?\", \"choices\": [\"a) it stays still\", \"b) it is moved to the right\", \"c) it is moved to the left\", \"d) it rotates\"],\n", " \"justification\": \"the table is rotating ({4}, {5}). Staying still is typically how a table is depicted in videos, and moving it right or left are other movements that we often see but they are not true according to the description.\",\n", " \"answer\": \"d\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "Turtle swimming in ocean.\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"{1}[Turtle, entity, 1.0] {2}[swimming, action, 1.0] {3}[in ocean, spatial relation, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"What animal is present in the video?\",\n", " \"choices\": [\"a) fish\", \"b) dolphin\", \"c) turtle\", \"d) whale\"],\n", " \"justification\": \"turtle is the correct answer ({1}). All of fish, dolphin and whale are animals that live and swim in the ocean, so they are reasonable responses to such a question, but not the correct ones according to the description.\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"What is the turtle doing in the video?\",\n", " \"choices\": [\"a) swims\", \"b) walks\", \"c) stays still\", \"d) moves the legs statically\"],\n", " \"justification\": \"the turtle is swimming ({2}). Staying still, walking or moving the legs without walking are typical movements that a turtle does, but they are not true according to the description.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"Where is the video taking place?\",\n", " \"choices\": [\"a) in the beach\", \"b) in the ocean\", \"c) in a boat\", \"d) in a lake\"],\n", " \"justification\": \"the turtle is swimming in the ocean ({3}). All other options are not true, but they would look similar to an ocean and they are of similar topic.\",\n", " \"answer\": \"b\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "A fat rabbit wearing a purple robe walking through a fantasy landscape.\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"A {1}[fat, attribute, 1.0] {2}[rabbit, entity, 1.0] {3}[wearing a {4}[purple, color, 1.0] robe, attribute, 1.0] {5}[walking, action, 1.0] through a {6}[fantasy landscape, scene, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"What is the most appropriate description for the animal of the video?\",\n", " \"choices\": [\"a) thin\", \"b) regular\", \"c) slim\", \"d) fat\"],\n", " \"justification\": \"the rabbit in the video is fat ({1}). The options thin and slim are opposite of the attribute mentioned in the description and the regular adjective checks whether it is obvious that the rabbit has a weight above normal.\",\n", " \"answer\": \"d\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"Who wears a robe in the video?\",\n", " \"choices\": [\"a) rabbit\", \"b) hare\", \"c) squirrel\", \"d) rat\"],\n", " \"justification\": \"the rabbit is the animal that wears a robe in the video ({2}). 
Hare is an animal very similar to rabbit, and the other two options (squirrel and rat) are also similar but not true according to the description.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"What is the rabbit wearing in the video?\",\n", " \"choices\": [\"a) nothing\", \"b) dress\", \"c) robe\", \"d) jumpsuit\"],\n", " \"justification\": \"the rabbit is wearing a robe ({3}). Nothing is what normally an animal is wearing, and the options dress and jumpsuit are similar to the robe but not true according to the description.\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 4,\n", " \"question\": \"What is the color of the clothing that the rabbit wears in the video?\",\n", " \"choices\": [\"a) purple\", \"b) blue\", \"c) pink\", \"d) green\"],\n", " \"justification\": \"the rabbit is wearing a purple robe ({4}). the options blue, pink and green are colors similar to purple.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 5,\n", " \"question\": \"What is the rabbit doing in the video?\",\n", " \"choices\": [\"a) running\", \"b) walking\", \"c) standing\", \"d) jumping\"],\n", " \"justification\": \"the rabbit is walking through a fantasy landscape ({5}, {6}). The options running and standing are similar to walking, and jumping is an action that could be performed by a rabbit, but not true according to the description.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 6,\n", " \"question\": \"Where is the video taking place?\",\n", " \"choices\": [\"a) fields\", \"b) countryside\", \"c) fantasy landscape\", \"d) mountains\"],\n", " \"justification\": \"the rabbit is walking through a fantasy landscape ({6}). The options fields, countryside, and mountains are different types of landscapes, but they are real-world scenes instead of fantasy ones.\",\n", " \"answer\": \"c\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "A beautiful coastal beach in spring, waves lapping on sand by Hokusai, in the style of Ukiyo\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"A {1}[beautiful coastal beach, scene, 1.0] {2}[in spring, temporal relation, 1.0], {3}[waves, scene, 1.0] {4}[lapping, action, 1.0] {5}[on sand, spatial relation, 1.0] {6}[by Hokusai, style, 1.0], {7}[in the style of Ukiyo, style, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"Where is the video taking place?\",\n", " \"choices\": [\"a) cliffs\", \"b) harbor\", \"c) coastal park\", \"d) coastal beach\"],\n", " \"justification\": \"the main scene is a beautiful coastal beach ({1}). The options cliffs, harbor, and coastal park are similar to coastal beach but not true according to the description.\",\n", " \"answer\": \"d\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"Which season is most likely during the video?\",\n", " \"choices\": [\"a) spring\", \"b) summer\", \"c) autumn\", \"d) winter\"],\n", " \"justification\": \"the video shows a coastal beach in spring ({2}). The options summer, autumn and winter are other seasons that are not true according to the description.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"What is the level of movement of the sea during the video?\",\n", " \"choices\": [\"a) calm\", \"b) wavy\", \"c) slightly moving\", \"d) ripply\"],\n", " \"justification\": \"the sea is wavy ({3}). 
The options calm, slightly moving, and ripply are different levels of movement of the sea and they are all different enough from wavy.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 4,\n", " \"question\": \"What is the movement of the sea during the video?\",\n", " \"choices\": [\"a) gentle waves are coming to the shore\", \"b) there is a tide\", \"c) waves are lapping on the shore\", \"d) there are sea ripples\"],\n", " \"justification\": \"the sea is lapping on the shore ({4}). The other provided options are either of less intensity (gentle waves are coming to the shore, there are sea ripples) or the exact opposite (there is a tide).\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 5,\n", " \"question\": \"Where does the sea move to during the video?\",\n", " \"choices\": [\"a) sand\", \"b) rocks\", \"c) cliffs\", \"d) pebbles\"],\n", " \"justification\": \"the waves are lapping on sand ({5}). The options pebbles, rocks, and cliffs are different types of ground typically by the sea and have different levels of solidity.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 6,\n", " \"question\": \"Which artist's work does the theme of the scene resemble?\",\n", " \"choices\": [\"a) Utamaro\", \"b) Hokusai\", \"c) Hiroshige\", \"d) Yoshitoshi\"],\n", " \"justification\": \"the theme of the scene resembles a painting of Hokusai. The other options are other Japanese artists that are similar to Hokusai.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 7,\n", " \"question\": \"Which Japanese painting style is most similar to the video?\",\n", " \"choices\": [\"a) Ukiyo\", \"b) Nihonga\", \"c) Sumi\", \"d) ink calligraphy\"],\n", " \"justification\": \"the video scene is in the style of Ukiyo ({7}). The other options are other types of Japanese painting styles that are not similar to the video according to the description.\",\n", " \"answer\": \"a\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "Mysterious scene of Sherlock Holmes investigating a crime scene at 221B Baker Street, forced perspective\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"{1}[Mysterious scene, style, 1.0] of {2}[Sherlock Holmes, entity, 1.0] {3}[investigating, action, 1.0] a {4}[crime scene, scene, 1.0] {5}[at 221B Baker Street, spatial relation, 1.0], {6}[forced perspective, style, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"What is the vibe of the video?\",\n", " \"choices\": [\"a) light\", \"b) mysterious\", \"c) scary\", \"d) calm\"],\n", " \"justification\": \"the vibe of the video is mysterious ({1}). The options light and calm are opposite vibes to mysterious, and scary is similar to mysterious but more exaggerated and not true according to the description.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"What is the name of the person investigating the scene in the video?\",\n", " \"choices\": [\"a) Sherlock Holmes\", \"b) Watson\", \"c) John Luther\", \"d) Hercule Poirot\"],\n", " \"justification\": \"the video shows Sherlock Holmes in the scene ({2}). 
Watson is another character from the Sherlock Holmes show but not the correct one according to the description, and John Luther and Hercule Poirot are other detective characters from shows.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"What is the man doing in the video?\",\n", " \"choices\": [\"a) walking in a street\", \"b) walking indoors\", \"c) investigating a scene\", \"d) leaving a scene\"],\n", " \"justification\": \"the man is investigating the scene ({3}). The options walking in a street and walking indoors are general descriptions but not specific enough to the contents of the video, and leaving a scene is the opposite of investigating.\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 4,\n", " \"question\": \"Where is the video taking place?\",\n", " \"choices\": [\"a) house\", \"b) basement\", \"c) street\", \"d) crime scene\"],\n", " \"justification\": \"the video is taking place in a crime scene ({4}). The other provided options are common places, but not as specific as a crime scene.\",\n", " \"answer\": \"d\"\n", " },\n", " {\n", " \"question_id\": 5,\n", " \"question\": \"Which street appears in the video?\",\n", " \"choices\": [\"a) Liverpool\", \"b) Baker\", \"c) Oxford\", \"d) Bond\"],\n", " \"justification\": \"the street appearing in the video is Baker Street ({5}). The options Liverpool, Baker, Oxford and Bond are different names of streets.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 6,\n", " \"question\": \"What is the perspective of the video?\",\n", " \"choices\": [\"a) close up\", \"b) forced\", \"c) farther away\", \"d) top down\"],\n", " \"justification\": \"the perspective of the video is forced. The other options are other perspective styles in video.\",\n", " \"answer\": \"b\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "Larry David costumed as Bob Ross is drawing a nature scene but spills the paint\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"{1}[Larry David, entity, 1.0] as {2}[Bob Ross, entity, 1.0] {3}[is drawing, action, 1.0] a {4}[nature scene, object, 1.0] {5}[but, temporal relation, 1.0] {6}[spills, spatial relation, 1.0], {7}[the paint, object, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"Who is the character that draws a painting in the video?\",\n", " \"choices\": [\"a) Bob Ross\", \"b) Larry David\", \"c) Bill Alexander\", \"d) George Costanza\"],\n", " \"justification\": \"Larry David is present in the video ({1}). The option Bob Ross is the person that Larry is dressed as, Bill Alexander is a painter with similar style as Bob Ross, and George Costanza is a character similar to Larry David.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"Who is the painter of the video dressed as?\",\n", " \"choices\": [\"a) Bill Alexander\", \"b) William Alexander\", \"c) Thomas Kinkade\", \"d) Bob Ross\"],\n", " \"justification\": \"the main character is dressed like Bob Ross ({2}). The other options are all painters that are similar to Bob Ross.\",\n", " \"answer\": \"d\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"What is the painter doing in the video?\",\n", " \"choices\": [\"a) looking at a painting\", \"b) sitting next to a painting\", \"c) drawing a painting\", \"d) hanging up a painting\"],\n", " \"justification\": \"the man is drawing a painting ({3}). 
The other options still involve a painting; looking at and sitting next to a painting are more static, and hanging up a painting is a different action from drawing the painting.\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 4,\n", " \"question\": \"What is depicted in the painting in the video?\",\n", " \"choices\": [\"a) nature scene\", \"b) abstract art\", \"c) geometric shapes\", \"d) blank canvas\"],\n", " \"justification\": \"the painting in the video depicts a nature scene ({4}). The other options are all different types of paintings that are mutually exclusive with depicting a nature scene.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 5,\n", " \"question\": \"What is happening at the end of the video?\",\n", " \"choices\": [\"a) the man looks at the painting\", \"b) the man spills the paint\", \"c) the man draws the painting\", \"d) the man leaves the painting\"],\n", " \"justification\": \"towards the end of the video the man spills the paint ({5}, {6}). The option of drawing the painting happens earlier in the video, and the other two options are alternative actions around the painting.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 6,\n", " \"question\": \"What does the man overturn at the end of the video?\",\n", " \"choices\": [\"a) the paint\", \"b) the painting\", \"c) the hat\", \"d) the brushes\"],\n", " \"justification\": \"the man overturns the paint. The option of the painting is another object present in the video, but not the correct one given the question, and the hat and brushes are related objects that are likely in the space in the video.\",\n", " \"answer\": \"a\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "Child swings high on tire swing\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"{1}[Child, entity, 1.0] {2}[swings, action, 1.0] {3}[high, spatial relation, 1.0] {4}[on tire swing, spatial relation, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"What is the age of the character in the video?\",\n", " \"choices\": [\"a) child\", \"b) young man\", \"c) baby\", \"d) old man\"],\n", " \"justification\": \"the main character of the video is a child ({1}). The options young man, baby and old man are characters of different ages.\",\n", " \"answer\": \"a\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"What is the child doing in the video?\",\n", " \"choices\": [\"a) sits on swing\", \"b) pushes the swing\", \"c) swings on swing\", \"d) walks away from the swing\"],\n", " \"justification\": \"the child swings on the swing ({2}). The option sits on the swing is similar, but it does not have any movement. The options pushes the swing and walks away from the swing require a different position of the child relative to the swing.\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"What is the child doing on the swing?\",\n", " \"choices\": [\"a) sits\", \"b) swings high\", \"c) moves slightly\", \"d) gets off\"],\n", " \"justification\": \"the child swings high on the swing ({3}). 
The options sits and moves slightly are different movements of different intensity that the child could have been doing on the swing, and the option gets off the swing is the opposite.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 4,\n", " \"question\": \"What is the child sitting on?\",\n", " \"choices\": [\"a) circular swing\", \"b) flat swing\", \"c) classic swing\", \"d) tire swing\"],\n", " \"justification\": \"The child sits on a tire swing ({4}). The other options are all different types of swings that are similar to tire swing.\",\n", " \"answer\": \"d\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "\n", "Frog jumps in a pond, forced perspective\n", "\n", "Answer:\n", "\n", "{\n", " \"keywords\": \"{1}[Frog, entity, 1.0] {2}[jumps, action, 1.0] {3}[in a pond, spatial relation, 1.0] {4}[forced perspective, style, 1.0]\",\n", " \"qas\": [\n", " {\n", " \"question_id\": 1,\n", " \"question\": \"What animal is present in the video?\",\n", " \"choices\": [\"a) toad\", \"b) salamander\", \"c) frogs\", \"d) frog\"],\n", " \"justification\": \"the animal of the video is a frog ({1}). The option frogs is the plural which is not correct given the description. The options salamander and toad are animals similar to frog.\",\n", " \"answer\": \"d\"\n", " },\n", " {\n", " \"question_id\": 2,\n", " \"question\": \"What is the frog doing in the video?\",\n", " \"choices\": [\"a) sits next to a pond\", \"b) jumps in a pond\", \"c) jumps out of a pond\", \"d) slides in a pond\"],\n", " \"justification\": \"the frog jumps in a pond ({2}). The option sits next to a pond is related to the pond, but it does not have any movement. The option slides in a pond has a similar movement but it is a different action of different intensity. The option jumps out of a pond is the opposite.\",\n", " \"answer\": \"b\"\n", " },\n", " {\n", " \"question_id\": 3,\n", " \"question\": \"Where is the frog jumping?\",\n", " \"choices\": [\"a) lake\", \"b) reservoir\", \"c) pond\", \"d) fountain\"],\n", " \"justification\": \"the frog jumps in a pond ({3}). The other options are all different types of water masses of different sizes.\",\n", " \"answer\": \"c\"\n", " },\n", " {\n", " \"question_id\": 4,\n", " \"question\": \"What perspective is the video filmed in?\",\n", " \"choices\": [\"a) aerial perspective\", \"b) forced perspective\", \"c) linear perspective\", \"d) one point perspective\"],\n", " \"justification\": \"the video is filmed in a forced perspective. The other options are all different perspective styles in video.\",\n", " \"answer\": \"b\"\n", " }\n", " ]\n", "}\n", "\n", "Description:\n", "{prompt}\n", "Answer:\n", "\"\"\"\n", "\n", "RUBRIC_VALIDATOR_PROMPT = \"\"\"\n", "# Instructions\n", "Watch the video below carefully and answer the questions based on the choices\n", "provided. Only answer with the letter (a, b, c, or d) that corresponds to the\n", "correct answer.\n", "\n", "{rubrics}\n", "\n", "# Video\n", "{video}\n", "\n", "# Output Format\n", "<question>\n", "Question: repeat the original question\n", "Verdict: a|b|c|d\n", "</question>\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": { "id": "Fdtcd-WF9ymb" }, "source": [ "## Define the metric" ] }, { "cell_type": "markdown", "metadata": { "id": "xTyZIRi6L7CI" }, "source": [ "The cells below configure the rubric generation and validator metrics for rubric-based evaluation."
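] }, { "cell_type": "markdown", "metadata": { "id": "parseSanityMd" }, "source": [ "As a quick, optional sanity check, the next cell runs `parse_rubric_results` (defined in the helper functions above) on a hypothetical, hand-written validator response that follows the output format in `RUBRIC_VALIDATOR_PROMPT`. The response text is illustrative only." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "parseSanityCode" }, "outputs": [], "source": [ "# Feed a hypothetical validator response through parse_rubric_results to\n", "# see the structure that the metric consumes downstream.\n", "sample_response = \"\"\"<question>\n", "Question: What animal is present in the video?\n", "Verdict: c\n", "</question>\n", "<question>\n", "Question: Where is the video taking place?\n", "Verdict: b\n", "</question>\"\"\"\n", "\n", "# Expected: {'rubric_results': {'what animal is present in the video?': 'c',\n", "# 'where is the video taking place?': 'b'}}\n", "parse_rubric_results([sample_response])" ] }, { "cell_type": "markdown", "metadata": { "id": "parseSanityNote" }, "source": [ "With the parsing verified, the next cell wires the prompt templates and parsing functions into the metric definition."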
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "CeA0mrWi-LX1" }, "outputs": [], "source": [ "# Rubric Generation\n", "rubric_generation_config = RubricGenerationConfig(\n", " prompt_template=RUBRIC_GENERATION_PROMPT,\n", " parsing_fn=parse_json_to_qa_records,\n", ")\n", "\n", "# Rubric Validation\n", "pointwise_metric = PointwiseMetric(\n", " metric=\"gecko_metric\",\n", " metric_prompt_template=RUBRIC_VALIDATOR_PROMPT,\n", " custom_output_config=CustomOutputConfig(\n", " return_raw_output=True,\n", " parsing_fn=parse_rubric_results,\n", " ),\n", ")\n", "\n", "# Rubric Metric\n", "rubric_based_gecko = RubricBasedMetric(\n", " generation_config=rubric_generation_config,\n", " critique_metric=pointwise_metric,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "nHjCRree99Px" }, "source": [ "# Prepare the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "HFJr-xZMaaUH" }, "source": [ "In the following dataset, two prompts are used for each generated video. The first is the prompt that corresponds to the generated content. The second is a counterexample that is similar but does not exactly match the generated content. This is done to demonstrate the difference in the Gecko evaluation for high quality and low quality responses." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "U37FNLD8-Lp5" }, "outputs": [], "source": [ "prompts = [\n", " \"Snow blanketed rocky mountains surround and shadow deep canyons. the canyons bend through the high elevated mountain peaks. black and white\",\n", " \"Lush green valley is carved between rocky cliffs. the valley winds through the high elevated rock faces. misty morning\",\n", " \"A couple in formal evening wear going home get caught in a heavy downpour with umbrellas\",\n", " \"Two friends, dressed in casual summer clothes, are caught in a light summer rain while running home\",\n", " \"A tranquil tableau of in the heart of the Utah desert, a massive sandstone arch spanned the horizon\",\n", " \"A eerie panorama of the Arizona desert, with ancient ruins silhouetted against the setting sun\",\n", " \"Few big purple plums rotating on the turntable. water drops appear on the skin during rotation. isolated on the white background. close-up\",\n", " \"A large red apple rotating on the turntable. water drops appear on the skin during rotation. isolated on the black background. 
close-up\",\n", " \"A boat sailing leisurely along the Seine River with the Eiffel Tower in background\",\n", " \"A boat cruising rapidly along the Thames River with Big Ben behind\",\n", "]\n", "videos = [\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/mountain.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/mountain.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/couple.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/couple.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/desert.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/desert.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/plum.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/plum.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/boat.mp4\"}}]}]}',\n", " '{\"contents\": [{\"parts\": [{\"file_data\": {\"mime_type\": \"video/mp4\", \"file_uri\": \"gs://cloud-samples-data/generative-ai/evaluation/videos/boat.mp4\"}}]}]}',\n", "]\n", "\n", "eval_dataset = pd.DataFrame(\n", " {\n", " \"prompt\": prompts,\n", " \"video\": videos,\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "qm6mHoPc-DS0" }, "source": [ "# Run evaluation" ] }, { "cell_type": "markdown", "metadata": { "id": "BCdjWbaYCvIT" }, "source": [ "## Generate rubrics" ] }, { "cell_type": "markdown", "metadata": { "id": "6DU7nuNPNoXQ" }, "source": [ "First we generate rubrics for the user prompts." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7EWsZkW-CzXZ" }, "outputs": [], "source": [ "dataset_with_rubrics = rubric_based_gecko.generate_rubrics(eval_dataset)\n", "pretty_print_qa_records_df(\n", " dataset_with_rubrics, hide_columns=[\"prompt\", \"video\", \"rubrics\"]\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "Pq5uNh0hC3sz" }, "source": [ "## Evaluate with rubrics" ] }, { "cell_type": "markdown", "metadata": { "id": "VcQwzMjYNsr7" }, "source": [ "Then we use the generated rubrics to evaluate the quality of the responses." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mMuDxiwy-MTt" }, "outputs": [], "source": [ "eval_task = EvalTask(\n", " dataset=dataset_with_rubrics,\n", " metrics=[rubric_based_gecko],\n", ")\n", "eval_result = eval_task.evaluate(response_column_name=\"video\")\n", "\n", "# Calculate overall score for metric.\n", "dataset_with_final_scores = compute_scores(eval_result.metrics_table)\n", "np.mean(dataset_with_final_scores[\"final_score\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zc6GvhGdK7D_" }, "outputs": [], "source": [ "pretty_print_result_df(\n", " dataset_with_final_scores,\n", " hide_columns=[\n", " \"prompt\",\n", " \"video\",\n", " \"rubrics\",\n", " \"qa_records\",\n", " \"gecko_metric/rubric_results\",\n", " ],\n", ")" ] } ], "metadata": { "colab": { "collapsed_sections": [ "R5Xep4W9lq-Z", "dmWOrTJ3gx13", "DF4l8DTdWgPY", "tLn3FteP9ttL", "Pq5uNh0hC3sz" ], "name": "evaluate_videos_with_gecko.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }