{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ofZHOerGBTx8"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0kIyAXORBTx9"
},
"source": [
"# Evaluating Multimodal Image Search\n",
"*powered by BigQuery, Vertex AI and Gemini*\n",
"\n",
"This notebook builds upon the multimodal analysis, search, and understanding capabilities established in [Part 1](./part_1_multimodal_analysis_search.ipynb) and [Part 2](./part_2_large_scale_understanding.ipynb). Here, we focus on evaluating the performance of the image search functionality which uses BigQuery and Gemini. We will leverage the same tables, models and vector search created in the previous notebook to assess the accuracy of our semantic search. We will use [DeepEval](https://docs.confident-ai.com/), the open-source LLM evaluation framework, to set up and execute our test cases and calculate the different evaluation metrics (detailed below).\n",
"\n",
"This notebook will cover:\n",
"\n",
"- **Defining evaluation metrics for document retrieval**: We will cover relevant metrics to evaluate the image search, i.e. our vector search-backed document retrieval. Metrics such as contextual recall and precision, will assess how well the system retrieves relevant images given a textual query. We will use a diverse set of natural language queries related to common bus stop issues (e.g., \"damaged bench,\" \"litter near shelter,\" \"graffiti on the wall\") as our test cases.\n",
"\n",
"- **Using Gemini as multimodal LLM judge in DeepEval evaluations**: We will leverage Gemini 1.5 Pro to evaluate each retrieved image and determine whether it's a true or false positive. These results will be used to calculate the document retrieval metrics. Using LLM for evaluation of retrieved documents is necessary since there's no labeled dataset, as is often the case.\n",
"\n",
"For our image search use case, this evaluation will help answer the following questions:\n",
"\n",
"1. Do the vector embeddings meaningfully represent the bus stop images and accurately capture the cleanliness/maintenance/safety nuances?\n",
"\t- This is influenced by hyperparameters like the choice of **embedding model**, **prompt**, and **temperature**.\n",
"2. Does the vector search retrieve the relevant results and rerank them in the right order?\n",
"\t- This is influenced by hyperparameters like search parameters **top-K**, **distance type**, and **reranking** logic.\n",
"\n",
"This quantitative assessment provides valuable insights into the strengths of multimodal image search, along with actionable results to tune the aforementioned hyperparameters. The methods described here can be readily applied to other document retrieval systems, be it standalone like a search application or a part of a RAG pipeline for a LLM application."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WzI1RC8NnJT5"
},
"source": [
"### Why does document retrieval evaluation matter?"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mJog9DT5nfJS"
},
"source": [
"Evaluating document retrieval is crucial for both search applications and Retrieval-Augmented-Generation (RAG) pipelines because it directly impacts the quality and relevance of the information retrieved. Effective evaluation helps identify areas for improvement in retrieval systems, leading to more accurate, comprehensive, and satisfactory search results and LLM-generated responses."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pDGx79sanO3Y"
},
"source": [
"In the context of RAG, you typically evaluate the Retriever and Generator separately to help pinpoint issues in your RAG pipeline. The following diagram shows the relevant metrics for each component. In this notebook, we focus on the Retriever metrics."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6u9QIm7vuWPY"
},
"source": [
"<div style=\"text-align: center;\">\n",
" <img src=\"https://d2lsxfc3p6r9rv.cloudfront.net/rag-pipeline.svg\" alt=\"RAG pipeline evaluation\" width=\"300px\" style=\"margin: auto;display: block;margin-left: auto;margin-right: auto;\">\n",
" <p>RAG pipeline evaluation - Source: https://docs.confident-ai.com/docs/guides-rag-evaluation</p>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AFqQKMnW9HJt"
},
"source": [
"Using DeepEval and Gemini as LLM judge, we will specifically measure the following metrics for a comprehensive evaluation of our image search:\n",
"- **Contextual recall** which calculates the percentage of relevant images out of all retrieved images. This metric evaluates the ability of the underlying embedding model to accurately capture and retrieve as many relevant images based on the textual query. For more details on how it is calculated, see [multimodal contextual recall](https://docs.confident-ai.com/docs/multimodal-metrics-contextual-recall#how-is-it-calculated).\n",
"- **Contextual precision** which evaluates the order of the retrieved images where the relevant images rank higher than the irrelevant ones. For more details on how it is calculated, see [multimodal contextual precision](https://docs.confident-ai.com/docs/multimodal-metrics-contextual-precision#how-is-it-calculated).\n",
"\n",
"Note: You might have noticed we are not measuring **contextual relevancy**. While contextual recall compares retrieved documents against expected output, contextual relevancy compares retrieved documents against user input. They're both useful for evaluating a general RAG retriever. However, in the case of a simple search application, user input and expected output are semantically equivalent (e.g. querying for 'broken glass' has the expected output of 'broken glass' images). Therefore, contextual relevancy and recall are 100% the same in the case of a search application like our cross-modality image search use case."
]
},
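{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make these scores concrete, below is a minimal, hard-coded sketch of the arithmetic behind them, based on the formulas in the DeepEval docs linked above. In the actual metrics, the per-image relevant/irrelevant verdicts are produced by the LLM judge (Gemini) rather than hard-coded, and since our expected output contains one statement per retrieved image, contextual recall reduces to the fraction of retrieved images judged relevant."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch of the metric arithmetic. Assumption: the relevant/irrelevant\n",
"# verdicts normally come from the LLM judge; here they are hard-coded.\n",
"\n",
"def contextual_recall(verdicts: list) -> float:\n",
"    \"\"\"Fraction of retrieved images judged relevant (one expected statement per image).\"\"\"\n",
"    return sum(verdicts) / len(verdicts)\n",
"\n",
"def contextual_precision(verdicts: list) -> float:\n",
"    \"\"\"Mean precision@k taken at each rank k that holds a relevant image.\"\"\"\n",
"    relevant_so_far, weighted_sum = 0, 0.0\n",
"    for k, is_relevant in enumerate(verdicts, start=1):\n",
"        if is_relevant:\n",
"            relevant_so_far += 1\n",
"            weighted_sum += relevant_so_far / k\n",
"    return weighted_sum / relevant_so_far if relevant_so_far else 0.0\n",
"\n",
"# Example: top-4 results where only the last image is irrelevant.\n",
"verdicts = [True, True, True, False]\n",
"print(contextual_recall(verdicts))     # 0.75\n",
"print(contextual_precision(verdicts))  # 1.0"
]
},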
{
"cell_type": "markdown",
"metadata": {
"id": "jbmv2fBEUN51"
},
"source": [
"## Prerequisites\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IhVqAbHbZPd6"
},
"source": [
"Since this notebook builds on parts 1 and 2, it is assumed you have already processed the image batches. Specifically, in this notebook, you will use the following pre-deployed BigQuery and Cloud Storage resources:\n",
"\n",
"- Image reports table `multimodal.image_reports`\n",
"- Image vector embeddings table `multimodal.image_reports_vector_db`\n",
"- Remote text embedding model `multimodal.text_embedding_model`\n",
"- Cloud resource connection `multimodal.{REGION}.multimodal`\n",
"- Cloud Storage bucket and pre-processed images under `gs://{PROJECT_ID}-multimodal/target/*`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ijFFC4kaBTx_"
},
"source": [
"#### Cost considerations"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4xtQm6u_BTx_"
},
"source": [
"While no new cloud resources will be created in this notebook, running this notebook will incur minor costs associated with Gemini API usage and BigQuery processing. The DeepEval framework evaluates the accuracy of BigQuery vector search results, and is configured to use Gemini 1.5 Pro as the LLM judge for the few test cases defined below. Refer to the following pages for pricing details of the Cloud services used in this notebook:\n",
"\n",
"- [BigQuery pricing](https://cloud.google.com/bigquery/docs/pricing)\n",
"- [Vertex AI pricing](https://cloud.google.com/vertex-ai/generative-ai/pricing)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kq5CVnCXebOb"
},
"source": [
"## Getting Started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "77mCQO2Of15N"
},
"source": [
"Let's first create some environment variables, including your Google Cloud project ID and the region where you previously deployed the prerequisite resources into:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HR__Ef43ZPfQ"
},
"outputs": [],
"source": [
"PROJECT_ID = \"your-project-id\" # @param {type:\"string\"}\n",
"REGION = \"us-central1\" # @param {type:\"string\"}\n",
"\n",
"BUCKET_NAME = f\"{PROJECT_ID}-multimodal\" # Bucket pre-created\n",
"IMAGE_PATH = f\"gs://{BUCKET_NAME}/target\" # Path of pre-processed images"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n8rxPY7tZY5U"
},
"source": [
"### Install packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Q1eNnfaqZTNd"
},
"outputs": [],
"source": [
"%pip install --upgrade --user --quiet \\\n",
" google-cloud-aiplatform \\\n",
" google-cloud-bigquery \\\n",
" deepeval"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eUHDKyrvZbfY"
},
"source": [
"### Import libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uj6OAruUmS45"
},
"outputs": [],
"source": [
"from google.cloud import bigquery\n",
"import asyncio\n",
"import vertexai\n",
"import pandas as pd\n",
"from IPython.display import HTML"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wtkyjl75afM-"
},
"outputs": [],
"source": [
"# initialize bigquery client\n",
"client = bigquery.Client()\n",
"\n",
"# set pandas styling options\n",
"# Don't truncate strings in BigQuery query results\n",
"pd.set_option('display.max_colwidth', None)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "k4_Ja4irfhVf"
},
"source": [
"### Define utility functions"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "60Pkxw0VfmLV"
},
"source": [
"As per prior notebook, let's define some styling functions to help display dataframes with embedded images."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "INZg3gfPfkyZ"
},
"outputs": [],
"source": [
"# Generate HTML img tag from signed url\n",
"def preview_image(uri):\n",
" if pd.notna(uri):\n",
" return f'<img src=\"{uri}\" style=\"width:300px; height:auto; transition: transform 0.25s ease; border: 1px solid black;\" onmouseover=\"this.style.transform=\\'scale(2.5)\\';\" onmouseout=\"this.style.transform=\\'scale(1.0)\\';\">'\n",
" else:\n",
" return None"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VI0xLxmm7HBa"
},
"source": [
"## Preview Image Dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rIDOBrKegaN_"
},
"source": [
"Let's first confirm the image reports and embeddings tables are already populated."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "puoEjIY8iL4m"
},
"source": [
"If you've completed part 1, there should be 70+ pre-processed images and corresponding vector embeddings in BigQuery. It's a small dataset, but sufficient to run some searches and evaluate initial search quality."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bIw82qhuHWy0"
},
"outputs": [],
"source": [
"%%bigquery image_reports_df\n",
"SELECT * FROM `multimodal.image_reports`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cNWTTydb7jkl"
},
"outputs": [],
"source": [
"len(image_reports_df)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DTDpz3Rx7El4"
},
"outputs": [],
"source": [
"%%bigquery image_embeddings_df\n",
"SELECT * FROM `multimodal.image_reports_vector_db`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "POm3f4NLidsJ"
},
"outputs": [],
"source": [
"len(image_embeddings_df)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YCrM4k-NkdSy"
},
"source": [
"Let's preview some of these image reports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "itFJzBFNr72l"
},
"outputs": [],
"source": [
"image_reports_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HWmaBmF5m7wk"
},
"source": [
"## Set up Document Retriever"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lQZRgFEwkxP3"
},
"source": [
"We're using BigQuery `VECTOR_SEARCH` to power our document (image) retriever.\n",
"To faciliate the retrieval, we'll wrap the BigQuery SQL logic in a search helper function as we did in prior notebook."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WV_sH-wSFyd-"
},
"source": [
"### Define search helper function"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zP4TMXJaoq8_"
},
"source": [
"Let's define this utility function for semantic search. This function generates the text embedding for the test query using the same `text_embedding_model` (and same task type), then runs the `VECTOR_SEARCH` query against the base table of embeddings for the vectors, that is `reports_vector_db` table. It then appends the authenticated url link to preview the results including the image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bz-RnhB921xi"
},
"outputs": [],
"source": [
"def run_semantic_search(query:str, top_k:int):\n",
" escaped_query = query.replace(\"'\", \"''\").replace(\"\\\\\", \"\\\\\\\\\")\n",
"\n",
" search_terms_embeddings_query = f\"\"\"\n",
" SELECT\n",
" query.content AS search, distance,\n",
" RANK() OVER (ORDER BY distance ASC) as rank, -- Calculate rank based on distance\n",
" base.report_id, base.bus_stop_id, base.uri, base.description,\n",
" CONCAT(\"https://storage.mtls.cloud.google.com/\", SPLIT(base.uri, \"gs://\")[OFFSET(1)]) AS url,\n",
" base.cleanliness_level, base.safety_level\n",
" FROM\n",
" VECTOR_SEARCH(\n",
" TABLE `multimodal.image_reports_vector_db`,\n",
" 'embedding',\n",
" (\n",
" SELECT * FROM ML.GENERATE_EMBEDDING(\n",
" MODEL `multimodal.text_embedding_model`,\n",
" (\n",
" SELECT '{escaped_query}' AS content\n",
" ),\n",
" STRUCT('SEMANTIC_SIMILARITY' as task_type))\n",
" ),\n",
" top_k => {top_k},\n",
" distance_type => 'COSINE'\n",
" )\n",
" \"\"\"\n",
"\n",
" return client.query(search_terms_embeddings_query).to_dataframe()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H7hMgttCoApZ"
},
"source": [
"### Run retriever"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ja4Op-athYy-"
},
"source": [
"Let's test our retriever using the query \"broken glass\":"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7c8xtt1QhbPV"
},
"outputs": [],
"source": [
"query = \"broken glass\"\n",
"top_k = 3\n",
"retrieved_docs_df = run_semantic_search(query, top_k)\n",
"\n",
"retrieved_docs_df['image'] = retrieved_docs_df['url'].apply(preview_image)\n",
"\n",
"# Display the DataFrame with embedded images\n",
"HTML(retrieved_docs_df.sort_values('rank')[['bus_stop_id', 'uri', 'rank', 'image', 'description']].to_html(escape=False))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nfpEzfLiBTyD"
},
"source": [
"The retrieved images should all have broken glass, as shown in the output screenshot below. Note the underlying search is done against the actual image description embeddings, and does not take into account the image file name."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FRtdBIzEBTyD"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4YzIjJJAarf9"
},
"source": [
"## Set up DeepEval to use Vertex AI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LnSMxWUje9e7"
},
"source": [
"### Define Gemini as multimodal evaluation model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Pl7SPDFnaTLs"
},
"outputs": [],
"source": [
"from vertexai.generative_models import GenerativeModel, GenerationConfig, Part, Image, HarmCategory, HarmBlockThreshold\n",
"from deepeval.models.base_model import DeepEvalBaseMLLM\n",
"from deepeval.test_case import MLLMImage\n",
"from pydantic import BaseModel\n",
"from typing import Optional, List, Dict, Tuple, Union\n",
"\n",
"class MultimodalGoogleVertexAI(DeepEvalBaseMLLM):\n",
" \"\"\"Class that implements Vertex AI for DeepEval\"\"\"\n",
" def __init__(self, model_name, *args, **kwargs):\n",
" super().__init__(model_name, *args, **kwargs)\n",
" self.model = self.load_model(*args, **kwargs)\n",
"\n",
" def load_model(self, *args, **kwargs):\n",
" # Initialize safety filters for Vertex AI model\n",
" # This is important to ensure no evaluation responses are blocked\n",
" safety_settings = {\n",
" HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE\n",
" }\n",
"\n",
" vertexai.init(project=kwargs['project'], location=kwargs['location'])\n",
"\n",
" return GenerativeModel(\n",
" model_name=self.model_name,\n",
" safety_settings=safety_settings)\n",
"\n",
" def generate_prompt(\n",
" self, multimodal_input: List[Union[str, MLLMImage]] = []\n",
" ):\n",
" prompt = []\n",
" for ele in multimodal_input:\n",
" if isinstance(ele, str):\n",
" prompt.append(ele)\n",
" elif isinstance(ele, MLLMImage):\n",
" if ele.local == True:\n",
" image = Part.from_image(Image.load_from_file(ele.url))\n",
" else:\n",
" image = Part.from_uri(uri=ele.url, mime_type=\"image/jpeg\")\n",
"\n",
" prompt.append(image)\n",
" else:\n",
" raise ValueError(f\"Invalid input type: {type(ele)}\")\n",
"\n",
" return prompt\n",
"\n",
" def generate(\n",
" self, multimodal_input: List[Union[str, MLLMImage]], schema: Optional[BaseModel] = None\n",
" ) -> Tuple[str, float]:\n",
"\n",
" prompt = self.generate_prompt(multimodal_input)\n",
" if schema is not None:\n",
" response = self.model.generate_content(prompt, generation_config=GenerationConfig(\n",
" response_mime_type=\"application/json\", response_schema=schema\n",
" ))\n",
" else:\n",
" response = self.model.generate_content(prompt)\n",
"\n",
" return response.text\n",
"\n",
" async def a_generate(\n",
" self, multimodal_input: List[Union[str, MLLMImage]], schema: Optional[BaseModel] = None\n",
" ) -> Tuple[str, float]:\n",
" prompt = self.generate_prompt(multimodal_input)\n",
" if schema is not None:\n",
" response = await self.model.generate_content_async(prompt, generation_config=GenerationConfig(\n",
" response_mime_type=\"application/json\", response_schema=schema\n",
" ))\n",
" else:\n",
" response = await self.model.generate_content_async(prompt)\n",
"\n",
" return response.text\n",
"\n",
" def get_model_name(self) -> str:\n",
" return self.model_name\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SiNOCGRUdqsr"
},
"source": [
"Let's instantiate the Vertex AI wrapper class to use Gemini 1.5 Pro for evaluating metrics criteria in subsequent section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e8FcqtB_dzcy"
},
"outputs": [],
"source": [
"gemini_pro = MultimodalGoogleVertexAI(\n",
" model_name=\"gemini-1.5-pro-001\",\n",
" project=PROJECT_ID,\n",
" location=REGION\n",
")"
]
},
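{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can sanity-check the wrapper with a single standalone call before using it as a judge. The sketch below assumes one of the pre-processed images, `MA-01.jpg` (also used later in this notebook), exists under `IMAGE_PATH`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional smoke test of the evaluation model wrapper (not required for the\n",
"# metrics below). It mixes a text instruction with an MLLMImage, mirroring how\n",
"# DeepEval passes multimodal inputs to the judge.\n",
"from deepeval.test_case import MLLMImage\n",
"\n",
"response = gemini_pro.generate([\n",
"    \"In one sentence, describe the condition of the bus stop in this image.\",\n",
"    MLLMImage(f\"{IMAGE_PATH}/MA-01.jpg\"),\n",
"])\n",
"print(response)"
]
},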
{
"cell_type": "markdown",
"metadata": {
"id": "rmsT9IIEp4Yt"
},
"source": [
"### Evaluate sample test case"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_asJpjvmqUS7"
},
"source": [
"Let's go back to our image search results from \"broken glass\" query where we got 3 correctly identified broken glass images. Before we measure the recall and precision, let's add one false positive image `MA-01.jpg` for testing purposes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KAl-umMh02El"
},
"outputs": [],
"source": [
"retrieved_docs = retrieved_docs_df.sort_values('rank')['uri'].values.tolist()\n",
"retrieved_docs.append(f\"{IMAGE_PATH}/MA-01.jpg\")\n",
"retrieved_docs"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ciLz2ZmsBTyE"
},
"source": [
"The `retrieved_docs` list should contain 4 images including the last one we just manually appended:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KXXFSQCLBTyE"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g44ReJyr2Oyb"
},
"source": [
" Given there are 4 images, with the first 3 being relevant and the last is non-relevant, we should see the following results:\n",
" - contextual recall should be equal to 3/4 = 0.75\n",
" - contextual precision should be equal to 1 since all relevant images rank higher"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9ubcKnbTz16O"
},
"outputs": [],
"source": [
"from deepeval.test_case import MLLMTestCase, MLLMImage\n",
"from deepeval.metrics import MultimodalContextualRecallMetric, MultimodalContextualPrecisionMetric\n",
"\n",
"test_case = MLLMTestCase(\n",
" input=[\"broken glass\"],\n",
" actual_output = [],\n",
" expected_output=[f\"Node {i} image shows broken glass in or around the sidewalk\" for i in enumerate(retrieved_docs)],\n",
" retrieval_context=[MLLMImage(uri) for uri in retrieved_docs]\n",
")\n",
"\n",
"recall_metric = MultimodalContextualRecallMetric(model=gemini_pro)\n",
"precision_metric = MultimodalContextualPrecisionMetric(model=gemini_pro)\n",
"\n",
"print(\"Evaluating Contextual Recall:\")\n",
"recall_metric.measure(test_case)\n",
"print(\"Score: \", recall_metric.score)\n",
"print(\"Reason: \", recall_metric.reason)\n",
"for verdict in recall_metric.verdicts: print(verdict)\n",
"\n",
"print(\"Evaluating Contextual Precision:\")\n",
"precision_metric.measure(test_case)\n",
"print(\"Score: \", precision_metric.score)\n",
"print(\"Reason: \", precision_metric.reason)\n",
"for verdict in precision_metric.verdicts: print(verdict)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2XDRN2znBTyE"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lA7lWT3Xobfx"
},
"source": [
"## Define Multiple Test Cases"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "X4kakQh6gLFm"
},
"source": [
"Let's create a helper function which we'll use to create several LLM test cases given different queries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fTe9m28Fdrfh"
},
"outputs": [],
"source": [
"from deepeval.test_case import MLLMTestCase, MLLMImage\n",
"\n",
"def create_llm_test_case(query: str, top_k: int) -> MLLMTestCase:\n",
" \"\"\"\n",
" Creates MLLMTestCase given textual query and top_k arguments\n",
"\n",
" Args:\n",
" query: textual query to execute.\n",
" top_k: The number of top results to consider.\n",
"\n",
" Returns:\n",
" A MLLMTestCase object\n",
" \"\"\"\n",
" print(f\"Creating new test case: search {top_k} '{query}' images\")\n",
" # Run semantic search and retrieve documents\n",
" retrieved_docs_df = run_semantic_search(query, top_k)\n",
" retrieved_docs = retrieved_docs_df.sort_values('rank')['uri'].values.tolist()\n",
" print('\\n'.join(map(str, retrieved_docs)))\n",
"\n",
" # Create test case comparing retrieved documents against expected documents attributes\n",
" test_case = MLLMTestCase(\n",
" input=[query],\n",
" actual_output = [],\n",
" # expected_output=[f\"The image shows noticeable amount of ${query}\"] * len(retrieved_docs),\n",
" expected_output=[f\"Node {i} image shows ${query} in or around the sidewalk\" for i in enumerate(retrieved_docs)],\n",
" retrieval_context=[MLLMImage(uri) for uri in retrieved_docs]\n",
" )\n",
"\n",
" return test_case"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SkFq6Ms6mhMx"
},
"source": [
"Now let's generate different test cases for the following sample searches:\n",
"- broken glass\n",
"- damaged bench\n",
"- excessive litter\n",
"- graffiti\n",
"\n",
"This will run the different searches and store the results for subsequent metrics calculation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "g_o6WJ-kmZKP"
},
"outputs": [],
"source": [
"queries = [\"broken glass\", \"damaged bench\", \"excessive litter\", \"graffiti\"]\n",
"top_k = 3\n",
"test_cases = [create_llm_test_case(query, top_k) for query in queries]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9xcy48JaBTyF"
},
"source": [
"You should expect an output as follows, including the list of retrieved image uris for each test case."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JZT00OGHBTyF"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wZnHqNAVoXU7"
},
"source": [
"## Calculate Metrics"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kiCLXlnaBTyF"
},
"source": [
"As mentioned above, measuring both contextual recall and precision allows us to comprehensively evaluate the accuracy of our semantic search."
]
},
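{
"cell_type": "markdown",
"metadata": {},
"source": [
"We measure each metric separately below so we can inspect the per-query scores and reasons. If you prefer a single pass, the following convenience sketch reuses the same `measure()` calls to score every test case on both metrics at once and tabulate the results; it assumes `test_cases` and `gemini_pro` from the previous cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: score every test case on both metrics in one pass and tabulate\n",
"# the results. This reuses the same measure() calls as the sections below.\n",
"import pandas as pd\n",
"from deepeval.metrics import (\n",
"    MultimodalContextualRecallMetric,\n",
"    MultimodalContextualPrecisionMetric,\n",
")\n",
"\n",
"recall_metric = MultimodalContextualRecallMetric(model=gemini_pro)\n",
"precision_metric = MultimodalContextualPrecisionMetric(model=gemini_pro)\n",
"\n",
"rows = []\n",
"for test_case in test_cases:\n",
"    recall_metric.measure(test_case)\n",
"    precision_metric.measure(test_case)\n",
"    rows.append({\n",
"        \"query\": test_case.input[0],\n",
"        \"contextual_recall\": recall_metric.score,\n",
"        \"contextual_precision\": precision_metric.score,\n",
"    })\n",
"\n",
"scores_df = pd.DataFrame(rows)\n",
"print(scores_df)\n",
"print(\"Average recall:   \", scores_df[\"contextual_recall\"].mean())\n",
"print(\"Average precision:\", scores_df[\"contextual_precision\"].mean())"
]
},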
{
"cell_type": "markdown",
"metadata": {
"id": "igEvnDU9j35p"
},
"source": [
"### Measure Contextual Recall"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "p4NZkKVnnPQu"
},
"outputs": [],
"source": [
"from deepeval.metrics import MultimodalContextualRecallMetric\n",
"\n",
"recall_metric = MultimodalContextualRecallMetric(model=gemini_pro)\n",
"avg_recall = 0\n",
"\n",
"# Measure metric for each test_case and display corresponding score and reason\n",
"# Uncomment vertdicts line to include verdict for each retrieved document.\n",
"for test_case in test_cases:\n",
" print(\"Query:\", test_case.input[0])\n",
" # print(\"Retrieved Images:\", test_case.retrieval_context)\n",
" recall_metric.measure(test_case)\n",
" print(\"Score: \", recall_metric.score)\n",
" print(\"Reason: \", recall_metric.reason)\n",
" avg_recall += recall_metric.score\n",
" # for verdict in recall_metric.verdicts: print(verdict)\n",
" print(\"\\n\")\n",
"\n",
"avg_recall = avg_recall / len(test_cases)\n",
"print(f\"Average Contextual Recall: {avg_recall}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JZnRGTzDBTyF"
},
"source": [
"Here are sample results. In this case, the search shows perfect contextual recall for all test cases. Due to probabilistic nature of LLM evaluations, you may sporadically get slightly different results."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7frKUvNvBTyF"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dm6j0ZjilosP"
},
"source": [
"### Measure Contextual Precision"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9OmMkuMnlmeS"
},
"outputs": [],
"source": [
"from deepeval.metrics import MultimodalContextualPrecisionMetric\n",
"\n",
"precision_metric = MultimodalContextualPrecisionMetric(model=gemini_pro)\n",
"avg_precision = 0\n",
"\n",
"# Measure metric for each test_case and display corresponding score and reason\n",
"# Uncomment vertdicts line to include verdict for each retrieved document.\n",
"for test_case in test_cases:\n",
" print(\"Query:\", test_case.input[0])\n",
" # print(\"Retrieved Images:\", test_case.retrieval_context)\n",
" precision_metric.measure(test_case)\n",
" print(\"Score: \", precision_metric.score)\n",
" print(\"Reason: \", precision_metric.reason)\n",
" avg_precision += precision_metric.score\n",
" # for verdict in precision_metric.verdicts: print(verdict)\n",
" print(\"\\n\")\n",
"\n",
"avg_precision = avg_precision / len(test_cases)\n",
"print(f\"Average Contextual Precision: {avg_precision}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "w4VYpkbtBTyG"
},
"source": [
"Below are sample results. Due to probabilistic nature of LLM evaluations, you may sporadically get slightly different results."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Bu9zVdUjBTyG"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hnqRVV8e5LDn"
},
"source": [
"## Summary"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3KxznYfe5R9U"
},
"source": [
"In this notebook, we evaluated a multimodal image search system using DeepEval and Gemini 1.5 Pro as an LLM judge. Contextual recall and precision metrics demonstrated the effectiveness of Gemini vector embeddings and BigQuery vector search in retrieving relevant images based on textual queries. While the initial results showed high accuracy, further improvements could focus on expanding the dataset and test cases for more robust evaluation. More complex queries could be added and different embedding models and search parameters (e.g., top-k, distance type, filtering/reranking logic) including hybrid search could be explored to further improve retrieval quality. The [DeepEval framework](https://docs.confident-ai.com/) and [Vertex AI Gemini API](https://cloud.google.com/vertex-ai/generative-ai/docs/overview) provided crucial tools for this evaluation.\n",
"\n",
"While this notebook focused on retrieval quality of BigQuery vector search, future work could also include retrieval cost and performance (latency, throughput) analysis to provide a complete evaluation of the system's efficacy and scalability, leveraging BigQuery vector index and batch processing. Refer to this [guide](https://medium.com/google-cloud/bigquery-vector-search-a-practitioners-guide-0f85b0d988f0) for more details about scaling BigQuery vector search."
]
},
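{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a concrete starting point for that tuning, the sketch below reuses `create_llm_test_case`, `gemini_pro`, and `queries` from the earlier sections to compare average recall and precision across a few candidate `top_k` values. The candidate values are illustrative; sweeping other parameters (e.g. distance type or embedding model) would follow the same pattern but requires adapting `run_semantic_search`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative top_k sweep: build test cases per candidate value and compare\n",
"# average retrieval scores across them.\n",
"from deepeval.metrics import (\n",
"    MultimodalContextualRecallMetric,\n",
"    MultimodalContextualPrecisionMetric,\n",
")\n",
"\n",
"recall_metric = MultimodalContextualRecallMetric(model=gemini_pro)\n",
"precision_metric = MultimodalContextualPrecisionMetric(model=gemini_pro)\n",
"\n",
"for candidate_top_k in [3, 5, 10]:  # illustrative candidate values\n",
"    cases = [create_llm_test_case(query, candidate_top_k) for query in queries]\n",
"    recalls, precisions = [], []\n",
"    for case in cases:\n",
"        recall_metric.measure(case)\n",
"        precision_metric.measure(case)\n",
"        recalls.append(recall_metric.score)\n",
"        precisions.append(precision_metric.score)\n",
"    print(f\"top_k={candidate_top_k}: \"\n",
"          f\"avg recall={sum(recalls)/len(recalls):.2f}, \"\n",
"          f\"avg precision={sum(precisions)/len(precisions):.2f}\")"
]
},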
{
"cell_type": "markdown",
"metadata": {
"id": "AGyrOPFJBTyG"
},
"source": []
}
],
"metadata": {
"colab": {
"cell_execution_strategy": "setup",
"provenance": [],
"toc_visible": true,
"name": "multimodal_search_evaluation.ipynb"
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}