gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ijGzTHJJUCPY" }, "outputs": [], "source": [ "# Copyright 2024 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "NDsTUvKjwHBW" }, "source": [ "# Multimodal Retrieval Augmented Generation (RAG) using Gemini API in Vertex AI\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\">\n", " <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fuse-cases%2Fretrieval-augmented-generation%2Fintro_multimodal_rag.ipynb\">\n", " <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\">\n", " <img width=\"32px\" src=\"https://www.svgrepo.com/download/217753/github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\">\n", " <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://goo.gle/40z3Fun\">\n", " <img width=\"32px\" src=\"https://cdn.qwiklabs.com/assets/gcp_cloud-e3a77215f0b8bfa9b3f611c0d2208c7e8708ed31.svg\" alt=\"Google Cloud logo\"><br> Open in Cloud Skills Boost\n", " </a>\n", " </td>\n", "</table>\n", "\n", "<div style=\"clear: both;\"></div>\n", "\n", "<b>Share to:</b>\n", "\n", "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n", "</a>\n", "\n", "<a 
href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n", "</a>\n", "\n", "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n", "</a>\n", "\n", "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n", "</a>\n", "\n", "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/retrieval-augmented-generation/intro_multimodal_rag.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n", "</a> " ] }, { "cell_type": "markdown", "metadata": { "id": "BsjCZ1v9rP7s" }, "source": [ "| | |\n", "|-|-|\n", "|Author(s) | [Lavi Nigam](https://github.com/lavinigam-gcp) |" ] }, { "cell_type": "markdown", "metadata": { "id": "7CBqVzyjHeBk" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ There is a new version of this notebook with new data and some modifications here: ⚠️</b>\n", "</div>" ] }, { "cell_type": "markdown", "metadata": { "id": "m0KqoJXJHzkT" }, "source": [ "[**building_DIY_multimodal_qa_system_with_mRAG.ipynb**](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/qa-ops/building_DIY_multimodal_qa_system_with_mRAG.ipynb)\n", "\n", "You can, however, still use this notebook as it is fully functional and has updated Gemini and text-embedding models." ] }, { "cell_type": "markdown", "metadata": { "id": "VK1Q5ZYdVL4Y" }, "source": [ "## Overview\n", "\n", "Retrieval augmented generation (RAG) has become a popular paradigm for enabling LLMs to access external data and also as a mechanism for grounding to mitigate against hallucinations.\n", "\n", "In this notebook, you will learn how to perform multimodal RAG where you will perform Q&A over a financial document filled with both text and images.\n", "\n", "### Gemini\n", "\n", "Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini models.\n", "\n", "### Comparing text-based and multimodal RAG\n", "\n", "Multimodal RAG offers several advantages over text-based RAG:\n", "\n", "1. **Enhanced knowledge access:** Multimodal RAG can access and process both textual and visual information, providing a richer and more comprehensive knowledge base for the LLM.\n", "2. 
**Improved reasoning capabilities:** By incorporating visual cues, multimodal RAG can make better-informed inferences across different types of data modalities.\n", "\n", "This notebook shows you how to use RAG with the Gemini API in Vertex AI, [text embeddings](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-embeddings), and [multimodal embeddings](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/multimodal-embeddings) to build a document search engine.\n", "\n", "Through hands-on examples, you will discover how to construct a multimedia-rich metadata repository of your document sources, enabling search, comparison, and reasoning across diverse information streams." ] }, { "cell_type": "markdown", "metadata": { "id": "RQT500QqVPIb" }, "source": [ "### Objectives\n", "\n", "This notebook provides a guide to building a document search engine using multimodal retrieval augmented generation (RAG), step by step:\n", "\n", "1. Extract and store metadata of documents containing both text and images, and generate embeddings of the documents\n", "2. Search the metadata with text queries to find similar text or images\n", "3. Search the metadata with image queries to find similar images\n", "4. Using a text query as input, search for contextual answers using both text and images" ] }, { "cell_type": "markdown", "metadata": { "id": "KnpYxfesh2rI" }, "source": [ "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "- Vertex AI\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "DXJpXzKrh2rJ" }, "source": [ "## Getting Started" ] }, { "cell_type": "markdown", "metadata": { "id": "N5afkyDMSBW5" }, "source": [ "### Install Vertex AI SDK for Python and other dependencies" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "id": "kc4WxYmLSBW5" }, "outputs": [], "source": [ "%pip install --upgrade --user google-cloud-aiplatform pymupdf rich colorama" ] }, { "cell_type": "markdown", "metadata": { "id": "R5Xep4W9lq-Z" }, "source": [ "### Restart current runtime\n", "\n", "To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which will restart the current kernel."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XRvKdaPDTznN" }, "outputs": [], "source": [ "# Restart kernel after installs so that your environment can access the new packages\n", "import IPython\n", "\n", "app = IPython.Application.instance()\n", "app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "O1vKZZoEh2rL" }, "source": [ "### Define Google Cloud project information" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gJqZ76rJh2rM" }, "outputs": [], "source": [ "# Define project information\n", "\n", "import sys\n", "\n", "PROJECT_ID = \"YOUR_PROJECT_ID\" # @param {type:\"string\"}\n", "LOCATION = \"us-central1\" # @param {type:\"string\"}\n", "\n", "# if not running on Colab, try to get the PROJECT_ID automatically\n", "if \"google.colab\" not in sys.modules:\n", " import subprocess\n", "\n", " PROJECT_ID = subprocess.check_output(\n", " [\"gcloud\", \"config\", \"get-value\", \"project\"], text=True\n", " ).strip()\n", "\n", "print(f\"Your project ID is: {PROJECT_ID}\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "D48gUW5-h2rM" }, "outputs": [], "source": [ "# Initialize Vertex AI\n", "import vertexai\n", "\n", "vertexai.init(project=PROJECT_ID, location=LOCATION)" ] }, { "cell_type": "markdown", "metadata": { "id": "BuQwwRiniVFG" }, "source": [ "### Import libraries" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "id": "rtMowvm-yQ97" }, "outputs": [], "source": [ "from IPython.display import Markdown, display\n", "from rich.markdown import Markdown as rich_Markdown\n", "from vertexai.generative_models import GenerationConfig, GenerativeModel, Image" ] }, { "cell_type": "markdown", "metadata": { "id": "r-TX_R_xh2rM" }, "source": [ "### Load the Gemini model" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "SvMwSRJJh2rM" }, "outputs": [], "source": [ "text_model = GenerativeModel(\"gemini-2.0-flash\")\n", "multimodal_model = text_model\n", "multimodal_model_flash = text_model" ] }, { "cell_type": "markdown", "metadata": { "id": "1lCfREXK5SWD" }, "source": [ "### Download custom Python utilities & required files\n", "\n", "The cell below will download a helper functions needed for this notebook, to improve readability. It also downloads other required files. You can also view the code for the utils here: (`intro_multimodal_rag_utils.py`) directly on [GitHub](https://storage.googleapis.com/github-repo/rag/intro_multimodal_rag/intro_multimodal_rag_old_version/intro_multimodal_rag_utils.py)." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "KwbL89zcY39N" }, "outputs": [], "source": [ "# download documents and images used in this notebook\n", "!gsutil -m rsync -r gs://github-repo/rag/intro_multimodal_rag/intro_multimodal_rag_old_version .\n", "print(\"Download completed\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Ps1G-cCfpibN" }, "source": [ "## Building metadata of documents containing text and images" ] }, { "cell_type": "markdown", "metadata": { "id": "jqLsy3iZ5t-R" }, "source": [ "### The data\n", "\n", "The source data that you will use in this notebook is a modified version of [Google-10K](https://abc.xyz/assets/investor/static/pdf/20220202_alphabet_10K.pdf) which provides a comprehensive overview of the company's financial performance, business operations, management, and risk factors. 
As the original document is rather large, you will instead be using a modified version with only 14 pages, split into two parts: [Part 1](https://storage.googleapis.com/github-repo/rag/intro_multimodal_rag/intro_multimodal_rag_old_version/data/google-10k-sample-part1.pdf) and [Part 2](https://storage.googleapis.com/github-repo/rag/intro_multimodal_rag/intro_multimodal_rag_old_version/data/google-10k-sample-part2.pdf). Although it's truncated, the sample document still contains text along with images such as tables, charts, and graphs." ] }, { "cell_type": "markdown", "metadata": { "id": "zvt0sus5KSNX" }, "source": [ "### Import helper functions to build metadata\n", "\n", "Before building the multimodal RAG system, it's important to have metadata of all the text and images in the document. For reference and citation purposes, the metadata should contain essential elements, including page number, file name, image counter, and so on. Hence, as a next step, you will generate embeddings from the metadata, which are required to perform similarity search when querying the data." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "N3wo2jv2rP7v" }, "outputs": [], "source": [ "from intro_multimodal_rag_utils import get_document_metadata" ] }, { "cell_type": "markdown", "metadata": { "id": "5BOAkYN0KlSL" }, "source": [ "### Extract and store metadata of text and images from a document" ] }, { "cell_type": "markdown", "metadata": { "id": "Q9hBPPWs5CMd" }, "source": [ "You just imported a function called `get_document_metadata()`. This function extracts text and image metadata from a document, and returns two dataframes, namely *text_metadata* and *image_metadata*, as outputs. If you want to find out more about how the `get_document_metadata()` function is implemented using Gemini and the embedding models, you can take a look at the [source code](https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/use-cases/retrieval-augmented-generation/utils/intro_multimodal_rag_utils.py) directly.\n", "\n", "The reason for extracting and storing both text metadata and image metadata is that either one alone is not sufficient to produce a relevant answer. For example, the relevant answer could be in visual form within a document, but text-based RAG won't be able to take the visual images into consideration. You will explore such an example later in this notebook." ] }, { "cell_type": "markdown", "metadata": { "id": "PnKru0sBh2rN" }, "source": [ "In the next step, you will use the function to extract and store metadata of text and images from a document. Please note that the following cell may take a few minutes to complete:" ] }, { "cell_type": "markdown", "metadata": { "id": "jFgRwzokrP7v" }, "source": [ "Note:\n", "\n", "The current implementation works best:\n", "\n", "* if your documents are a combination of text and images.\n", "* if the tables in your documents are available as images.\n", "* if the images in the document don't require too much context.\n", "\n", "Additionally,\n", "\n", "* If you want to run this on text-only documents, use normal RAG.\n", "* If your documents contain particular domain knowledge, pass that information in the prompt below." ] }, { "cell_type": "markdown", "metadata": { "id": "nflT_j-9QzC_" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ Do not send more than 50 pages through the logic below; it's not designed for that and you will run into quota issues. 
⚠️</b>\n", "</div>" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "X8hE0tWD-lf8" }, "outputs": [], "source": [ "# Specify the PDF folder with multiple PDF\n", "\n", "# pdf_folder_path = \"/content/data/\" # if running in Google Colab/Colab Enterprise\n", "pdf_folder_path = \"data/\" # if running in Vertex AI Workbench.\n", "\n", "# Specify the image description prompt. Change it\n", "image_description_prompt = \"\"\"Explain what is going on in the image.\n", "If it's a table, extract all elements of the table.\n", "If it's a graph, explain the findings in the graph.\n", "Do not include any numbers that are not mentioned in the image.\n", "\"\"\"\n", "\n", "# Extract text and image metadata from the PDF document\n", "text_metadata_df, image_metadata_df = get_document_metadata(\n", " multimodal_model, # we are passing Gemini 2.0 model\n", " pdf_folder_path,\n", " image_save_dir=\"images\",\n", " image_description_prompt=image_description_prompt,\n", " embedding_size=1408,\n", " # add_sleep_after_page = True, # Uncomment this if you are running into API quota issues\n", " # sleep_time_after_page = 5,\n", " # generation_config = # see next cell\n", " # safety_settings = # see next cell\n", ")\n", "\n", "print(\"\\n\\n --- Completed processing. ---\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vQzMm5bNrP7w" }, "outputs": [], "source": [ "# # Parameters for Gemini API call.\n", "# # reference for parameters: https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini\n", "\n", "# generation_config= GenerationConfig(temperature=0.2, max_output_tokens=2048)\n", "\n", "# # Set the safety settings if Gemini is blocking your content or you are facing \"ValueError(\"Content has no parts\")\" error or \"Exception occurred\" in your data.\n", "# # ref for settings and thresholds: https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/configure-safety-attributes\n", "\n", "# safety_settings = {\n", "# HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,\n", "# HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,\n", "# HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,\n", "# HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n", "# }\n", "\n", "# # You can also pass parameters and safety_setting to \"get_gemini_response\" function" ] }, { "cell_type": "markdown", "metadata": { "id": "miBBoEXwh2rN" }, "source": [ "#### Inspect the processed text metadata\n", "\n", "\n", "The following cell will produce a metadata table which describes the different parts of text metadata, including:\n", "\n", "- **text**: the original text from the page\n", "- **text_embedding_page**: the embedding of the original text from the page\n", "- **chunk_text**: the original text divided into smaller chunks\n", "- **chunk_number**: the index of each text chunk\n", "- **text_embedding_chunk**: the embedding of each text chunk" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "6t3AIGFar8Mo" }, "outputs": [], "source": [ "text_metadata_df.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "NjIQYI3mh2rO" }, "source": [ "#### Inspect the processed image metadata\n", "\n", "The following cell will produce a metadata table which describes the different parts of image metadata, including:\n", "* **img_desc**: Gemini-generated textual description of the image.\n", "* **mm_embedding_from_text_desc_and_img**: Combined embedding of image and 
its description, capturing both visual and textual information.\n", "* **mm_embedding_from_img_only**: Image embedding without description, for comparison with description-based analysis.\n", "* **text_embedding_from_image_description**: Separate text embedding of the generated description, enabling textual analysis and comparison." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "tkHtAYIK-y-q" }, "outputs": [], "source": [ "image_metadata_df.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "iBhoOkutUtPr" }, "source": [ "### Import the helper functions to implement RAG\n", "\n", "You will be importing the following functions, which will be used in the remainder of this notebook to implement RAG:\n", "\n", "* **get_similar_text_from_query():** Given a text query, finds relevant text from the document using a cosine similarity algorithm. It uses text embeddings from the metadata to compute similarity, and the results can be filtered by top score, page/chunk number, or embedding size.\n", "* **print_text_to_text_citation():** Prints the source (citation) and details of the retrieved text from the `get_similar_text_from_query()` function.\n", "* **get_similar_image_from_query():** Given an image path or an image, finds relevant images from the document. It uses image embeddings from the metadata.\n", "* **print_text_to_image_citation():** Prints the source (citation) and the details of retrieved images from the `get_similar_image_from_query()` function.\n", "* **get_gemini_response():** Interacts with a Gemini model to answer questions based on a combination of text and image inputs.\n", "* **display_images():** Displays a series of images provided as paths or PIL Image objects." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "Tngn_vrIKdE1" }, "outputs": [], "source": [ "from intro_multimodal_rag_utils import (\n", " display_images,\n", " get_gemini_response,\n", " get_similar_image_from_query,\n", " get_similar_text_from_query,\n", " print_text_to_image_citation,\n", " print_text_to_text_citation,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "c9jGEj6DY1Rj" }, "source": [ "Before implementing multimodal RAG, let's take a step back and explore what you can achieve with text or image embeddings alone. It will help set the foundation for the multimodal RAG implementation, which you will build in the later part of the notebook. You can also use these essential elements together to build applications for multimodal use cases that extract meaningful information from documents." ] }, { "cell_type": "markdown", "metadata": { "id": "KHuLlEvSKFWt" }, "source": [ "## Text Search\n", "\n", "Let's start the search with a simple question and see if a simple text search using text embeddings can answer it. The expected answer should show the value of basic and diluted net income per share of Google for different share types."
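] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Under the hood, `get_similar_text_from_query()` embeds the query text and ranks the stored chunk embeddings by cosine similarity. The next cell is a minimal, illustrative sketch of that idea only; it assumes a text embedding model such as `text-embedding-004` and small in-memory examples, and the actual helper implementation may differ." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only: embed a query and rank candidate texts by cosine similarity.\n", "# The helper in intro_multimodal_rag_utils.py may use a different embedding model or logic.\n", "import numpy as np\n", "from vertexai.language_models import TextEmbeddingModel\n", "\n", "\n", "def rank_by_cosine_similarity(query_vector, candidate_vectors, top_n=3):\n", "    \"\"\"Return indices of the top_n candidate vectors most similar to the query vector.\"\"\"\n", "    query = np.asarray(query_vector, dtype=float)\n", "    candidates = np.asarray(list(candidate_vectors), dtype=float)\n", "    # Cosine similarity is the dot product of L2-normalized vectors\n", "    query = query / np.linalg.norm(query)\n", "    candidates = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)\n", "    scores = candidates @ query\n", "    return np.argsort(scores)[::-1][:top_n]\n", "\n", "\n", "# Embed a query and a few candidate sentences with the same model (model name is an assumption)\n", "embedding_model = TextEmbeddingModel.from_pretrained(\"text-embedding-004\")\n", "candidate_texts = [\n", "    \"Net income per share is reported for Class A, Class B, and Class C shares.\",\n", "    \"Operating expenses increased year over year.\",\n", "    \"The company repurchased shares during the fiscal year.\",\n", "]\n", "query_vector = embedding_model.get_embeddings([\"basic and diluted net income per share\"])[0].values\n", "candidate_vectors = [emb.values for emb in embedding_model.get_embeddings(candidate_texts)]\n", "\n", "# Print the two candidates closest to the query\n", "for idx in rank_by_cosine_similarity(query_vector, candidate_vectors, top_n=2):\n", "    print(candidate_texts[idx])"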
] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "5mrFVhtCut7t" }, "outputs": [], "source": [ "query = \"I need details for basic and diluted net income per share of Class A, Class B, and Class C share for google?\"" ] }, { "cell_type": "markdown", "metadata": { "id": "XWw7-AIar-S8" }, "source": [ "### Search similar text with text query" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "eEzP6Yyv7N-G" }, "outputs": [], "source": [ "# Matching user text query with \"chunk_embedding\" to find relevant chunks.\n", "matching_results_text = get_similar_text_from_query(\n", " query,\n", " text_metadata_df,\n", " column_name=\"text_embedding_chunk\",\n", " top_n=3,\n", " chunk_text=True,\n", ")\n", "\n", "# Print the matched text citations\n", "print_text_to_text_citation(matching_results_text, print_top=False, chunk_text=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "O0bnOOf2rP70" }, "source": [ "You can see that the first high score match does have what we are looking for, but upon closer inspection, it mentions that the information is available in the \"following\" table. The table data is available as an image rather than as text, and hence, the chances are you will miss the information unless you can find a way to process images and their data.\n", "\n", "However, Let's feed the relevant text chunk across the data into the Gemini model and see if it can get your desired answer by considering all the chunks across the document. This is like basic text-based RAG implementation." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "ORCistIdDWoE" }, "outputs": [], "source": [ "print(\"\\n **** Result: ***** \\n\")\n", "\n", "# All relevant text chunk found across documents based on user query\n", "context = \"\\n\".join(\n", " [value[\"chunk_text\"] for key, value in matching_results_text.items()]\n", ")\n", "\n", "instruction = f\"\"\"Answer the question with the given context.\n", "If the information is not available in the context, just return \"not available in the context\".\n", "Question: {query}\n", "Context: {context}\n", "Answer:\n", "\"\"\"\n", "\n", "# Prepare the model input\n", "model_input = instruction\n", "\n", "# Generate Gemini response with streaming output\n", "get_gemini_response(\n", " text_model, # we are passing Gemini\n", " model_input=model_input,\n", " stream=True,\n", " generation_config=GenerationConfig(temperature=0.2),\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "v8ux39zFqyLh" }, "source": [ "You can see that it returned:\n", "\n", "*\"The provided context does not include the details for basic and diluted net income per share of Class A, Class B, and Class C share for google.\n", "\"*\n", "\n", "This is expected as discussed previously. No other text chunk (total 3) had the information you sought.\n", "This is because the information is only available in the images rather than in the text part of the document. Next, let's see if you can solve this problem by leveraging Gemini and Multimodal Embeddings." ] }, { "cell_type": "markdown", "metadata": { "id": "2itkRuikq_g6" }, "source": [ "Note: We handcrafted examples in our document to simulate real-world cases where information is often embedded in charts, table, graphs, and other image-based elements and unavailable as plain text. 
" ] }, { "cell_type": "markdown", "metadata": { "id": "uXm271jdD-Rl" }, "source": [ "### Search similar images with text query" ] }, { "cell_type": "markdown", "metadata": { "id": "oPxwfyVrr9-G" }, "source": [ "Since plain text search didn't provide the desired answer and the information may be visually represented in a table or another image format, you will use multimodal capability of Gemini model for the similar task. The goal here also is to find an image similar to the text query. You may also print the citations to verify." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "0sRFH6tJlpXQ" }, "outputs": [], "source": [ "query = \"I need details for basic and diluted net income per share of Class A, Class B, and Class C share for google?\"" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "knj4qQ4xni24" }, "outputs": [], "source": [ "matching_results_image = get_similar_image_from_query(\n", " text_metadata_df,\n", " image_metadata_df,\n", " query=query,\n", " column_name=\"text_embedding_from_image_description\", # Use image description text embedding\n", " image_emb=False, # Use text embedding instead of image embedding\n", " top_n=3,\n", " embedding_size=1408,\n", ")\n", "\n", "# Markdown(print_text_to_image_citation(matching_results_image, print_top=True))\n", "print(\"\\n **** Result: ***** \\n\")\n", "\n", "# Display the top matching image\n", "display(matching_results_image[0][\"image_object\"])" ] }, { "cell_type": "markdown", "metadata": { "id": "SnFdFkWEtYrF" }, "source": [ "Bingo! It found exactly what you were looking for. You wanted the details on Google's Class A, B, and C shares' basic and diluted net income, and guess what? This image fits the bill perfectly thanks to its descriptive metadata using Gemini.\n", "\n", "You can also send the image and its description to Gemini and get the answer as JSON:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "id": "-ax6ooI0rP70" }, "outputs": [], "source": [ "print(\"\\n **** Result: ***** \\n\")\n", "\n", "# All relevant text chunk found across documents based on user query\n", "context = f\"\"\"Image: {matching_results_image[0]['image_object']}\n", "Description: {matching_results_image[0]['image_description']}\n", "\"\"\"\n", "\n", "instruction = f\"\"\"Answer the question in JSON format with the given context of Image and its Description. 
Only include value.\n", "Question: {query}\n", "Context: {context}\n", "Answer:\n", "\"\"\"\n", "\n", "# Prepare the model input\n", "model_input = instruction\n", "\n", "# Generate Gemini response with streaming output\n", "Markdown(\n", " get_gemini_response(\n", " multimodal_model_flash, # we are passing Gemini 2.0 Flash\n", " model_input=model_input,\n", " stream=True,\n", " generation_config=GenerationConfig(temperature=1),\n", " )\n", ")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "id": "eAxSk640rP70" }, "outputs": [], "source": [ "## you can check the citations to probe further.\n", "## check the \"image description:\" which is a description extracted through Gemini which helped search our query.\n", "Markdown(print_text_to_image_citation(matching_results_image, print_top=True))" ] }, { "cell_type": "markdown", "metadata": { "id": "oDd9rE4NrRod" }, "source": [ "## Image Search" ] }, { "cell_type": "markdown", "metadata": { "id": "pJL6ElyEy4mc" }, "source": [ "### Search similar image with image query" ] }, { "cell_type": "markdown", "metadata": { "id": "ReKjHleFxUu9" }, "source": [ "Imagine searching for images, but instead of typing words, you use an actual image as the clue. You have a table with numbers about the cost of revenue for two years, and you want to find other images that look like it, from the same document or across multiple documents.\n", "\n", "Think of it like searching with a mini-map instead of a written address. It's a different way to ask, \"Show me more stuff like this\". So, instead of typing \"cost of revenue 2020 2021 table\", you show a picture of that table and say, \"Find me more like this\"\n", "\n", "For demonstration purposes, we will only be finding similar images that show the cost of revenue or similar values in a single document below. However, you can scale this design pattern to match (find relevant images) across multiple documents." 
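] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For image queries, `get_similar_image_from_query()` follows the same pattern: an embedding of the query image is compared against the `mm_embedding_from_img_only` vectors stored in the image metadata. The next cell is an illustrative sketch of how such a query-image embedding could be produced with the Vertex AI multimodal embedding model; the model name shown is an assumption, and the helper's actual implementation may differ." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only: embed a query image for similarity search.\n", "# The helper in intro_multimodal_rag_utils.py may implement this differently.\n", "from vertexai.vision_models import Image as VisionImage  # aliased to avoid clashing with generative_models.Image\n", "from vertexai.vision_models import MultiModalEmbeddingModel\n", "\n", "mm_embedding_model = MultiModalEmbeddingModel.from_pretrained(\"multimodalembedding@001\")\n", "\n", "# Embed the sample table image downloaded earlier (it is also used as the query image below)\n", "query_image = VisionImage.load_from_file(\"tac_table_revenue.png\")\n", "query_image_embedding = mm_embedding_model.get_embeddings(\n", "    image=query_image,\n", "    dimension=1408,  # same embedding size used for the stored metadata\n", ").image_embedding\n", "\n", "print(f\"Query image embedding length: {len(query_image_embedding)}\")"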
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "DJhhS5eZw7QI" }, "outputs": [], "source": [ "# You can find a similar image as per the images you have in the metadata.\n", "# In this case, you have a table (picked from the same document source) and you would like to find similar tables in the document.\n", "image_query_path = \"tac_table_revenue.png\"\n", "\n", "# Print a message indicating the input image\n", "print(\"***Input image from user:***\")\n", "\n", "# Display the input image\n", "Image.load_from_file(image_query_path)" ] }, { "cell_type": "markdown", "metadata": { "id": "3zBTtGChTmrd" }, "source": [ "You expect to find tables (as images) that are similar in terms of \"Other/Total cost of revenues.\"" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "id": "nZcU7vZC-8vr" }, "outputs": [], "source": [ "# Search for Similar Images Based on Input Image and Image Embedding\n", "\n", "matching_results_image = get_similar_image_from_query(\n", " text_metadata_df,\n", " image_metadata_df,\n", " query=query, # Use query text for additional filtering (optional)\n", " column_name=\"mm_embedding_from_img_only\", # Use image embedding for similarity calculation\n", " image_emb=True,\n", " image_query_path=image_query_path, # Use input image for similarity calculation\n", " top_n=3, # Retrieve top 3 matching images\n", " embedding_size=1408, # Use embedding size of 1408\n", ")\n", "\n", "print(\"\\n **** Result: ***** \\n\")\n", "\n", "# Display the Top Matching Image\n", "display(\n", " matching_results_image[0][\"image_object\"]\n", ") # Display the top matching image object (Pillow Image)" ] }, { "cell_type": "markdown", "metadata": { "id": "uhT17rke15XY" }, "source": [ "It did find a similar-looking image (table), which gives more detail about different revenue, expenses, income, and a few more details based on the given image. More importantly, both tables show numbers related to the \"cost of revenue.\"\n", "\n", "You can also print the citation to see what it has matched." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "id": "mksXQoezweg0" }, "outputs": [], "source": [ "# Display citation details for the top matching image\n", "print_text_to_image_citation(\n", " matching_results_image, print_top=True\n", ") # Print citation details for the top matching image" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "id": "VJWnhDJwI-uO" }, "outputs": [], "source": [ "# Check Other Matched Images (Optional)\n", "# You can access the other two matched images using:\n", "\n", "print(\"---------------Matched Images------------------\\n\")\n", "display_images(\n", " [\n", " matching_results_image[0][\"img_path\"],\n", " matching_results_image[1][\"img_path\"],\n", " ],\n", " resize_ratio=0.5,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "JvwZIgD84CNc" }, "source": [ "The ability to identify similar text and images based on user input, using Gemini and embeddings, forms a crucial foundation for development of multimodal RAG systems, which you explore in the next section." 
] }, { "cell_type": "markdown", "metadata": { "id": "lUnsv5Co6pJF" }, "source": [ "### Comparative reasoning" ] }, { "cell_type": "markdown", "metadata": { "id": "1AFbqHiz5vvo" }, "source": [ "Next, let's apply what you have done so far to doing comparative reasoning.\n", "\n", "For this example:\n", "\n", "Step 1: You will search all the images for a specific query\n", "\n", "Step 2: Send those images to Gemini to ask multiple questions, where it has to compare and provide you with answers." ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "id": "E6AHCSwojyX0" }, "outputs": [], "source": [ "matching_results_image_query_1 = get_similar_image_from_query(\n", " text_metadata_df,\n", " image_metadata_df,\n", " query=\"Show me all the graphs that shows Google Class A cumulative 5-year total return\",\n", " column_name=\"text_embedding_from_image_description\", # Use image description text embedding # mm_embedding_from_img_only text_embedding_from_image_description\n", " image_emb=False, # Use text embedding instead of image embedding\n", " top_n=3,\n", " embedding_size=1408,\n", ")" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "id": "3FRXk-n0rP71" }, "outputs": [], "source": [ "# Check Matched Images\n", "# You can access the other two matched images using:\n", "\n", "print(\"---------------Matched Images------------------\\n\")\n", "display_images(\n", " [\n", " matching_results_image_query_1[0][\"img_path\"],\n", " matching_results_image_query_1[1][\"img_path\"],\n", " ],\n", " resize_ratio=0.5,\n", ")" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "id": "fSR_JWkSC_7p" }, "outputs": [], "source": [ "prompt = f\"\"\" Instructions: Compare the images and the Gemini extracted text provided as Context: to answer Question:\n", "Make sure to think thoroughly before answering the question and put the necessary steps to arrive at the answer in bullet points for easy explainability.\n", "\n", "Context:\n", "Image_1: {matching_results_image_query_1[0][\"image_object\"]}\n", "gemini_extracted_text_1: {matching_results_image_query_1[0]['image_description']}\n", "Image_2: {matching_results_image_query_1[1][\"image_object\"]}\n", "gemini_extracted_text_2: {matching_results_image_query_1[2]['image_description']}\n", "\n", "Question:\n", " - Key findings of Class A share?\n", " - What are the critical differences between the graphs for Class A Share?\n", " - What are the key findings of Class A shares concerning the S&P 500?\n", " - Which index best matches Class A share performance closely where Google is not already a part? Explain the reasoning.\n", " - Identify key chart patterns in both graphs.\n", " - Which index best matches Class A share performance closely where Google is not already a part? Explain the reasoning.\n", "\"\"\"\n", "\n", "# Generate Gemini response with streaming output\n", "rich_Markdown(\n", " get_gemini_response(\n", " multimodal_model, # we are passing Gemini 2.0\n", " model_input=[prompt],\n", " stream=True,\n", " generation_config=GenerationConfig(temperature=1),\n", " )\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "7dhazjyNLSGT" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ Disclaimer: This is not a real investment advise and should not be taken seriously!! ⚠️</b>\n", "</div>" ] }, { "cell_type": "markdown", "metadata": { "id": "efJPPrzRhvIT" }, "source": [ "## Multimodal retrieval augmented generation (RAG)\n", "\n", "Let's bring everything together to implement multimodal RAG. 
You will use all the elements that you've explored in previous sections to implement the multimodal RAG. These are the steps:\n", "\n", "* **Step 1:** The user gives a query in text format where the expected information is available in the document and is embedded in images and text.\n", "* **Step 2:** Find all text chunks from the pages in the documents using a method similar to the one you explored in `Text Search`.\n", "* **Step 3:** Find all similar images from the pages based on the user query matched with `image_description` using a method identical to the one you explored in `Image Search`.\n", "* **Step 4:** Combine all similar text and images found in steps 2 and 3 as `context_text` and `context_images`.\n", "* **Step 5:** With the help of Gemini, we can pass the user query with text and image context found in steps 2 & 3. You can also add a specific instruction the model should remember while answering the user query.\n", "* **Step 6:** Gemini produces the answer, and you can print the citations to check all relevant text and images used to address the query." ] }, { "cell_type": "markdown", "metadata": { "id": "EI62Hzuw_0_b" }, "source": [ "### Step 1: User query" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "id": "XvTKFwOPHLQ_" }, "outputs": [], "source": [ "# this time we are not passing any images, but just a simple text query.\n", "\n", "query = \"\"\"Questions:\n", " - What are the critical difference between various graphs for Class A Share?\n", " - Which index best matches Class A share performance closely where Google is not already a part? Explain the reasoning.\n", " - Identify key chart patterns for Google Class A shares.\n", " - What is cost of revenues, operating expenses and net income for 2020. Do mention the percentage change\n", " - What was the effect of Covid in the 2020 financial year?\n", " - What are the total revenues for APAC and USA for 2021?\n", " - What is deferred income taxes?\n", " - How do you compute net income per share?\n", " - What drove percentage change in the consolidated revenue and cost of revenue for the year 2021 and was there any effect of Covid?\n", " - What is the cause of 41% increase in revenue from 2020 to 2021 and how much is dollar change?\n", " \"\"\"" ] }, { "cell_type": "markdown", "metadata": { "id": "UUqlkKUaYvZA" }, "source": [ "### Step 2: Get all relevant text chunks" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "id": "r65yBb5gR_NG" }, "outputs": [], "source": [ "# Retrieve relevant chunks of text based on the query\n", "matching_results_chunks_data = get_similar_text_from_query(\n", " query,\n", " text_metadata_df,\n", " column_name=\"text_embedding_chunk\",\n", " top_n=10,\n", " chunk_text=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "mIgXgVIpYzxj" }, "source": [ "### Step 3: Get all relevant images" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "id": "wzu5Gf4yR_J4" }, "outputs": [], "source": [ "# Get all relevant images based on user query\n", "matching_results_image_fromdescription_data = get_similar_image_from_query(\n", " text_metadata_df,\n", " image_metadata_df,\n", " query=query,\n", " column_name=\"text_embedding_from_image_description\",\n", " image_emb=False,\n", " top_n=10,\n", " embedding_size=1408,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "RhUpWlGAY2uG" }, "source": [ "### Step 4: Create context_text and context_images" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "id": "B_EEuuLCe6Y5" }, 
"outputs": [], "source": [ "# combine all the selected relevant text chunks\n", "context_text = []\n", "for key, value in matching_results_chunks_data.items():\n", " context_text.append(value[\"chunk_text\"])\n", "final_context_text = \"\\n\".join(context_text)\n", "\n", "# combine all the relevant images and their description generated by Gemini\n", "context_images = []\n", "for key, value in matching_results_image_fromdescription_data.items():\n", " context_images.extend(\n", " [\"Image: \", value[\"image_object\"], \"Caption: \", value[\"image_description\"]]\n", " )" ] }, { "cell_type": "markdown", "metadata": { "id": "rHrtodcBAEu9" }, "source": [ "### Step 5: Pass context to Gemini" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "id": "aZuhtJu7fW4n" }, "outputs": [], "source": [ "prompt = f\"\"\" Instructions: Compare the images and the text provided as Context: to answer multiple Question:\n", "Make sure to think thoroughly before answering the question and put the necessary steps to arrive at the answer in bullet points for easy explainability.\n", "If unsure, respond, \"Not enough context to answer\".\n", "\n", "Context:\n", " - Text Context:\n", " {final_context_text}\n", " - Image Context:\n", " {context_images}\n", "\n", "{query}\n", "\n", "Answer:\n", "\"\"\"\n", "\n", "# Generate Gemini response with streaming output\n", "rich_Markdown(\n", " get_gemini_response(\n", " multimodal_model,\n", " model_input=[prompt],\n", " stream=True,\n", " generation_config=GenerationConfig(temperature=1),\n", " )\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "L0FtXYl1fzKh" }, "source": [ "### Step 6: Print citations and references" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "id": "IYRLQ47or1I8" }, "outputs": [], "source": [ "print(\"---------------Matched Images------------------\\n\")\n", "display_images(\n", " [\n", " matching_results_image_fromdescription_data[0][\"img_path\"],\n", " matching_results_image_fromdescription_data[1][\"img_path\"],\n", " matching_results_image_fromdescription_data[2][\"img_path\"],\n", " matching_results_image_fromdescription_data[3][\"img_path\"],\n", " ],\n", " resize_ratio=0.5,\n", ")" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "id": "buwd_gp6HJ5K" }, "outputs": [], "source": [ "# Image citations. 
You can check how Gemini generated metadata helped in grounding the answer.\n", "\n", "print_text_to_image_citation(\n", " matching_results_image_fromdescription_data, print_top=False\n", ")" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "id": "06vYM4MOHJ1-" }, "outputs": [], "source": [ "# Text citations\n", "\n", "print_text_to_text_citation(\n", " matching_results_chunks_data,\n", " print_top=False,\n", " chunk_text=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "KwNrHCqbi3xi" }, "source": [ "## Conclusions" ] }, { "cell_type": "markdown", "metadata": { "id": "05jynhZnkgxn" }, "source": [ "Congratulations on making it through this multimodal RAG notebook!\n", "\n", "While multimodal RAG can be quite powerful, note that it can face some limitations:\n", "\n", "* **Data dependency:** Needs high-quality paired text and visuals.\n", "* **Computationally demanding:** Processing multimodal data is resource-intensive.\n", "* **Domain specific:** Models trained on general data may not shine in specialized fields like medicine.\n", "* **Black box:** Understanding how these models work can be tricky, hindering trust and adoption.\n", "\n", "\n", "Despite these challenges, multimodal RAG represents a significant step towards search and retrieval systems that can handle diverse, multimodal data." ] } ], "metadata": { "colab": { "name": "intro_multimodal_rag.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }