gemini/getting-started/intro_gemini_python.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ijGzTHJJUCPY" }, "outputs": [], "source": [ "# Copyright 2024 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "VEqbX8OhE8y9" }, "source": [ "# Getting Started with the Gemini API in Vertex AI & Python SDK\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fgetting-started%2Fintro_gemini_python.ipynb\">\n", " <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n", " </a>\n", " </td> \n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/getting-started/intro_gemini_python.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://goo.gle/40xOAcf\">\n", " <img width=\"32px\" src=\"https://cdn.qwiklabs.com/assets/gcp_cloud-e3a77215f0b8bfa9b3f611c0d2208c7e8708ed31.svg\" alt=\"Google Cloud logo\"><br> Open in Cloud Skills Boost\n", " </a>\n", " </td>\n", "</table>\n", "\n", "<div style=\"clear: both;\"></div>\n", "\n", "<b>Share to:</b>\n", "\n", "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n", "</a>\n", "\n", "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" 
src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n", "</a>\n", "\n", "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n", "</a>\n", "\n", "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n", "</a>\n", "\n", "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n", "</a> \n" ] }, { "cell_type": "markdown", "metadata": { "id": "f0cc0f48513b" }, "source": [ "| | |\n", "|-|-|\n", "|Author(s) | [Eric Dong](https://github.com/gericdong), [Polong Lin](https://github.com/polong-lin) |" ] }, { "cell_type": "markdown", "metadata": { "id": "CkHPv2myT2cx" }, "source": [ "## Overview\n", "\n", "**YouTube Video: Introduction to Gemini on Vertex AI**\n", "\n", "<a href=\"https://www.youtube.com/watch?v=YfiLUpNejpE&list=PLIivdWyY5sqJio2yeg1dlfILOUO2FoFRx\" target=\"_blank\">\n", " <img src=\"https://img.youtube.com/vi/YfiLUpNejpE/maxresdefault.jpg\" alt=\"Introduction to Gemini on Vertex AI\" width=\"500\">\n", "</a>\n", "\n", "### Gemini\n", "\n", "Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini models.\n", "\n", "### Gemini API in Vertex AI\n", "\n", "The Gemini API in Vertex AI provides a unified interface for interacting with Gemini models. 
You can interact with the Gemini API using the following methods:\n", "\n", "- Use [Vertex AI Studio](https://cloud.google.com/generative-ai-studio) for quick testing and command generation\n", "- Use cURL commands\n", "- Use the Vertex AI SDK\n", "\n", "This notebook focuses on using the **Vertex AI SDK for Python** to call the Gemini API in Vertex AI.\n", "\n", "For more information, see the [Generative AI on Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) documentation.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "DrkcqHrrwMAo" }, "source": [ "### Objectives\n", "\n", "In this tutorial, you will learn how to use the Gemini API in Vertex AI with the Vertex AI SDK for Python to interact with the Gemini 2.0 (`gemini-2.0-flash`) model.\n", "\n", "You will complete the following tasks:\n", "\n", "- Install the Vertex AI SDK for Python\n", "- Use the Gemini API in Vertex AI to interact with the Gemini 2.0 models\n", " - Generate text from text prompt\n", " - Explore various features and configuration options\n", " - Generate text from image(s) and text prompt\n", " - Generate text from video and text prompt\n" ] }, { "cell_type": "markdown", "metadata": { "id": "r11Gu7qNgx1p" }, "source": [ "## Getting Started\n" ] }, { "cell_type": "markdown", "metadata": { "id": "No17Cw5hgx12" }, "source": [ "### Install Vertex AI SDK for Python\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tFy3H3aPgx12" }, "outputs": [], "source": [ "%pip install --upgrade --user google-cloud-aiplatform" ] }, { "cell_type": "markdown", "metadata": { "id": "dmWOrTJ3gx13" }, "source": [ "### Authenticate your notebook environment (Colab only)\n", "\n", "If you are running this notebook on Google Colab, run the following cell to authenticate your environment. This step is not required if you are using [Vertex AI Workbench](https://cloud.google.com/vertex-ai-workbench).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NyKGtVQjgx13" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", " from google.colab import auth\n", "\n", " auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "DF4l8DTdWgPY" }, "source": [ "### Set Google Cloud project information and initialize Vertex AI SDK\n", "\n", "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n", "\n", "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Nqwi-5ufWp_B" }, "outputs": [], "source": [ "# Use the environment variable if the user doesn't provide Project ID.\n", "import os\n", "\n", "import vertexai\n", "\n", "PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n", "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n", " PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n", "\n", "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n", "\n", "vertexai.init(project=PROJECT_ID, location=LOCATION)" ] }, { "cell_type": "markdown", "metadata": { "id": "jXHfaVS66_01" }, "source": [ "### Import libraries\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lslYAvw37JGQ" }, "outputs": [], "source": [ "from vertexai.generative_models import (\n", " GenerationConfig,\n", " GenerativeModel,\n", " HarmBlockThreshold,\n", " HarmCategory,\n", " Image,\n", " Part,\n", " SafetySetting,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "4437b7608c8e" }, "source": [ "## Use the Gemini model\n", "\n", "The Gemini model is a foundation model that performs well at a variety of multimodal tasks such as visual understanding, classification, summarization, and creating content from image, audio and video. It's adept at processing visual and text inputs such as photographs, documents, infographics, and screenshots.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "BY1nfXrqRxVX" }, "source": [ "### Load the Gemini 2.0 Flash model\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2998506fe6d1" }, "outputs": [], "source": [ "model = GenerativeModel(\"gemini-2.0-flash\")" ] }, { "cell_type": "markdown", "metadata": { "id": "AIl7R_jBUsaC" }, "source": [ "### Generate text from text prompt\n", "\n", "Send a text prompt to the model using the `generate_content` method. The `generate_content` method can handle a wide variety of use cases, including multi-turn chat and multimodal input, depending on what the underlying model supports.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3d69a8ae37bc" }, "outputs": [], "source": [ "response = model.generate_content(\"Why is the sky blue?\")\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "fbb4e5e3a5c0" }, "source": [ "### Streaming\n", "\n", "By default, the model returns a response after completing the entire generation process. You can also stream the response as it is being generated, and the model will return chunks of the response as soon as they are generated." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eAo-UsfZECGF" }, "outputs": [], "source": [ "responses = model.generate_content(\"Why is the sky blue?\", stream=True)\n", "\n", "for response in responses:\n", " print(response.text, end=\"\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Us8idXnVyQ97" }, "source": [ "#### Try your own prompts\n", "\n", "- What are the biggest challenges facing the healthcare industry?\n", "- What are the latest developments in the automotive industry?\n", "- What are the biggest opportunities in retail industry?\n", "- (Try your own prompts!)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MmAZQW1GyQ97" }, "outputs": [], "source": [ "prompt = \"\"\"Create a numbered list of 10 items. 
, { "cell_type": "markdown", "metadata": { "id": "fbb4e5e3a5c0" }, "source": [ "### Streaming\n", "\n", "By default, the model returns a response after completing the entire generation process. You can also stream the response as it is being generated, and the model will return chunks of the response as soon as they are generated." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eAo-UsfZECGF" }, "outputs": [], "source": [ "responses = model.generate_content(\"Why is the sky blue?\", stream=True)\n", "\n", "for response in responses:\n", " print(response.text, end=\"\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Us8idXnVyQ97" }, "source": [ "#### Try your own prompts\n", "\n", "- What are the biggest challenges facing the healthcare industry?\n", "- What are the latest developments in the automotive industry?\n", "- What are the biggest opportunities in the retail industry?\n", "- (Try your own prompts!)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MmAZQW1GyQ97" }, "outputs": [], "source": [ "prompt = \"\"\"Create a numbered list of 10 items. Each item in the list should be a trend in the tech industry.\n", "\n", "Each trend should be less than 5 words.\"\"\" # try your own prompt\n", "\n", "response = model.generate_content(prompt)\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "tDK4XLdz3Oqv" }, "source": [ "#### Model parameters\n", "\n", "Every prompt you send to the model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. You can experiment with different model parameters to see how the results change.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "s_2ann-F3WTo" }, "outputs": [], "source": [ "generation_config = GenerationConfig(\n", " temperature=0.9,\n", " top_p=1.0,\n", " top_k=32,\n", " candidate_count=1,\n", " max_output_tokens=8192,\n", ")\n", "\n", "response = model.generate_content(\n", " \"Why is the sky blue?\",\n", " generation_config=generation_config,\n", ")\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "7bff84b3f1c3" }, "source": [ "### Safety filters\n", "\n", "The Gemini API provides safety filters that you can adjust across multiple filter categories to restrict or allow certain types of content. You can use these filters to adjust what's appropriate for your use case. See the [Configure safety filters](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters) page for details.\n", "\n", "When you make a request to Gemini, the content is analyzed and assigned a safety rating. You can inspect the safety ratings of the generated content by printing out the model responses, as in this example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6548f7974b26" }, "outputs": [], "source": [ "response = model.generate_content(\"Why is the sky blue?\")\n", "\n", "print(f\"Safety ratings:\\n{response.candidates[0].safety_ratings}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "3fe5bb6d26c8" }, "source": [ "In Gemini 2.0, the safety settings are `OFF` by default and the default block thresholds are `BLOCK_NONE`.\n", "\n", "You can use `safety_settings` to adjust the safety settings for each request you make to the API. This example demonstrates how to set the block threshold to `BLOCK_ONLY_HIGH` for the dangerous content category:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4c055f9f41a5" }, "outputs": [], "source": [ "safety_settings = [\n", " SafetySetting(\n", " category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,\n", " threshold=HarmBlockThreshold.BLOCK_ONLY_HIGH,\n", " ),\n", "]\n", "\n", "prompt = \"\"\"\n", " Write a list of 2 disrespectful things that I might say to the universe after stubbing my toe in the dark.\n", "\"\"\"\n", "\n", "response = model.generate_content(\n", " prompt,\n", " safety_settings=safety_settings,\n", ")\n", "\n", "print(response)" ] }
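, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want the same thresholds applied to every request, you can also pass `safety_settings` when constructing the model instead of on each call. A minimal sketch, reusing the `safety_settings` list and `prompt` from the previous cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Apply the same safety settings to every request by attaching them\n", "# to the model itself (sketch; reuses `safety_settings` from above).\n", "model_with_safety = GenerativeModel(\n", " \"gemini-2.0-flash\",\n", " safety_settings=safety_settings,\n", ")\n", "\n", "response = model_with_safety.generate_content(prompt)\n", "\n", "print(response)" ] }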
, { "cell_type": "markdown", "metadata": { "id": "ga0xM9z9fAnR" }, "source": [ "### Test chat prompts\n", "\n", "The Gemini API supports natural multi-turn conversations and is ideal for text tasks that require back-and-forth interactions. The following examples show how the model responds during a multi-turn conversation.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SFbGVflTfBBk" }, "outputs": [], "source": [ "chat = model.start_chat()\n", "\n", "prompt = \"\"\"My name is Ned. You are my personal assistant. My favorite movies are The Lord of the Rings and The Hobbit.\n", "\n", "Suggest another movie I might like.\n", "\"\"\"\n", "\n", "response = chat.send_message(prompt)\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "ZP_z_Oh1J4nk" }, "source": [ "This follow-up prompt shows how the model responds based on the previous prompt:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OCq7JNBKJrI8" }, "outputs": [], "source": [ "prompt = \"Are my favorite movies based on a book series?\"\n", "\n", "response = chat.send_message(prompt)\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "be91004881d3" }, "source": [ "You can also view the chat history:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "930078f7a403" }, "outputs": [], "source": [ "print(chat.history)" ] }
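, { "cell_type": "markdown", "metadata": {}, "source": [ "Chat replies can be streamed as well. A minimal sketch, continuing the `chat` session from above; `send_message` accepts the same `stream=True` flag as `generate_content`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Stream a chat reply chunk by chunk (sketch; continues the chat\n", "# session started above).\n", "responses = chat.send_message(\n", " \"Which of those movies is the shortest?\", stream=True\n", ")\n", "\n", "for chunk in responses:\n", " print(chunk.text, end=\"\")" ] }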
, { "cell_type": "markdown", "metadata": { "id": "OK6TsnYghrQk" }, "source": [ "## Generate text from multimodal prompt\n", "\n", "Gemini is a multimodal model that supports multimodal prompts. You can include text, image(s), and video in your prompt requests and get text or code responses.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "lwvfMDEDVyKI" }, "source": [ "### Define helper functions\n", "\n", "Define helper functions to load and display images.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NQS13DI6Pjp6" }, "outputs": [], "source": [ "import http.client\n", "import typing\n", "import urllib.request\n", "\n", "import IPython.display\n", "from PIL import Image as PIL_Image\n", "from PIL import ImageOps as PIL_ImageOps\n", "\n", "\n", "def display_images(\n", " images: typing.Iterable[Image],\n", " max_width: int = 600,\n", " max_height: int = 350,\n", ") -> None:\n", " for image in images:\n", " pil_image = typing.cast(PIL_Image.Image, image._pil_image)\n", " if pil_image.mode != \"RGB\":\n", " # RGB is supported by all Jupyter environments (e.g. RGBA is not yet)\n", " pil_image = pil_image.convert(\"RGB\")\n", " image_width, image_height = pil_image.size\n", " if max_width < image_width or max_height < image_height:\n", " # Resize to display a smaller notebook image\n", " pil_image = PIL_ImageOps.contain(pil_image, (max_width, max_height))\n", " IPython.display.display(pil_image)\n", "\n", "\n", "def get_image_bytes_from_url(image_url: str) -> bytes:\n", " with urllib.request.urlopen(image_url) as response:\n", " response = typing.cast(http.client.HTTPResponse, response)\n", " image_bytes = response.read()\n", " return image_bytes\n", "\n", "\n", "def load_image_from_url(image_url: str) -> Image:\n", " image_bytes = get_image_bytes_from_url(image_url)\n", " return Image.from_bytes(image_bytes)\n", "\n", "\n", "def get_url_from_gcs(gcs_uri: str) -> str:\n", " # Convert a GCS URI to a public URL for image display.\n", " url = \"https://storage.googleapis.com/\" + gcs_uri.replace(\"gs://\", \"\").replace(\n", " \" \", \"%20\"\n", " )\n", " return url\n", "\n", "\n", "def print_multimodal_prompt(contents: list):\n", " \"\"\"\n", " Given contents that would be sent to Gemini,\n", " output the full multimodal prompt for ease of readability.\n", " \"\"\"\n", " for content in contents:\n", " if isinstance(content, Image):\n", " display_images([content])\n", " elif isinstance(content, Part):\n", " url = get_url_from_gcs(content.file_data.file_uri)\n", " IPython.display.display(load_image_from_url(url))\n", " else:\n", " print(content)" ] }, { "cell_type": "markdown", "metadata": { "id": "Wy75sLb-yjNn" }, "source": [ "### Generate text from local image and text\n", "\n", "Use the `Image.load_from_file` method to load a local image file and generate text from it.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "KzqjpEiryjNo" }, "outputs": [], "source": [ "# Download an image from Google Cloud Storage\n", "! gsutil cp \"gs://cloud-samples-data/generative-ai/image/320px-Felis_catus-cat_on_snow.jpg\" ./image.jpg\n", "\n", "# Load from local file\n", "image = Image.load_from_file(\"image.jpg\")\n", "\n", "# Prepare contents\n", "prompt = \"Describe this image.\"\n", "contents = [image, prompt]\n", "\n", "response = model.generate_content(contents)\n", "\n", "print(\"-------Prompt--------\")\n", "print_multimodal_prompt(contents)\n", "\n", "print(\"\\n-------Response--------\")\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "a7d3c02be3a5" }, "source": [ "### Generate text from text & image(s)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "GJvME8gV2nyk" }, "source": [ "#### Images with Cloud Storage URIs\n", "\n", "If your images are stored in [Cloud Storage](https://cloud.google.com/storage/docs), you can specify the Cloud Storage URI of the image to include in the prompt. You must also specify the `mime_type` field. The supported MIME types for images include `image/png` and `image/jpeg`.\n", "\n", "Note that the URI (not to be confused with a URL) for a Cloud Storage object should always start with `gs://`."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dc86b963e8a8" }, "outputs": [], "source": [ "# Load image from Cloud Storage URI\n", "gcs_uri = \"gs://cloud-samples-data/generative-ai/image/boats.jpeg\"\n", "\n", "# Prepare contents\n", "image = Part.from_uri(gcs_uri, mime_type=\"image/jpeg\")\n", "prompt = \"Describe the scene?\"\n", "contents = [image, prompt]\n", "\n", "response = model.generate_content(contents)\n", "\n", "print(\"-------Prompt--------\")\n", "print_multimodal_prompt(contents)\n", "\n", "print(\"\\n-------Response--------\")\n", "print(response.text, end=\"\")" ] }, { "cell_type": "markdown", "metadata": { "id": "9aab8e304ed4" }, "source": [ "#### Images with direct links\n", "\n", "You can also use direct links to images, as shown below. The helper function `load_image_from_url()` (that was declared earlier) converts the image to bytes and returns it as an Image object that can be then be sent to the Gemini model with the text prompt." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fb42e6dd96f3" }, "outputs": [], "source": [ "# Load image from Cloud Storage URI\n", "image_url = (\n", " \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/boats.jpeg\"\n", ")\n", "image = load_image_from_url(image_url) # convert to bytes\n", "\n", "# Prepare contents\n", "prompt = \"Describe the scene?\"\n", "contents = [image, prompt]\n", "\n", "response = model.generate_content(contents)\n", "\n", "print(\"-------Prompt--------\")\n", "print_multimodal_prompt(contents)\n", "\n", "print(\"\\n-------Response--------\")\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "09b990e7f130" }, "source": [ "#### Combining multiple images and text prompts for few-shot prompting" ] }, { "cell_type": "markdown", "metadata": { "id": "15793d11580e" }, "source": [ "You can send more than one image at a time, and also place your images anywhere alongside your text prompt.\n", "\n", "In the example below, few-shot prompting is performed to have the Gemini model return the city and landmark in a specific JSON format." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VfU7Qlz1hAEA" }, "outputs": [], "source": [ "# Load images from Cloud Storage URI\n", "image1_url = \"https://storage.googleapis.com/github-repo/img/gemini/intro/landmark1.jpg\"\n", "image2_url = \"https://storage.googleapis.com/github-repo/img/gemini/intro/landmark2.jpg\"\n", "image3_url = \"https://storage.googleapis.com/github-repo/img/gemini/intro/landmark3.jpg\"\n", "image1 = load_image_from_url(image1_url)\n", "image2 = load_image_from_url(image2_url)\n", "image3 = load_image_from_url(image3_url)\n", "\n", "# Prepare prompts\n", "prompt1 = \"\"\"{\"city\": \"London\", \"Landmark:\", \"Big Ben\"}\"\"\"\n", "prompt2 = \"\"\"{\"city\": \"Paris\", \"Landmark:\", \"Eiffel Tower\"}\"\"\"\n", "\n", "# Prepare contents\n", "contents = [image1, prompt1, image2, prompt2, image3]\n", "\n", "responses = model.generate_content(contents)\n", "\n", "print(\"-------Prompt--------\")\n", "print_multimodal_prompt(contents)\n", "\n", "print(\"\\n-------Response--------\")\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "vyjpi1fB7mgj" }, "source": [ "### Generate text from a video file\n", "\n", "Specify the Cloud Storage URI of the video to include in the prompt. The bucket that stores the file must be in the same Google Cloud project that's sending the request. 
You must also specify the `mime_type` field. Supported MIME types for video include `video/mp4`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "82165d4ed019" }, "outputs": [], "source": [ "file_path = \"github-repo/img/gemini/multimodality_usecases_overview/pixel8.mp4\"\n", "video_uri = f\"gs://{file_path}\"\n", "video_url = f\"https://storage.googleapis.com/{file_path}\"\n", "\n", "IPython.display.Video(video_url, width=450)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VXX1jLXq7ojB" }, "outputs": [], "source": [ "prompt = \"\"\"\n", "Answer the following questions using the video only:\n", "What is the profession of the main person?\n", "What are the main features of the phone highlighted?\n", "Which city was this recorded in?\n", "Provide the answer in JSON.\n", "\"\"\"\n", "\n", "video = Part.from_uri(video_uri, mime_type=\"video/mp4\")\n", "contents = [prompt, video]\n", "\n", "response = model.generate_content(contents)\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "5cdf391f2067" }, "source": [ "### Direct analysis of publicly available web media\n", "\n", "This feature enables you to process publicly available URL resources, including images, text, video, and audio, directly with Gemini. It supports all currently [supported modalities and file formats](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#blob).\n", "\n", "In this example, you add the file URL of a publicly available image file to the request to identify what's in the image." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "d778a7e12b56" }, "outputs": [], "source": [ "prompt = \"\"\"\n", "Extract the objects in the given image and output them in a list in alphabetical order.\n", "\"\"\"\n", "\n", "image_file = Part.from_uri(\n", " \"https://storage.googleapis.com/cloud-samples-data/generative-ai/image/office-desk.jpeg\",\n", " \"image/jpeg\",\n", ")\n", "\n", "response = model.generate_content([image_file, prompt])\n", "\n", "print(response.text)" ] }, { "cell_type": "markdown", "metadata": { "id": "68186ce494bc" }, "source": [ "This example demonstrates how to add the file URL of a publicly available video file to the request, and use the [controlled generation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output) capability to constrain the model output to a structured format." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "75d61049f1cb" }, "outputs": [], "source": [ "response_schema = {\n", " \"type\": \"ARRAY\",\n", " \"items\": {\n", " \"type\": \"OBJECT\",\n", " \"properties\": {\n", " \"timecode\": {\n", " \"type\": \"STRING\",\n", " },\n", " \"chapter_summary\": {\n", " \"type\": \"STRING\",\n", " },\n", " },\n", " \"required\": [\"timecode\", \"chapter_summary\"],\n", " },\n", "}\n", "\n", "prompt = \"\"\"\n", "Chapterize this video content by grouping the video content into chapters and providing a brief summary for each chapter.\n", "Please only capture key events and highlights. If you are not sure about any info, please do not make it up.
\n", "\"\"\"\n", "\n", "video_file = Part.from_uri(\n", " \"https://storage.googleapis.com/cloud-samples-data/generative-ai/video/rio_de_janeiro_beyond_the_map_rio.mp4\",\n", " \"video/mp4\",\n", ")\n", "\n", "response = model.generate_content(\n", " contents=[video_file, prompt],\n", " generation_config=GenerationConfig(\n", " response_mime_type=\"application/json\",\n", " response_schema=response_schema,\n", " ),\n", ")\n", "\n", "print(response.text)" ] } ], "metadata": { "colab": { "name": "intro_gemini_python.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }