{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "OdZIyZwjgsQcOXnmE8X0xy40"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VJWDivOv3OWy"
},
"source": [
"# Vertex AI Model Garden - PaliGemma (Deployment)\n",
"\n",
"<table><tbody><tr>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/instances\">\n",
" <img alt=\"Workbench logo\" src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" width=\"32px\"><br> Run in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_jax_paligemma_deployment.ipynb\">\n",
" <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_jax_paligemma_deployment.ipynb\">\n",
" <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</tr></tbody></table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iOmVD9tZXucQ"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates deploying PaliGemma to a Vertex AI Endpoint and making online predictions for tasks listed below. The notebook also demonstrates creating a shareable link to a web interface that allows querying with the deployed PaliGemma model using [Gradio](https://www.gradio.app/).\n",
"\n",
"\n",
"### Objective\n",
"\n",
"- Deploy PaliGemma to a Vertex AI Endpoint.\n",
"- Make predictions to the endpoint including:\n",
" - Answering questions about a given image.\n",
" - Captioning images.\n",
" - Extracting texts.\n",
" - Detecting objects.\n",
"- Create a playground website to use with the PaliGemma Vertex AI Endpoint.\n",
"\n",
"### File a bug\n",
"\n",
"File a bug on [GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/issues/new) if you encounter any issue with the notebook.\n",
"\n",
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2aFHbs1g6Wc-"
},
"source": [
"## Before you begin"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "QvQjsmIJ6Y3f"
},
"outputs": [],
"source": [
"# @title Setup Google Cloud project\n",
"# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"# @markdown 2. **[Optional]** Set region. If not set, the region will be set automatically according to Colab Enterprise environment.\n",
"\n",
"REGION = \"\" # @param {type:\"string\"}\n",
"\n",
"# @markdown 3. If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have associated quota in selected regions. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus). You can request for quota following the instructions at [\"Request a higher quota\"](https://cloud.google.com/docs/quota/view-manage#requesting_higher_quota).\n",
"\n",
"# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n",
"# @markdown | ----------- | ----------- | ----------- |\n",
"# @markdown | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1, us-east4 |\n",
"# @markdown | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, europe-west4, us-west1, asia-southeast1 |\n",
"\n",
"# Upgrade Vertex AI SDK.\n",
"! pip3 install --upgrade --quiet 'google-cloud-aiplatform>=1.84.0'\n",
"\n",
"# Import the necessary packages\n",
"! pip install -q gradio==4.21.0\n",
"import enum\n",
"import importlib\n",
"import io\n",
"import os\n",
"import re\n",
"from typing import Sequence, Tuple\n",
"\n",
"import gradio as gr\n",
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from google.cloud import aiplatform\n",
"from PIL import Image\n",
"\n",
"# Upgrade Vertex AI SDK.\n",
"if os.environ.get(\"VERTEX_PRODUCT\") != \"COLAB_ENTERPRISE\":\n",
" ! pip install --upgrade tensorflow\n",
"! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"common_util = importlib.import_module(\n",
" \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n",
")\n",
"\n",
"LABEL = \"endpoint\"\n",
"models, endpoints = {}, {}\n",
"\n",
"\n",
"# Get the default cloud project id.\n",
"PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n",
"\n",
"# Get the default region for launching jobs.\n",
"if not REGION:\n",
" REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n",
"\n",
"# Initialize Vertex AI API.\n",
"print(\"Initializing Vertex AI API.\")\n",
"aiplatform.init(project=PROJECT_ID, location=REGION)\n",
"\n",
"! gcloud config set project $PROJECT_ID\n",
"\n",
"import vertexai\n",
"\n",
"vertexai.init(\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
")\n",
"\n",
"# @markdown ### Access PaliGemma models on Vertex AI for GPU based serving\n",
"# @markdown Accept the model agreement to access the models:\n",
"# @markdown 1. Open the [PaliGemma model card](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) from [Vertex AI Model Garden](https://cloud.google.com/model-garden).\n",
"# @markdown 1. Review and accept the agreement in the pop-up window on the model card page. If you have previously accepted the model agreement, there will not be a pop-up window on the model card page and this step is not needed.\n",
"# @markdown 1. After accepting the agreement of PaliGemma, a `gs://` URI containing PaliGemma pretrained models will be shared.\n",
"# @markdown 1. Paste the link in the `VERTEX_AI_MODEL_GARDEN_PALIGEMMA` field below.\n",
"# @markdown 1. The PaliGemma models will be copied into `BUCKET_URI`.\n",
"# @markdown The file transfer can take anywhere from 15 minutes to 30 minutes.\n",
"VERTEX_AI_MODEL_GARDEN_PALIGEMMA = \"gs://\" # @param {type:\"string\", isTemplate:true}\n",
"assert (\n",
" VERTEX_AI_MODEL_GARDEN_PALIGEMMA\n",
"), \"Click the agreement of PaliGemma in Vertex AI Model Garden, and get the GCS path of PaliGemma model artifacts.\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kyMJXkfviWgl"
},
"source": [
"## Deploy PaliGemma to a Vertex AI Endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "JThvioAxy8-a"
},
"outputs": [],
"source": [
"# @title Select the model variants\n",
"\n",
"pretrained_filename_lookup = {\n",
" \"paligemma-224-float32\": \"pt_224.npz\",\n",
" \"paligemma-224-float16\": \"pt_224.f16.npz\",\n",
" \"paligemma-224-bfloat16\": \"pt_224.bf16.npz\",\n",
" \"paligemma-448-float32\": \"pt_448.npz\",\n",
" \"paligemma-448-float16\": \"pt_448.f16.npz\",\n",
" \"paligemma-448-bfloat16\": \"pt_448.bf16.npz\",\n",
" \"paligemma-896-float32\": \"pt_896.npz\",\n",
" \"paligemma-896-float16\": \"pt_896.f16.npz\",\n",
" \"paligemma-896-bfloat16\": \"pt_896.bf16.npz\",\n",
" \"paligemma-mix-224-float32\": \"mix_224.npz\",\n",
" \"paligemma-mix-224-float16\": \"mix_224.f16.npz\",\n",
" \"paligemma-mix-224-bfloat16\": \"mix_224.bf16.npz\",\n",
" \"paligemma-mix-448-float32\": \"mix_448.npz\",\n",
" \"paligemma-mix-448-float16\": \"mix_448.f16.npz\",\n",
" \"paligemma-mix-448-bfloat16\": \"mix_448.bf16.npz\",\n",
"}\n",
"\n",
"# @markdown Select the desired resolution and precision of prebuilt model to deploy, leaving the optional `custom_paligemma_model_uri` as is. Higher resolution and precision_type can result in better inference results, but may require additional GPU.\n",
"\n",
"# @markdown You can also serve a finetuned PaliGemma model by setting `resolution` and `precision_type` to the resolution and precision type of the original base model and then setting `custom_paligemma_model_uri` to the GCS URI containing the model.\n",
"\n",
"# @markdown **Note**: You cannot use accelerator type `NVIDIA_TESLA_V100` to serve prebuilt or finetuned PaliGemma models with resolution `896` and precision_type `float32`.\n",
"\n",
"model_variant = \"mix\" # @param [\"mix\", \"pt\"]\n",
"resolution = 224 # @param [224, 448, 896]\n",
"precision_type = \"float32\" # @param [\"float32\", \"float16\", \"bfloat16\"]\n",
"custom_paligemma_model_uri = \"gs://\" # @param {type: \"string\"}\n",
"\n",
"if model_variant == \"mix\":\n",
" model_name_prefix = \"paligemma-mix\"\n",
"else:\n",
" model_name_prefix = \"paligemma\"\n",
"\n",
"\n",
"if custom_paligemma_model_uri == \"gs://\" or not custom_paligemma_model_uri:\n",
" model_name = f\"{model_name_prefix}-{resolution}-{precision_type}\"\n",
" checkpoint_filename = pretrained_filename_lookup[model_name]\n",
" checkpoint_path = os.path.join(\n",
" VERTEX_AI_MODEL_GARDEN_PALIGEMMA, checkpoint_filename\n",
" )\n",
" PUBLISHER_MODEL_NAME = f\"publishers/google/models/paligemma@{model_name}\"\n",
"else:\n",
" model_name = f\"{model_name_prefix}-{resolution}-{precision_type}-custom\"\n",
" checkpoint_path = custom_paligemma_model_uri\n",
"\n",
"# @markdown If you want to use other accelerator types not listed below, then check other Vertex AI prediction supported accelerators and regions at https://cloud.google.com/vertex-ai/docs/predictions/configure-compute. You may need to manually set the `machine_type`, `accelerator_type`, and `accelerator_count` in the code by clicking `Show code` first.\n",
"# @markdown Select the accelerator type to use to deploy the model:\n",
"accelerator_type = \"NVIDIA_L4\" # @param [\"NVIDIA_L4\", \"NVIDIA_TESLA_V100\"]\n",
"if accelerator_type == \"NVIDIA_L4\":\n",
" machine_type = \"g2-standard-16\"\n",
" accelerator_count = 1\n",
"elif accelerator_type == \"NVIDIA_TESLA_V100\":\n",
" if resolution == 896 and precision_type == \"float32\":\n",
" raise ValueError(\n",
" \"NVIDIA_TESLA_V100 is not sufficient. Multi-gpu is not supported for PaLIGemma.\"\n",
" )\n",
" else:\n",
" machine_type = \"n1-highmem-8\"\n",
" accelerator_count = 1\n",
"else:\n",
" raise ValueError(\n",
" f\"Recommended machine settings not found for: {accelerator_type}. To use another another accelerator, edit this code block to pass in an appropriate `machine_type`, `accelerator_type`, and `accelerator_count` to the deploy_model function by clicking `Show Code` and then modifying the code.\"\n",
" )\n",
"\n",
"# @markdown Set use_dedicated_endpoint to False if you don't want to use [dedicated endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#create-dedicated-endpoint). Note that [dedicated endpoint does not support VPC Service Controls](https://cloud.google.com/vertex-ai/docs/predictions/choose-endpoint-type), uncheck the box if you are using VPC-SC.\n",
"use_dedicated_endpoint = True # @param {type:\"boolean\"}\n",
"\n",
"common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" is_for_training=False,\n",
")\n",
"\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "144IKkHrzrMs"
},
"outputs": [],
"source": [
"# @title [Option 1] Deploy with Model Garden SDK\n",
"\n",
"# @markdown Kindly note that the deployment using custom_paligemma_model_uri is not supported.\n",
"\n",
"# @markdown Deploy with Gen AI model-centric SDK. This section uploads the prebuilt model to Model Registry and deploys it to a Vertex AI Endpoint. It takes 15 minutes to 1 hour to finish depending on the size of the model. See [use open models with Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-open-models) for documentation on other use cases.\n",
"from vertexai.preview import model_garden\n",
"\n",
"model = model_garden.OpenModel(PUBLISHER_MODEL_NAME)\n",
"endpoints[LABEL] = model.deploy(\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
" accept_eula=True, # Accept the End User License Agreement (EULA) on the model card before deploy. Otherwise, the deployment will be forbidden.\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "toY-WPKDFesF"
},
"outputs": [],
"source": [
"# @title [Option 2] Deploy with custom configs\n",
"\n",
"# @markdown This section uploads the prebuilt PaliGemma model to Model Registry and deploys it to a Vertex AI Endpoint. It takes approximately 15 minutes to finish.\n",
"\n",
"# The pre-built serving docker image.\n",
"SERVE_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/jax-paligemma-serve-gpu:20240807_0916_RC00\"\n",
"\n",
"\n",
"def deploy_model(\n",
" model_name: str,\n",
" checkpoint_path: str,\n",
" machine_type: str = \"g2-standard-32\",\n",
" accelerator_type: str = \"NVIDIA_L4\",\n",
" accelerator_count: int = 1,\n",
" resolution: int = 224,\n",
" use_dedicated_endpoint: bool = False,\n",
") -> Tuple[aiplatform.Model, aiplatform.Endpoint]:\n",
" \"\"\"Create a Vertex AI Endpoint and deploy the specified model to the endpoint.\"\"\"\n",
" model_name_with_time = common_util.get_job_name_with_datetime(model_name)\n",
" endpoint = aiplatform.Endpoint.create(\n",
" display_name=f\"{model_name_with_time}-endpoint\",\n",
" dedicated_endpoint_enabled=use_dedicated_endpoint,\n",
" )\n",
" model = aiplatform.Model.upload(\n",
" display_name=model_name_with_time,\n",
" serving_container_image_uri=SERVE_DOCKER_URI,\n",
" serving_container_ports=[8080],\n",
" serving_container_predict_route=\"/predict\",\n",
" serving_container_health_route=\"/health\",\n",
" serving_container_environment_variables={\n",
" \"CKPT_PATH\": checkpoint_path,\n",
" \"RESOLUTION\": resolution,\n",
" \"MODEL_ID\": \"google/\" + model_name,\n",
" \"DEPLOY_SOURCE\": \"notebook\",\n",
" },\n",
" model_garden_source_model_name=\"publishers/google/models/paligemma\",\n",
" )\n",
" print(\n",
" f\"Deploying {model_name_with_time} on {machine_type} with {accelerator_count} {accelerator_type} GPU(s).\"\n",
" )\n",
" model.deploy(\n",
" endpoint=endpoint,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" deploy_request_timeout=1800,\n",
" enable_access_logging=True,\n",
" min_replica_count=1,\n",
" sync=True,\n",
" system_labels={\n",
" \"NOTEBOOK_NAME\": \"model_garden_jax_paligemma_deployment.ipynb\",\n",
" \"NOTEBOOK_ENVIRONMENT\": common_util.get_deploy_source(),\n",
" },\n",
" )\n",
" return model, endpoint\n",
"\n",
"\n",
"models[LABEL], endpoints[LABEL] = deploy_model(\n",
" model_name=model_name,\n",
" checkpoint_path=checkpoint_path,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" resolution=resolution,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "tOtYOhZa3lsx"
},
"outputs": [],
"source": [
"# @title [Optional] Loading an existing Endpoint\n",
"# @markdown If you've already deployed an Endpoint, you can load it by filling in the Endpoint's ID below.\n",
"# @markdown You can view deployed Endpoints at [Vertex Online Prediction](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints).\n",
"endpoint_id = \"\" # @param {type: \"string\"}\n",
"\n",
"if endpoint_id:\n",
" endpoints[LABEL] = aiplatform.Endpoint(\n",
" endpoint_name=endpoint_id,\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MlP2Y7XE4SS5"
},
"source": [
"### Predict\n",
"\n",
"The following sections will use images from [pexels.com](https://www.pexels.com/) for demoing purposes. All the images have the following license: https://www.pexels.com/license/.\n",
"\n",
"Images will be resized to a width of 1000 pixels by default since requests made to a Vertex Endpoint are limited to 1.500MB."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "xnZw8wNyQhmN"
},
"outputs": [],
"source": [
"# @title Visual Question Answering\n",
"\n",
"# @markdown This section uses the deployed PaliGemma model to answer questions about a given image.\n",
"\n",
"# @markdown Once deployment succeeds, you can send requests to the endpoint with images and questions.\n",
"# @markdown \n",
"image_url = \"https://images.pexels.com/photos/4012966/pexels-photo-4012966.jpeg\" # @param {type:\"string\"}\n",
"\n",
"image = common_util.download_image(image_url)\n",
"display(image)\n",
"\n",
"# @markdown You may leave question prompts empty and they will be ignored.\n",
"question_prompt_1 = \"Which of laptop, book, pencil, clock, flower are in the image?\" # @param {type: \"string\"}\n",
"question_prompt_2 = \"Do the book and the cup have the same color?\" # @param {type: \"string\"}\n",
"question_prompt_3 = \"Is there a person in the image?\" # @param {type: \"string\"}\n",
"question_prompt_4 = \"How many laptop are in the image?\" # @param {type: \"string\"}\n",
"question_prompt_5 = \"桌子是什么颜色的?\" # @param {type: \"string\"}\n",
"\n",
"# @markdown The question prompt can be non-English languages.\n",
"questions_list = [\n",
" question_prompt_1,\n",
" question_prompt_2,\n",
" question_prompt_3,\n",
" question_prompt_4,\n",
" question_prompt_5,\n",
"]\n",
"questions = [question for question in questions_list if question]\n",
"\n",
"answers = common_util.vqa_predict(\n",
" endpoints[\"endpoint\"],\n",
" questions,\n",
" image,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
")\n",
"\n",
"for question, answer in zip(questions, answers):\n",
" print(f\"Question: {question}\")\n",
" print(f\"Answer: {answer}\")\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "mF1MxC1ouzqj"
},
"outputs": [],
"source": [
"# @title Image Captioning\n",
"# @markdown This section uses the deployed PaliGemma model to caption and describe an image in a chosen language.\n",
"\n",
"caption_prompt = True\n",
"\n",
"# @markdown <img src=\"https://storage.googleapis.com/longcap100/91.jpeg\" width=\"400\" >\n",
"\n",
"image_url = \"https://storage.googleapis.com/longcap100/91.jpeg\" # @param {type:\"string\"}\n",
"\n",
"language_code = \"en\" # @param {type: \"string\"}\n",
"\n",
"image = common_util.download_image(image_url)\n",
"display(image)\n",
"\n",
"# Make a prediction.\n",
"image_base64 = common_util.image_to_base64(image)\n",
"\n",
"caption = common_util.caption_predict(\n",
" endpoints[\"endpoint\"],\n",
" language_code,\n",
" image,\n",
" caption_prompt,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
")\n",
"\n",
"print(\"Caption: \", caption)\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "TtkXMZTIegLq"
},
"outputs": [],
"source": [
"# @title OCR\n",
"# @markdown This section uses the deployed PaliGemma model to extract text from an image, starting from the top left.\n",
"ocr_prompt = \"ocr\"\n",
"\n",
"# @markdown \n",
"image_url = \"https://images.pexels.com/photos/8919535/pexels-photo-8919535.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=2\" # @param {type:\"string\"}\n",
"\n",
"image = common_util.download_image(image_url)\n",
"display(image)\n",
"text_found = common_util.ocr_predict(\n",
" endpoints[\"endpoint\"],\n",
" ocr_prompt,\n",
" image,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
")\n",
"\n",
"print(f\"Text found: {text_found}\")\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "JlLr3nu-YEon"
},
"outputs": [],
"source": [
"# @title Object Detection\n",
"# @markdown This section uses the deployed PaliGemma model to output bounding boxes for specified object image in a given image.\n",
"# @markdown The text output will be parsed into bounding boxes and overlaid on the original image.\n",
"\n",
"# @markdown Specify what object to detect. To specify multiple objects, enter them as a semicolon separated list as shown below.\n",
"objects = \"plant ; pineapple ; glasses\" # @param {type:\"string\"}\n",
"detect_promt = f\"detect {objects}\"\n",
"\n",
"\n",
"def parse_detections(txt):\n",
" \"\"\"Parses bounding boxes from a detection string.\"\"\"\n",
" bboxes = []\n",
" for loc_text in txt.split(\" ; \"):\n",
" m = re.match(\n",
" r\"<loc(?P<y0>\\d\\d\\d\\d)><loc(?P<x0>\\d\\d\\d\\d)><loc(?P<y1>\\d\\d\\d\\d)><loc(?P<x1>\\d\\d\\d\\d)>.*\",\n",
" loc_text,\n",
" )\n",
" if m is not None:\n",
" d = m.groupdict()\n",
" else:\n",
" raise ValueError(f\"{txt} is not a value detection string.\")\n",
"\n",
" def fmt_box(x):\n",
" return float(x) / 1024.0\n",
"\n",
" box = np.array(\n",
" [fmt_box(d[\"y0\"]), fmt_box(d[\"x0\"]), fmt_box(d[\"y1\"]), fmt_box(d[\"x1\"])]\n",
" )\n",
" bboxes.append(box)\n",
" return bboxes\n",
"\n",
"\n",
"def plot_bounding_boxes(im: Image.Image, bboxes: Sequence[np.ndarray]) -> Image.Image:\n",
" fig, ax = plt.subplots(figsize=(5, 5))\n",
" ax.imshow(im, zorder=-1)\n",
" ax.set_xlim(*ax.get_xlim())\n",
" ax.set_ylim(*ax.get_ylim())\n",
"\n",
" for y0, x0, y1, x1 in bboxes:\n",
" box = np.array([y0, x0, y1, x1])\n",
" w, h = im.size\n",
" y1, x1, y2, x2 = box * [h, w, h, w]\n",
" ax.add_patch(\n",
" mpl.patches.Rectangle(\n",
" (x1, y1), x2 - x1, y2 - y1, linewidth=1, edgecolor=\"r\", facecolor=\"none\"\n",
" )\n",
" )\n",
" buf = io.BytesIO()\n",
" fig.savefig(buf)\n",
" buf.seek(0)\n",
" return Image.open(buf)\n",
"\n",
"\n",
"# @markdown \n",
"image_url = \"https://images.pexels.com/photos/1006293/pexels-photo-1006293.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=2\" # @param {type:\"string\"}\n",
"\n",
"image = common_util.download_image(image_url)\n",
"display(image)\n",
"\n",
"# Make a prediction.\n",
"detection_response = common_util.detect_predict(\n",
" endpoints[\"endpoint\"],\n",
" detect_promt,\n",
" image,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
")\n",
"\n",
"print(\"Output: \", detection_response)\n",
"\n",
"\n",
"bboxes = parse_detections(detection_response)\n",
"plot_bounding_boxes(image, bboxes)\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "464Ew1pZOjm_"
},
"source": [
"## Creating a webpage playground with Gradio"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qEF6NbhnABZq"
},
"outputs": [],
"source": [
"# @title How to use\n",
"# @markdown This is a playground similar to the popular [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui).\n",
"\n",
"# @markdown **Prerequisites**\n",
"# @markdown - Before you can upload an image to make a prediction, you need to select a Vertex prediction endpoint serving PaliGemma\n",
"# @markdown from the endpoint dropdown list that has been deployed in the current project and region.\n",
"# @markdown - If no models have been deployed, you can create a new Vertex prediction\n",
"# @markdown endpoint by clicking \"Deploy to Vertex\" in the playground or running the `Deploy` cell above.\n",
"# @markdown * New model deployment takes approximately 15 minutes. You can check the progress at [Vertex Online Prediction](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints).\n",
"\n",
"# @markdown **How to use**\n",
"\n",
"# @markdown Just run this cell and a link to the playground formatted as `https://####.gradio.live` will be outputted.\n",
"# @markdown This link will take you to the playground in a separate browser tab.\n",
"\n",
"\n",
"class Task(enum.Enum):\n",
" VQA = \"Visual Question Answering\"\n",
" CAPTION = \"Image Captioning\"\n",
" OCR = \"OCR\"\n",
" DETECT = \"Object Detection\"\n",
"\n",
"\n",
"def list_paligemma_endpoints() -> list[str]:\n",
" \"\"\"Returns all valid prediction endpoints for in the project and region.\"\"\"\n",
" # Gets all the valid endpoints in the project and region.\n",
" endpoints = aiplatform.Endpoint.list(order_by=\"create_time desc\")\n",
" # Filters out the endpoints which do not have a deployed model, and the endpoint is for image generation\n",
" endpoints = list(\n",
" filter(\n",
" lambda endpoint: endpoint.traffic_split\n",
" and \"pali\" in endpoint.display_name.lower(),\n",
" endpoints,\n",
" )\n",
" )\n",
"\n",
" endpoint_names = list(\n",
" map(\n",
" lambda endpoint: f\"{endpoint.name} - {endpoint.display_name[:40]}\",\n",
" endpoints,\n",
" )\n",
" )\n",
"\n",
" if not endpoint_names:\n",
" gr.Warning(\"No prediction endpoints were found. Create an Endpoint first.\")\n",
"\n",
" return endpoint_names\n",
"\n",
"\n",
"def get_endpoint(endpoint_name: str) -> aiplatform.Endpoint:\n",
" \"\"\"Returns a Vertex endpoint for the given endpoint_name.\"\"\"\n",
" endpoint_id = endpoint_name.split(\" - \")[0]\n",
" endpoint = aiplatform.Endpoint(\n",
" f\"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint_id}\"\n",
" )\n",
" return endpoint\n",
"\n",
"\n",
"def select_interface(interface_name: str):\n",
" if interface_name == Task.VQA.value:\n",
" return {\n",
" text_input_box: gr.update(label=\"Question\", value=None, visible=True),\n",
" language_code_box: gr.update(visible=False),\n",
" submit_button: gr.update(value=\"Answer\"),\n",
" text_output: gr.update(value=None),\n",
" image_output: gr.update(value=None, visible=False),\n",
" }\n",
" elif interface_name == Task.CAPTION.value:\n",
" return {\n",
" text_input_box: gr.update(value=None, visible=False),\n",
" language_code_box: gr.update(visible=True),\n",
" submit_button: gr.update(value=\"Caption\"),\n",
" text_output: gr.update(value=None),\n",
" image_output: gr.update(value=None, visible=False),\n",
" }\n",
" elif interface_name == Task.OCR.value:\n",
" return {\n",
" text_input_box: gr.update(value=None, visible=False),\n",
" language_code_box: gr.update(visible=False),\n",
" submit_button: gr.update(value=\"Extract text\"),\n",
" text_output: gr.update(value=None),\n",
" image_output: gr.update(value=None, visible=False),\n",
" }\n",
" elif interface_name == Task.DETECT.value:\n",
" return {\n",
" text_input_box: gr.update(label=\"Object(s)\", value=None, visible=True),\n",
" language_code_box: gr.update(visible=False),\n",
" submit_button: gr.update(value=\"Detect\"),\n",
" text_output: gr.update(value=None),\n",
" image_output: gr.update(value=None, visible=True),\n",
" }\n",
" else:\n",
" raise gr.Error(f\"Invalid interface name: {interface_name}\")\n",
"\n",
"\n",
"def deploy_model_handler(model_choice: str) -> None:\n",
" gr.Info(\"Starting model deployment.\")\n",
" model_name = model_choice.replace(\"-pt-\", \"-\")\n",
" checkpoint_filename = pretrained_filename_lookup[model_name]\n",
" _, _, resolution, _ = model_choice.split(\"-\")\n",
" resolution = int(resolution)\n",
" model, endpoint = deploy_model(\n",
" model_name=model_choice,\n",
" checkpoint_path=os.path.join(\n",
" VERTEX_AI_MODEL_GARDEN_PALIGEMMA, checkpoint_filename\n",
" ),\n",
" machine_type=\"g2-standard-16\",\n",
" accelerator_type=\"NVIDIA_L4\",\n",
" accelerator_count=1,\n",
" resolution=resolution,\n",
" )\n",
" gr.Info(f\"Deploying model ID: {model.name}, endpoint ID: {endpoint.name}\")\n",
"\n",
"\n",
"def predict_handler(\n",
" interface_name: str,\n",
" endpoint_name: str,\n",
" image: Image.Image,\n",
" prompt: str,\n",
" language_code: str,\n",
") -> Tuple[str, Image.Image]:\n",
" if not endpoint_name:\n",
" raise gr.Error(\"Select (or deploy) a model first!\")\n",
" if not image:\n",
" raise gr.Error(\"You must upload an image!\")\n",
" endpoint = get_endpoint(endpoint_name)\n",
" if interface_name == Task.VQA.value:\n",
" return common_util.vqa_predict(endpoint, [prompt], image)[0], None\n",
" elif interface_name == Task.CAPTION.value:\n",
" return common_util.caption_predict(endpoint, language_code, image, True), None\n",
" elif interface_name == Task.OCR.value:\n",
" return common_util.ocr_predict(endpoint, ocr_prompt, image), None\n",
" elif interface_name == Task.DETECT.value:\n",
" text_output = common_util.detect_predict(endpoint, f\"detect {prompt}\", image)\n",
" bboxes = parse_detections(text_output)\n",
" return text_output, plot_bounding_boxes(image, bboxes)\n",
" else:\n",
" raise gr.Error(\"Select an interface first!\")\n",
"\n",
"\n",
"tip_text = r\"\"\"\n",
"<b> Tips: </b>\n",
"1. Select a Vertex prediction endpoint with a deployed PaLIGemma model or click `Deploy to Vertex` to deploy PaLIGemma to Vertex.\n",
"2. New model deployment takes approximately 15 minutes. You can check the progress at [Vertex Online Prediction](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints).\n",
"3. After the model deployment is complete, click `Refresh Endpoints list` to view the new endpoint in the dropdown list.\n",
"\"\"\"\n",
"\n",
"css = \"\"\"\n",
".gradio-container {\n",
" width: 85% !important\n",
"}\n",
"\"\"\"\n",
"with gr.Blocks(\n",
" css=css, theme=gr.themes.Default(primary_hue=\"orange\", secondary_hue=\"blue\")\n",
") as demo:\n",
" gr.Markdown(\"# Model Garden Playground for PaliGemma\")\n",
" with gr.Row(equal_height=True):\n",
" with gr.Column(scale=3):\n",
" gr.Markdown(tip_text)\n",
" with gr.Column(scale=2):\n",
" with gr.Row():\n",
" endpoint_name = gr.Dropdown(\n",
" scale=7,\n",
" label=\"Select a model previously deployed on Vertex\",\n",
" choices=list_paligemma_endpoints(),\n",
" value=None,\n",
" )\n",
" refresh_button = gr.Button(\n",
" \"Refresh Endpoints list\",\n",
" scale=1,\n",
" variant=\"primary\",\n",
" min_width=10,\n",
" )\n",
" with gr.Row():\n",
" selected_model = gr.Dropdown(\n",
" scale=7,\n",
" label=\"Deploy a new model to Vertex\",\n",
" choices=[\n",
" \"paligemma-mix-224-float32\",\n",
" \"paligemma-mix-224-float16\",\n",
" \"paligemma-mix-224-bfloat16\",\n",
" \"paligemma-mix-448-float32\",\n",
" \"paligemma-mix-448-float16\",\n",
" \"paligemma-mix-448-bfloat16\",\n",
" \"paligemma-pt-224-float32\",\n",
" \"paligemma-pt-224-float16\",\n",
" \"paligemma-pt-224-bfloat16\",\n",
" \"paligemma-pt-448-float32\",\n",
" \"paligemma-pt-448-float16\",\n",
" \"paligemma-pt-448-bfloat16\",\n",
" \"paligemma-pt-896-float32\",\n",
" \"paligemma-pt-896-float16\",\n",
" \"paligemma-pt-896-bfloat16\",\n",
" ],\n",
" value=None,\n",
" )\n",
" deploy_model_button = gr.Button(\n",
" \"Deploy a new model\",\n",
" scale=1,\n",
" variant=\"primary\",\n",
" min_width=10,\n",
" )\n",
" with gr.Row(equal_height=True):\n",
" with gr.Column(scale=1):\n",
" image_input = gr.Image(\n",
" show_label=True,\n",
" type=\"pil\",\n",
" label=\"Upload\",\n",
" visible=True,\n",
" height=400,\n",
" )\n",
" with gr.Group():\n",
" with gr.Tab(\"Task\"):\n",
" interfaces_box = gr.Radio(\n",
" show_label=False,\n",
" choices=[\n",
" Task.VQA.value,\n",
" Task.CAPTION.value,\n",
" Task.OCR.value,\n",
" Task.DETECT.value,\n",
" ],\n",
" value=Task.VQA.value,\n",
" )\n",
" text_input_box = gr.Textbox(label=\"Question\", lines=1)\n",
" language_code_box = gr.Textbox(\n",
" value=\"en\", label=\"Language code\", lines=1, visible=False\n",
" )\n",
" submit_button = gr.Button(\"Answer\", variant=\"primary\")\n",
" with gr.Column(scale=1):\n",
" image_output = gr.Image(label=\"Image response:\", visible=False)\n",
" text_output = gr.Textbox(label=\"Text response:\")\n",
"\n",
" refresh_button.click(\n",
" fn=lambda: gr.update(choices=list_paligemma_endpoints()),\n",
" outputs=[endpoint_name],\n",
" )\n",
" deploy_model_button.click(\n",
" deploy_model_handler,\n",
" inputs=[selected_model],\n",
" outputs=[],\n",
" )\n",
" interfaces_box.change(\n",
" fn=select_interface,\n",
" inputs=interfaces_box,\n",
" outputs=[\n",
" text_input_box,\n",
" language_code_box,\n",
" submit_button,\n",
" text_output,\n",
" image_output,\n",
" ],\n",
" )\n",
" submit_button.click(\n",
" fn=predict_handler,\n",
" inputs=[\n",
" interfaces_box,\n",
" endpoint_name,\n",
" image_input,\n",
" text_input_box,\n",
" language_code_box,\n",
" ],\n",
" outputs=[text_output, image_output],\n",
" )\n",
"show_debug_logs = True # @param {type: \"boolean\"}\n",
"demo.queue()\n",
"demo.launch(\n",
" share=True, inline=False, inbrowser=True, debug=show_debug_logs, show_error=True\n",
")\n",
"\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IrVZ030i4XMY"
},
"source": [
"## Clean up resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "YsMpOI1kYjil"
},
"outputs": [],
"source": [
"# @title Delete the models and endpoints\n",
"\n",
"# @markdown Delete the experiment models and endpoints to recycle the resources\n",
"# @markdown and avoid unnecessary continuous charges that may incur.\n",
"\n",
"# Undeploy model and delete endpoint.\n",
"for endpoint in endpoints.values():\n",
" endpoint.delete(force=True)\n",
"\n",
"# Delete models.\n",
"for model in models.values():\n",
" model.delete()"
]
}
],
"metadata": {
"colab": {
"name": "model_garden_jax_paligemma_deployment.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}