{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "OdZIyZwjgsQcOXnmE8X0xy40"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VJWDivOv3OWy"
},
"source": [
"# Vertex AI Model Garden - PaliGemma 2 (Deployment)\n",
"\n",
"<table><tbody><tr>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/instances\">\n",
" <img alt=\"Workbench logo\" src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" width=\"32px\"><br> Run in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_hf_paligemma2_deployment.ipynb\">\n",
" <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_hf_paligemma2_deployment.ipynb\">\n",
" <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</tr></tbody></table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iOmVD9tZXucQ"
},
"source": [
"## Overview\n",
"\n",
"This notebook provides a practical introduction to using the PaLiGemma 2 model, a powerful vision-language model developed by Google. We'll demonstrate how to leverage its multimodal capabilities to perform tasks like vision question answering. Consult the [model card](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/paligemma) for more information.\n",
"\n",
"\n",
"### Objective\n",
"\n",
"- Deploy PaliGemma 2 to a Vertex AI Endpoint.\n",
"- Make predictions to the endpoint including:\n",
" - Answering questions about a given image.\n",
"\n",
"### File a bug\n",
"\n",
"File a bug on [GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/issues/new) if you encounter any issue with the notebook.\n",
"\n",
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2aFHbs1g6Wc-"
},
"source": [
"## Before you begin"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "QvQjsmIJ6Y3f"
},
"outputs": [],
"source": [
"# @title Setup Google Cloud project\n",
"\n",
"# Upgrade Vertex AI SDK.\n",
"! pip3 install --upgrade --quiet 'google-cloud-aiplatform>=1.84.0'\n",
"\n",
"# Used for common utilities.\n",
"! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"import importlib\n",
"# Import the necessary packages\n",
"import os\n",
"from typing import Any, Dict, Tuple\n",
"\n",
"from google.cloud import aiplatform\n",
"\n",
"# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"# @markdown 2. **[Optional]** Set region. If not set, the region will be set automatically according to Colab Enterprise environment.\n",
"\n",
"REGION = \"\" # @param {type:\"string\"}\n",
"\n",
"# @markdown 3. If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have associated quota in selected regions. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus). You can request for quota following the instructions at [\"Request a higher quota\"](https://cloud.google.com/docs/quota/view-manage#requesting_higher_quota).\n",
"\n",
"# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n",
"# @markdown | ----------- | ----------- | ----------- |\n",
"# @markdown | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1, us-east4 |\n",
"# @markdown | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, europe-west4, us-west1, asia-southeast1 |\n",
"\n",
"\n",
"# Get the default cloud project id.\n",
"PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n",
"\n",
"# Get the default region for launching jobs.\n",
"if not REGION:\n",
" REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n",
"\n",
"# Initialize Vertex AI API.\n",
"print(\"Initializing Vertex AI API.\")\n",
"aiplatform.init(project=PROJECT_ID, location=REGION)\n",
"\n",
"! gcloud config set project $PROJECT_ID\n",
"\n",
"import vertexai\n",
"\n",
"vertexai.init(\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
")\n",
"\n",
"common_util = importlib.import_module(\n",
" \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n",
")\n",
"\n",
"models, endpoints = {}, {}\n",
"LABEL = \"paligemma2\"\n",
"\n",
"# The pre-built serving docker images.\n",
"SERVE_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-one-serve:20250205_0822_RC00\"\n",
"\n",
"\n",
"def deploy_model(\n",
" model_name: str = None,\n",
" model_id: str = None,\n",
" task: str = None,\n",
" machine_type: str = \"g2-standard-8\",\n",
" accelerator_type: str = \"NVIDIA_L4\",\n",
" accelerator_count: int = 1,\n",
" serving_port: int = 7080,\n",
" serving_route: str = \"/predict\",\n",
" serving_docker_uri: str = SERVE_DOCKER_URI,\n",
") -> Tuple[aiplatform.Endpoint, aiplatform.Model]:\n",
" \"\"\"Deploys a model to a real-time prediction endpoint.\n",
"\n",
" Args:\n",
" model_name: The base name of the model.\n",
" model_id: The model ID.\n",
" task: The task to perform.\n",
" machine_type: The machine type.\n",
" accelerator_type: The accelerator type.\n",
" accelerator_count: The accelerator count.\n",
" serving_port: The serving port.\n",
" serving_route: The serving route.\n",
" hf_token: HuggingFace token for model access.\n",
"\n",
" Returns:\n",
" A tuple containing the created endpoint and deployed model objects.\n",
" \"\"\"\n",
"\n",
" endpoint = aiplatform.Endpoint.create(\n",
" display_name=common_util.get_job_name_with_datetime(prefix=model_name)\n",
" )\n",
" serving_env = {\n",
" \"MODEL_ID\": model_id,\n",
" \"DEPLOY_SOURCE\": \"notebook\",\n",
" \"TASK\": task,\n",
" }\n",
" model = aiplatform.Model.upload(\n",
" display_name=task,\n",
" serving_container_image_uri=serving_docker_uri,\n",
" serving_container_ports=[serving_port],\n",
" serving_container_predict_route=serving_route,\n",
" serving_container_health_route=\"/ping\",\n",
" serving_container_environment_variables=serving_env,\n",
" model_garden_source_model_name=\"publishers/google/models/paligemma\",\n",
" )\n",
" model.deploy(\n",
" endpoint=endpoint,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" sync=False,\n",
" system_labels={\n",
" \"NOTEBOOK_NAME\": \"model_garden_hf_paligemma2_deployment.ipynb\",\n",
" \"NOTEBOOK_ENVIRONMENT\": common_util.get_deploy_source(),\n",
" },\n",
" )\n",
" return endpoint, model\n",
"\n",
"\n",
"def vqa_predict(\n",
" endpoint: aiplatform.Endpoint,\n",
" image_url: str,\n",
" text_prompt: str,\n",
" parameters: Dict[str, Any] = None,\n",
") -> str:\n",
" \"\"\"Predicts the answer to a question about an image using an Endpoint,\n",
"\n",
" and passes parameters in the payload.\n",
"\n",
" Args:\n",
" endpoint: The deployed Vertex AI endpoint.\n",
" image_url: URL of the image to ask about.\n",
" text_prompt: The text prompt question.\n",
" parameters: Additional parameters for the prediction request.\n",
"\n",
" Returns:\n",
" The predicted answer string or None if no prediction.\n",
" \"\"\"\n",
"\n",
" instances = []\n",
" if text_prompt:\n",
" instances.append(\n",
" {\n",
" \"text_prompt\": text_prompt,\n",
" \"image_url\": image_url,\n",
" }\n",
" )\n",
"\n",
" # Construct the prediction payload\n",
" payload = {\"instances\": instances}\n",
" if parameters:\n",
" payload[\"parameters\"] = parameters\n",
"\n",
" response = endpoint.predict(instances=instances, parameters=parameters)\n",
" answer = None\n",
" if response.predictions:\n",
" answer = response.predictions[0][\"text\"].split(\"\\n\")[1]\n",
" return answer"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kyMJXkfviWgl"
},
"source": [
"## Deploy Model to a Vertex AI Endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "toY-WPKDFesF"
},
"outputs": [],
"source": [
"# @title Deploy\n",
"\n",
"MODEL_NAME = \"paligemma2-3b-pt-224\" # @param [\"paligemma2-3b-pt-224\", \"paligemma2-3b-mix-224\", \"paligemma2-3b-ft-docci-448\", \"paligemma2-3b-mix-448\", \"paligemma2-3b-pt-448\", \"paligemma2-3b-pt-896\", \"paligemma2-10b-mix-224\", \"paligemma2-10b-pt-224\", \"paligemma2-10b-ft-docci-448\", \"paligemma2-10b-mix-448\", \"paligemma2-10b-pt-448\", \"paligemma2-10b-pt-896\", \"paligemma2-28b-mix-224\", \"paligemma2-28b-pt-224\", \"paligemma2-28b-mix-448\", \"paligemma2-28b-pt-448\", \"paligemma2-28b-pt-896\"]\n",
"GCS_PREFIX = \"gs://vertex-model-garden-restricted-us/paligemma2\"\n",
"\n",
"MODEL_ID = os.path.join(GCS_PREFIX, MODEL_NAME)\n",
"\n",
"PUBLISHER_MODEL_NAME = f\"publishers/google/models/paligemma@{MODEL_NAME}\"\n",
"\n",
"\n",
"# @markdown If you want to use other accelerator types not listed above, then check other Vertex AI prediction supported accelerators and regions at https://cloud.google.com/vertex-ai/docs/predictions/configure-compute. You may need to manually set the `machine_type`, `accelerator_type`, and `accelerator_count` in the code by clicking `Show code` first.\n",
"\n",
"if \"3b\" in MODEL_NAME:\n",
" accelerator_type = \"NVIDIA_L4\"\n",
" machine_type = \"g2-standard-8\"\n",
" accelerator_count = 1\n",
"elif \"10b\" in MODEL_NAME:\n",
" accelerator_type = \"NVIDIA_TESLA_A100\"\n",
" machine_type = \"a2-highgpu-1g\"\n",
" accelerator_count = 1\n",
"elif \"28b\" in MODEL_NAME:\n",
" accelerator_type = \"NVIDIA_H100_80GB\"\n",
" machine_type = \"a3-highgpu-8g\"\n",
" accelerator_count = 8\n",
"else:\n",
" raise ValueError(f\"Recommended GPU setting not found for: {MODEL_NAME}.\")\n",
"\n",
"common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" is_for_training=False,\n",
")\n",
"\n",
"# @markdown Set use_dedicated_endpoint to False if you don't want to use [dedicated endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#create-dedicated-endpoint). Note that [dedicated endpoint does not support VPC Service Controls](https://cloud.google.com/vertex-ai/docs/predictions/choose-endpoint-type), uncheck the box if you are using VPC-SC.\n",
"use_dedicated_endpoint = True # @param {type:\"boolean\"}"
]
},
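{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "listDeployOptions01"
},
"outputs": [],
"source": [
"# @title [Optional] List verified deployment configurations\n",
"\n",
"# @markdown A minimal sketch, assuming your installed `google-cloud-aiplatform` version exposes `OpenModel.list_deploy_options()` from the Model Garden SDK preview: it prints the machine and accelerator combinations verified for the selected model variant, which you can compare against the defaults chosen above.\n",
"\n",
"from vertexai.preview import model_garden\n",
"\n",
"deploy_options = model_garden.OpenModel(PUBLISHER_MODEL_NAME).list_deploy_options()\n",
"print(deploy_options)"
]
},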
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "pe_qbTCA6nKf"
},
"outputs": [],
"source": [
"# @title [Option 1] Deploy with Model Garden SDK\n",
"\n",
"# @markdown Deploy with Gen AI model-centric SDK. This section uploads the prebuilt model to Model Registry and deploys it to a Vertex AI Endpoint. It takes 15 minutes to 1 hour to finish depending on the size of the model. See [use open models with Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-open-models) for documentation on other use cases.\n",
"from vertexai.preview import model_garden\n",
"\n",
"model = model_garden.OpenModel(PUBLISHER_MODEL_NAME)\n",
"endpoints[LABEL] = model.deploy(\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
" accept_eula=True, # Accept the End User License Agreement (EULA) on the model card before deploy. Otherwise, the deployment will be forbidden.\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "jbeLl-9C6nKf"
},
"outputs": [],
"source": [
"# @title [Option 2] Deploy with customized configs\n",
"\n",
"# @markdown This section uploads the prebuilt PaliGemma 2 models to Model Registry and deploys it to a Vertex AI Endpoint. It takes approximately 15 minutes to finish.\n",
"\n",
"# @markdown Select the desired resolution and precision of prebuilt model to deploy, leaving the optional `custom_paligemma_model_uri` as is. Higher resolution and precision_type can result in better inference results, but may require additional GPU.\n",
"\n",
"TASK = \"paligemma_VQA\"\n",
"\n",
"endpoints[\"paligemma2\"], models[\"paligemma2\"] = deploy_model(\n",
" model_name=MODEL_NAME,\n",
" model_id=MODEL_ID,\n",
" task=TASK,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" serving_port=7080,\n",
" serving_route=\"/predict\",\n",
" serving_docker_uri=SERVE_DOCKER_URI,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "tOtYOhZa3lsx"
},
"outputs": [],
"source": [
"# @title [Optional] Loading an existing Endpoint\n",
"# @markdown If you've already deployed an Endpoint, you can load it by filling in the Endpoint's ID below.\n",
"# @markdown You can view deployed Endpoints at [Vertex Online Prediction](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints).\n",
"endpoint_id = \"\" # @param {type: \"string\"}\n",
"\n",
"if endpoint_id:\n",
" endpoint = aiplatform.Endpoint(\n",
" endpoint_name=endpoint_id,\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2Idtx2ETNQtn"
},
"source": [
"### Predict\n",
"\n",
"The following sections will use images from [pexels.com](https://www.pexels.com/) for demoing purposes. All the images have the following license: https://www.pexels.com/license/.\n",
"\n",
"Images will be resized to a width of 1000 pixels by default since requests made to a Vertex Endpoint are limited to 1.500MB."
]
},
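{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "imageSizeCheck01"
},
"outputs": [],
"source": [
"# @title [Optional] Check the image size before sending\n",
"\n",
"# @markdown A minimal sketch, not part of the deployment flow: it downloads an image, downscales it to a width of 1000 pixels if needed, and reports the sizes so you can confirm a request would stay under the 1.5 MB limit. It assumes the `requests` and `Pillow` packages are available in the runtime.\n",
"\n",
"import io\n",
"\n",
"import requests\n",
"from PIL import Image\n",
"\n",
"check_image_url = \"https://images.pexels.com/photos/1006293/pexels-photo-1006293.jpeg\"  # @param {type:\"string\"}\n",
"\n",
"raw_bytes = requests.get(check_image_url, timeout=30).content\n",
"print(f\"Original image size: {len(raw_bytes) / 1e6:.2f} MB\")\n",
"\n",
"image = Image.open(io.BytesIO(raw_bytes)).convert(\"RGB\")\n",
"if image.width > 1000:\n",
"    # Downscale to a width of 1000 pixels, preserving the aspect ratio.\n",
"    new_height = round(image.height * 1000 / image.width)\n",
"    image = image.resize((1000, new_height))\n",
"\n",
"buffer = io.BytesIO()\n",
"image.save(buffer, format=\"JPEG\")\n",
"print(f\"Resized image size: {buffer.tell() / 1e6:.2f} MB\")"
]
},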
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "FxdKXm6INQtn"
},
"outputs": [],
"source": [
"# @title Visual Question Answering\n",
"\n",
"# @markdown This section uses the deployed PaliGemma model to answer questions about a given image.\n",
"\n",
"# @markdown \n",
"image_url = \"https://images.pexels.com/photos/1006293/pexels-photo-1006293.jpeg\" # @param {type:\"string\"}\n",
"\n",
"# @markdown You may leave question prompts empty and they will be ignored.\n",
"question_prompt = \"What is shown in the picture?\" # @param {type: \"string\"}\n",
"\n",
"# @markdown The question prompt can be non-English languages.\n",
"\n",
"# Using max_new_tokens along with other parameters\n",
"parameters_with_tokens = {\"max_new_tokens\": 50}\n",
"predictions_with_tokens = vqa_predict(\n",
" endpoint=endpoints[\"paligemma2\"],\n",
" image_url=image_url,\n",
" text_prompt=question_prompt,\n",
" parameters=parameters_with_tokens,\n",
")\n",
"\n",
"print(f\"Prediction Response: {predictions_with_tokens}\")\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
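{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "rawPredictSketch01"
},
"outputs": [],
"source": [
"# @title [Optional] Inspect the raw prediction response\n",
"\n",
"# @markdown A minimal sketch of the same request without the `vqa_predict` helper, useful for debugging the payload format. It reuses the endpoint, image URL, and question prompt from the cells above and prints the full list of predictions instead of only the parsed answer.\n",
"\n",
"raw_response = endpoints[\"paligemma2\"].predict(\n",
"    instances=[\n",
"        {\n",
"            \"text_prompt\": question_prompt,\n",
"            \"image_url\": image_url,\n",
"        }\n",
"    ],\n",
"    parameters={\"max_new_tokens\": 50},\n",
")\n",
"print(raw_response.predictions)"
]
},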
{
"cell_type": "markdown",
"metadata": {
"id": "IrVZ030i4XMY"
},
"source": [
"## Clean up resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "YsMpOI1kYjil"
},
"outputs": [],
"source": [
"# @markdown Delete the experiment models and endpoints to recycle the resources\n",
"# @markdown and avoid unnecessary continuous charges that may incur.\n",
"\n",
"# Undeploy model and delete endpoint.\n",
"for endpoint in endpoints.values():\n",
" endpoint.delete(force=True)\n",
"\n",
"# Delete models.\n",
"for model in models.values():\n",
" model.delete()\n",
"\n",
"delete_bucket = False # @param {type:\"boolean\"}\n",
"if delete_bucket:\n",
" ! gsutil -m rm -r $BUCKET_NAME"
]
}
],
"metadata": {
"colab": {
"name": "model_garden_hf_paligemma2_deployment.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}