{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "7d9bbf86da5e"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "paFB-hw6-hfu"
},
"source": [
"# Vertex AI Model Garden - OWL-ViT\n",
"\n",
"<table><tbody><tr>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/instances\">\n",
" <img alt=\"Workbench logo\" src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" width=\"32px\"><br> Run in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_pytorch_owlvit.ipynb\">\n",
" <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_owlvit.ipynb\">\n",
" <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</tr></tbody></table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d8cd12648da4"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates deploying the pre-trained [OWL-ViT (Vision Transformer for Open-World Localization)](https://huggingface.co/google/owlvit-base-patch32) model on Vertex AI for online prediction.\n",
"OWL-ViT is a zero-shot, text-conditioned object detection model that lets you query an image with one or more free-form text queries.\n",
"\n",
"\n",
"### Objective\n",
"\n",
"- Upload the model to [Model Registry](https://cloud.google.com/vertex-ai/docs/model-registry/introduction).\n",
"- Deploy the model on [Endpoint](https://cloud.google.com/vertex-ai/docs/predictions/using-private-endpoints).\n",
"- Run online predictions for zero-shot object detection.\n",
"\n",
"### File a bug\n",
"\n",
"File a bug on [GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/issues/new) if you encounter any issue with the notebook.\n",
"\n",
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
]
},
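{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @title (Optional) Try OWL-ViT locally with Hugging Face Transformers\n",
"\n",
"# @markdown This optional cell is a minimal local sketch of the zero-shot,\n",
"# @markdown text-conditioned detection described above, using the Hugging Face\n",
"# @markdown `transformers` pipeline instead of the Vertex AI deployment that\n",
"# @markdown follows. It assumes `transformers`, `torch`, and `Pillow` are\n",
"# @markdown available in the runtime; skip it if you only want the deployed\n",
"# @markdown endpoint.\n",
"\n",
"# ! pip install transformers torch pillow  # Uncomment if these are missing.\n",
"\n",
"import requests\n",
"from PIL import Image\n",
"from transformers import pipeline\n",
"\n",
"detector = pipeline(\n",
"    task=\"zero-shot-object-detection\", model=\"google/owlvit-base-patch32\"\n",
")\n",
"sample_image = Image.open(\n",
"    requests.get(\n",
"        \"http://images.cocodataset.org/val2017/000000039769.jpg\", stream=True\n",
"    ).raw\n",
")\n",
"\n",
"# Query the image with free-form text labels; each detection carries a\n",
"# confidence score, the matched label, and a bounding box.\n",
"for detection in detector(sample_image, candidate_labels=[\"cat\", \"remote control\"]):\n",
"    print(detection[\"label\"], round(detection[\"score\"], 3), detection[\"box\"])"
]
},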
{
"cell_type": "markdown",
"metadata": {
"id": "vG14S23x--2z"
},
"source": [
"## Before you begin"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "260oGk4Z_IgA"
},
"outputs": [],
"source": [
"# @title Setup Google Cloud project\n",
"\n",
"# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"# @markdown 2. **[Optional]** Set the region. If not set, the region is chosen automatically based on the Colab Enterprise environment.\n",
"\n",
"REGION = \"\"  # @param {type:\"string\"}\n",
"\n",
"# @markdown 3. If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have the associated quota in your selected region. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus). You can request quota by following the instructions at [\"Request a higher quota\"](https://cloud.google.com/docs/quota/view-manage#requesting_higher_quota).\n",
"\n",
"# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n",
"# @markdown | ----------- | ----------- | ----------- |\n",
"# @markdown | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1 |\n",
"# @markdown | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, europe-west4, us-west1, asia-southeast1 |\n",
"\n",
"import importlib\n",
"import os\n",
"\n",
"import matplotlib.patches as patches\n",
"import matplotlib.pyplot as plt\n",
"from google.cloud import aiplatform\n",
"\n",
"if os.environ.get(\"VERTEX_PRODUCT\") != \"COLAB_ENTERPRISE\":\n",
" ! pip install --upgrade tensorflow\n",
"! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"common_util = importlib.import_module(\n",
" \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n",
")\n",
"\n",
"\n",
"# Get the default cloud project id.\n",
"PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n",
"\n",
"# Get the default region for launching jobs.\n",
"if not REGION:\n",
" REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n",
"\n",
"# Initialize Vertex AI API.\n",
"print(\"Initializing Vertex AI API.\")\n",
"aiplatform.init(project=PROJECT_ID, location=REGION)\n",
"\n",
"! gcloud config set project $PROJECT_ID\n",
"\n",
"models, endpoints = {}, {}\n",
"\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d2d72ecdb8c9"
},
"source": [
"## Deploy"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "b4b46c28d8b1"
},
"outputs": [],
"source": [
"# @title Upload and deploy the OWL-ViT model to Vertex AI\n",
"# @markdown This section uploads the pre-trained model to Model Registry and\n",
"# @markdown deploys it to an endpoint with one NVIDIA T4 GPU.\n",
"\n",
"# @markdown The model deployment step takes about 10 minutes to complete.\n",
"\n",
"# The prebuilt serving Docker image. It contains serving scripts and models.\n",
"SERVE_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-transformers-serve:20241029_0832_RC00\"\n",
"\n",
"# Set the machine type and accelerator type.\n",
"ACCELERATOR_TYPE = \"NVIDIA_TESLA_T4\"\n",
"MACHINE_TYPE = \"n1-standard-8\"\n",
"ACCELERATOR_COUNT = 1\n",
"\n",
"\n",
"def deploy_model(model_id, task, accelerator_type, machine_type, accelerator_count):\n",
"    model_name = \"owl-vit\"\n",
"    endpoint = aiplatform.Endpoint.create(display_name=f\"{model_name}-endpoint\")\n",
"    serving_env = {\n",
"        \"MODEL_ID\": model_id,\n",
"        \"TASK\": task,\n",
"        \"DEPLOY_SOURCE\": common_util.get_deploy_source(),\n",
"    }\n",
"    # If the model_id is a GCS path, use artifact_uri to pass it to the serving docker.\n",
"    artifact_uri = model_id if model_id.startswith(\"gs://\") else None\n",
"    model = aiplatform.Model.upload(\n",
"        display_name=model_name,\n",
"        serving_container_image_uri=SERVE_DOCKER_URI,\n",
"        serving_container_ports=[7080],\n",
"        serving_container_predict_route=\"/predictions/transformers_serving\",\n",
"        serving_container_health_route=\"/ping\",\n",
"        serving_container_environment_variables=serving_env,\n",
"        artifact_uri=artifact_uri,\n",
"        model_garden_source_model_name=\"publishers/google/models/owlvit-base-patch32\",\n",
"    )\n",
"    # Deploy with the arguments passed to this function rather than the\n",
"    # module-level constants, so callers can override them.\n",
"    model.deploy(\n",
"        endpoint=endpoint,\n",
"        machine_type=machine_type,\n",
"        accelerator_type=accelerator_type,\n",
"        accelerator_count=accelerator_count,\n",
"        deploy_request_timeout=1800,\n",
"        system_labels={\"NOTEBOOK_NAME\": \"model_garden_pytorch_owlvit.ipynb\"},\n",
"    )\n",
" return model, endpoint\n",
"\n",
"\n",
"common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=ACCELERATOR_TYPE,\n",
" accelerator_count=ACCELERATOR_COUNT,\n",
" is_for_training=False,\n",
")\n",
"\n",
"\n",
"models[\"model\"], endpoints[\"endpoint\"] = deploy_model(\n",
" model_id=\"google/owlvit-base-patch32\",\n",
" task=\"zero-shot-object-detection\",\n",
" accelerator_type=ACCELERATOR_TYPE,\n",
" machine_type=MACHINE_TYPE,\n",
" accelerator_count=ACCELERATOR_COUNT,\n",
")\n",
"\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
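{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @title (Optional) Reuse an existing endpoint\n",
"\n",
"# @markdown If you already deployed the model in an earlier session, you can\n",
"# @markdown attach to that endpoint instead of deploying again. This is a\n",
"# @markdown minimal sketch: `ENDPOINT_ID` is a placeholder for your own\n",
"# @markdown endpoint ID, not a value produced by this notebook.\n",
"\n",
"ENDPOINT_ID = \"\"  # @param {type:\"string\"}\n",
"\n",
"if ENDPOINT_ID:\n",
"    # aiplatform.Endpoint accepts a full resource name or a bare endpoint ID,\n",
"    # resolved against the project and region passed to aiplatform.init().\n",
"    endpoints[\"endpoint\"] = aiplatform.Endpoint(ENDPOINT_ID)\n",
"    print(\"Using existing endpoint:\", endpoints[\"endpoint\"].resource_name)"
]
},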
{
"cell_type": "markdown",
"metadata": {
"id": "0VHMH4JjCHFp"
},
"source": [
"## Predict"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "6be655247cb1"
},
"outputs": [],
"source": [
"# @title Predict\n",
"\n",
"# @markdown Once deployment succeeds, you can run zero-shot object detection\n",
"# @markdown on an input image with a text query.\n",
"\n",
"image_url = \"http://images.cocodataset.org/val2017/000000039769.jpg\"  # @param {type:\"string\"}\n",
"text = \"cat\"  # @param {type:\"string\"}\n",
"image = common_util.download_image(image_url)\n",
"\n",
"instances = [\n",
"    {\"image\": common_util.image_to_base64(image), \"text\": text},\n",
"]\n",
"preds = endpoints[\"endpoint\"].predict(instances=instances).predictions\n",
"\n",
"\n",
"def draw_image_with_boxes(image, prediction):\n",
"    \"\"\"Draws the predicted bounding boxes on top of the input image.\"\"\"\n",
"    _, ax = plt.subplots()\n",
"    plt.axis(\"off\")\n",
"    ax.imshow(image)\n",
"    for box in prediction.get(\"boxes\", []):\n",
"        x, y = box[\"xmin\"], box[\"ymin\"]\n",
"        width, height = box[\"xmax\"] - x, box[\"ymax\"] - y\n",
"        rect = patches.Rectangle(\n",
"            (x, y), width, height, linewidth=2, edgecolor=\"yellow\", facecolor=\"none\"\n",
"        )\n",
"        ax.add_patch(rect)\n",
"    plt.show()\n",
"\n",
"\n",
"draw_image_with_boxes(image, preds[0])\n",
"print(preds)\n",
"\n",
"# @markdown Click \"Show Code\" to see more details."
]
},
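{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @title (Optional) Query with multiple text queries in one request\n",
"\n",
"# @markdown Vertex AI online prediction accepts a list of instances per\n",
"# @markdown request, so you can score several text queries against the same\n",
"# @markdown image in a single call. This sketch assumes the serving container\n",
"# @markdown handles each instance independently; the query strings below are\n",
"# @markdown illustrative.\n",
"\n",
"texts = [\"cat\", \"remote control\"]\n",
"multi_instances = [\n",
"    {\"image\": common_util.image_to_base64(image), \"text\": t} for t in texts\n",
"]\n",
"multi_preds = endpoints[\"endpoint\"].predict(instances=multi_instances).predictions\n",
"\n",
"# Draw one annotated figure per text query.\n",
"for query, prediction in zip(texts, multi_preds):\n",
"    print(f\"Query: {query}\")\n",
"    draw_image_with_boxes(image, prediction)"
]
},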
{
"cell_type": "markdown",
"metadata": {
"id": "db7ffebdb4be"
},
"source": [
"## Clean up resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "2ccf3714dbe9"
},
"outputs": [],
"source": [
"# @title Delete the models and endpoints\n",
"# @markdown Delete the experiment models and endpoints to release the resources\n",
"# @markdown and avoid unnecessary continuing charges.\n",
"\n",
"# Undeploy model and delete endpoint.\n",
"for endpoint in endpoints.values():\n",
" endpoint.delete(force=True)\n",
"\n",
"# Delete models.\n",
"for model in models.values():\n",
" model.delete()"
]
}
],
"metadata": {
"colab": {
"name": "model_garden_pytorch_owlvit.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}