notebooks/community/model_garden/model_garden_pytorch_sam.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "6ad30fe2-1fc1-47e3-8a9f-624170b5aae6" }, "outputs": [], "source": [ "# Copyright 2025 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "cYYdTBQoKeCP" }, "source": [ " # Vertex AI Model Garden - Segment Anything Model (SAM) Serving on Vertex AI\n", "\n", "<table><tbody><tr>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/instances\">\n", " <img alt=\"Workbench logo\" src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" width=\"32px\"><br> Run in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_pytorch_sam.ipynb\">\n", " <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_sam.ipynb\">\n", " <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</tr></tbody></table>" ] }, { "cell_type": "markdown", "metadata": { "id": "JbmPgTp2LRCY" }, "source": [ "## Overview\n", "\n", "This notebook demonstrates using the [huggingface/transformers](https://github.com/huggingface/transformers) framework to serve Segment Anything Model (SAM) models and deploy them for online prediction on Vertex AI.\n", "\n", "Following the notebook you will conduct experiments using the pre-built docker image on Vertex AI.\n", "\n", "- With the pre-built docker images, you can **deploy** models for the following tasks:\n", " - Mask Generation\n", "\n", "### Objective\n", "\n", "- Upload the model to [Model Registry](https://cloud.google.com/vertex-ai/docs/model-registry/introduction).\n", "- Deploy the model on [Endpoint](https://cloud.google.com/vertex-ai/docs/predictions/using-private-endpoints).\n", "- Run online predictions for image captioning.\n", "\n", "### File a bug\n", "\n", "File a bug on [GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/issues/new) if you encounter any issue with the notebook.\n", "\n", "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage 
{ "cell_type": "markdown", "metadata": { "id": "78f72e0a-52e5-4de5-ac0f-2171b3493825" }, "source": [ "## Before you begin" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "m-Ql0m9edvcA" }, "outputs": [], "source": [ "# @title Setup Google Cloud project\n", "\n", "# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n", "\n", "# @markdown 2. **[Optional]** Set region. If not set, the region will be set automatically according to the Colab Enterprise environment.\n", "\n", "REGION = \"\"  # @param {type:\"string\"}\n", "\n", "# Upgrade Vertex AI SDK and fetch the notebook utilities.\n", "! pip3 install --upgrade --quiet 'google-cloud-aiplatform>=1.84.0'\n", "! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n", "\n", "import importlib\n", "import os\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pycocotools.mask as mask_util\n", "from google.cloud import aiplatform\n", "\n", "if os.environ.get(\"VERTEX_PRODUCT\") != \"COLAB_ENTERPRISE\":\n", "    ! pip install --upgrade tensorflow\n", "\n", "common_util = importlib.import_module(\n", "    \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n", ")\n", "\n", "models, endpoints = {}, {}\n", "LABEL = \"sam_model\"\n", "\n", "# Get the default cloud project id.\n", "PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n", "\n", "# Get the default region for launching jobs.\n", "if not REGION:\n", "    REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n", "\n", "# Initialize Vertex AI API.\n", "print(\"Initializing Vertex AI API.\")\n", "aiplatform.init(project=PROJECT_ID, location=REGION)\n", "\n", "! gcloud config set project $PROJECT_ID\n", "\n", "import vertexai\n", "\n", "vertexai.init(\n", "    project=PROJECT_ID,\n", "    location=REGION,\n", ")\n", "\n", "base_model_name = \"sam-vit-large\"\n", "PUBLISHER_MODEL_NAME = f\"publishers/meta/models/segment-anything@{base_model_name}\"" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "ot2HhTqxRYri" }, "outputs": [], "source": [ "# @title Deploy\n", "\n", "# The pre-built serving docker image.\n", "# The model artifacts are embedded within the container, except for the model weights, which are downloaded during deployment.\n", "SERVE_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/sam-serve:public-image-20240121\"\n", "\n", "# @markdown Set the accelerator type.\n", "accelerator_type = \"NVIDIA_L4\"  # @param[\"NVIDIA_TESLA_V100\", \"NVIDIA_L4\"]\n", "\n", "if accelerator_type == \"NVIDIA_TESLA_V100\":\n", "    machine_type = \"n1-standard-8\"\n", "    accelerator_count = 1\n", "elif accelerator_type == \"NVIDIA_L4\":\n", "    machine_type = \"g2-standard-12\"\n", "    accelerator_count = 1\n", "else:\n", "    print(f\"Unsupported accelerator type: {accelerator_type}\")\n", "\n", "MODEL_ID = \"facebook/sam-vit-large\"\n", "task = \"mask-generation\"\n", "\n", "# @markdown Set use_dedicated_endpoint to False if you don't want to use a [dedicated endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#create-dedicated-endpoint). Note that [dedicated endpoints do not support VPC Service Controls](https://cloud.google.com/vertex-ai/docs/predictions/choose-endpoint-type); uncheck the box if you are using VPC-SC.\n", "use_dedicated_endpoint = True  # @param {type:\"boolean\"}\n", "\n", "\n", "def deploy_model(\n", "    task, display_name, model_id, machine_type, accelerator_type, accelerator_count\n", "):\n", "    endpoint = aiplatform.Endpoint.create(\n", "        display_name=common_util.get_job_name_with_datetime(prefix=task)\n", "    )\n", "    serving_env = {\n", "        \"MODEL_ID\": model_id,\n", "        \"TASK\": task,\n", "        \"DEPLOY_SOURCE\": \"notebook\",\n", "    }\n", "    model = aiplatform.Model.upload(\n", "        display_name=display_name,\n", "        serving_container_image_uri=SERVE_DOCKER_URI,\n", "        serving_container_ports=[7080],\n", "        serving_container_predict_route=\"/predictions/sam_serving\",\n", "        serving_container_health_route=\"/ping\",\n", "        serving_container_environment_variables=serving_env,\n", "        model_garden_source_model_name=\"publishers/meta/models/segment-anything\",\n", "    )\n", "    model.deploy(\n", "        endpoint=endpoint,\n", "        machine_type=machine_type,\n", "        accelerator_type=accelerator_type,\n", "        accelerator_count=accelerator_count,\n", "        deploy_request_timeout=1800,\n", "        system_labels={\n", "            \"NOTEBOOK_NAME\": \"model_garden_pytorch_sam.ipynb\",\n", "            \"NOTEBOOK_ENVIRONMENT\": common_util.get_deploy_source(),\n", "        },\n", "    )\n", "    return model, endpoint" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "HF8tcQUFG7jB" }, "outputs": [], "source": [ "# @title [Option 1] Deploy with Model Garden SDK\n", "\n", "# @markdown Deploy with the Gen AI model-centric SDK. This section uploads the prebuilt model to Model Registry and deploys it to a Vertex AI Endpoint. It takes 15 minutes to 1 hour to finish, depending on the size of the model. See [use open models with Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-open-models) for documentation on other use cases.\n", "from vertexai.preview import model_garden\n", "\n", "model = model_garden.OpenModel(PUBLISHER_MODEL_NAME)\n", "endpoints[LABEL] = model.deploy(\n", "    machine_type=machine_type,\n", "    accelerator_type=accelerator_type,\n", "    accelerator_count=accelerator_count,\n", "    use_dedicated_endpoint=use_dedicated_endpoint,\n", "    accept_eula=True,  # Accept the End User License Agreement (EULA) on the model card before deploying; otherwise, the deployment will be forbidden.\n", ")" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "lyZSkLIwG7jB" }, "outputs": [], "source": [ "# @title [Option 2] Deploy with customized configs\n", "\n", "# @markdown This section uploads the pre-trained `sam-vit-large` model to Model Registry and deploys it to an endpoint using 1 L4 machine.\n", "\n", "# @markdown The model deploy step will take around 20 minutes to complete.\n", "\n", "common_util.check_quota(\n", "    project_id=PROJECT_ID,\n", "    region=REGION,\n", "    accelerator_type=accelerator_type,\n", "    accelerator_count=accelerator_count,\n", "    is_for_training=False,\n", ")\n", "\n", "models[\"sam_model\"], endpoints[\"sam_endpoint\"] = deploy_model(\n", "    task=task,\n", "    display_name=common_util.get_job_name_with_datetime(prefix=task),\n", "    model_id=MODEL_ID,\n", "    machine_type=machine_type,\n", "    accelerator_type=accelerator_type,\n", "    accelerator_count=accelerator_count,\n", ")" ] },
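{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "sam-endpoint-inspect" }, "outputs": [], "source": [ "# @title [Optional] Inspect the deployed endpoint\n", "\n", "# @markdown This optional cell is a minimal sketch, not part of the original deployment flow: it lists the endpoints created by Option 1 or Option 2 and the models deployed to them, so you can confirm the deployment finished before sending prediction requests. It only uses the `endpoints` dictionary populated above and standard `aiplatform.Endpoint` methods.\n", "\n", "for label, endpoint in endpoints.items():\n", "    # Print the endpoint resource name so it can be reused outside this notebook.\n", "    print(f\"[{label}] endpoint: {endpoint.display_name} ({endpoint.resource_name})\")\n", "    # List the models currently deployed to this endpoint.\n", "    for deployed_model in endpoint.list_models():\n", "        print(f\"  deployed model: {deployed_model.display_name} (id={deployed_model.id})\")" ] },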
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "d6e51c57-b5e2-4ae7-888a-5391cceee5fb" }, "outputs": [], "source": [ "# @title Predict\n", "\n", "input_image1 = \"http://images.cocodataset.org/val2017/000000039769.jpg\"  # @param {type:\"string\"}\n", "input_image2 = \"http://images.cocodataset.org/val2017/000000000285.jpg\"  # @param {type:\"string\"}\n", "\n", "\n", "def decode_rle_masks(pred_masks_rle):\n", "    return np.stack([mask_util.decode(rle) for rle in pred_masks_rle])\n", "\n", "\n", "def show_mask(mask, ax, random_color=False):\n", "    if random_color:\n", "        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)\n", "    else:\n", "        color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])\n", "    h, w = mask.shape[-2:]\n", "    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)\n", "    ax.imshow(mask_image)\n", "\n", "\n", "def show_predictions(preds):\n", "    fig = plt.figure(figsize=(10, 7))\n", "\n", "    fig.add_subplot(1, 2, 1)\n", "    plt.imshow(np.array(image1))\n", "    ax = plt.gca()\n", "    masks = decode_rle_masks(preds[0][\"masks_rle\"])\n", "    for mask in masks:\n", "        show_mask(mask, ax=ax, random_color=True)\n", "    plt.axis(\"off\")\n", "\n", "    fig.add_subplot(1, 2, 2)\n", "    plt.imshow(np.array(image2))\n", "    ax = plt.gca()\n", "    masks = decode_rle_masks(preds[1][\"masks_rle\"])\n", "    for mask in masks:\n", "        show_mask(mask, ax=ax, random_color=True)\n", "    plt.axis(\"off\")\n", "    plt.show()\n", "\n", "\n", "image1 = common_util.download_image(input_image1)\n", "image2 = common_util.download_image(input_image2)\n", "grid = common_util.image_grid([image1, image2], 1, 2)\n", "display(grid)\n", "\n", "instances = [\n", "    {\"image\": common_util.image_to_base64(image1)},\n", "    {\"image\": common_util.image_to_base64(image2)},\n", "]\n", "\n", "preds = endpoints[\"sam_endpoint\"].predict(instances=instances).predictions\n", "show_predictions(preds)" ] },
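{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "sam-mask-postprocess" }, "outputs": [], "source": [ "# @title [Optional] Post-process the predicted masks\n", "\n", "# @markdown This optional cell is a minimal sketch, not part of the original notebook flow: it shows one way to work with the RLE-encoded masks returned by the endpoint by decoding them with the `decode_rle_masks` helper defined above and reporting the pixel area of the largest masks per image. It assumes `preds` was produced by the Predict cell.\n", "\n", "for i, pred in enumerate(preds):\n", "    # Decode the run-length-encoded masks into a (num_masks, height, width) array.\n", "    masks = decode_rle_masks(pred[\"masks_rle\"])\n", "    # The per-mask pixel area is the number of foreground pixels, sorted largest first.\n", "    areas = np.sort(masks.sum(axis=(1, 2)))[::-1]\n", "    print(f\"Image {i + 1}: {masks.shape[0]} masks; largest areas (pixels): {areas[:5].tolist()}\")" ] },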
"colab": { "name": "model_garden_pytorch_sam.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }