{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "copyright"
},
"outputs": [],
"source": [
"# Copyright 2020 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "title"
},
"source": [
"# Vertex client library: TF Hub image classification model for online prediction\n",
"\n",
"<table align=\"left\">\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n",
" View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"<br/><br/><br/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "overview:tfhub,prediction"
},
"source": [
"## Overview\n",
"\n",
"\n",
"This tutorial demonstrates how to use the Vertex client library for Python to deploy a pretrained TensorFlow Hub image classification model for online prediction."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dataset:flowers,icn"
},
"source": [
"### Dataset\n",
"\n",
"The dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "objective:tfhub,online_prediction"
},
"source": [
"### Objective\n",
"\n",
"In this tutorial, you will deploy a TensorFlow Hub pretrained model, and then do a prediction on the deployed model by sending data.\n",
"\n",
"The steps performed include:\n",
"\n",
"- Download a TensorFlow Hub pretrained model.\n",
"- Retrieve and load the model artifacts.\n",
"- Upload the model as a Vertex `Model` resource.\n",
"- Deploy the `Model` resource to a serving `Endpoint` resource.\n",
"- Make a prediction.\n",
"- Undeploy the `Model` resource."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "costs"
},
"source": [
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud (GCP):\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "install_aip"
},
"source": [
"## Installation\n",
"\n",
"Install the latest version of Vertex client library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "install_aip"
},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"\n",
"# Google Cloud Notebook\n",
"if os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n",
" USER_FLAG = \"--user\"\n",
"else:\n",
" USER_FLAG = \"\"\n",
"\n",
"! pip3 install -U google-cloud-aiplatform $USER_FLAG"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "install_storage"
},
"source": [
"Install the latest GA version of *google-cloud-storage* library as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "install_storage"
},
"outputs": [],
"source": [
"! pip3 install -U google-cloud-storage $USER_FLAG"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "restart"
},
"source": [
"### Restart the kernel\n",
"\n",
"Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "restart"
},
"outputs": [],
"source": [
"if not os.getenv(\"IS_TESTING\"):\n",
" # Automatically restart kernel after installs\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "before_you_begin"
},
"source": [
"## Before you begin\n",
"\n",
"### GPU runtime\n",
"\n",
"*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**\n",
"\n",
"### Set up your Google Cloud project\n",
"\n",
"**The following steps are required, regardless of your notebook environment.**\n",
"\n",
"1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
"\n",
"2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)\n",
"\n",
"3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)\n",
"\n",
"4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.\n",
"\n",
"5. Enter your project ID in the cell below. Then run the cell to make sure the\n",
"Cloud SDK uses the right project for all the commands in this notebook.\n",
"\n",
"**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_project_id"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "autoset_project_id"
},
"outputs": [],
"source": [
"if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n",
" # Get your GCP project id from gcloud\n",
" shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n",
" PROJECT_ID = shell_output[0]\n",
" print(\"Project ID:\", PROJECT_ID)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_gcloud_project_id"
},
"outputs": [],
"source": [
"! gcloud config set project $PROJECT_ID"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "region"
},
"source": [
"#### Region\n",
"\n",
"You can also change the `REGION` variable, which is used for operations\n",
"throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n",
"\n",
"- Americas: `us-central1`\n",
"- Europe: `europe-west4`\n",
"- Asia Pacific: `asia-east1`\n",
"\n",
"You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "region"
},
"outputs": [],
"source": [
"REGION = \"us-central1\" # @param {type: \"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "timestamp"
},
"source": [
"#### Timestamp\n",
"\n",
"If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "timestamp"
},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gcp_authenticate"
},
"source": [
"### Authenticate your Google Cloud account\n",
"\n",
"**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.\n",
"\n",
"**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\n",
"\n",
"**Otherwise**, follow these steps:\n",
"\n",
"In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.\n",
"\n",
"**Click Create service account**.\n",
"\n",
"In the **Service account name** field, enter a name, and click **Create**.\n",
"\n",
"In the **Grant this service account access to project** section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select **Vertex Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n",
"\n",
"Click Create. A JSON file that contains your key downloads to your local environment.\n",
"\n",
"Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gcp_authenticate"
},
"outputs": [],
"source": [
"# If you are running this notebook in Colab, run this cell and follow the\n",
"# instructions to authenticate your GCP account. This provides access to your\n",
"# Cloud Storage bucket and lets you submit training jobs and prediction\n",
"# requests.\n",
"\n",
"# If on Google Cloud Notebook, then don't execute this code\n",
"if not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n",
" if \"google.colab\" in sys.modules:\n",
" from google.colab import auth as google_auth\n",
"\n",
" google_auth.authenticate_user()\n",
"\n",
" # If you are running this notebook locally, replace the string below with the\n",
" # path to your service account key and run this cell to authenticate your GCP\n",
" # account.\n",
" elif not os.getenv(\"IS_TESTING\"):\n",
" %env GOOGLE_APPLICATION_CREDENTIALS ''"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bucket:custom"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"**The following steps are required, regardless of your notebook environment.**\n",
"\n",
"When you submit a custom training job using the Vertex client library, you upload a Python package\n",
"containing your training code to a Cloud Storage bucket. Vertex runs\n",
"the code from this package. In this tutorial, Vertex also saves the\n",
"trained model that results from your job in the same bucket. You can then\n",
"create an `Endpoint` resource based on this output in order to serve\n",
"online predictions.\n",
"\n",
"Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bucket"
},
"outputs": [],
"source": [
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "autoset_bucket"
},
"outputs": [],
"source": [
"if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n",
" BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_bucket"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_bucket"
},
"outputs": [],
"source": [
"! gsutil mb -l $REGION $BUCKET_NAME"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "validate_bucket"
},
"source": [
"Finally, validate access to your Cloud Storage bucket by examining its contents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "validate_bucket"
},
"outputs": [],
"source": [
"! gsutil ls -al $BUCKET_NAME"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "setup_vars"
},
"source": [
"### Set up variables\n",
"\n",
"Next, set up some variables used throughout the tutorial.\n",
"### Import libraries and define constants"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "import_aip:protobuf"
},
"source": [
"#### Import Vertex client library\n",
"\n",
"Import the Vertex client library into our Python environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "import_aip:protobuf"
},
"outputs": [],
"source": [
"import time\n",
"\n",
"from google.cloud.aiplatform import gapic as aip\n",
"from google.protobuf import json_format\n",
"from google.protobuf.json_format import MessageToJson, ParseDict\n",
"from google.protobuf.struct_pb2 import Struct, Value"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aip_constants"
},
"source": [
"#### Vertex constants\n",
"\n",
"Setup up the following constants for Vertex:\n",
"\n",
"- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\n",
"- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aip_constants"
},
"outputs": [],
"source": [
"# API service endpoint\n",
"API_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n",
"\n",
"# Vertex location root path for your dataset, model and endpoint resources\n",
"PARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "accelerators:prediction,cpu"
},
"source": [
"#### Hardware Accelerators\n",
"\n",
"Set the hardware accelerators (e.g., GPU), if any, for prediction.\n",
"\n",
"Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:\n",
"\n",
" (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n",
"\n",
"For GPU, available accelerators include:\n",
" - aip.AcceleratorType.NVIDIA_TESLA_K80\n",
" - aip.AcceleratorType.NVIDIA_TESLA_P100\n",
" - aip.AcceleratorType.NVIDIA_TESLA_P4\n",
" - aip.AcceleratorType.NVIDIA_TESLA_T4\n",
" - aip.AcceleratorType.NVIDIA_TESLA_V100\n",
"\n",
"Otherwise specify `(None, None)` to use a container image to run on a CPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "accelerators:prediction,cpu"
},
"outputs": [],
"source": [
"if os.getenv(\"IS_TESTING_DEPOLY_GPU\"):\n",
" DEPLOY_GPU, DEPLOY_NGPU = (\n",
" aip.AcceleratorType.NVIDIA_TESLA_K80,\n",
" int(os.getenv(\"IS_TESTING_DEPOLY_GPU\")),\n",
" )\n",
"else:\n",
" DEPLOY_GPU, DEPLOY_NGPU = (None, None)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "container:prediction"
},
"source": [
"#### Container (Docker) image\n",
"\n",
"Next, we will set the Docker container images for prediction\n",
"\n",
"- Set the variable `TF` to the TensorFlow version of the container image. For example, `2-1` would be version 2.1, and `1-15` would be version 1.15. The following list shows some of the pre-built images available:\n",
"\n",
" - TensorFlow 1.15\n",
" - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest`\n",
" - TensorFlow 2.1\n",
" - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest`\n",
" - TensorFlow 2.2\n",
" - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest`\n",
" - TensorFlow 2.3\n",
" - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest`\n",
" - XGBoost\n",
" - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest`\n",
" - Scikit-learn\n",
" - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest`\n",
" - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`\n",
"\n",
"For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "container:prediction"
},
"outputs": [],
"source": [
"if os.getenv(\"IS_TESTING_TF\"):\n",
" TF = os.getenv(\"IS_TESTING_TF\")\n",
"else:\n",
" TF = \"2-1\"\n",
"\n",
"if TF[0] == \"2\":\n",
" if DEPLOY_GPU:\n",
" DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n",
" else:\n",
" DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\n",
"else:\n",
" if DEPLOY_GPU:\n",
" DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n",
" else:\n",
" DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n",
"\n",
"DEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n",
"\n",
"print(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "machine:prediction"
},
"source": [
"#### Machine Type\n",
"\n",
"Next, set the machine type to use for prediction.\n",
"\n",
"- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction.\n",
" - `machine type`\n",
" - `n1-standard`: 3.75GB of memory per vCPU.\n",
" - `n1-highmem`: 6.5GB of memory per vCPU\n",
" - `n1-highcpu`: 0.9 GB of memory per vCPU\n",
" - `vCPUs`: number of \\[2, 4, 8, 16, 32, 64, 96 \\]\n",
"\n",
"*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "machine:prediction"
},
"outputs": [],
"source": [
"if os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n",
" MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\n",
"else:\n",
" MACHINE_TYPE = \"n1-standard\"\n",
"\n",
"VCPU = \"4\"\n",
"DEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\n",
"print(\"Deploy machine type\", DEPLOY_COMPUTE)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tutorial_start:tfhub"
},
"source": [
"# Tutorial\n",
"\n",
"Now you are ready to deploy a TensorFlow Hub pretrained image classification model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "clients:tfhub"
},
"source": [
"## Set up clients\n",
"\n",
"The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\n",
"\n",
"You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n",
"\n",
"- Model Service for `Model` resources.\n",
"- Endpoint Service for deployment.\n",
"- Prediction Service for serving."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "clients:tfhub"
},
"outputs": [],
"source": [
"# client options same for all services\n",
"client_options = {\"api_endpoint\": API_ENDPOINT}\n",
"\n",
"\n",
"def create_model_client():\n",
" client = aip.ModelServiceClient(client_options=client_options)\n",
" return client\n",
"\n",
"\n",
"def create_endpoint_client():\n",
" client = aip.EndpointServiceClient(client_options=client_options)\n",
" return client\n",
"\n",
"\n",
"def create_prediction_client():\n",
" client = aip.PredictionServiceClient(client_options=client_options)\n",
" return client\n",
"\n",
"\n",
"clients = {}\n",
"clients[\"model\"] = create_model_client()\n",
"clients[\"endpoint\"] = create_endpoint_client()\n",
"clients[\"prediction\"] = create_prediction_client()\n",
"\n",
"for client in clients.items():\n",
" print(client)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "get_tf_hub_model"
},
"source": [
"## Get pretrained model from TFHub\n",
"\n",
"Next, you download a pre-trained model from $(TENSORFLOW) Hub."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "get_tf_hub_model"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow_hub as hub\n",
"\n",
"IMAGE_SHAPE = (224, 224)\n",
"\n",
"model = tf.keras.Sequential(\n",
" [\n",
" hub.KerasLayer(\n",
" \"https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/4\",\n",
" input_shape=IMAGE_SHAPE + (3,),\n",
" )\n",
" ]\n",
")\n",
"\n",
"model_path_to_deploy = BUCKET_NAME + \"/resnet\""
]
},
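{
"cell_type": "markdown",
"metadata": {
"id": "get_tf_hub_model_summary"
},
"source": [
"Optionally, print a summary of the assembled model as a quick sanity check of its input and output shapes before saving it. The cell below only inspects the model; it does not change it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "get_tf_hub_model_summary"
},
"outputs": [],
"source": [
"# Optional sanity check: confirm the input and output shapes of the assembled model.\n",
"model.summary()"
]
},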
{
"cell_type": "markdown",
"metadata": {
"id": "how_serving_function_works"
},
"source": [
"## Upload the model for serving\n",
"\n",
"Next, you will upload your TF.Keras model from the custom job to Vertex `Model` service, which will create a Vertex `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.\n",
"\n",
"### How does the serving function work\n",
"\n",
"When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.\n",
"\n",
"The serving function consists of two parts:\n",
"\n",
"- `preprocessing function`:\n",
" - Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph).\n",
" - Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.\n",
"- `post-processing function`:\n",
" - Converts the model output to format expected by the receiving application -- e.q., compresses the output.\n",
" - Packages the output for the the receiving application -- e.g., add headings, make JSON object, etc.\n",
"\n",
"Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.\n",
"\n",
"One consideration you need to consider when building serving functions for TF.Keras models is that they run as static graphs. That means, you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "serving_function_image"
},
"source": [
"### Serving function for image data\n",
"\n",
"To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.\n",
"\n",
"To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).\n",
"\n",
"When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:\n",
"- `io.decode_jpeg`- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).\n",
"- `image.convert_image_dtype` - Changes integer pixel values to float 32.\n",
"- `image.resize` - Resizes the image to match the input shape for the model.\n",
"- `resized / 255.0` - Rescales (normalization) the pixel data between 0 and 1.\n",
"\n",
"At this point, the data can be passed to the model (`m_call`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "serving_function_image"
},
"outputs": [],
"source": [
"CONCRETE_INPUT = \"numpy_inputs\"\n",
"\n",
"\n",
"def _preprocess(bytes_input):\n",
" decoded = tf.io.decode_jpeg(bytes_input, channels=3)\n",
" decoded = tf.image.convert_image_dtype(decoded, tf.float32)\n",
" resized = tf.image.resize(decoded, size=(32, 32))\n",
" rescale = tf.cast(resized / 255.0, tf.float32)\n",
" return rescale\n",
"\n",
"\n",
"@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\n",
"def preprocess_fn(bytes_inputs):\n",
" decoded_images = tf.map_fn(\n",
" _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False\n",
" )\n",
" return {\n",
" CONCRETE_INPUT: decoded_images\n",
" } # User needs to make sure the key matches model's input\n",
"\n",
"\n",
"@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\n",
"def serving_fn(bytes_inputs):\n",
" images = preprocess_fn(bytes_inputs)\n",
" prob = m_call(**images)\n",
" return prob\n",
"\n",
"\n",
"m_call = tf.function(model.call).get_concrete_function(\n",
" [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]\n",
")\n",
"\n",
"tf.saved_model.save(\n",
" model, model_path_to_deploy, signatures={\"serving_default\": serving_fn}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "serving_function_signature:image"
},
"source": [
"## Get the serving function signature\n",
"\n",
"You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\n",
"\n",
"For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.\n",
"\n",
"When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "serving_function_signature:image"
},
"outputs": [],
"source": [
"loaded = tf.saved_model.load(model_path_to_deploy)\n",
"\n",
"serving_input = list(\n",
" loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n",
")[0]\n",
"print(\"Serving function input:\", serving_input)"
]
},
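{
"cell_type": "markdown",
"metadata": {
"id": "serving_function_smoketest:image"
},
"source": [
"Optionally, you can also inspect the serving function's output signature and smoke-test the reloaded model locally before uploading it. The cell below is an illustrative sketch: it reads one image path from the same public Flowers CSV used later in this tutorial and passes the raw JPEG bytes directly to the reloaded serving function (base64 decoding only happens on the prediction server)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "serving_function_smoketest:image"
},
"outputs": [],
"source": [
"# Optional: inspect the serving function's output signature.\n",
"print(\"Serving function output:\", loaded.signatures[\"serving_default\"].structured_outputs)\n",
"\n",
"# Optional local smoke test: run one image through the reloaded serving function.\n",
"# This reads a sample image path from the same public Flowers CSV used below.\n",
"flowers_csv = \"gs://cloud-ml-data/img/flower_photos/all_data.csv\"\n",
"with tf.io.gfile.GFile(flowers_csv, \"r\") as f:\n",
" sample_image = f.readline().split(\",\")[0]\n",
"\n",
"sample_bytes = tf.io.read_file(sample_image)\n",
"outputs = loaded.signatures[\"serving_default\"](**{serving_input: tf.expand_dims(sample_bytes, 0)})\n",
"print({name: tensor.shape for name, tensor in outputs.items()})"
]
},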
{
"cell_type": "markdown",
"metadata": {
"id": "upload_the_model"
},
"source": [
"### Upload the model\n",
"\n",
"Use this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.\n",
"\n",
"The helper function takes the following parameters:\n",
"\n",
"- `display_name`: A human readable name for the `Endpoint` service.\n",
"- `image_uri`: The container image for the model deployment.\n",
"- `model_uri`: The Cloud Storage path to our SavedModel artificat. For this tutorial, this is the Cloud Storage location where the `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`.\n",
"\n",
"The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:\n",
"\n",
"- `parent`: The Vertex location root path for `Dataset`, `Model` and `Endpoint` resources.\n",
"- `model`: The specification for the Vertex `Model` resource instance.\n",
"\n",
"Let's now dive deeper into the Vertex model specification `model`. This is a dictionary object that consists of the following fields:\n",
"\n",
"- `display_name`: A human readable name for the `Model` resource.\n",
"- `metadata_schema_uri`: Since your model was built without an Vertex `Dataset` resource, you will leave this blank (`''`).\n",
"- `artificat_uri`: The Cloud Storage path where the model is stored in SavedModel format.\n",
"- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.\n",
"\n",
"Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.\n",
"\n",
"The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "upload_the_model"
},
"outputs": [],
"source": [
"IMAGE_URI = DEPLOY_IMAGE\n",
"\n",
"\n",
"def upload_model(display_name, image_uri, model_uri):\n",
" model = {\n",
" \"display_name\": display_name,\n",
" \"metadata_schema_uri\": \"\",\n",
" \"artifact_uri\": model_uri,\n",
" \"container_spec\": {\n",
" \"image_uri\": image_uri,\n",
" \"command\": [],\n",
" \"args\": [],\n",
" \"env\": [{\"name\": \"env_name\", \"value\": \"env_value\"}],\n",
" \"ports\": [{\"container_port\": 8080}],\n",
" \"predict_route\": \"\",\n",
" \"health_route\": \"\",\n",
" },\n",
" }\n",
" response = clients[\"model\"].upload_model(parent=PARENT, model=model)\n",
" print(\"Long running operation:\", response.operation.name)\n",
" upload_model_response = response.result(timeout=180)\n",
" print(\"upload_model_response\")\n",
" print(\" model:\", upload_model_response.model)\n",
" return upload_model_response.model\n",
"\n",
"\n",
"model_to_deploy_id = upload_model(\n",
" \"flowers-\" + TIMESTAMP, IMAGE_URI, model_path_to_deploy\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "get_model"
},
"source": [
"### Get `Model` resource information\n",
"\n",
"Now let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:\n",
"\n",
"- `name`: The Vertex unique identifier for the `Model` resource.\n",
"\n",
"This helper function calls the Vertex `Model` client service's method `get_model`, with the following parameter:\n",
"\n",
"- `name`: The Vertex unique identifier for the `Model` resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "get_model"
},
"outputs": [],
"source": [
"def get_model(name):\n",
" response = clients[\"model\"].get_model(name=name)\n",
" print(response)\n",
"\n",
"\n",
"get_model(model_to_deploy_id)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_endpoint:custom"
},
"source": [
"## Deploy the `Model` resource\n",
"\n",
"Now deploy the trained Vertex custom `Model` resource. This requires two steps:\n",
"\n",
"1. Create an `Endpoint` resource for deploying the `Model` resource to.\n",
"\n",
"2. Deploy the `Model` resource to the `Endpoint` resource."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_endpoint"
},
"source": [
"### Create an `Endpoint` resource\n",
"\n",
"Use this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:\n",
"\n",
"- `display_name`: A human readable name for the `Endpoint` resource.\n",
"\n",
"The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:\n",
"\n",
"- `display_name`: A human readable name for the `Endpoint` resource.\n",
"\n",
"Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_endpoint"
},
"outputs": [],
"source": [
"ENDPOINT_NAME = \"flowers_endpoint-\" + TIMESTAMP\n",
"\n",
"\n",
"def create_endpoint(display_name):\n",
" endpoint = {\"display_name\": display_name}\n",
" response = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)\n",
" print(\"Long running operation:\", response.operation.name)\n",
"\n",
" result = response.result(timeout=300)\n",
" print(\"result\")\n",
" print(\" name:\", result.name)\n",
" print(\" display_name:\", result.display_name)\n",
" print(\" description:\", result.description)\n",
" print(\" labels:\", result.labels)\n",
" print(\" create_time:\", result.create_time)\n",
" print(\" update_time:\", result.update_time)\n",
" return result\n",
"\n",
"\n",
"result = create_endpoint(ENDPOINT_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "endpoint_id:result"
},
"source": [
"Now get the unique identifier for the `Endpoint` resource you created."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "endpoint_id:result"
},
"outputs": [],
"source": [
"# The full unique ID for the endpoint\n",
"endpoint_id = result.name\n",
"# The short numeric ID for the endpoint\n",
"endpoint_short_id = endpoint_id.split(\"/\")[-1]\n",
"\n",
"print(endpoint_id)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "instance_scaling"
},
"source": [
"### Compute instance scaling\n",
"\n",
"You have several choices on scaling the compute instances for handling your online prediction requests:\n",
"\n",
"- Single Instance: The online prediction requests are processed on a single compute instance.\n",
" - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.\n",
"\n",
"- Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.\n",
" - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.\n",
"\n",
"- Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.\n",
" - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.\n",
"\n",
"The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "instance_scaling"
},
"outputs": [],
"source": [
"MIN_NODES = 1\n",
"MAX_NODES = 1"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "deploy_model:dedicated"
},
"source": [
"### Deploy `Model` resource to the `Endpoint` resource\n",
"\n",
"Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:\n",
"\n",
"- `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.\n",
"- `deploy_model_display_name`: A human readable name for the deployed model.\n",
"- `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to.\n",
"\n",
"The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:\n",
"\n",
"- `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.\n",
"- `deployed_model`: The requirements specification for deploying the model.\n",
"- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\n",
" - If only one model, then specify as **{ \"0\": 100 }**, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\n",
" - If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ \"0\": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100.\n",
"\n",
"Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields:\n",
"\n",
"- `model`: The Vertex fully qualified model identifier of the (upload) model to deploy.\n",
"- `display_name`: A human readable name for the deployed model.\n",
"- `disable_container_logging`: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.\n",
"- `dedicated_resources`: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.\n",
" - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.\n",
" - `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.\n",
" - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.\n",
"\n",
"#### Traffic Split\n",
"\n",
"Let's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.\n",
"\n",
"Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only get's say 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.\n",
"\n",
"#### Response\n",
"\n",
"The method returns a long running operation `response`. We will wait sychronously for the operation to complete by calling the `response.result()`, which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "deploy_model:dedicated"
},
"outputs": [],
"source": [
"DEPLOYED_NAME = \"flowers_deployed-\" + TIMESTAMP\n",
"\n",
"\n",
"def deploy_model(\n",
" model, deployed_model_display_name, endpoint, traffic_split={\"0\": 100}\n",
"):\n",
"\n",
" if DEPLOY_GPU:\n",
" machine_spec = {\n",
" \"machine_type\": DEPLOY_COMPUTE,\n",
" \"accelerator_type\": DEPLOY_GPU,\n",
" \"accelerator_count\": DEPLOY_NGPU,\n",
" }\n",
" else:\n",
" machine_spec = {\n",
" \"machine_type\": DEPLOY_COMPUTE,\n",
" \"accelerator_count\": 0,\n",
" }\n",
"\n",
" deployed_model = {\n",
" \"model\": model,\n",
" \"display_name\": deployed_model_display_name,\n",
" \"dedicated_resources\": {\n",
" \"min_replica_count\": MIN_NODES,\n",
" \"max_replica_count\": MAX_NODES,\n",
" \"machine_spec\": machine_spec,\n",
" },\n",
" \"disable_container_logging\": False,\n",
" }\n",
"\n",
" response = clients[\"endpoint\"].deploy_model(\n",
" endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split\n",
" )\n",
"\n",
" print(\"Long running operation:\", response.operation.name)\n",
" result = response.result()\n",
" print(\"result\")\n",
" deployed_model = result.deployed_model\n",
" print(\" deployed_model\")\n",
" print(\" id:\", deployed_model.id)\n",
" print(\" model:\", deployed_model.model)\n",
" print(\" display_name:\", deployed_model.display_name)\n",
" print(\" create_time:\", deployed_model.create_time)\n",
"\n",
" return deployed_model.id\n",
"\n",
"\n",
"deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)"
]
},
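{
"cell_type": "markdown",
"metadata": {
"id": "get_endpoint_deployment"
},
"source": [
"Optionally, confirm the deployment by retrieving the `Endpoint` resource and listing its deployed models and traffic split. This is a minimal sketch using the endpoint client's `get_endpoint` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "get_endpoint_deployment"
},
"outputs": [],
"source": [
"# Optional: confirm the deployment by inspecting the Endpoint resource.\n",
"endpoint_info = clients[\"endpoint\"].get_endpoint(name=endpoint_id)\n",
"for deployed in endpoint_info.deployed_models:\n",
" print(\"deployed model id:\", deployed.id, \" display_name:\", deployed.display_name)\n",
"print(\"traffic split:\", dict(endpoint_info.traffic_split))"
]
},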
{
"cell_type": "markdown",
"metadata": {
"id": "make_prediction"
},
"source": [
"## Make a online prediction request\n",
"\n",
"Now do a online prediction to your deployed model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "get_test_item:image"
},
"source": [
"### Get test item\n",
"\n",
"You will use an example image from your dataset as a test item."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "get_test_item:image"
},
"outputs": [],
"source": [
"FLOWERS_CSV = \"gs://cloud-ml-data/img/flower_photos/all_data.csv\"\n",
"\n",
"test_images = ! gsutil cat $FLOWERS_CSV | head -n1\n",
"test_image = test_images[0].split(\",\")[0]\n",
"print(test_image)"
]
},
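{
"cell_type": "markdown",
"metadata": {
"id": "verify_test_item:image"
},
"source": [
"Optionally, verify that the test image can be read and decoded before sending it for prediction. The check below simply decodes the JPEG and prints its shape."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "verify_test_item:image"
},
"outputs": [],
"source": [
"# Optional: verify the test image is readable and decodable before sending it.\n",
"raw_image = tf.io.read_file(test_image)\n",
"decoded_image = tf.io.decode_jpeg(raw_image, channels=3)\n",
"print(\"Test image shape:\", decoded_image.shape)"
]
},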
{
"cell_type": "markdown",
"metadata": {
"id": "prepare_test_item:image"
},
"source": [
"### Prepare the request content\n",
"\n",
"You are going to send the flowers image as compressed JPG image, instead of the raw uncompressed bytes:\n",
"\n",
"- `tf.io.read_file`: Read the compressed JPG images back into memory as raw bytes.\n",
"- `base64.b64encode`: Encode the raw bytes into a base 64 encoded string."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "prepare_test_item:image"
},
"outputs": [],
"source": [
"import base64\n",
"\n",
"bytes = tf.io.read_file(test_image)\n",
"b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "send_prediction_request:image"
},
"source": [
"### Send the prediction request\n",
"\n",
"Ok, now you have a test image. Use this helper function `predict_image`, which takes the following parameters:\n",
"\n",
"- `image`: The test image data as a numpy array.\n",
"- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.\n",
"- `parameters_dict`: Additional parameters for serving.\n",
"\n",
"This function calls the prediction client service `predict` method with the following parameters:\n",
"\n",
"- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.\n",
"- `instances`: A list of instances (encoded images) to predict.\n",
"- `parameters`: Additional parameters for serving.\n",
"\n",
"To pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. You need to tell the serving binary where your model is deployed to, that the content has been base64 encoded, so it will decode it on the other end in the serving binary.\n",
"\n",
"Each instance in the prediction request is a dictionary entry of the form:\n",
"\n",
" {serving_input: {'b64': content}}\n",
"\n",
"- `input_name`: the name of the input layer of the underlying model.\n",
"- `'b64'`: A key that indicates the content is base64 encoded.\n",
"- `content`: The compressed JPG image bytes as a base64 encoded string.\n",
"\n",
"Since the `predict()` service can take multiple images (instances), you will send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` service.\n",
"\n",
"The `response` object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:\n",
"\n",
"- `predictions`: Confidence level for the prediction, between 0 and 1, for each of the classes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "send_prediction_request:image"
},
"outputs": [],
"source": [
"def predict_image(image, endpoint, parameters_dict):\n",
" # The format of each instance should conform to the deployed model's prediction input schema.\n",
" instances_list = [{serving_input: {\"b64\": image}}]\n",
" instances = [json_format.ParseDict(s, Value()) for s in instances_list]\n",
"\n",
" response = clients[\"prediction\"].predict(\n",
" endpoint=endpoint, instances=instances, parameters=parameters_dict\n",
" )\n",
" print(\"response\")\n",
" print(\" deployed_model_id:\", response.deployed_model_id)\n",
" predictions = response.predictions\n",
" print(\"predictions\")\n",
" for prediction in predictions:\n",
" print(\" prediction:\", prediction)\n",
"\n",
"\n",
"predict_image(b64str, endpoint_id, None)"
]
},
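{
"cell_type": "markdown",
"metadata": {
"id": "interpret_prediction:image"
},
"source": [
"If your deployed model returns per-class confidences (as described above), you can map the highest-scoring index to a class label. The helper below is an illustrative sketch only: the label ordering is an assumption and must match the ordering your deployed model actually uses. If you modify `predict_image` to return `response.predictions`, you can apply this helper to each returned element."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "interpret_prediction:image"
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Hypothetical label ordering for the five flower classes described earlier;\n",
"# adjust this list to match the ordering your deployed model actually uses.\n",
"CLASS_NAMES = [\"daisy\", \"dandelion\", \"rose\", \"sunflower\", \"tulip\"]\n",
"\n",
"\n",
"def top_class(prediction):\n",
" # `prediction` is one element of `response.predictions` -- for a single-output\n",
" # classifier, this is the list of per-class confidences for one instance.\n",
" scores = np.asarray(prediction, dtype=np.float32)\n",
" index = int(np.argmax(scores))\n",
" label = CLASS_NAMES[index] if index < len(CLASS_NAMES) else str(index)\n",
" return label, float(scores[index])"
]
},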
{
"cell_type": "markdown",
"metadata": {
"id": "undeploy_model"
},
"source": [
"## Undeploy the `Model` resource\n",
"\n",
"Now undeploy your `Model` resource from the serving `Endpoint` resoure. Use this helper function `undeploy_model`, which takes the following parameters:\n",
"\n",
"- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed to.\n",
"- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` is deployed to.\n",
"\n",
"This function calls the endpoint client service's method `undeploy_model`, with the following parameters:\n",
"\n",
"- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.\n",
"- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.\n",
"- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.\n",
"\n",
"Since this is the only deployed model on the `Endpoint` resource, you simply can leave `traffic_split` empty by setting it to {}."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "undeploy_model"
},
"outputs": [],
"source": [
"def undeploy_model(deployed_model_id, endpoint):\n",
" response = clients[\"endpoint\"].undeploy_model(\n",
" endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}\n",
" )\n",
" print(response)\n",
"\n",
"\n",
"undeploy_model(deployed_model_id, endpoint_id)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cleanup"
},
"source": [
"# Cleaning up\n",
"\n",
"To clean up all GCP resources used in this project, you can [delete the GCP\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial:\n",
"\n",
"- Dataset\n",
"- Pipeline\n",
"- Model\n",
"- Endpoint\n",
"- Batch Job\n",
"- Custom Job\n",
"- Hyperparameter Tuning Job\n",
"- Cloud Storage Bucket"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cleanup"
},
"outputs": [],
"source": [
"delete_dataset = True\n",
"delete_pipeline = True\n",
"delete_model = True\n",
"delete_endpoint = True\n",
"delete_batchjob = True\n",
"delete_customjob = True\n",
"delete_hptjob = True\n",
"delete_bucket = True\n",
"\n",
"# Delete the dataset using the Vertex fully qualified identifier for the dataset\n",
"try:\n",
" if delete_dataset and \"dataset_id\" in globals():\n",
" clients[\"dataset\"].delete_dataset(name=dataset_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\n",
"try:\n",
" if delete_pipeline and \"pipeline_id\" in globals():\n",
" clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Delete the model using the Vertex fully qualified identifier for the model\n",
"try:\n",
" if delete_model and \"model_to_deploy_id\" in globals():\n",
" clients[\"model\"].delete_model(name=model_to_deploy_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\n",
"try:\n",
" if delete_endpoint and \"endpoint_id\" in globals():\n",
" clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Delete the batch job using the Vertex fully qualified identifier for the batch job\n",
"try:\n",
" if delete_batchjob and \"batch_job_id\" in globals():\n",
" clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Delete the custom job using the Vertex fully qualified identifier for the custom job\n",
"try:\n",
" if delete_customjob and \"job_id\" in globals():\n",
" clients[\"job\"].delete_custom_job(name=job_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\n",
"try:\n",
" if delete_hptjob and \"hpt_job_id\" in globals():\n",
" clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"if delete_bucket and \"BUCKET_NAME\" in globals():\n",
" ! gsutil rm -r $BUCKET_NAME"
]
}
],
"metadata": {
"colab": {
"name": "showcase_tfhub_image_classification_online.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}