notebooks/community/sdk/sdk_custom_tabular_regression_online_explain.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "copyright" }, "outputs": [], "source": [ "# Copyright 2021 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "title" }, "source": [ "# Vertex SDK: Custom training tabular regression model for online prediction with explainabilty\n", "\n", "<table align=\"left\">\n", " <td>\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n", " </a>\n", " </td>\n", " <td>\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n", " View on GitHub\n", " </a>\n", " </td>\n", " <td>\n", " <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain.ipynb\">\n", " Open in Google Cloud Notebooks\n", " </a>\n", " </td>\n", "</table>\n", "<br/><br/><br/>" ] }, { "cell_type": "markdown", "metadata": { "id": "overview:custom,xai" }, "source": [ "## Overview\n", "\n", "\n", "This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for online prediction with explanation." ] }, { "cell_type": "markdown", "metadata": { "id": "dataset:custom,boston,lrg" }, "source": [ "### Dataset\n", "\n", "The dataset used for this tutorial is the [Boston Housing Prices dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD." ] }, { "cell_type": "markdown", "metadata": { "id": "objective:custom,training,online_prediction,xai" }, "source": [ "### Objective\n", "\n", "In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. 
You can alternatively create custom models using `gcloud` command-line tool or online using Cloud Console.\n", "\n", "The steps performed include:\n", "\n", "- Create a Vertex custom job for training a model.\n", "- Train a TensorFlow model.\n", "- Retrieve and load the model artifacts.\n", "- View the model evaluation.\n", "- Set explanation parameters.\n", "- Upload the model as a Vertex `Model` resource.\n", "- Deploy the `Model` resource to a serving `Endpoint` resource.\n", "- Make a prediction with explanation.\n", "- Undeploy the `Model` resource." ] }, { "cell_type": "markdown", "metadata": { "id": "costs" }, "source": [ "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI\n", "pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n", "pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n", "Calculator](https://cloud.google.com/products/calculator/)\n", "to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "setup_local" }, "source": [ "### Set up your local development environment\n", "\n", "If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\n", "\n", "Otherwise, make sure your environment meets this notebook's requirements. You need the following:\n", "\n", "- The Cloud Storage SDK\n", "- Git\n", "- Python 3\n", "- virtualenv\n", "- Jupyter notebook running in a virtual environment with Python 3\n", "\n", "The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n", "\n", "1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).\n", "\n", "2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).\n", "\n", "3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.\n", "\n", "4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.\n", "\n", "5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n", "\n", "6. Open this notebook in the Jupyter Notebook Dashboard.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "install_aip:mbsdk" }, "source": [ "## Installation\n", "\n", "Install the latest version of Vertex SDK for Python." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "install_aip:mbsdk" }, "outputs": [], "source": [ "import os\n", "\n", "# Google Cloud Notebook\n", "if os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n", " USER_FLAG = \"--user\"\n", "else:\n", " USER_FLAG = \"\"\n", "\n", "! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG" ] }, { "cell_type": "markdown", "metadata": { "id": "install_storage" }, "source": [ "Install the latest GA version of *google-cloud-storage* library as well." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "install_storage" }, "outputs": [], "source": [ "! 
pip3 install -U google-cloud-storage $USER_FLAG" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "install_tensorflow" }, "outputs": [], "source": [ "if os.getenv(\"IS_TESTING\"):\n", " ! pip3 install --upgrade tensorflow $USER_FLAG" ] }, { "cell_type": "markdown", "metadata": { "id": "restart" }, "source": [ "### Restart the kernel\n", "\n", "Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "restart" }, "outputs": [], "source": [ "import os\n", "\n", "if not os.getenv(\"IS_TESTING\"):\n", " # Automatically restart kernel after installs\n", " import IPython\n", "\n", " app = IPython.Application.instance()\n", " app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "before_you_begin:nogpu" }, "source": [ "## Before you begin\n", "\n", "### GPU runtime\n", "\n", "This tutorial does not require a GPU runtime.\n", "\n", "### Set up your Google Cloud project\n", "\n", "**The following steps are required, regardless of your notebook environment.**\n", "\n", "1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n", "\n", "2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)\n", "\n", "3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)\n", "\n", "4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).\n", "\n", "5. Enter your project ID in the cell below. Then run the cell to make sure the\n", "Cloud SDK uses the right project for all the commands in this notebook.\n", "\n", "**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_project_id" }, "outputs": [], "source": [ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "autoset_project_id" }, "outputs": [], "source": [ "if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n", " # Get your GCP project id from gcloud\n", " shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n", " PROJECT_ID = shell_output[0]\n", " print(\"Project ID:\", PROJECT_ID)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_gcloud_project_id" }, "outputs": [], "source": [ "! gcloud config set project $PROJECT_ID" ] }, { "cell_type": "markdown", "metadata": { "id": "region" }, "source": [ "#### Region\n", "\n", "You can also change the `REGION` variable, which is used for operations\n", "throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n", "\n", "- Americas: `us-central1`\n", "- Europe: `europe-west4`\n", "- Asia Pacific: `asia-east1`\n", "\n", "You may not use a multi-regional bucket for training with Vertex AI. 
Not all regions provide support for all Vertex AI services.\n", "\n", "Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "region" }, "outputs": [], "source": [ "REGION = \"us-central1\" # @param {type: \"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "timestamp" }, "source": [ "#### Timestamp\n", "\n", "If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "timestamp" }, "outputs": [], "source": [ "from datetime import datetime\n", "\n", "TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")" ] }, { "cell_type": "markdown", "metadata": { "id": "gcp_authenticate" }, "source": [ "### Authenticate your Google Cloud account\n", "\n", "**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.\n", "\n", "**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\n", "\n", "**Otherwise**, follow these steps:\n", "\n", "In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.\n", "\n", "**Click Create service account**.\n", "\n", "In the **Service account name** field, enter a name, and click **Create**.\n", "\n", "In the **Grant this service account access to project** section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select **Vertex Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n", "\n", "Click Create. A JSON file that contains your key downloads to your local environment.\n", "\n", "Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gcp_authenticate" }, "outputs": [], "source": [ "# If you are running this notebook in Colab, run this cell and follow the\n", "# instructions to authenticate your GCP account. This provides access to your\n", "# Cloud Storage bucket and lets you submit training jobs and prediction\n", "# requests.\n", "\n", "import os\n", "import sys\n", "\n", "# If on Google Cloud Notebook, then don't execute this code\n", "if not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n", " if \"google.colab\" in sys.modules:\n", " from google.colab import auth as google_auth\n", "\n", " google_auth.authenticate_user()\n", "\n", " # If you are running this notebook locally, replace the string below with the\n", " # path to your service account key and run this cell to authenticate your GCP\n", " # account.\n", " elif not os.getenv(\"IS_TESTING\"):\n", " %env GOOGLE_APPLICATION_CREDENTIALS ''" ] }, { "cell_type": "markdown", "metadata": { "id": "bucket:mbsdk" }, "source": [ "### Create a Cloud Storage bucket\n", "\n", "**The following steps are required, regardless of your notebook environment.**\n", "\n", "When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. 
The staging bucket is where all the data associated with your dataset and model resources is retained across sessions.\n", "\n", "Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bucket" }, "outputs": [], "source": [ "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "autoset_bucket" }, "outputs": [], "source": [ "if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n", " BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP" ] }, { "cell_type": "markdown", "metadata": { "id": "create_bucket" }, "source": [ "**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "create_bucket" }, "outputs": [], "source": [ "! gsutil mb -l $REGION $BUCKET_NAME" ] }, { "cell_type": "markdown", "metadata": { "id": "validate_bucket" }, "source": [ "Finally, validate access to your Cloud Storage bucket by examining its contents:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "validate_bucket" }, "outputs": [], "source": [ "! gsutil ls -al $BUCKET_NAME" ] }, { "cell_type": "markdown", "metadata": { "id": "setup_vars" }, "source": [ "### Set up variables\n", "\n", "Next, set up some variables used throughout the tutorial.\n", "\n", "### Import libraries and define constants" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "import_aip:mbsdk" }, "outputs": [], "source": [ "import google.cloud.aiplatform as aip" ] }, { "cell_type": "markdown", "metadata": { "id": "init_aip:mbsdk" }, "source": [ "## Initialize Vertex SDK for Python\n", "\n", "Initialize the Vertex SDK for Python for your project and corresponding bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "init_aip:mbsdk" }, "outputs": [], "source": [ "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)" ] }, { "cell_type": "markdown", "metadata": { "id": "accelerators:training,cpu,prediction,cpu,mbsdk" }, "source": [ "#### Set hardware accelerators\n", "\n", "You can set hardware accelerators for training and prediction.\n", "\n", "Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n", "\n", " (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n", "\n", "\n", "Otherwise specify `(None, None)` to use a container image to run on a CPU.\n", "\n", "Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).\n", "\n", "*Note*: GPU builds of TF releases before 2.3 fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops that are generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "accelerators:training,cpu,prediction,cpu,mbsdk" }, "outputs": [], "source": [ "if os.getenv(\"IS_TESTING_TRAIN_GPU\"):\n", " TRAIN_GPU, TRAIN_NGPU = (\n", " aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n", " int(os.getenv(\"IS_TESTING_TRAIN_GPU\")),\n", " )\n", "else:\n", " TRAIN_GPU, TRAIN_NGPU = (None, None)\n", "\n", "if os.getenv(\"IS_TESTING_DEPLOY_GPU\"):\n", " DEPLOY_GPU, DEPLOY_NGPU = (\n", " aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n", " int(os.getenv(\"IS_TESTING_DEPLOY_GPU\")),\n", " )\n", "else:\n", " DEPLOY_GPU, DEPLOY_NGPU = (None, None)" ] }, { "cell_type": "markdown", "metadata": { "id": "container:training,prediction" }, "source": [ "#### Set pre-built containers\n", "\n", "Set the pre-built Docker container image for training and prediction.\n", "\n", "\n", "For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).\n", "\n", "\n", "For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "container:training,prediction" }, "outputs": [], "source": [ "if os.getenv(\"IS_TESTING_TF\"):\n", " TF = os.getenv(\"IS_TESTING_TF\")\n", "else:\n", " TF = \"2-1\"\n", "\n", "if TF[0] == \"2\":\n", " if TRAIN_GPU:\n", " TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n", " else:\n", " TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n", " if DEPLOY_GPU:\n", " DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n", " else:\n", " DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\n", "else:\n", " if TRAIN_GPU:\n", " TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n", " else:\n", " TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n", " if DEPLOY_GPU:\n", " DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n", " else:\n", " DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n", "\n", "TRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\n", "DEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n", "\n", "print(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\n", "print(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)" ] }, { "cell_type": "markdown", "metadata": { "id": "machine:training,prediction" }, "source": [ "#### Set machine type\n", "\n", "Next, set the machine type to use for training and prediction.\n", "\n", "- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for for training and prediction.\n", " - `machine type`\n", " - `n1-standard`: 3.75GB of memory per vCPU.\n", " - `n1-highmem`: 6.5GB of memory per vCPU\n", " - `n1-highcpu`: 0.9 GB of memory per vCPU\n", " - `vCPUs`: number of \\[2, 4, 8, 16, 32, 64, 96 \\]\n", "\n", "*Note: The following is not supported for training:*\n", "\n", " - `standard`: 2 vCPUs\n", " - `highcpu`: 2, 4 and 8 vCPUs\n", "\n", "*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "machine:training,prediction" }, "outputs": [], "source": [ "if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n", " MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\n", "else:\n", " MACHINE_TYPE = \"n1-standard\"\n", "\n", "VCPU = \"4\"\n", "TRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\n", "print(\"Train machine type\", TRAIN_COMPUTE)\n", "\n", "if os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n", " MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\n", "else:\n", " MACHINE_TYPE = \"n1-standard\"\n", "\n", "VCPU = \"4\"\n", "DEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\n", "print(\"Deploy machine type\", DEPLOY_COMPUTE)" ] }, { "cell_type": "markdown", "metadata": { "id": "tutorial_start:custom" }, "source": [ "# Tutorial\n", "\n", "Now you are ready to start creating your own custom model and training for Boston Housing." ] }, { "cell_type": "markdown", "metadata": { "id": "examine_training_package" }, "source": [ "### Examine the training package\n", "\n", "#### Package layout\n", "\n", "Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n", "\n", "- PKG-INFO\n", "- README.md\n", "- setup.cfg\n", "- setup.py\n", "- trainer\n", " - \\_\\_init\\_\\_.py\n", " - task.py\n", "\n", "The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.\n", "\n", "The file `trainer/task.py` is the Python script for executing the custom training job. *Note*, when we referred to it in the worker pool specification, we replace the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`).\n", "\n", "#### Package Assembly\n", "\n", "In the following cells, you will assemble the training package." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "examine_training_package" }, "outputs": [], "source": [ "# Make folder for Python training script\n", "! rm -rf custom\n", "! mkdir custom\n", "\n", "# Add package information\n", "! touch custom/README.md\n", "\n", "setup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n", "! echo \"$setup_cfg\" > custom/setup.cfg\n", "\n", "setup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n install_requires=[\\n\\n 'tensorflow_datasets==1.3.0',\\n\\n ],\\n\\n packages=setuptools.find_packages())\"\n", "! echo \"$setup_py\" > custom/setup.py\n", "\n", "pkg_info = \"Metadata-Version: 1.0\\n\\nName: Boston Housing tabular regression\\n\\nVersion: 0.0.0\\n\\nSummary: Demostration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: aferlitsch@google.com\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n", "! echo \"$pkg_info\" > custom/PKG-INFO\n", "\n", "# Make the training subfolder\n", "! mkdir custom/trainer\n", "! touch custom/trainer/__init__.py" ] }, { "cell_type": "markdown", "metadata": { "id": "taskpy_contents:boston" }, "source": [ "#### Task.py contents\n", "\n", "In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. 
In summary:\n", "\n", "- Get the directory where to save the model artifacts from the command line (`--model_dir`), and if not specified, then from the environment variable `AIP_MODEL_DIR`.\n", "- Loads Boston Housing dataset from TF.Keras builtin datasets\n", "- Builds a simple deep neural network model using TF.Keras model API.\n", "- Compiles the model (`compile()`).\n", "- Sets a training distribution strategy according to the argument `args.distribute`.\n", "- Trains the model (`fit()`) with epochs specified by `args.epochs`.\n", "- Saves the trained model (`save(args.model_dir)`) to the specified model directory.\n", "- Saves the maximum value for each feature `f.write(str(params))` to the specified parameters file." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "taskpy_contents:boston" }, "outputs": [], "source": [ "%%writefile custom/trainer/task.py\n", "# Single, Mirror and Multi-Machine Distributed Training for Boston Housing\n", "\n", "import tensorflow_datasets as tfds\n", "import tensorflow as tf\n", "from tensorflow.python.client import device_lib\n", "import numpy as np\n", "import argparse\n", "import os\n", "import sys\n", "tfds.disable_progress_bar()\n", "\n", "parser = argparse.ArgumentParser()\n", "parser.add_argument('--model-dir', dest='model_dir',\n", " default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')\n", "parser.add_argument('--lr', dest='lr',\n", " default=0.001, type=float,\n", " help='Learning rate.')\n", "parser.add_argument('--epochs', dest='epochs',\n", " default=20, type=int,\n", " help='Number of epochs.')\n", "parser.add_argument('--steps', dest='steps',\n", " default=100, type=int,\n", " help='Number of steps per epoch.')\n", "parser.add_argument('--distribute', dest='distribute', type=str, default='single',\n", " help='distributed training strategy')\n", "parser.add_argument('--param-file', dest='param_file',\n", " default='/tmp/param.txt', type=str,\n", " help='Output file for parameters')\n", "args = parser.parse_args()\n", "\n", "print('Python Version = {}'.format(sys.version))\n", "print('TensorFlow Version = {}'.format(tf.__version__))\n", "print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\n", "\n", "# Single Machine, single compute device\n", "if args.distribute == 'single':\n", " if tf.test.is_gpu_available():\n", " strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n", " else:\n", " strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n", "# Single Machine, multiple compute device\n", "elif args.distribute == 'mirror':\n", " strategy = tf.distribute.MirroredStrategy()\n", "# Multiple Machine, multiple compute device\n", "elif args.distribute == 'multi':\n", " strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n", "\n", "# Multi-worker configuration\n", "print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n", "\n", "\n", "def make_dataset():\n", "\n", " # Scaling Boston Housing data features\n", " def scale(feature):\n", " max = np.max(feature)\n", " feature = (feature / max).astype(np.float)\n", " return feature, max\n", "\n", " (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(\n", " path=\"boston_housing.npz\", test_split=0.2, seed=113\n", " )\n", " params = []\n", " for _ in range(13):\n", " x_train[_], max = scale(x_train[_])\n", " x_test[_], _ = scale(x_test[_])\n", " params.append(max)\n", "\n", " # store the normalization (max) value for each feature\n", " with 
tf.io.gfile.GFile(args.param_file, 'w') as f:\n", " f.write(str(params))\n", " return (x_train, y_train), (x_test, y_test)\n", "\n", "\n", "# Build the Keras model\n", "def build_and_compile_dnn_model():\n", " model = tf.keras.Sequential([\n", " tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),\n", " tf.keras.layers.Dense(128, activation='relu'),\n", " tf.keras.layers.Dense(1, activation='linear')\n", " ])\n", " model.compile(\n", " loss='mse',\n", " optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))\n", " return model\n", "\n", "NUM_WORKERS = strategy.num_replicas_in_sync\n", "# Here the batch size scales up by number of workers since\n", "# `tf.data.Dataset.batch` expects the global batch size.\n", "BATCH_SIZE = 16\n", "GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\n", "\n", "with strategy.scope():\n", " # Creation of dataset, and model building/compiling need to be within\n", " # `strategy.scope()`.\n", " model = build_and_compile_dnn_model()\n", "\n", "# Train the model\n", "(x_train, y_train), (x_test, y_test) = make_dataset()\n", "model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)\n", "model.save(args.model_dir)" ] }, { "cell_type": "markdown", "metadata": { "id": "tarball_training_script" }, "source": [ "#### Store training script on your Cloud Storage bucket\n", "\n", "Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tarball_training_script" }, "outputs": [], "source": [ "! rm -f custom.tar custom.tar.gz\n", "! tar cvf custom.tar custom\n", "! gzip custom.tar\n", "! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz" ] }, { "cell_type": "markdown", "metadata": { "id": "create_custom_training_job:mbsdk,no_model" }, "source": [ "### Create and run custom training job\n", "\n", "\n", "To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.\n", "\n", "#### Create custom training job\n", "\n", "A custom training job is created with the `CustomTrainingJob` class, with the following parameters:\n", "\n", "- `display_name`: The human readable name for the custom training job.\n", "- `container_uri`: The training container image.\n", "- `requirements`: Package requirements for the training container image (e.g., pandas).\n", "- `script_path`: The relative path to the training script." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "create_custom_training_job:mbsdk,no_model" }, "outputs": [], "source": [ "job = aip.CustomTrainingJob(\n", " display_name=\"boston_\" + TIMESTAMP,\n", " script_path=\"custom/trainer/task.py\",\n", " container_uri=TRAIN_IMAGE,\n", " requirements=[\"gcsfs==0.7.1\", \"tensorflow-datasets==4.4\"],\n", ")\n", "\n", "print(job)" ] }, { "cell_type": "markdown", "metadata": { "id": "prepare_custom_cmdargs" }, "source": [ "### Prepare your command-line arguments\n", "\n", "Now define the command-line arguments for your custom training container:\n", "\n", "- `args`: The command-line arguments to pass to the executable that is set as the entry point into the container.\n", " - `--model-dir` : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.\n", " - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or\n", " - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification.\n", " - `\"--epochs=\" + EPOCHS`: The number of epochs for training.\n", " - `\"--steps=\" + STEPS`: The number of steps per epoch." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "prepare_custom_cmdargs" }, "outputs": [], "source": [ "MODEL_DIR = \"{}/{}\".format(BUCKET_NAME, TIMESTAMP)\n", "\n", "EPOCHS = 20\n", "STEPS = 100\n", "\n", "DIRECT = True\n", "if DIRECT:\n", " CMDARGS = [\n", " \"--model-dir=\" + MODEL_DIR,\n", " \"--epochs=\" + str(EPOCHS),\n", " \"--steps=\" + str(STEPS),\n", " ]\n", "else:\n", " CMDARGS = [\n", " \"--epochs=\" + str(EPOCHS),\n", " \"--steps=\" + str(STEPS),\n", " ]" ] }, { "cell_type": "markdown", "metadata": { "id": "run_custom_job:mbsdk,no_model" }, "source": [ "#### Run the custom training job\n", "\n", "Next, you run the custom job to start the training job by invoking the method `run`, with the following parameters:\n", "\n", "- `args`: The command-line arguments to pass to the training script.\n", "- `replica_count`: The number of compute instances for training (replica_count = 1 is single node training).\n", "- `machine_type`: The machine type for the compute instances.\n", "- `accelerator_type`: The hardware accelerator type.\n", "- `accelerator_count`: The number of accelerators to attach to a worker replica.\n", "- `base_output_dir`: The Cloud Storage location to write the model artifacts to.\n", "- `sync`: Whether to block until completion of the job." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "run_custom_job:mbsdk,no_model" }, "outputs": [], "source": [ "if TRAIN_GPU:\n", " job.run(\n", " args=CMDARGS,\n", " replica_count=1,\n", " machine_type=TRAIN_COMPUTE,\n", " accelerator_type=TRAIN_GPU.name,\n", " accelerator_count=TRAIN_NGPU,\n", " base_output_dir=MODEL_DIR,\n", " sync=True,\n", " )\n", "else:\n", " job.run(\n", " args=CMDARGS,\n", " replica_count=1,\n", " machine_type=TRAIN_COMPUTE,\n", " base_output_dir=MODEL_DIR,\n", " sync=True,\n", " )\n", "\n", "model_path_to_deploy = MODEL_DIR" ] }, { "cell_type": "markdown", "metadata": { "id": "load_saved_model" }, "source": [ "## Load the saved model\n", "\n", "Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. 
Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.\n", "\n", "To load, you use the TF.Keras `model.load_model()` method passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "load_saved_model" }, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "local_model = tf.keras.models.load_model(MODEL_DIR)" ] }, { "cell_type": "markdown", "metadata": { "id": "evaluate_custom_model:tabular" }, "source": [ "## Evaluate the model\n", "\n", "Now let's find out how good the model is.\n", "\n", "### Load evaluation data\n", "\n", "You will load the Boston Housing test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).\n", "\n", "You don't need the training data, and hence why we loaded it as `(_, _)`.\n", "\n", "Before you can run the data through evaluation, you need to preprocess it:\n", "\n", "`x_test`:\n", "1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "evaluate_custom_model:tabular,boston" }, "outputs": [], "source": [ "import numpy as np\n", "from tensorflow.keras.datasets import boston_housing\n", "\n", "(_, _), (x_test, y_test) = boston_housing.load_data(\n", " path=\"boston_housing.npz\", test_split=0.2, seed=113\n", ")\n", "\n", "\n", "def scale(feature):\n", " max = np.max(feature)\n", " feature = (feature / max).astype(np.float32)\n", " return feature\n", "\n", "\n", "# Let's save one data item that has not been scaled\n", "x_test_notscaled = x_test[0:1].copy()\n", "\n", "for _ in range(13):\n", " x_test[_] = scale(x_test[_])\n", "x_test = x_test.astype(np.float32)\n", "\n", "print(x_test.shape, x_test.dtype, y_test.shape)\n", "print(\"scaled\", x_test[0])\n", "print(\"unscaled\", x_test_notscaled)" ] }, { "cell_type": "markdown", "metadata": { "id": "perform_evaluation_custom" }, "source": [ "### Perform the model evaluation\n", "\n", "Now evaluate how well the model in the custom job did." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "perform_evaluation_custom" }, "outputs": [], "source": [ "local_model.evaluate(x_test, y_test)" ] }, { "cell_type": "markdown", "metadata": { "id": "serving_function_signature:xai" }, "source": [ "## Get the serving function signature\n", "\n", "You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\n", "\n", "When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.\n", "\n", "You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "serving_function_signature:xai" }, "outputs": [], "source": [ "loaded = tf.saved_model.load(model_path_to_deploy)\n", "\n", "serving_input = list(\n", " loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n", ")[0]\n", "print(\"Serving function input:\", serving_input)\n", "serving_output = list(loaded.signatures[\"serving_default\"].structured_outputs.keys())[0]\n", "print(\"Serving function output:\", serving_output)\n", "\n", "input_name = local_model.input.name\n", "print(\"Model input name:\", input_name)\n", "output_name = local_model.output.name\n", "print(\"Model output name:\", output_name)" ] }, { "cell_type": "markdown", "metadata": { "id": "explanation_spec" }, "source": [ "### Explanation Specification\n", "\n", "To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex `Model` resource. These settings are referred to as the explanation metadata, which consists of:\n", "\n", "- `parameters`: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:\n", " - Shapley - *Note*, not recommended for image data -- can be very long running\n", " - XRAI\n", " - Integrated Gradients\n", "- `metadata`: This is the specification for how the algoithm is applied on your custom model.\n", "\n", "#### Explanation Parameters\n", "\n", "Let's first dive deeper into the settings for the explainability algorithm.\n", "\n", "#### Shapley\n", "\n", "Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.\n", "\n", "Use Cases:\n", " - Classification and regression on tabular data.\n", "\n", "Parameters:\n", "\n", "- `path_count`: This is the number of paths over the features that will be processed by the algorithm. An exact approximation of the Shapley values requires M! paths, where M is the number of features. For the CIFAR10 dataset, this would be 784 (28*28).\n", "\n", "For any non-trival number of features, this is too compute expensive. You can reduce the number of paths over the features to M * `path_count`.\n", "\n", "#### Integrated Gradients\n", "\n", "A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.\n", "\n", "Use Cases:\n", " - Classification and regression on tabular data.\n", " - Classification on image data.\n", "\n", "Parameters:\n", "\n", "- `step_count`: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.\n", "\n", "#### XRAI\n", "\n", "Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.\n", "\n", "Use Cases:\n", "\n", " - Classification on image data.\n", "\n", "Parameters:\n", "\n", "- `step_count`: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.\n", "\n", "In the next code cell, set the variable `XAI` to which explainabilty algorithm you will use on your custom model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "explanation_parameters:mbsdk" }, "outputs": [], "source": [ "XAI = \"ig\" # [ shapley, ig, xrai ]\n", "\n", "if XAI == \"shapley\":\n", " PARAMETERS = {\"sampled_shapley_attribution\": {\"path_count\": 10}}\n", "elif XAI == \"ig\":\n", " PARAMETERS = {\"integrated_gradients_attribution\": {\"step_count\": 50}}\n", "elif XAI == \"xrai\":\n", " PARAMETERS = {\"xrai_attribution\": {\"step_count\": 50}}\n", "\n", "parameters = aip.explain.ExplanationParameters(PARAMETERS)" ] }, { "cell_type": "markdown", "metadata": { "id": "explanation_metadata:tabular" }, "source": [ "#### Explanation Metadata\n", "\n", "Let's first dive deeper into the explanation metadata, which consists of:\n", "\n", "- `outputs`: A scalar value in the output to attribute -- what to explain. For example, in a probability output \\[0.1, 0.2, 0.7\\] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is `y` and that is what we want to explain.\n", "\n", " y = f(x)\n", "\n", "Consider the following formulae, where the outputs are `y` and `z`. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output `y` or `z`. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.\n", "\n", " y, z = f(x)\n", "\n", "The dictionary format for `outputs` is:\n", "\n", " { \"outputs\": { \"[your_display_name]\":\n", " \"output_tensor_name\": [layer]\n", " }\n", " }\n", "\n", "<blockquote>\n", " - [your_display_name]: A human readable name you assign to the output to explain. A common example is \"probability\".<br/>\n", " - \"output_tensor_name\": The key/value field to identify the output layer to explain. <br/>\n", " - [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.\n", "</blockquote>\n", "\n", "- `inputs`: The features for attribution -- how they contributed to the output. Consider the following formulae, where `a` and `b` are the features. We have to pick which features to explain how the contributed. Assume that this model is deployed for A/B testing, where `a` are the data_items for the prediction and `b` identifies whether the model instance is A or B. You would want to pick `a` (or some subset of) for the features, and not `b` since it does not contribute to the prediction.\n", "\n", " y = f(a,b)\n", "\n", "The minimum dictionary format for `inputs` is:\n", "\n", " { \"inputs\": { \"[your_display_name]\":\n", " \"input_tensor_name\": [layer]\n", " }\n", " }\n", "\n", "<blockquote>\n", " - [your_display_name]: A human readable name you assign to the input to explain. A common example is \"features\".<br/>\n", " - \"input_tensor_name\": The key/value field to identify the input layer for the feature attribution. <br/>\n", " - [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.\n", "</blockquote>\n", "\n", "Since the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:\n", "\n", "<blockquote>\n", " - \"encoding\": \"BAG_OF_FEATURES\" : Indicates that the inputs are set of tabular features.<br/>\n", " - \"index_feature_mapping\": [ feature-names ] : A list of human readable names for each feature. 
For this example, we use the feature names specified in the dataset.<br/>\n", " - \"modality\": \"numeric\": Indicates the field values are numeric.\n", "</blockquote>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "explanation_metadata:mbsdk,tabular" }, "outputs": [], "source": [ "INPUT_METADATA = {\n", " \"input_tensor_name\": serving_input,\n", " \"encoding\": \"BAG_OF_FEATURES\",\n", " \"modality\": \"numeric\",\n", " \"index_feature_mapping\": [\n", " \"crim\",\n", " \"zn\",\n", " \"indus\",\n", " \"chas\",\n", " \"nox\",\n", " \"rm\",\n", " \"age\",\n", " \"dis\",\n", " \"rad\",\n", " \"tax\",\n", " \"ptratio\",\n", " \"b\",\n", " \"lstat\",\n", " ],\n", "}\n", "\n", "OUTPUT_METADATA = {\"output_tensor_name\": serving_output}\n", "\n", "input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)\n", "output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)\n", "\n", "metadata = aip.explain.ExplanationMetadata(\n", " inputs={\"features\": input_metadata}, outputs={\"medv\": output_metadata}\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "upload_model:mbsdk,xai" }, "source": [ "## Upload the model\n", "\n", "Next, upload your model to a `Model` resource using the `Model.upload()` method, with the following parameters:\n", "\n", "- `display_name`: The human readable name for the `Model` resource.\n", "- `artifact_uri`: The Cloud Storage location of the trained model artifacts.\n", "- `serving_container_image_uri`: The serving container image.\n", "- `sync`: Whether to execute the upload asynchronously or synchronously.\n", "- `explanation_parameters`: Parameters to configure explanations for the `Model`'s predictions.\n", "- `explanation_metadata`: Metadata describing the `Model`'s input and output for explanation.\n", "\n", "If the `upload()` method is run asynchronously, you can subsequently block until completion with the `wait()` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "upload_model:mbsdk,xai" }, "outputs": [], "source": [ "model = aip.Model.upload(\n", " display_name=\"boston_\" + TIMESTAMP,\n", " artifact_uri=MODEL_DIR,\n", " serving_container_image_uri=DEPLOY_IMAGE,\n", " explanation_parameters=parameters,\n", " explanation_metadata=metadata,\n", " sync=False,\n", ")\n", "\n", "model.wait()" ] }, { "cell_type": "markdown", "metadata": { "id": "deploy_model:mbsdk,all" }, "source": [ "## Deploy the model\n", "\n", "Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method, with the following parameters:\n", "\n", "- `deployed_model_display_name`: A human readable name for the deployed model.\n", "- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\n", "If only one model, then specify as { \"0\": 100 }, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\n", "If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { \"0\": percent, model_id: percent, ... }, where model_id is the model id of an existing model deployed to the endpoint. 
The percents must add up to 100.\n", "- `machine_type`: The type of machine to use for serving predictions.\n", "- `accelerator_type`: The hardware accelerator type.\n", "- `accelerator_count`: The number of accelerators to attach to a worker replica.\n", "- `starting_replica_count`: The number of compute instances to initially provision.\n", "- `max_replica_count`: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "deploy_model:mbsdk,all" }, "outputs": [], "source": [ "DEPLOYED_NAME = \"boston-\" + TIMESTAMP\n", "\n", "TRAFFIC_SPLIT = {\"0\": 100}\n", "\n", "MIN_NODES = 1\n", "MAX_NODES = 1\n", "\n", "if DEPLOY_GPU:\n", " endpoint = model.deploy(\n", " deployed_model_display_name=DEPLOYED_NAME,\n", " traffic_split=TRAFFIC_SPLIT,\n", " machine_type=DEPLOY_COMPUTE,\n", " accelerator_type=DEPLOY_GPU,\n", " accelerator_count=DEPLOY_NGPU,\n", " min_replica_count=MIN_NODES,\n", " max_replica_count=MAX_NODES,\n", " )\n", "else:\n", " endpoint = model.deploy(\n", " deployed_model_display_name=DEPLOYED_NAME,\n", " traffic_split=TRAFFIC_SPLIT,\n", " machine_type=DEPLOY_COMPUTE,\n", " accelerator_type=DEPLOY_GPU,\n", " accelerator_count=0,\n", " min_replica_count=MIN_NODES,\n", " max_replica_count=MAX_NODES,\n", " )" ] }, { "cell_type": "markdown", "metadata": { "id": "get_test_item:test" }, "source": [ "### Get test item\n", "\n", "You will use an example out of the test (holdout) portion of the dataset as a test item." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "get_test_item:test,tabular" }, "outputs": [], "source": [ "test_item = x_test[0]\n", "test_label = y_test[0]\n", "print(test_item.shape)" ] }, { "cell_type": "markdown", "metadata": { "id": "explain_request:mbsdk,custom,lrg" }, "source": [ "### Make the prediction with explanation\n", "\n", "Now that your `Model` resource is deployed to an `Endpoint` resource, you can get online explanations by sending prediction requests to the `Endpoint` resource.\n", "\n", "#### Request\n", "\n", "The format of each instance is:\n", "\n", " [feature_list]\n", "\n", "Since the explain() method can take multiple items (instances), send your single test item as a list of one test item.\n", "\n", "#### Response\n", "\n", "The response from the explain() call is a Python dictionary with the following entries:\n", "\n", "- `ids`: The internal assigned unique identifiers for each prediction request.\n", "- `predictions`: The prediction per instance.\n", "- `deployed_model_id`: The Vertex AI identifier for the deployed `Model` resource which did the predictions.\n", "- `explanations`: The feature attributions." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "explain_request:mbsdk,custom,lrg" }, "outputs": [], "source": [ "instances_list = [test_item.tolist()]\n", "\n", "prediction = endpoint.explain(instances_list)\n", "print(prediction)" ] }, { "cell_type": "markdown", "metadata": { "id": "understand_explanations" }, "source": [ "### Understanding the explanations response\n", "\n", "First, you will look at what your model predicted and compare it to the actual value."
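, "\n", "As a quick, illustrative check (a minimal sketch; `test_label` was saved in the *Get test item* cell above), you can print the predicted value next to the actual label:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "compare_prediction_to_label" }, "outputs": [], "source": [ "# Illustrative sketch: compare the endpoint's prediction with the ground-truth label.\n", "# `prediction` was returned by endpoint.explain() above; `test_label` was set earlier.\n", "predicted_value = prediction.predictions[0][0]\n", "print(\"Predicted value:\", predicted_value)\n", "print(\"Actual value:  \", test_label)"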
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "understand_explanations:mbsdk,boston" }, "outputs": [], "source": [ "value = prediction[0][0][0]\n", "print(\"Predicted Value:\", value)" ] }, { "cell_type": "markdown", "metadata": { "id": "examine_feature_attributions" }, "source": [ "### Examine feature attributions\n", "\n", "Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "examine_feature_attributions:mbsdk,boston" }, "outputs": [], "source": [ "from tabulate import tabulate\n", "\n", "feature_names = [\n", " \"crim\",\n", " \"zn\",\n", " \"indus\",\n", " \"chas\",\n", " \"nox\",\n", " \"rm\",\n", " \"age\",\n", " \"dis\",\n", " \"rad\",\n", " \"tax\",\n", " \"ptratio\",\n", " \"b\",\n", " \"lstat\",\n", "]\n", "attributions = prediction.explanations[0].attributions[0].feature_attributions\n", "\n", "rows = []\n", "for i, val in enumerate(feature_names):\n", " rows.append([val, test_item[i], attributions[val]])\n", "print(tabulate(rows, headers=[\"Feature name\", \"Feature value\", \"Attribution value\"]))" ] }, { "cell_type": "markdown", "metadata": { "id": "check_explanations_baselines" }, "source": [ "### Check your explanations and baselines\n", "\n", "To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the `baseline_score` returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.\n", "\n", "In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the `sanity_check_explanations` method.\n", "\n", "#### Get explanations" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "check_explanations_baselines:mbsdk,boston" }, "outputs": [], "source": [ "# Prepare 10 test examples to your model for prediction\n", "instances = []\n", "for i in range(10):\n", " instances.append(x_test[i].tolist())\n", "\n", "response = endpoint.explain(instances)" ] }, { "cell_type": "markdown", "metadata": { "id": "sanity_check_explanations" }, "source": [ "#### Sanity check\n", "\n", "In the function below you perform a sanity check on the explanations." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sanity_check_explanations" }, "outputs": [], "source": [ "import numpy as np\n", "\n", "\n", "def sanity_check_explanations(\n", " explanation, prediction, mean_tgt_value=None, variance_tgt_value=None\n", "):\n", " passed_test = 0\n", " total_test = 1\n", " # `attributions` is a dict where keys are the feature names\n", " # and values are the feature attributions for each feature\n", " baseline_score = explanation.attributions[0].baseline_output_value\n", " print(\"baseline:\", baseline_score)\n", "\n", " # Sanity check 1\n", " # The prediction at the input is equal to that at the baseline.\n", " # Please use a different baseline. 
Some suggestions are: random input, training\n", " # set mean.\n", " if abs(prediction - baseline_score) <= 0.05:\n", " print(\"Warning: example score and baseline score are too close.\")\n", " print(\"You might not get attributions.\")\n", " else:\n", " passed_test += 1\n", " print(\"Sanity Check 1: Passed\")\n", "\n", " print(passed_test, \" out of \", total_test, \" sanity checks passed.\")\n", "\n", "\n", "i = 0\n", "for explanation in response.explanations:\n", " try:\n", " prediction = np.max(response.predictions[i][\"scores\"])\n", " except TypeError:\n", " prediction = np.max(response.predictions[i])\n", " sanity_check_explanations(explanation, prediction)\n", " i += 1" ] }, { "cell_type": "markdown", "metadata": { "id": "undeploy_model:mbsdk" }, "source": [ "## Undeploy the model\n", "\n", "When you are done doing predictions, you undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "undeploy_model:mbsdk" }, "outputs": [], "source": [ "endpoint.undeploy_all()" ] }, { "cell_type": "markdown", "metadata": { "id": "cleanup:mbsdk" }, "source": [ "# Cleaning up\n", "\n", "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n", "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n", "\n", "Otherwise, you can delete the individual resources you created in this tutorial:\n", "\n", "- Dataset\n", "- Pipeline\n", "- Model\n", "- Endpoint\n", "- AutoML Training Job\n", "- Batch Job\n", "- Custom Job\n", "- Hyperparameter Tuning Job\n", "- Cloud Storage Bucket" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cleanup:mbsdk" }, "outputs": [], "source": [ "delete_all = True\n", "\n", "if delete_all:\n", " # Delete the dataset using the Vertex dataset object\n", " try:\n", " if \"dataset\" in globals():\n", " dataset.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " # Delete the model using the Vertex model object\n", " try:\n", " if \"model\" in globals():\n", " model.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " # Delete the endpoint using the Vertex endpoint object\n", " try:\n", " if \"endpoint\" in globals():\n", " endpoint.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " # Delete the AutoML or Pipeline training job\n", " try:\n", " if \"dag\" in globals():\n", " dag.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " # Delete the custom training job\n", " try:\n", " if \"job\" in globals():\n", " job.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " # Delete the batch prediction job using the Vertex batch prediction object\n", " try:\n", " if \"batch_predict_job\" in globals():\n", " batch_predict_job.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n", " try:\n", " if \"hpt_job\" in globals():\n", " hpt_job.delete()\n", " except Exception as e:\n", " print(e)\n", "\n", " if \"BUCKET_NAME\" in globals():\n", " ! gsutil rm -r $BUCKET_NAME" ] } ], "metadata": { "colab": { "name": "sdk_custom_tabular_regression_online_explain.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }