{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2022 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Vertex AI Pipelines: Evaluating batch prediction results from an AutoML Tabular classification model\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/model_evaluation/automl_tabular_classification_model_evaluation.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fmodel_evaluation%2Fautoml_tabular_classification_model_evaluation.ipynb\">\n",
" <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/model_evaluation/automl_tabular_classification_model_evaluation.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/model_evaluation/automl_tabular_classification_model_evaluation.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates how to use the Vertex AI classification model evaluation component to evaluate an AutoML Tabular classification model. Model evaluation helps determine your model's performance based on the evaluation metrics and improve the model whenever necessary. \n",
"\n",
"Learn more about [Vertex AI Model Evaluation](https://cloud.google.com/vertex-ai/docs/evaluation/introduction). Learn more about [Classification for tabular data](https://cloud.google.com/vertex-ai/docs/tabular-data/classification-regression/overview)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d975e698c9a4"
},
"source": [
"### Objective\n",
"\n",
"In this tutorial, you learn how to train a Vertex AI AutoML Tabular classification model and learn how to evaluate it through a Vertex AI pipeline job using `google_cloud_pipeline_components`:\n",
"\n",
"This tutorial uses the following Google Cloud ML services and resources:\n",
"\n",
"- Vertex AI `Datasets`\n",
"- Vertex AI `Training`(AutoML Tabular classification) \n",
"- Vertex AI `Model Registry`\n",
"- Vertex AI `Pipelines`\n",
"- Vertex AI `Batch Predictions`\n",
"\n",
"\n",
"\n",
"The steps performed include:\n",
"\n",
"- Create a Vertex AI `Dataset`.\n",
"- Train an Automl Tabular classification model on the `Dataset` resource.\n",
"- Import the trained `AutoML model resource` into the pipeline.\n",
"- Run a `Batch Prediction` job.\n",
"- Evaluate the AutoML model using the `Classification Evaluation component`.\n",
"- Import the classification metrics to the AutoML model resource."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "08d289fa873f"
},
"source": [
"### Dataset\n",
"\n",
"The dataset being used in this notebook is part of the PetFinder Dataset, available [here](https://www.kaggle.com/c/petfinder-adoption-prediction) on Kaggle. The current dataset is only a part of the original dataset considered for the problem of predicting whether the pet is adopted or not. It consists of the following fields:\n",
"\n",
"- `Type`: Type of animal (1 = Dog, 2 = Cat)\n",
"- `Age`: Age of pet when listed, in months\n",
"- `Breed1`: Primary breed of pet\n",
"- `Gender`: Gender of pet\n",
"- `Color1`: Color 1 of pet \n",
"- `Color2`: Color 2 of pet\n",
"- `MaturitySize`: Size at maturity (1 = Small, 2 = Medium, 3 = Large, 4 = Extra Large, 0 = Not Specified)\n",
"- `FurLength`: Fur length (1 = Short, 2 = Medium, 3 = Long, 0 = Not Specified)\n",
"- `Vaccinated`: Pet has been vaccinated (1 = Yes, 2 = No, 3 = Not Sure)\n",
"- `Sterilized`: Pet has been spayed / neutered (1 = Yes, 2 = No, 3 = Not Sure)\n",
"- `Health`: Health Condition (1 = Healthy, 2 = Minor Injury, 3 = Serious Injury, 0 = Not Specified)\n",
"- `Fee`: Adoption fee (0 = Free)\n",
"- `PhotoAmt`: Total uploaded photos for this pet\n",
"- `Adopted`: Whether or not the pet was adopted (Yes/No).\n",
"\n",
"**Note**: This dataset is moved to a public Cloud Storage bucket from where it's accessed in this notebook."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aed92deeb4a0"
},
"source": [
"### Costs \n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f0316df526f8"
},
"source": [
"## Get started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i7EUnXsZhAGF"
},
"source": [
"### Install Vertex AI SDK for Python and other required packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2b4ef9b72d43"
},
"outputs": [],
"source": [
"! pip3 install --upgrade --quiet google-cloud-aiplatform \\\n",
" google-cloud-pipeline-components \\\n",
" matplotlib"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ff555b32bab8"
},
"source": [
"### Restart runtime (Colab only)\n",
"\n",
"To use the newly installed packages, you must restart the runtime on Google Colab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f09b4dff629a"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4a2b7b59bbf7"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f82e28c631cc"
},
"source": [
"### Authenticate your notebook environment (Colab only)\n",
"\n",
"Authenticate your environment on Google Colab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "46604f70e831"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "191d1345e064"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project. Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_project_id"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"LOCATION = \"us-central1\" # @param {type: \"string\"}"
]
},
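{
"cell_type": "markdown",
"metadata": {
"id": "autoset_project_id_md"
},
"source": [
"If you left the \`PROJECT_ID\` placeholder unchanged, you can optionally detect the active project from your \`gcloud\` configuration. This is a convenience sketch, assuming the \`gcloud\` CLI is installed and configured in your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "autoset_project_id"
},
"outputs": [],
"source": [
"# If the placeholder wasn't replaced, fall back to the active gcloud project\n",
"if PROJECT_ID == \"\" or PROJECT_ID == \"[your-project-id]\":\n",
"    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n",
"    PROJECT_ID = shell_output[0]\n",
"    print(\"Project ID:\", PROJECT_ID)"
]
},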
{
"cell_type": "markdown",
"metadata": {
"id": "aab852d94fc7"
},
"source": [
"#### Enable Cloud services used throughout this notebook.\n",
"\n",
"Run the cell below to the enable Compute Engine, Container Registry, and Vertex AI services."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "18396d3d7fe4"
},
"outputs": [],
"source": [
"!gcloud services enable compute.googleapis.com \\\n",
" containerregistry.googleapis.com \\\n",
" aiplatform.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bucket:mbsdk"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bucket"
},
"outputs": [],
"source": [
"BUCKET_URI = (\n",
" f\"gs://model-evaluation-bucket-{PROJECT_ID}-unique\" # @param {type:\"string\"}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_bucket"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_bucket"
},
"outputs": [],
"source": [
"! gsutil mb -l $LOCATION $BUCKET_URI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account"
},
"source": [
"#### Service Account\n",
"\n",
"You use a service account to create Vertex AI Pipeline jobs. If you don't want to use your project's Compute Engine service account, set `SERVICE_ACCOUNT` to another service account ID."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UwC1AdGeF6kx"
},
"outputs": [],
"source": [
"SERVICE_ACCOUNT = \"\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "autoset_service_account"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"IS_COLAB = \"google.colab\" in sys.modules\n",
"\n",
"if (\n",
" SERVICE_ACCOUNT == \"\"\n",
" or SERVICE_ACCOUNT is None\n",
" or SERVICE_ACCOUNT == \"[your-service-account]\"\n",
"):\n",
" # Get your service account from gcloud\n",
" if not IS_COLAB:\n",
" shell_output = !gcloud auth list 2>/dev/null\n",
" SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n",
"\n",
" else: # IS_COLAB:\n",
" shell_output = ! gcloud projects describe $PROJECT_ID\n",
" project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n",
" SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n",
"\n",
" print(\"Service Account:\", SERVICE_ACCOUNT)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account:pipelines"
},
"source": [
"#### Set service account access for Vertex AI Pipelines\n",
"\n",
"Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step. You only need to run this step once per service account."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6OqzKqhMF6kx"
},
"outputs": [],
"source": [
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n",
"\n",
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XoEqT2Y4DJmf"
},
"source": [
"### Import libraries\n",
"\n",
"Import the Vertex AI Python SDK and other required Python libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pRUOFELefqf1"
},
"outputs": [],
"source": [
"import json\n",
"\n",
"import google.cloud.aiplatform as aiplatform\n",
"import matplotlib.pyplot as plt\n",
"from google.cloud import aiplatform_v1"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "init_aip:mbsdk,all"
},
"source": [
"### Initialize the Vertex AI SDK for Python\n",
"\n",
"Initialize the Vertex AI SDK for Python for your project and corresponding bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ksAefQcCF6ky"
},
"outputs": [],
"source": [
"aiplatform.init(project=PROJECT_ID, location=LOCATION, staging_bucket=BUCKET_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8d97acf78771"
},
"source": [
"## Create a Vertex AI dataset\n",
"\n",
"Create a managed tabular dataset resource in Vertex AI using the dataset source."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3390c9e9426c"
},
"outputs": [],
"source": [
"DATA_SOURCE = \"gs://cloud-samples-data/ai-platform-unified/datasets/tabular/petfinder-tabular-classification.csv\""
]
},
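{
"cell_type": "markdown",
"metadata": {
"id": "preview_dataset_md"
},
"source": [
"Optionally, preview the first few rows of the source CSV to confirm that the schema matches the field descriptions above. This is a minimal sketch, assuming \`pandas\` is available in your environment (it's preinstalled on Colab and Vertex AI Workbench)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "preview_dataset"
},
"outputs": [],
"source": [
"import io\n",
"\n",
"import pandas as pd\n",
"\n",
"# Read the CSV through gsutil so no extra GCS client library is needed\n",
"csv_lines = !gsutil cat $DATA_SOURCE\n",
"\n",
"# Load the rows into a DataFrame and display the first few records\n",
"preview_df = pd.read_csv(io.StringIO(\"\\n\".join(csv_lines)))\n",
"preview_df.head()"
]
},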
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2011a473ce65"
},
"outputs": [],
"source": [
"# Create the Vertex AI Dataset resource\n",
"dataset = aiplatform.TabularDataset.create(\n",
" display_name=\"petfinder-tabular-dataset\",\n",
" gcs_source=DATA_SOURCE,\n",
")\n",
"\n",
"print(\"Resource name:\", dataset.resource_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6da01c2f1d4f"
},
"source": [
"## Train an AutoML model\n",
"\n",
"Train a simple classification model using the created dataset with `Adopted` as the target column. \n",
"\n",
"Create a training job using Vertex AI SDK's `AutoMLTabularTrainingJob` class. Set a display name for the training job and specify the appropriate data types for column transformations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5dd3db2d1225"
},
"outputs": [],
"source": [
"TRAINING_JOB_DISPLAY_NAME = \"[your-train-job-display-name]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0614e3fb19da"
},
"outputs": [],
"source": [
"# If no display name is specified, use the default one\n",
"if (\n",
" TRAINING_JOB_DISPLAY_NAME == \"\"\n",
" or TRAINING_JOB_DISPLAY_NAME is None\n",
" or TRAINING_JOB_DISPLAY_NAME == \"[your-train-job-display-name]\"\n",
"):\n",
" TRAINING_JOB_DISPLAY_NAME = \"train-petfinder-automl\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ce9c9f279674"
},
"source": [
"### Create an AutoML Tabular training job\n",
"\n",
"`AutoMLTabularTrainingJob` class creates an AutoML training job using the following parameters: \n",
"\n",
"- `display_name`: The human readable name for the Vertex AI Training job.\n",
"- `optimization_prediction_type`: The type of prediction the Vertex AI Model is to produce. Ex: regression, classification.\n",
"- `column_specs`(Optional): Transformations to apply to the input columns (including data-type corrections).\n",
"- `optimization_objective`: The optimization objective to minimize or maximize. Depending on the type of prediction, this parameter is chosen. If the field is not set, the default objective function is used. \n",
"\n",
"Learn more about [Class AutoMLTabularTrainingJob](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.AutoMLTabularTrainingJob)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d33629c2aae6"
},
"outputs": [],
"source": [
"# Define the AutoML training job\n",
"train_job = aiplatform.AutoMLTabularTrainingJob(\n",
" display_name=TRAINING_JOB_DISPLAY_NAME,\n",
" optimization_prediction_type=\"classification\",\n",
" column_specs={\n",
" \"Type\": \"categorical\",\n",
" \"Age\": \"numeric\",\n",
" \"Breed1\": \"categorical\",\n",
" \"Color1\": \"categorical\",\n",
" \"Color2\": \"categorical\",\n",
" \"MaturitySize\": \"categorical\",\n",
" \"FurLength\": \"categorical\",\n",
" \"Vaccinated\": \"categorical\",\n",
" \"Sterilized\": \"categorical\",\n",
" \"Health\": \"categorical\",\n",
" \"Fee\": \"numeric\",\n",
" \"PhotoAmt\": \"numeric\",\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "391c51c98647"
},
"source": [
"### Set a display name\n",
"Set a display name for the AutoML Tabular classification model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "454f077b984e"
},
"outputs": [],
"source": [
"MODEL_DISPLAY_NAME = \"[your-model-display-name]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "21b5a27e8171"
},
"outputs": [],
"source": [
"# If no name is specified, use the default name\n",
"if (\n",
" MODEL_DISPLAY_NAME == \"\"\n",
" or MODEL_DISPLAY_NAME is None\n",
" or MODEL_DISPLAY_NAME == \"[your-model-display-name]\"\n",
"):\n",
" MODEL_DISPLAY_NAME = \"pet-adoption-prediction-model\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "93ebafd3f347"
},
"source": [
"### Run the training job\n",
"\n",
"Now, run the training job on the created tabular dataset by passing the following arguments for training:\n",
"\n",
"- `dataset`: The Vertex AI tabular dataset within the same project from which data needs to be used to train the Vertex AI Model.\n",
"- `target_column`: The name of the column values of which the Vertex AI Model is to predict.\n",
"- `model_display_name`: The display name of the Vertex AI Model that is produced as an output. \n",
"- `budget_milli_node_hours`(Optional): The training budget of creating the Vertex AI Model, expressed in milli node hours i.e. 1,000 value in this field means 1 node hour. The training cost of the model does not exceed this budget.\n",
"\n",
"Learn more about [`run()` method from the Class AutoMLTabularTrainingJob](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.AutoMLTabularTrainingJob#google_cloud_aiplatform_AutoMLTabularTrainingJob_run).\n",
"\n",
"The training job takes roughly 1.5-2 hours to finish."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9ce44a2ab942"
},
"outputs": [],
"source": [
"# Specify the target column\n",
"target_column = \"Adopted\"\n",
"\n",
"# Run the training job\n",
"model = train_job.run(\n",
" dataset=dataset,\n",
" target_column=target_column,\n",
" model_display_name=MODEL_DISPLAY_NAME,\n",
" budget_milli_node_hours=1000,\n",
")"
]
},
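{
"cell_type": "markdown",
"metadata": {
"id": "record_model_resource_name_md"
},
"source": [
"Because training can take a couple of hours, you may want to record the trained model's resource name so that you can re-fetch the model from the Vertex AI Model Registry in a later session. The \`MODEL_RESOURCE_NAME\` variable below is only illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "record_model_resource_name"
},
"outputs": [],
"source": [
"# Keep the trained model's resource name for later reference\n",
"MODEL_RESOURCE_NAME = model.resource_name\n",
"print(\"Model resource name:\", MODEL_RESOURCE_NAME)\n",
"\n",
"# In a fresh session, you could reconstruct the Model object with:\n",
"# model = aiplatform.Model(MODEL_RESOURCE_NAME)"
]
},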
{
"cell_type": "markdown",
"metadata": {
"id": "bfa52eb3f22f"
},
"source": [
"## List model evaluations from training\n",
"\n",
"After the training job is finished, get the model evaluations and print them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d56e2b3cf57d"
},
"outputs": [],
"source": [
"# Get evaluations\n",
"model_evaluations = model.list_model_evaluations()\n",
"\n",
"# Print the evaluation metrics\n",
"for evaluation in model_evaluations:\n",
" evaluation = evaluation.to_dict()\n",
" print(\"Model's evaluation metrics from training:\\n\")\n",
" metrics = evaluation[\"metrics\"]\n",
" for metric in metrics.keys():\n",
" print(f\"metric: {metric}, value: {metrics[metric]}\\n\")"
]
},
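{
"cell_type": "markdown",
"metadata": {
"id": "headline_metrics_md"
},
"source": [
"You can also read individual metrics programmatically instead of printing them all. The sketch below assumes at least one evaluation exists and uses metric keys typically reported for AutoML tabular classification models, such as \`auRoc\`, \`auPrc\`, and \`logLoss\`; keys that aren't present are skipped."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "headline_metrics"
},
"outputs": [],
"source": [
"# Pull a few headline metrics from the first evaluation, skipping any missing keys\n",
"training_metrics = model_evaluations[0].to_dict()[\"metrics\"]\n",
"for key in (\"auRoc\", \"auPrc\", \"logLoss\"):\n",
"    if key in training_metrics:\n",
"        print(f\"{key}: {training_metrics[key]}\")"
]
},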
{
"cell_type": "markdown",
"metadata": {
"id": "ab9f273691cc"
},
"source": [
"## Run a pipeline for model evaluation\n",
"\n",
"Now, you run a Vertex AI batch prediction job and generate evaluations and feature attributions on its results using a pipeline. \n",
"\n",
"To do so, you create a Vertex AI pipeline by calling `evaluate` function. Learn more about [evaluate function](https://github.com/googleapis/python-aiplatform/blob/main/google/cloud/aiplatform/models.py#L5127)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1b042d6309f0"
},
"source": [
"### Define parameters to run the evaluate function\n",
"\n",
"Specify the required parameters to run `evaluate` function.\n",
"\n",
"The following is the instruction of `evaluate` function paramters:\n",
"\n",
"- `prediction_type`: The problem type being addressed by this evaluation run. 'classification' and 'regression' are the currently supported problem types.\n",
"- `target_field_name`: Name of the column to be used as the target for classification.\n",
"- `gcs_source_uris`: List of the Cloud Storage bucket uris of input instances for batch prediction.\n",
"- `class_labels`: List of class labels in the target column.\n",
"- `generate_feature_attributions`: Optional. Whether the model evaluation job should generate feature attributions. Defaults to False if not specified."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1abb012ce04b"
},
"outputs": [],
"source": [
"job = model.evaluate(\n",
" prediction_type=\"classification\",\n",
" target_field_name=target_column,\n",
" gcs_source_uris=[DATA_SOURCE],\n",
" class_labels=[\"No\", \"Yes\"],\n",
" generate_feature_attributions=True,\n",
")\n",
"print(\"Waiting model evaluation is in process\")\n",
"job.wait()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mKRTDi8ioXBY"
},
"source": [
"## Results from the model evaluation pipeline\n",
"\n",
"In the results from the earlier step, click on the generated link to see your run in the Cloud Console.\n",
"\n",
"In the UI, many of the pipeline directed acyclic graph (DAG) nodes expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).\n",
"\n",
"<img src=\"images/automl_tabular_classification_evaluation_pipeline.PNG\">\n",
"\n",
"### Fetch the model evaluation results\n",
"\n",
"After the evalution pipeline is finished, run the below cell to print the evaluation metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "80d4f5b11d24"
},
"outputs": [],
"source": [
"model_evaluation = job.get_model_evaluation()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ec4ec00ab350"
},
"outputs": [],
"source": [
"# Iterate over the pipeline tasks\n",
"for (\n",
" task\n",
") in model_evaluation._backing_pipeline_job._gca_resource.job_detail.task_details:\n",
" # Obtain the artifacts from the evaluation task\n",
" if (\n",
" (\"model-evaluation\" in task.task_name)\n",
" and (\"model-evaluation-import\" not in task.task_name)\n",
" and (\n",
" task.state == aiplatform_v1.types.PipelineTaskDetail.State.SUCCEEDED\n",
" or task.state == aiplatform_v1.types.PipelineTaskDetail.State.SKIPPED\n",
" )\n",
" ):\n",
" evaluation_metrics = task.outputs.get(\"evaluation_metrics\").artifacts[0]\n",
" evaluation_metrics_gcs_uri = evaluation_metrics.uri\n",
"\n",
"print(evaluation_metrics)\n",
"print(evaluation_metrics_gcs_uri)"
]
},
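{
"cell_type": "markdown",
"metadata": {
"id": "read_metrics_json_md"
},
"source": [
"Optionally, you can inspect the full metrics document that the evaluation task wrote to Cloud Storage. This is a minimal sketch, assuming the artifact at \`evaluation_metrics_gcs_uri\` is a single JSON document."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "read_metrics_json"
},
"outputs": [],
"source": [
"# Read the metrics JSON written by the evaluation task (assumed to be one JSON document)\n",
"metrics_json_lines = !gsutil cat $evaluation_metrics_gcs_uri\n",
"\n",
"# Parse the document and list the top-level keys it contains\n",
"full_metrics = json.loads(\"\".join(metrics_json_lines))\n",
"print(list(full_metrics.keys()))"
]
},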
{
"cell_type": "markdown",
"metadata": {
"id": "ca00512eb89f"
},
"source": [
"### Visualize the metrics\n",
"\n",
"Visualize the available metrics like `auRoc` and `logLoss` using a bar-chart."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f9e38f73f838"
},
"outputs": [],
"source": [
"metrics = []\n",
"values = []\n",
"for i in evaluation_metrics.metadata.items():\n",
" metrics.append(i[0])\n",
" values.append(i[1])\n",
"plt.figure(figsize=(5, 3))\n",
"plt.bar(x=metrics, height=values)\n",
"plt.title(\"Evaluation Metrics\")\n",
"plt.ylabel(\"Value\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "049c9bbae2cb"
},
"source": [
"### Fetch the feature attributions\n",
"\n",
"Run the below cell to print the feature attributions. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "03ca8c149bc6"
},
"outputs": [],
"source": [
"# Iterate over the pipeline tasks\n",
"for (\n",
" task\n",
") in model_evaluation._backing_pipeline_job._gca_resource.job_detail.task_details:\n",
" # Obtain the artifacts from the feature attribution task\n",
" if (task.task_name == \"feature-attribution\") and (\n",
" task.state == aiplatform_v1.types.PipelineTaskDetail.State.SUCCEEDED\n",
" or task.state == aiplatform_v1.types.PipelineTaskDetail.State.SKIPPED\n",
" ):\n",
" feat_attrs = task.outputs.get(\"feature_attributions\").artifacts[0]\n",
" feat_attrs_gcs_uri = feat_attrs.uri\n",
"\n",
"print(feat_attrs)\n",
"print(feat_attrs_gcs_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "719d2cd57d10"
},
"source": [
"From the obtained Cloud Storage uri for the feature attributions, get the attribution values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "82e308dd8aca"
},
"outputs": [],
"source": [
"# Load the results\n",
"attributions = !gsutil cat $feat_attrs_gcs_uri\n",
"\n",
"# Print the results obtained\n",
"attributions = json.loads(attributions[0])\n",
"print(attributions)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5bfe517357f8"
},
"source": [
"### Visualize the feature attributions\n",
"\n",
"Visualize the obtained attributions for each feature using a bar-chart."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d7a7dca9e3cc"
},
"outputs": [],
"source": [
"data = attributions[\"explanation\"][\"attributions\"][0][\"featureAttributions\"]\n",
"features = []\n",
"attr_values = []\n",
"for key, value in data.items():\n",
" features.append(key)\n",
" attr_values.append(value)\n",
"\n",
"plt.figure(figsize=(5, 3))\n",
"plt.bar(x=features, height=attr_values)\n",
"plt.title(\"Feature Attributions\")\n",
"plt.xticks(rotation=90)\n",
"plt.ylabel(\"Attribution value\")\n",
"plt.show()"
]
},
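{
"cell_type": "markdown",
"metadata": {
"id": "rank_feature_attributions_md"
},
"source": [
"As a small convenience, you can also rank the features by the absolute value of their attributions to see the most influential ones first."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rank_feature_attributions"
},
"outputs": [],
"source": [
"# Rank features by absolute attribution value, most influential first\n",
"ranked_attributions = sorted(data.items(), key=lambda kv: abs(kv[1]), reverse=True)\n",
"for feature_name, attribution in ranked_attributions:\n",
"    print(f\"{feature_name}: {attribution:.4f}\")"
]
},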
{
"cell_type": "markdown",
"metadata": {
"id": "TpV-iwP9qw9c"
},
"source": [
"## Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial.\n",
"\n",
"Set `delete_bucket` to **True** to create the Cloud Storage bucket created in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sx_vKniMq9ZX"
},
"outputs": [],
"source": [
"# Delete model resource\n",
"model.delete()\n",
"\n",
"# Delete the dataset resource\n",
"dataset.delete()\n",
"\n",
"# Delete the training job\n",
"train_job.delete()\n",
"\n",
"# Delete the evaluation pipeline\n",
"job.delete()\n",
"\n",
"# Delete the Cloud Storage bucket\n",
"delete_bucket = False # Set True for deletion\n",
"if delete_bucket:\n",
" ! gsutil rm -rf {BUCKET_URI}"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "automl_tabular_classification_model_evaluation.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}