notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "copyright" }, "outputs": [], "source": [ "# Copyright 2021 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "title:generic" }, "source": [ "# Vertex AI Pipelines: AutoML image classification pipelines using google-cloud-pipeline-components\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fpipelines%2Fgoogle_cloud_pipeline_components_automl_images.ipynb\">\n", " <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td> \n", " <td style=\"text-align: center\">\n", "<a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb\" target='_blank'>\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", " \n", "</table>\n", "<br/><br/>" ] }, { "cell_type": "markdown", "metadata": { "id": "overview:pipelines,automl" }, "source": [ "## Overview\n", "\n", "This notebook shows how to use the components defined in [`google_cloud_pipeline_components`](https://github.com/kubeflow/pipelines/tree/master/components/google-cloud) to build an AutoML image classification workflow on [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines).\n", "\n", "Learn more about [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) and [AutoML components](https://cloud.google.com/vertex-ai/docs/pipelines/vertex-automl-component)." 
] }, { "cell_type": "markdown", "metadata": { "id": "objective:pipelines,automl" }, "source": [ "### Objective\n", "\n", "In this tutorial, you learn how to use Vertex AI Pipelines and Google Cloud pipeline components to build an AutoML image classification model.\n", "\n", "\n", "This tutorial uses the following Google Cloud ML services:\n", "\n", "- Vertex AI Pipelines\n", "- Google Cloud pipeline components\n", "- Vertex AutoML\n", "- Vertex AI model resource\n", "- Vertex AI endpoint resource\n", "\n", "The steps performed include:\n", "\n", "- Create a KFP pipeline:\n", " - Create a dataset resource.\n", " - Train an AutoML image classification model resource.\n", " - Create an endpoint resource.\n", " - Deploys the model resource to the endpoint resource.\n", "- Compile the KFP pipeline.\n", "- Execute the KFP pipeline using Vertex AI Pipelines.\n", "\n", "The components are [documented here](https://google-cloud-pipeline-components.readthedocs.io/en/latest/google_cloud_pipeline_components.aiplatform.html#module-google_cloud_pipeline_components.aiplatform)." ] }, { "cell_type": "markdown", "metadata": { "id": "dataset:flowers,icn" }, "source": [ "### Dataset\n", "\n", "The dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip." ] }, { "cell_type": "markdown", "metadata": { "id": "costs" }, "source": [ "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI\n", "pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n", "pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n", "Calculator](https://cloud.google.com/products/calculator/)\n", "to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "f0316df526f8" }, "source": [ "## Get started" ] }, { "cell_type": "markdown", "metadata": { "id": "a2c2cb2109a0" }, "source": [ "### Install Vertex AI SDK for Python and other required packages\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "install_aip:mbsdk" }, "outputs": [], "source": [ "! pip3 install --upgrade --quiet google-cloud-aiplatform \\\n", " kfp \\\n", " google-cloud-pipeline-components==2.4.1 \\\n", " google-cloud-storage" ] }, { "cell_type": "markdown", "metadata": { "id": "ff555b32bab8" }, "source": [ "### Restart runtime (Colab only)\n", "\n", "To use the newly installed packages, you must restart the runtime on Google Colab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "f09b4dff629a" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " import IPython\n", "\n", " app = IPython.Application.instance()\n", " app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "ee775571c2b5" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. 
⚠️</b>\n", "</div>\n" ] }, { "cell_type": "markdown", "metadata": { "id": "92e68cfc3a90" }, "source": [ "### Authenticate your notebook environment (Colab only)\n", "\n", "Authenticate your environment on Google Colab.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "46604f70e831" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " from google.colab import auth\n", "\n", " auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "timestamp" }, "source": [ "#### UUID\n", "\n", "To avoid name collisions between users on created resources, create a uuid for each session instance. Append these uuids to the respective names of the resources created in this tutorial.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "timestamp" }, "outputs": [], "source": [ "import random\n", "import string\n", "\n", "\n", "# Generate a uuid of a specifed length(default=8)\n", "def generate_uuid(length: int = 8) -> str:\n", " return \"\".join(random.choices(string.ascii_lowercase + string.digits, k=length))\n", "\n", "\n", "UUID = generate_uuid()" ] }, { "cell_type": "markdown", "metadata": { "id": "3bfb7dd22f59" }, "source": [ "### Set Google Cloud project information and initialize Vertex AI SDK for Python\n", "\n", "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com). Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oM1iC_MfAts1" }, "outputs": [], "source": [ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n", "\n", "# Set the project id\n", "! gcloud config set project {PROJECT_ID}\n", "\n", "LOCATION = \"us-central1\" # @param {type: \"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "zgPO1eR3CYjk" }, "source": [ "### Create a Cloud Storage bucket\n", "\n", "Create a storage bucket to store intermediate artifacts such as datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MzGDU7TWdts_" }, "outputs": [], "source": [ "BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "-EcIXiGsCePi" }, "source": [ "**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "create_bucket" }, "outputs": [], "source": [ "! gsutil mb -l {LOCATION} {BUCKET_URI}" ] }, { "cell_type": "markdown", "metadata": { "id": "set_service_account" }, "source": [ "#### Service Account\n", "\n", "**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_service_account" }, "outputs": [], "source": [ "SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "autoset_service_account" }, "outputs": [], "source": [ "import sys\n", "\n", "IS_COLAB = \"google.colab\" in sys.modules\n", "if (\n", " SERVICE_ACCOUNT == \"\"\n", " or SERVICE_ACCOUNT is None\n", " or SERVICE_ACCOUNT == \"[your-service-account]\"\n", "):\n", " # Get your service account from gcloud\n", " if not IS_COLAB:\n", " shell_output = !gcloud auth list 2>/dev/null\n", " SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n", "\n", " if IS_COLAB:\n", " shell_output = ! gcloud projects describe $PROJECT_ID\n", " project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n", " SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n", "\n", " print(\"Service Account:\", SERVICE_ACCOUNT)" ] }, { "cell_type": "markdown", "metadata": { "id": "set_service_account:pipelines" }, "source": [ "#### Set service account access for Vertex AI Pipelines\n", "\n", "Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_service_account:pipelines" }, "outputs": [], "source": [ "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n", "\n", "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI" ] }, { "cell_type": "markdown", "metadata": { "id": "setup_vars" }, "source": [ "### Import libraries and define constants" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "import_aip:mbsdk" }, "outputs": [], "source": [ "from typing import Any, Dict, List\n", "\n", "import google.cloud.aiplatform as aip\n", "import kfp\n", "from kfp import compiler" ] }, { "cell_type": "markdown", "metadata": { "id": "pipeline_constants" }, "source": [ "#### Vertex AI Pipelines constants\n", "\n", "Setup up the following constants for Vertex AI Pipelines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pipeline_constants" }, "outputs": [], "source": [ "PIPELINE_ROOT = f\"{BUCKET_URI}/pipeline_root/flowers\"" ] }, { "cell_type": "markdown", "metadata": { "id": "init_aip:mbsdk" }, "source": [ "## Initialize Vertex AI SDK for Python\n", "\n", "Initialize the Vertex AI SDK for Python for your project and corresponding bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "init_aip:mbsdk" }, "outputs": [], "source": [ "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)" ] }, { "cell_type": "markdown", "metadata": { "id": "define_pipeline:gcpc,automl,flowers,icn" }, "source": [ "## Define AutoML image classification model pipeline that uses components from `google_cloud_pipeline_components`\n", "\n", "Next, you define the pipeline.\n", "\n", "Create and deploy an AutoML image classification model resource using a dataset resource." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "define_pipeline:gcpc,automl,flowers,icn" }, "outputs": [], "source": [ "@kfp.dsl.pipeline(name=\"automl-image-training-v2\")\n", "def pipeline(project: str = PROJECT_ID, region: str = LOCATION):\n", " from google_cloud_pipeline_components.v1.automl.training_job import \\\n", " AutoMLImageTrainingJobRunOp\n", " from google_cloud_pipeline_components.v1.dataset import \\\n", " ImageDatasetCreateOp\n", " from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,\n", " ModelDeployOp)\n", "\n", " ds_op = ImageDatasetCreateOp(\n", " project=project,\n", " display_name=\"flowers\",\n", " gcs_source=\"gs://cloud-samples-data/ai-platform/flowers/flowers.csv\",\n", " import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,\n", " )\n", "\n", " training_job_run_op = AutoMLImageTrainingJobRunOp(\n", " project=project,\n", " display_name=\"train-automl-flowers\",\n", " prediction_type=\"classification\",\n", " model_type=\"CLOUD\",\n", " dataset=ds_op.outputs[\"dataset\"],\n", " model_display_name=\"train-automl-flowers\",\n", " training_fraction_split=0.6,\n", " validation_fraction_split=0.2,\n", " test_fraction_split=0.2,\n", " budget_milli_node_hours=8000,\n", " )\n", "\n", " endpoint_op = EndpointCreateOp(\n", " project=project,\n", " location=region,\n", " display_name=\"train-automl-flowers\",\n", " )\n", "\n", " ModelDeployOp(\n", " model=training_job_run_op.outputs[\"model\"],\n", " endpoint=endpoint_op.outputs[\"endpoint\"],\n", " automatic_resources_min_replica_count=1,\n", " automatic_resources_max_replica_count=1,\n", " )" ] }, { "cell_type": "markdown", "metadata": { "id": "compile_pipeline" }, "source": [ "## Compile the pipeline\n", "\n", "Next, compile the pipeline." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "compile_pipeline" }, "outputs": [], "source": [ "compiler.Compiler().compile(\n", " pipeline_func=pipeline, package_path=\"image_classification_pipeline.yaml\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9af52f3613f5" }, "outputs": [], "source": [ "DISPLAY_NAME = \"flowers_\" + UUID\n", "\n", "job = aip.PipelineJob(\n", " display_name=DISPLAY_NAME,\n", " template_path=\"image_classification_pipeline.yaml\",\n", " pipeline_root=PIPELINE_ROOT,\n", " enable_caching=False,\n", ")\n", "\n", "job.run()\n", "\n", "! rm image_classification_pipeline.yaml" ] }, { "cell_type": "markdown", "metadata": { "id": "run_pipeline:automl,image" }, "source": [ "## Run the pipeline\n", "\n", "Next, run the pipeline." ] }, { "cell_type": "markdown", "metadata": { "id": "b6d7ccae0e3a" }, "source": [ "Click on the generated link to see your run in the Cloud Console.\n", "\n", "<!-- It should look something like this as it's running:\n", "\n", "<a href=\"https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png\" width=\"40%\"/></a> -->\n", "\n", "In the Google Cloud console, many of the pipeline DAG nodes expand or collapse when you click on them. 
Here's a partially-expanded view of the DAG (click the image to see a larger version).\n", "\n", "<a href=\"https://storage.googleapis.com/amy-jo/images/mp/automl_image_classif.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/automl_image_classif.png\" width=\"40%\"/></a>" ] }, { "cell_type": "markdown", "metadata": { "id": "cleanup:pipelines" }, "source": [ "## Cleaning up\n", "\n", "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n", "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n", "\n", "Otherwise, you can delete the individual resources you created in this tutorial." ] }, { "cell_type": "markdown", "metadata": { "id": "463df8940f59" }, "source": [ "### Get resources from the pipeline to clean up\n", "#### Function to get details of a task" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "877740ce8d2f" }, "outputs": [], "source": [ "# Return the first task detail whose task_name matches, or None if absent.\n", "def get_task_detail(task_details: List[Any], task_name: str) -> Any:\n", " for task_detail in task_details:\n", " if task_detail.task_name == task_name:\n", " return task_detail" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3550a83b91e6" }, "outputs": [], "source": [ "pipeline_task_details = (\n", " job.gca_resource.job_detail.task_details\n", ") # fetch pipeline task details\n", "\n", "\n", "# fetch endpoint from pipeline and delete the endpoint\n", "endpoint_task = get_task_detail(pipeline_task_details, \"endpoint-create\")\n", "endpoint_resource_name = (\n", " endpoint_task.outputs[\"endpoint\"].artifacts[0].metadata[\"resourceName\"]\n", ")\n", "endpoint = aip.Endpoint(endpoint_resource_name)\n", "# undeploy model from endpoint\n", "endpoint.undeploy_all()\n", "endpoint.delete()\n", "\n", "# fetch model from pipeline and delete the model\n", "model_task = get_task_detail(pipeline_task_details, \"automl-image-training-job\")\n", "model_resource_name = model_task.outputs[\"model\"].artifacts[0].metadata[\"resourceName\"]\n", "model = aip.Model(model_resource_name)\n", "model.delete()\n", "\n", "\n", "# fetch dataset from pipeline and delete the dataset\n", "dataset_task = get_task_detail(pipeline_task_details, \"image-dataset-create\")\n", "dataset_resource_name = (\n", " dataset_task.outputs[\"dataset\"].artifacts[0].metadata[\"resourceName\"]\n", ")\n", "dataset = aip.ImageDataset(dataset_resource_name)\n", "dataset.delete()\n", "\n", "# delete the pipeline job\n", "job.delete()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cleanup:pipelines" }, "outputs": [], "source": [ "delete_bucket = False\n", "\n", "if delete_bucket:\n", " ! gsutil rm -r $BUCKET_URI" ] } ], "metadata": { "colab": { "name": "google_cloud_pipeline_components_automl_images.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }