{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "copyright"
},
"outputs": [],
"source": [
"# Copyright 2021 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "title:generic"
},
"source": [
"# Vertex AI Pipelines: AutoML text classification pipelines using google-cloud-pipeline-components\n",
"\n",
"> **NOTE:** Starting on September 15, 2024, you can only customize classification, entity extraction, and sentiment analysis models by moving to Vertex AI Gemini prompts and tuning. Training or updating models for Vertex AI AutoML for Text classification, entity extraction, and sentiment analysis objectives will no longer be available. You can continue using existing Vertex AI AutoML Text objectives until June 15, 2025. For more information about how Gemini offers enhanced user experience through improved prompting capabilities, see \n",
"[Introduction to tuning](https://cloud.google.com/vertex-ai/generative-ai/docs/models/tune-gemini-overview).\n",
"\n",
"<table align=\"left\">\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n",
" View on GitHub\n",
" </a>\n",
" </td>\n",
" <td>\n",
"<a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb\" target='_blank'>\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n",
" Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
"</table>\n",
"<br/><br/><br/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "overview:pipelines,automl"
},
"source": [
"## Overview\n",
"\n",
"This notebook shows how to use the components defined in [`google_cloud_pipeline_components`](https://github.com/kubeflow/pipelines/tree/master/components/google-cloud) to build an AutoML text classification workflow on [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines).\n",
"\n",
"Learn more about [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) and [AutoML components](https://cloud.google.com/vertex-ai/docs/pipelines/vertex-automl-component)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "objective:pipelines,automl"
},
"source": [
"### Objective\n",
"\n",
"In this tutorial, you learn to use `Vertex AI Pipelines` and `Google Cloud Pipeline Components` to build an `AutoML` text classification model.\n",
"\n",
"\n",
"This tutorial uses the following Google Cloud ML services:\n",
"\n",
"- `Vertex AI Pipelines`\n",
"- `Google Cloud Pipeline Components`\n",
"- `Vertex AutoML`\n",
"- `Vertex AI Model` resource\n",
"- `Vertex AI Endpoint` resource\n",
"\n",
"The steps performed include:\n",
"\n",
"- Create a KFP pipeline:\n",
" - Create a `Dataset` resource.\n",
" - Train an AutoML text classification `Model` resource.\n",
" - Create an `Endpoint` resource.\n",
" - Deploys the `Model` resource to the `Endpoint` resource.\n",
"- Compile the KFP pipeline.\n",
"- Execute the KFP pipeline using `Vertex AI Pipelines`\n",
"\n",
"The components are [documented here](https://google-cloud-pipeline-components.readthedocs.io/en/latest/google_cloud_pipeline_components.aiplatform.html#module-google_cloud_pipeline_components.aiplatform)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dataset:happydb,tcn"
},
"source": [
"### Dataset\n",
"\n",
"The dataset used for this tutorial is the [Happy Moments dataset](https://www.kaggle.com/ritresearch/happydb) from [Kaggle Datasets](https://www.kaggle.com/ritresearch/happydb). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "costs"
},
"source": [
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "install_aip:mbsdk"
},
"source": [
"## Installation\n",
"\n",
"Install the packages required for executing this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "install_aip:mbsdk"
},
"outputs": [],
"source": [
"! pip3 install --upgrade --quiet google-cloud-aiplatform \\\n",
" google-cloud-storage \\\n",
" kfp \\\n",
" google-cloud-pipeline-components"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "58707a750154"
},
"source": [
"### Colab only: Uncomment the following cell to restart the kernel."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f200f10a1da3"
},
"outputs": [],
"source": [
"# Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BF1j6f9HApxa"
},
"source": [
"## Before you begin\n",
"\n",
"### Set up your Google Cloud project\n",
"\n",
"**The following steps are required, regardless of your notebook environment.**\n",
"\n",
"1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
"\n",
"2. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"3. [Enable the Vertex AI API]\n",
"\n",
"4. If you are running this notebook locally, you need to install the [Cloud SDK](https://cloud.google.com/sdk)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WReHDGG5g0XY"
},
"source": [
"#### Set your project ID\n",
"\n",
"**If you don't know your project ID**, try the following:\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "oM1iC_MfAts1"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"! gcloud config set project {PROJECT_ID}"
]
},
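{
"cell_type": "markdown",
"metadata": {
"id": "enable_api"
},
"source": [
"#### Enable the Vertex AI API\n",
"\n",
"If you haven't already enabled the Vertex AI API for your project (step 3 above), you can optionally enable it from this notebook. This assumes your account has permission to enable services on the project; if not, use the Cloud Console link in step 3 instead."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "enable_api"
},
"outputs": [],
"source": [
"# Enable the Vertex AI API on the project configured above.\n",
"! gcloud services enable aiplatform.googleapis.com"
]
},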
{
"cell_type": "markdown",
"metadata": {
"id": "region"
},
"source": [
"#### Region\n",
"\n",
"You can also change the `REGION` variable used by Vertex AI. Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "region"
},
"outputs": [],
"source": [
"REGION = \"us-central1\" # @param {type: \"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sBCra4QMA2wR"
},
"source": [
"### Authenticate your Google Cloud account\n",
"\n",
"Depending on your Jupyter environment, you may have to manually authenticate. Follow the relevant instructions below.\n",
"\n",
"**1. Vertex AI Workbench**\n",
"* Do nothing as you are already authenticated.\n",
"\n",
"**2. Local JupyterLab instance, uncomment and run:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "254614fa0c46"
},
"outputs": [],
"source": [
"# ! gcloud auth login"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ef21552ccea8"
},
"source": [
"**3. Colab, uncomment and run:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "603adbbf0532"
},
"outputs": [],
"source": [
"# from google.colab import auth\n",
"# auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f6b2ccc891ed"
},
"source": [
"**4. Service account or other**\n",
"* See how to grant Cloud Storage permissions to your service account at https://cloud.google.com/storage/docs/gsutil/commands/iam#ch-examples."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zgPO1eR3CYjk"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-EcIXiGsCePi"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MzGDU7TWdts_"
},
"outputs": [],
"source": [
"BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-EcIXiGsCePi"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NIq7R4HZCfIc"
},
"outputs": [],
"source": [
"! gsutil mb -l {REGION} -p {PROJECT_ID} {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account"
},
"source": [
"#### Service Account\n",
"\n",
"**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_service_account"
},
"outputs": [],
"source": [
"SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "autoset_service_account"
},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"\n",
"IS_COLAB = \"google.colab\" in sys.modules\n",
"if (\n",
" SERVICE_ACCOUNT == \"\"\n",
" or SERVICE_ACCOUNT is None\n",
" or SERVICE_ACCOUNT == \"[your-service-account]\"\n",
"):\n",
" # Get your service account from gcloud\n",
" if not IS_COLAB:\n",
" shell_output = !gcloud auth list 2>/dev/null\n",
" SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n",
"\n",
" if IS_COLAB:\n",
" shell_output = ! gcloud projects describe $PROJECT_ID\n",
" project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n",
" SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n",
"\n",
" print(\"Service Account:\", SERVICE_ACCOUNT)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account:pipelines"
},
"source": [
"#### Set service account access for Vertex AI Pipelines\n",
"\n",
"Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_service_account:pipelines"
},
"outputs": [],
"source": [
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n",
"\n",
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "setup_vars"
},
"source": [
"### Set up variables\n",
"\n",
"Next, set up some variables used throughout the tutorial.\n",
"### Import libraries and define constants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "import_aip:mbsdk"
},
"outputs": [],
"source": [
"import google.cloud.aiplatform as aip\n",
"import kfp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pipeline_constants"
},
"source": [
"#### Vertex AI Pipelines constants\n",
"\n",
"Setup up the following constants for Vertex AI Pipelines:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pipeline_constants"
},
"outputs": [],
"source": [
"PIPELINE_ROOT = \"{}/pipeline_root/happydb\".format(BUCKET_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "init_aip:mbsdk"
},
"source": [
"## Initialize Vertex AI SDK for Python\n",
"\n",
"Initialize the Vertex AI SDK for Python for your project and corresponding bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "init_aip:mbsdk"
},
"outputs": [],
"source": [
"aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "define_pipeline:gcpc,automl,happydb,tcn"
},
"source": [
"## Define AutoML text classification model pipeline that uses components from `google_cloud_pipeline_components`\n",
"\n",
"Next, you define the pipeline.\n",
"\n",
"Create and deploy an AutoML text classification `Model` resource using a `Dataset` resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "define_pipeline:gcpc,automl,happydb,tcn"
},
"outputs": [],
"source": [
"IMPORT_FILE = \"gs://cloud-ml-data/NL-classification/happiness.csv\"\n",
"\n",
"\n",
"@kfp.dsl.pipeline(name=\"automl-text-classification\")\n",
"def pipeline(\n",
" project: str = PROJECT_ID, region: str = REGION, import_file: str = IMPORT_FILE\n",
"):\n",
" from google_cloud_pipeline_components.v1.automl.training_job import \\\n",
" AutoMLTextTrainingJobRunOp\n",
" from google_cloud_pipeline_components.v1.dataset import TextDatasetCreateOp\n",
" from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,\n",
" ModelDeployOp)\n",
"\n",
" dataset_create_task = TextDatasetCreateOp(\n",
" display_name=\"train-automl-happydb\",\n",
" gcs_source=import_file,\n",
" import_schema_uri=aip.schema.dataset.ioformat.text.multi_label_classification,\n",
" project=project,\n",
" )\n",
"\n",
" training_run_task = AutoMLTextTrainingJobRunOp(\n",
" dataset=dataset_create_task.outputs[\"dataset\"],\n",
" display_name=\"train-automl-happydb\",\n",
" prediction_type=\"classification\",\n",
" multi_label=True,\n",
" training_fraction_split=0.6,\n",
" validation_fraction_split=0.2,\n",
" test_fraction_split=0.2,\n",
" model_display_name=\"train-automl-happydb\",\n",
" project=project,\n",
" )\n",
"\n",
" endpoint_op = EndpointCreateOp(\n",
" project=project,\n",
" location=region,\n",
" display_name=\"train-automl-flowers\",\n",
" )\n",
"\n",
" ModelDeployOp(\n",
" model=training_run_task.outputs[\"model\"],\n",
" endpoint=endpoint_op.outputs[\"endpoint\"],\n",
" automatic_resources_min_replica_count=1,\n",
" automatic_resources_max_replica_count=1,\n",
" )"
]
},
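{
"cell_type": "markdown",
"metadata": {
"id": "preview_data"
},
"source": [
"Optionally, preview the first few rows of the import file. Each row pairs a happy-moment text with its category label(s), which is the CSV format the `TextDatasetCreateOp` import expects. This step assumes the `gsutil` authentication you set up earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "preview_data"
},
"outputs": [],
"source": [
"# Peek at the training data (a public CSV of text and labels).\n",
"! gsutil cat {IMPORT_FILE} | head -5"
]
},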
{
"cell_type": "markdown",
"metadata": {
"id": "compile_pipeline"
},
"source": [
"## Compile the pipeline\n",
"\n",
"Next, compile the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "compile_pipeline"
},
"outputs": [],
"source": [
"from kfp.v2 import compiler # noqa: F811\n",
"\n",
"compiler.Compiler().compile(\n",
" pipeline_func=pipeline,\n",
" package_path=\"text_classification_pipeline.yaml\",\n",
")"
]
},
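{
"cell_type": "markdown",
"metadata": {
"id": "inspect_pipeline_spec"
},
"source": [
"Optionally, inspect the beginning of the compiled pipeline specification. The YAML file describes the pipeline's components, inputs, and task dependencies; it's the artifact that Vertex AI Pipelines executes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "inspect_pipeline_spec"
},
"outputs": [],
"source": [
"# Show the first lines of the compiled pipeline spec.\n",
"! head -30 text_classification_pipeline.yaml"
]
},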
{
"cell_type": "markdown",
"metadata": {
"id": "run_pipeline:automl,text"
},
"source": [
"## Run the pipeline\n",
"\n",
"Next, run the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "run_pipeline:automl,text"
},
"outputs": [],
"source": [
"DISPLAY_NAME = \"happydb\"\n",
"\n",
"job = aip.PipelineJob(\n",
" display_name=DISPLAY_NAME,\n",
" template_path=\"text_classification_pipeline.yaml\",\n",
" pipeline_root=PIPELINE_ROOT,\n",
" enable_caching=False,\n",
")\n",
"\n",
"job.run()\n",
"\n",
"! rm text_classification_pipeline.yaml"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "view_pipeline_run:automl,text"
},
"source": [
"Click on the generated link to see your run in the Cloud Console.\n",
"\n",
"<!-- It should look something like this as it is running:\n",
"\n",
"<a href=\"https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png\" width=\"40%\"/></a> -->\n",
"\n",
"In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).\n",
"\n",
"<a href=\"https://storage.googleapis.com/amy-jo/images/mp/automl_text_classif.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/automl_text_classif.png\" width=\"40%\"/></a>"
]
},
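{
"cell_type": "markdown",
"metadata": {
"id": "test_prediction"
},
"source": [
"When the pipeline run completes, the model is deployed and ready to serve. The cell below is a minimal sketch of an online prediction request: it assumes the endpoint display name used in the pipeline above, and that instances follow the AutoML text classification format (`{\"content\": ..., \"mimeType\": \"text/plain\"}`). Check the endpoint's instance schema if the request fails."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "test_prediction"
},
"outputs": [],
"source": [
"# Look up the endpoint created by the pipeline by its display name.\n",
"endpoints = aip.Endpoint.list(filter='display_name=\"train-automl-happydb\"')\n",
"\n",
"if endpoints:\n",
"    endpoint = endpoints[0]\n",
"    # Instance format assumed from the AutoML text classification predict schema.\n",
"    response = endpoint.predict(\n",
"        instances=[{\"content\": \"I went for a hike with my family.\", \"mimeType\": \"text/plain\"}]\n",
"    )\n",
"    print(response.predictions)\n",
"else:\n",
"    print(\"Endpoint not found; verify that the pipeline run completed.\")"
]
},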
{
"cell_type": "markdown",
"metadata": {
"id": "cleanup:pipelines"
},
"source": [
"# Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial -- *Note:* this is auto-generated and not all resources may be applicable for this tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cleanup:pipelines"
},
"outputs": [],
"source": [
"delete_bucket = False\n",
"\n",
"try:\n",
" if \"DISPLAY_NAME\" in globals():\n",
" models = aip.Model.list(\n",
" filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n",
" )\n",
" model = models[0]\n",
" aip.Model.delete(model)\n",
" print(\"Deleted model:\", model)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"try:\n",
" if delete_endpoint and \"DISPLAY_NAME\" in globals():\n",
" endpoints = aip.Endpoint.list(\n",
" filter=f\"display_name={DISPLAY_NAME}_endpoint\", order_by=\"create_time\"\n",
" )\n",
" endpoint = endpoints[0]\n",
" endpoint.undeploy_all()\n",
" aip.Endpoint.delete(endpoint.resource_name)\n",
" print(\"Deleted endpoint:\", endpoint)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"if \"DISPLAY_NAME\" in globals():\n",
"\n",
" try:\n",
" datasets = aip.TextDataset.list(\n",
" filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n",
" )\n",
" dataset = datasets[0]\n",
" aip.TextDataset.delete(dataset.resource_name)\n",
" print(\"Deleted dataset:\", dataset)\n",
" except Exception as e:\n",
" print(e)\n",
"\n",
"try:\n",
" if \"DISPLAY_NAME\" in globals():\n",
" pipelines = aip.PipelineJob.list(\n",
" filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n",
" )\n",
" pipeline = pipelines[0]\n",
" aip.PipelineJob.delete(pipeline.resource_name)\n",
" print(\"Deleted pipeline:\", pipeline)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"if delete_bucket or os.getenv(\"IS_TESTING\"):\n",
" ! gsutil rm -r $BUCKET_URI"
]
}
],
"metadata": {
"colab": {
"name": "google_cloud_pipeline_components_automl_text.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}