{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2022 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building reusable machine learning workflows with pipeline templates"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"<table align=\"left\">\n",
"\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n",
" View on GitHub\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n",
" Open in Vertex AI Workbench\n",
" </a>\n",
" </td> \n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview\n",
"In this notebook you are demonstrating how you can use the Kubeflow Pipelines DSL `RegistryClient` to build, upload and reuse a ML pipeline using Vertex Pipelines and the Artifact Registry. You can always go to our [Pipeline Template documentation](https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template) to learn more about creating, uploading and reusing templates. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Dataset\n",
"We are using the [penguin dataset](https://www.tensorflow.org/datasets/catalog/penguins) that is available as a public dataset in [BigQuery](https://cloud.google.com/bigquery/public-data)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Objective\n",
"\n",
"This tutorial uses the following Google Cloud ML services and resources:\n",
"- KFP DSL (RegistryClient)\n",
"- Vertex AI SDK\n",
"- Vertex AI Pipelines\n",
"- Artifact Registry\n",
"- Vertex AI Workbench\n",
"- Vertex AI Endpoints\n",
"- BigQuery\n",
"\n",
"The steps performed include:\n",
"- How to build a pipeline using the KFP DSL.\n",
"- How to use the KFP RegistryClient to upload and use a Pipeline artifact.\n",
"- Run the pipeline using Vertex AI Pipelines."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"### Costs \n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"* Artifact Registry\n",
"* BigQuery\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing), [Artifact Registry Pricing](https://cloud.google.com/artifact-registry/pricing), [BigQuery](https://cloud.google.com/bigquery/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gCuSR8GkAgzl"
},
"source": [
"### Set up your local development environment\n",
"\n",
"**If you are using Colab or Vertex AI Workbench Notebooks**, your environment already meets\n",
"all the requirements to run this notebook. You can skip this step.\n",
"\n",
"**Otherwise**, make sure your environment meets this notebook's requirements.\n",
"You need the following:\n",
"\n",
"* The Google Cloud SDK\n",
"* Git\n",
"* Python 3\n",
"* virtualenv\n",
"* Jupyter notebook running in a virtual environment with Python 3\n",
"\n",
"The Google Cloud guide to [Setting up a Python development\n",
"environment](https://cloud.google.com/python/setup) and the [Jupyter\n",
"installation guide](https://jupyter.org/install) provide detailed instructions\n",
"for meeting these requirements. The following steps provide a condensed set of\n",
"instructions:\n",
"\n",
"1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)\n",
"\n",
"1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)\n",
"\n",
"1. [Install\n",
" virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)\n",
" and create a virtual environment that uses Python 3. Activate the virtual environment.\n",
"\n",
"1. To install Jupyter, run `pip3 install jupyter` on the\n",
"command-line in a terminal shell.\n",
"\n",
"1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n",
"\n",
"1. Open this notebook in the Jupyter Notebook Dashboard."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i7EUnXsZhAGF"
},
"source": [
"### Install additional packages\n",
"\n",
"Install the following packages required to execute this notebook. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2b4ef9b72d43"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# The Vertex AI Workbench Notebook product has specific requirements\n",
"IS_WORKBENCH_NOTEBOOK = os.getenv(\"DL_ANACONDA_HOME\") and not os.getenv(\"VIRTUAL_ENV\")\n",
"IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(\n",
" \"/opt/deeplearning/metadata/env_version\"\n",
")\n",
"\n",
"# Vertex AI Notebook requires dependencies to be installed with '--user'\n",
"USER_FLAG = \"\"\n",
"if IS_WORKBENCH_NOTEBOOK:\n",
" USER_FLAG = \"--user\"\n",
"\n",
"! pip3 install --upgrade google-cloud-aiplatform {USER_FLAG} -q\n",
"! pip3 install -U google-cloud-storage {USER_FLAG} -q\n",
"! pip3 install {USER_FLAG} kfp==2.0.0b1 --upgrade -q"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check installed package versions\n",
"Run the following cell to check if you are using `kfp>=2.0.0b1`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip freeze | grep kfp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hhq5zEbGg0XX"
},
"source": [
"### Restart the kernel\n",
"\n",
"After you install the additional packages, you need to restart the notebook kernel so it can find the packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EzrelQZ22IZj"
},
"outputs": [],
"source": [
"# Automatically restart kernel after installs. Might take a bit.\n",
"import os\n",
"\n",
"if not os.getenv(\"IS_TESTING\"):\n",
" # Automatically restart kernel after installs\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lWEdiXsJg0XY"
},
"source": [
"## Before you begin"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BF1j6f9HApxa"
},
"source": [
"### Set up your Google Cloud project\n",
"\n",
"**The following steps are required, regardless of your notebook environment.**\n",
"\n",
"1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
"\n",
"1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) and the [Artifact Registry API](https://console.cloud.google.com/apis/library/artifactregistry.googleapis.com). **Enabling the API's will take a few minutes**\n",
"\n",
"1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).\n",
"\n",
"1. Enter your project ID in the cell below. Then run the cell to make sure the\n",
"Cloud SDK uses the right project for all the commands in this notebook.\n",
"\n",
"**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands."
]
},
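{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the following cell defines a Python variable and passes it to a shell command. The variable name here is purely illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A Python variable, interpolated into a shell command with `$`\n",
"EXAMPLE_MESSAGE = \"interpolated-from-python\"\n",
"! echo $EXAMPLE_MESSAGE"
]
},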
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set your project ID\n",
"\n",
"**If you don't know your project ID**, you may be able to get your project ID using `gcloud`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n",
" # Get your GCP project id from gcloud\n",
" shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n",
" PROJECT_ID = shell_output[0]\n",
" print(\"Project ID:\", PROJECT_ID)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! gcloud config set project $PROJECT_ID"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "region"
},
"source": [
"#### Region\n",
"\n",
"You can also change the `REGION` variable, which is used for operations\n",
"throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n",
"\n",
"- Americas: `us-central1`\n",
"- Europe: `europe-west4`\n",
"- Asia Pacific: `asia-east1`\n",
"\n",
"You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\n",
"\n",
"Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yB7ylyykAEVP"
},
"outputs": [],
"source": [
"REGION = (\n",
" \"[your-region]\" # @param {type: \"string\"} --> You can change the region if you want\n",
")\n",
"\n",
"if REGION == \"[your-region]\":\n",
" REGION = \"us-central1\"\n",
"\n",
"print(f\"Region: {REGION}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "06571eb4063b",
"tags": []
},
"source": [
"#### UUID\n",
"\n",
"If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a uuid for each instance session, and append it onto the name of resources you create in this tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "697568e92bd6"
},
"outputs": [],
"source": [
"import random\n",
"import string\n",
"\n",
"\n",
"# Generate a uuid of a specifed length(default=8)\n",
"def generate_uuid(length: int = 8) -> str:\n",
" return \"\".join(random.choices(string.ascii_lowercase + string.digits, k=length))\n",
"\n",
"\n",
"UUID = generate_uuid()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sBCra4QMA2wR"
},
"source": [
"### Authenticate your Google Cloud account\n",
"\n",
"**If you are using Vertex AI Workbench Notebooks**, your environment is already\n",
"authenticated. \n",
"\n",
"**If you are using Colab**, run the cell below and follow the instructions\n",
"when prompted to authenticate your account via oAuth.\n",
"\n",
"**Otherwise**, follow these steps:\n",
"\n",
"1. In the Cloud Console, go to the [**Create service account key**\n",
" page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).\n",
"\n",
"2. Click **Create service account**.\n",
"\n",
"3. In the **Service account name** field, enter a name, and\n",
" click **Create**.\n",
"\n",
"4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\n",
"into the filter box, and select\n",
" **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n",
"\n",
"5. Click *Create*. A JSON file that contains your key downloads to your\n",
"local environment.\n",
"\n",
"6. Enter the path to your service account key as the\n",
"`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PyQmSRbKA8r-"
},
"outputs": [],
"source": [
"# If you are running this notebook in Colab, run this cell and follow the\n",
"# instructions to authenticate your GCP account. This provides access to your\n",
"# Cloud Storage bucket and lets you submit training jobs and prediction\n",
"# requests.\n",
"\n",
"import os\n",
"import sys\n",
"\n",
"# If on Vertex AI Workbench, then don't execute this code\n",
"IS_COLAB = \"google.colab\" in sys.modules\n",
"if not os.path.exists(\"/opt/deeplearning/metadata/env_version\") and not os.getenv(\n",
" \"DL_ANACONDA_HOME\"\n",
"):\n",
" if \"google.colab\" in sys.modules:\n",
" from google.colab import auth as google_auth\n",
"\n",
" google_auth.authenticate_user()\n",
"\n",
" # If you are running this notebook locally, replace the string below with the\n",
" # path to your service account key and run this cell to authenticate your GCP\n",
" # account.\n",
" elif not os.getenv(\"IS_TESTING\"):\n",
" %env GOOGLE_APPLICATION_CREDENTIALS ''"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zgPO1eR3CYjk",
"tags": []
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"**The following steps are required, regardless of your notebook environment.**\n",
"In this tutorial, Vertex AI also saves the\n",
"trained model that results from your job in the same bucket. Using this model artifact, you can then\n",
"create Vertex AI model and endpoint resources in order to serve\n",
"online predictions. We will also use the GCS bucket for our Vertex Pipeline run. \n",
"\n",
"Set the name of your Cloud Storage bucket below. It must be unique across all\n",
"Cloud Storage buckets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MzGDU7TWdts_"
},
"outputs": [],
"source": [
"BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"} --> Set to your bucket.\n",
"BUCKET_URI = f\"gs://{BUCKET_NAME}\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cf221059d072",
"tags": []
},
"outputs": [],
"source": [
"if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n",
" BUCKET_NAME = PROJECT_ID + \"vertex-\" + UUID\n",
" BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
"\n",
"print(f\"Your bucket: {BUCKET_URI}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-EcIXiGsCePi"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NIq7R4HZCfIc"
},
"outputs": [],
"source": [
"! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ucvCsknMCims"
},
"source": [
"Finally, validate access to your Cloud Storage bucket by examining its contents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vhOb7YnwClBb"
},
"outputs": [],
"source": [
"! gsutil ls -al $BUCKET_URI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account"
},
"source": [
"### Create a repository in Artifact Registry\n",
"\n",
"Next you need to create a repository in Artifact Registry. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! gcloud artifacts repositories create quickstart-kfp-repo \\\n",
" --repository-format=kfp \\\n",
" --location=us-central1 \\\n",
" --description=\"kfp-template-repo\" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next verify if the repo has been created. The next cell should output a list of the repositories. The list should contain `quickstart-kfp-repo`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!gcloud artifacts repositories list --project=$PROJECT_ID \\\n",
"--location=us-central1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also use the [Google Cloud Console](https://cloud.google.com/artifact-registry/docs/repositories/create-repos#console) to create a Artifact Repository. "
]
},
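{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, you can also describe the repository with `gcloud`; the output includes the repository's format and location."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! gcloud artifacts repositories describe quickstart-kfp-repo \\\n",
"    --location=$REGION"
]
},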
{
"cell_type": "markdown",
"metadata": {
"id": "XoEqT2Y4DJmf"
},
"source": [
"### Import libraries\n",
"Now you need to import our Python libraries. We will be using:\n",
"\n",
"* [Vertex AI SDK](https://cloud.google.com/vertex-ai/docs/start/client-libraries)\n",
"* [Kubeflow Pipelines DSL](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pRUOFELefqf1"
},
"outputs": [],
"source": [
"import google.cloud.aiplatform as vertex_ai\n",
"from kfp import compiler, dsl"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to run our pipeline you also need to specify some constants that you can reuse in our pipeline. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"TRAINING_DATA = \"bigquery-public-data.ml_datasets.penguins\"\n",
"PIPELINE_ROOT = BUCKET_URI + \"/pipeline-root\"\n",
"MODEL_DIR = BUCKET_URI + \"/model-dir\"\n",
"MODEL_NAME = \"template_model\"\n",
"PACKAGE_NAME = \"pipeline-template\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "init_aip:mbsdk,all"
},
"source": [
"### Initialize Vertex AI SDK for Python\n",
"\n",
"Initialize the Vertex AI SDK for Python for your project and corresponding bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "nZDLbH4VAEVT"
},
"outputs": [],
"source": [
"vertex_ai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create three custom components \n",
"In this example you will create a pipeline using components that are based on Python function. You can learn more about how to create a component from a Python function in the [KFP documentation](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/). Our pipeline consists of three steps:\n",
"\n",
"* Train a Tensorflow model based on data in BigQuery. \n",
"* Upload the trained model to the Vertex AI Model Registry. \n",
"* Create a Vertex AI Endpoint and Deploy our model into the Endpoint. \n",
"\n",
"Run the following cells that will define the component code. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@dsl.component(\n",
" packages_to_install=[\n",
" \"pandas\",\n",
" \"scikit-learn\",\n",
" \"tensorflow\",\n",
" \"google-cloud-bigquery-storage==2.4.0\",\n",
" \"google-cloud-bigquery\",\n",
" \"pyarrow\",\n",
" \"fsspec\",\n",
" \"gcsfs\",\n",
" \"google-cloud-aiplatform==1.15.0\",\n",
" \"kfp==2.0.0b1\",\n",
" \"kfp-pipeline-spec\",\n",
" \"kfp-server-api\",\n",
" \"pandas==1.2.4\",\n",
" \"pandas-profiling==3.0.0\",\n",
" \"numpy==1.19.5\",\n",
" ]\n",
")\n",
"def model_train(\n",
" bq_data: str,\n",
" project_id: str,\n",
" region: str,\n",
" model_dir: str,\n",
"):\n",
"\n",
" import logging\n",
" import os\n",
"\n",
" import tensorflow as tf\n",
" from google.cloud import bigquery, bigquery_storage\n",
" from sklearn.model_selection import train_test_split\n",
" from sklearn.preprocessing import LabelEncoder\n",
"\n",
" bqclient = bigquery.Client(project_id)\n",
" bqstorageclient = bigquery_storage.BigQueryReadClient()\n",
"\n",
" query_string = \"\"\"\n",
" SELECT\n",
" *\n",
" FROM `{}`\n",
" \"\"\".format(\n",
" bq_data\n",
" )\n",
"\n",
" df = (\n",
" bqclient.query(query_string)\n",
" .result()\n",
" .to_dataframe(bqstorage_client=bqstorageclient)\n",
" )\n",
"\n",
" print(df.head(2))\n",
"\n",
" output_directory = model_dir\n",
" if os.environ.get(\"AIP_MODEL_DIR\") is not None:\n",
" output_directory = os.environ[\"AIP_MODEL_DIR\"]\n",
"\n",
" print(output_directory)\n",
"\n",
" logging.info(\"Creating and training model ...\")\n",
"\n",
" model = tf.keras.Sequential(\n",
" [\n",
" tf.keras.layers.Dense(32, activation=\"relu\"),\n",
" tf.keras.layers.Dropout(0.2),\n",
" tf.keras.layers.Dense(16, activation=\"relu\"),\n",
" tf.keras.layers.Dropout(0.2),\n",
" tf.keras.layers.Dense(8, activation=\"relu\"),\n",
" tf.keras.layers.Dense(1, activation=\"softmax\"),\n",
" ]\n",
" )\n",
"\n",
" model.compile(\n",
" optimizer=tf.keras.optimizers.Adam(learning_rate=0.01, clipnorm=1.0),\n",
" loss=tf.keras.losses.BinaryCrossentropy(),\n",
" metrics=[tf.keras.metrics.AUC()],\n",
" )\n",
"\n",
" train, test = train_test_split(df, test_size=0.2)\n",
"\n",
" print(train.head(2))\n",
"\n",
" train_X, train_y = (\n",
" train[[\"island\", \"culmen_length_mm\", \"culmen_depth_mm\", \"body_mass_g\"]],\n",
" train[\"species\"],\n",
" )\n",
"\n",
" train_X[[\"island\", \"culmen_length_mm\", \"culmen_depth_mm\", \"body_mass_g\"]] = train_X[\n",
" [\"island\", \"culmen_length_mm\", \"culmen_depth_mm\", \"body_mass_g\"]\n",
" ].fillna(0)\n",
"\n",
" enc = LabelEncoder()\n",
" enc.fit(train_X[\"island\"])\n",
" train_X[\"island\"] = enc.transform(train_X[\"island\"])\n",
"\n",
" print(train_X.head(5))\n",
"\n",
" train_y.replace(\n",
" [\n",
" \"Gentoo penguin (Pygoscelis papua)\",\n",
" \"Chinstrap penguin (Pygoscelis antarctica)\",\n",
" \"Adelie Penguin (Pygoscelis adeliae)\",\n",
" ],\n",
" [1, 2, 3],\n",
" inplace=True,\n",
" )\n",
"\n",
" print(train_y.head(10))\n",
" print(train_y.dtypes)\n",
"\n",
" model.fit(train_X, train_y, epochs=10, verbose=1, batch_size=10)\n",
"\n",
" logging.info(f\"Exporting SavedModel to: {output_directory}\")\n",
" model.save(output_directory)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next the component that will upload the model to the Vertex AI Model Registry. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@dsl.component(\n",
" base_image=\"python:3.8-slim\", packages_to_install=[\"google-cloud-aiplatform\"]\n",
")\n",
"def model_upload(\n",
" project_id: str,\n",
" region: str,\n",
" model_dir: str,\n",
" model_name: str,\n",
"):\n",
"\n",
" import google.cloud.aiplatform as vertex_ai\n",
"\n",
" vertex_ai.init(project=project_id, location=region)\n",
"\n",
" vertex_ai.Model.upload(\n",
" display_name=model_name,\n",
" artifact_uri=model_dir,\n",
" serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-2:latest\",\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the last component will deploy our model to a Vertex AI Endpoint. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@dsl.component(\n",
" base_image=\"python:3.8-slim\", packages_to_install=[\"google-cloud-aiplatform\"]\n",
")\n",
"def model_deploy(\n",
" project_id: str,\n",
" region: str,\n",
" model_name: str,\n",
"):\n",
" import google.cloud.aiplatform as vertex_ai\n",
"\n",
" vertex_ai.init(project=project_id, location=region)\n",
"\n",
" model = vertex_ai.Model.list()\n",
" model = model[0]\n",
"\n",
" model.deploy(\n",
" min_replica_count=1,\n",
" max_replica_count=1,\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you have created three components that you can reuse in our pipeline workflow. "
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### Build the pipeline\n",
"The next step is that you will build our pipeline. This pipeline will be our template that you will upload and reuse later in this notebook. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@dsl.pipeline(\n",
" name=\"pipeline_template\",\n",
" description=\"pipeline that trains and uploads a TF Model\",\n",
")\n",
"def ml_pipeline():\n",
" train_task = model_train(\n",
" bq_data=TRAINING_DATA, project_id=PROJECT_ID, region=REGION, model_dir=MODEL_DIR\n",
" )\n",
"\n",
" uploader_task = model_upload(\n",
" project_id=PROJECT_ID, region=REGION, model_dir=MODEL_DIR, model_name=MODEL_NAME\n",
" ).after(train_task)\n",
"\n",
" _ = model_deploy(project_id=PROJECT_ID, region=REGION, model_name=MODEL_NAME).after(\n",
" uploader_task\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you have to compile our pipeline in a local YAML file called `pipeline_template.yaml`. For this you will use the `kfp.Compiler`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"compiler.Compiler().compile(\n",
" pipeline_func=ml_pipeline, package_path=\"pipeline_template.yaml\"\n",
")"
]
},
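{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, peek at the beginning of the compiled file. This is just a sanity check: the YAML holds the full pipeline spec (components, container images, and the DAG) that you upload as the template in the next step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: inspect the first lines of the compiled pipeline spec\n",
"! head -n 20 pipeline_template.yaml"
]
},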
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Upload the template"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now it's time to use the `RegistryClient`. First you need to configure your Kubeflow Pipelines SDK registry client."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from kfp.registry import RegistryClient\n",
"\n",
"client = RegistryClient(\n",
" host=f\"https://us-central1-kfp.pkg.dev/{PROJECT_ID}/quickstart-kfp-repo\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After configuring your registry client you can go ahead and upload the `pipeline_template.yaml`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"templateName, versionName = client.upload_pipeline(\n",
" file_name=\"pipeline_template.yaml\",\n",
" tags=[\"v1\", \"latest\"],\n",
" extra_headers={\"description\": \"This is an example pipeline template\"},\n",
")\n",
"\n",
"!rm pipeline_template.yaml"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Verify if the template was uploaded to the Artifact Registry repo. \n",
"Now you can verify if the template was uploaded. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"templatePackages = client.list_packages()\n",
"templatePackage = client.get_package(package_name=PACKAGE_NAME)\n",
"print(templatePackage)"
]
},
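{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also list the versions and tags recorded for the uploaded package. A short sketch, assuming the `list_versions` and `list_tags` methods of the `RegistryClient`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List the versions and tags of the uploaded template package\n",
"print(client.list_versions(package_name=PACKAGE_NAME))\n",
"print(client.list_tags(package_name=PACKAGE_NAME))"
]
},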
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also view the uploaded artifact in the [Artifact Registry repo](https://console.cloud.google.com/artifacts/browse/) that you created earlier. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Reuse your pipeline template using Vertex Pipelines\n",
"\n",
"Next you will reuse the pipeline template that you just created and uploaded. You can use the Vertex AI SDK for Python or the [Google Cloud console](https://console.cloud.google.com/vertex-ai/pipelines/runs) to create a pipeline run from your template in Artifact Registry. The following command will create submit a `PipelineJob`. \n",
"\n",
"**This will take ~15 minutes to complete**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a job via version id.\n",
"job = vertex_ai.PipelineJob(\n",
" display_name=\"pipeline-template\",\n",
" template_path=f\"https://us-central1-kfp.pkg.dev/{PROJECT_ID}/quickstart-kfp-repo/pipeline-template/\"\n",
" + versionName,\n",
")\n",
"\n",
"job.submit()"
]
},
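{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of pinning an exact version, you can also reference the template by one of the tags you applied at upload time (for example, `latest`). A sketch; uncomment the `submit()` call if you want to start a second run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a job via a tag instead of a version id.\n",
"job_latest = vertex_ai.PipelineJob(\n",
"    display_name=\"pipeline-template\",\n",
"    template_path=f\"https://{REGION}-kfp.pkg.dev/{PROJECT_ID}/quickstart-kfp-repo/pipeline-template/latest\",\n",
")\n",
"\n",
"# job_latest.submit()  # Uncomment to start another pipeline run"
]
},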
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can view the runs created by a specific pipeline version in the Vertex AI SDK for Python. To list the pipeline runs, run the `pipelineJobs.list` command as shown in one or more of the following examples:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# To filter all runs created from a specific version\n",
"filter = (\n",
" f'template_uri:\"https://us-central1-kfp.pkg.dev/{PROJECT_ID}/quickstart-kfp-repo/pipeline-template/*\" AND '\n",
" + 'template_metadata.version=\"%s\"' % versionName\n",
")\n",
"\n",
"vertex_ai.PipelineJob.list(filter=filter)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Check if the Endpoint has been created. \n",
"After your Pipeline Job finishes you can check if your endpoint has been created. \n",
"**Note**: Wait until your pipeline run is finished. This will take around ~15 minutes. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint = vertex_ai.Endpoint.list(order_by=\"update_time\")\n",
"endpoint[-1]"
]
},
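{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the endpoint is up, you can send it a quick test prediction. This is a minimal smoke test that assumes the deployment finished; the instance uses the same feature order as the training component (`island` label-encoded, `culmen_length_mm`, `culmen_depth_mm`, `body_mass_g`), and the values below are made up for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One made-up instance: [island (label-encoded), culmen_length_mm,\n",
"# culmen_depth_mm, body_mass_g]\n",
"test_instance = [[0.0, 39.1, 18.7, 3750.0]]\n",
"\n",
"prediction = endpoint.predict(instances=test_instance)\n",
"print(prediction)"
]
},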
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also navigate to the [Vertex Pipelines Console](https://console.cloud.google.com/vertex-ai/pipelines/) to see the submitted job run. \n",
"\n",
"You now made it to the end of this notebook!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TpV-iwP9qw9c"
},
"source": [
"## Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"* [Delete Vertex AI Model](https://cloud.google.com/vertex-ai/docs/model-registry/delete-model)\n",
"* [Delete GCS bucket](https://cloud.google.com/storage/docs/deleting-buckets)\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial:"
]
},
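{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell is a best-effort sketch for deleting the endpoint, model, and Artifact Registry repository created in this tutorial. It matches resources by the display names used earlier; in particular, it assumes the endpoint auto-created by `Model.deploy` is named `<model display name>_endpoint`, so adjust the filters if your resources are named differently."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete the endpoint (force=True undeploys any deployed models first).\n",
"# Assumes the default \"<model display name>_endpoint\" naming.\n",
"for endpoint in vertex_ai.Endpoint.list(filter=f'display_name=\"{MODEL_NAME}_endpoint\"'):\n",
"    endpoint.delete(force=True)\n",
"\n",
"# Delete the uploaded model\n",
"for model in vertex_ai.Model.list(filter=f'display_name=\"{MODEL_NAME}\"'):\n",
"    model.delete()\n",
"\n",
"# Delete the Artifact Registry repository that holds the template\n",
"! gcloud artifacts repositories delete quickstart-kfp-repo --location=$REGION --quiet"
]
},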
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sx_vKniMq9ZX"
},
"outputs": [],
"source": [
"# Delete Cloud Storage objects that were created\n",
"! gsutil -m rm -r $MODEL_DIR\n",
"\n",
"if os.getenv(\"IS_TESTING\"):\n",
" ! gsutil -m rm -r $BUCKET_URI"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "notebook_template.ipynb",
"provenance": [],
"toc_visible": true
},
"environment": {
"kernel": "python3",
"name": "common-cpu.m95",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/base-cpu:m95"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}