{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2021 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Vertex AI: Track artifacts and metrics across Vertex AI Pipelines runs using Vertex ML Metadata\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/ml_metadata/vertex-pipelines-ml-metadata.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fml_metadata%2Fvertex-pipelines-ml-metadata.ipynb\">\n",
" <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/ml_metadata/vertex-pipelines-ml-metadata.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/ml_metadata/vertex-pipelines-ml-metadata.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e88691377fcc"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates how to track metrics and artifacts across Vertex AI Pipeline runs, and analyze this metadata using the Vertex AI Python SDK. If you'd prefer to follow a step-by-step tutorial, check out the [codelab version](https://codelabs.developers.google.com/vertex-mlmd-pipelines#0) of this notebook.\n",
"\n",
"Learn more about [Vertex ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata) and [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"### Objective\n",
"\n",
"In this notebook, you learn how to track artifacts and metrics with Vertex ML Metadata in Vertex AI Pipeline runs.\n",
"\n",
"This tutorial uses the following Google Cloud ML services and resources:\n",
"\n",
"- Vertex AI Pipelines\n",
"- Vertex ML Metadata\n",
"\n",
"The steps performed include:\n",
"\n",
"* Use the Kubeflow Pipelines SDK to build an ML pipeline that runs on Vertex AI.\n",
"* The pipeline creates a dataset, trains a scikit-learn model, and deploys the model to an endpoint.\n",
"* Write custom pipeline components that generate artifacts and metadata.\n",
"* Compare Vertex AI Pipeline runs, both in the Google Cloud console and programmatically.\n",
"* Trace the lineage for pipeline-generated artifacts.\n",
"* Query your pipeline run metadata."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ce1e72673981"
},
"source": [
"### Dataset\n",
"\n",
"This notebook uses scikit-learn to train a model and classify bean types using the [Dry Beans Dataset](https://archive.ics.uci.edu/ml/datasets/Dry+Bean+Dataset) from UCI Machine Learning. This is a tabular dataset that includes measurements and characteristics of seven different types of beans taken from images."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0c997d8d92ce"
},
"source": [
"### Costs \n",
"\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "61RBz8LLbxCR"
},
"source": [
"## Get started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "No17Cw5hgx12"
},
"source": [
"### Install Vertex AI SDK for Python and other required packages\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wyy5Lbnzg5fi"
},
"outputs": [],
"source": [
"! pip3 install --upgrade --quiet google-cloud-aiplatform \\\n",
" 'kfp<2.0'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R5Xep4W9lq-Z"
},
"source": [
"### Restart runtime (Colab only)\n",
"\n",
"To use the newly installed packages, you must restart the runtime on Google Colab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XRvKdaPDTznN"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SbmM4z7FOBpM"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dmWOrTJ3gx13"
},
"source": [
"### Authenticate your notebook environment (Colab only)\n",
"\n",
"Authenticate your environment on Google Colab.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NyKGtVQjgx13"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DF4l8DTdWgPY"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project. Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Nqwi-5ufWp_B"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aab852d94fc7"
},
"source": [
"#### Enable Cloud services used throughout this notebook.\n",
"\n",
"Run the cell below to the enable Compute Engine, Container Registry, and Vertex AI services."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "18396d3d7fe4"
},
"outputs": [],
"source": [
"!gcloud services enable compute.googleapis.com \\\n",
" containerregistry.googleapis.com \\\n",
" aiplatform.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bucket:mbsdk"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bucket"
},
"outputs": [],
"source": [
"BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_bucket"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_bucket"
},
"outputs": [],
"source": [
"! gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account"
},
"source": [
"### Service Account\n",
"\n",
"Use a service account to create Vertex AI Pipeline jobs.\n",
"\n",
"If you don't want to use your project's Compute Engine service account, set `SERVICE_ACCOUNT` to another service account ID."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_service_account"
},
"outputs": [],
"source": [
"SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "autoset_service_account"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"IS_COLAB = \"google.colab\" in sys.modules\n",
"if (\n",
" SERVICE_ACCOUNT == \"\"\n",
" or SERVICE_ACCOUNT is None\n",
" or SERVICE_ACCOUNT == \"[your-service-account]\"\n",
"):\n",
" # Get your service account from gcloud\n",
" if not IS_COLAB:\n",
" shell_output = !gcloud auth list 2>/dev/null\n",
" SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n",
"\n",
" else: # IS_COLAB:\n",
" shell_output = ! gcloud projects describe $PROJECT_ID\n",
" project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n",
" SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n",
"\n",
" print(\"Service Account:\", SERVICE_ACCOUNT)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "set_service_account:pipelines"
},
"source": [
"#### Set service account access for Vertex AI Pipelines\n",
"\n",
"Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_service_account:pipelines"
},
"outputs": [],
"source": [
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n",
"\n",
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XoEqT2Y4DJmf"
},
"source": [
"### Import libraries and define constants"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Y9Uo3tifg1kx"
},
"source": [
"Import required libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pRUOFELefqf1"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import pandas as pd\n",
"# We'll use this beta library for metadata querying\n",
"from google.cloud import aiplatform, aiplatform_v1beta1\n",
"from google.cloud.aiplatform import pipeline_jobs\n",
"from kfp.v2 import compiler, dsl\n",
"from kfp.v2.dsl import (Artifact, Dataset, Input, Metrics, Model, Output,\n",
" OutputPath, component)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xtXZWmYqJ1bh"
},
"source": [
"Define some constants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JIOrI-hoJ46P"
},
"outputs": [],
"source": [
"PATH = get_ipython().run_line_magic(\"env\", \"PATH\")\n",
"%env PATH={PATH}:/home/jupyter/.local/bin\n",
"REGION = \"us-central1\"\n",
"\n",
"PIPELINE_ROOT = f\"{BUCKET_URI}/pipeline_root/\"\n",
"PIPELINE_ROOT"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2937d462a96a"
},
"source": [
"### Initialize Vertex AI SDK for Python"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7def96de8098"
},
"outputs": [],
"source": [
"aiplatform.init(project=PROJECT_ID, location=LOCATION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Xuny18aMcWDb"
},
"source": [
"## Concepts\n",
"\n",
"To better understand [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) and [Vertex AI ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata), here’re some relevant concepts:\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NThDci5bp0Uw"
},
"source": [
"### Pipeline Run\n",
"The term “run” refers to a single execution of your pipeline in Vertex AI Pipelines, during which artifacts, metrics, and associated metadata are generated."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SAyRR3Ydp4X5"
},
"source": [
"### Artifact\n",
"\n",
"An artifact is a resource generated by your pipeline. Artifacts can be datasets, models, endpoints, or custom resources defined in your pipeline."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "40ee71479689"
},
"source": [
"### Metric\n",
"\n",
"A metric is a way to measure the performance of your pipeline runs and artifacts. For example, a metric can be the accuracy of a classification model artifact created in your pipeline, or the size of the dataset used to train your model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "57b1cc9981d5"
},
"source": [
"### Metadata\n",
"\n",
"Metadata describes the artifacts and metrics generated by your pipeline runs. Metadata on a model, for example, includes the URL of the model artifacts, its name, and the time it was created."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "l1YW2pgyegFP"
},
"source": [
"## Creating a 3-step pipeline with custom components\n",
"\n",
"The focus of this lab is on understanding metadata from pipeline runs. To do that, you need a pipeline to run on Vertex AI Pipelines, which is where you start. Here, you define a 3-step pipeline with the following custom components:\n",
"\n",
"* `get_dataframe`: Retrieve data from a BigQuery table and convert it into a pandas DataFrame.\n",
"* `train_sklearn_model`: Use the pandas DataFrame to train and export a scikit-learn model, along with some metrics.\n",
"* `deploy_model`: Deploy the exported scikit-learn model to an endpoint in Vertex AI."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KPY41M9_AhZU"
},
"source": [
"### Create and define Python function based components"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bfMQSmRuUuX-"
},
"source": [
"First, define the `get_dataframe` component with the code below. This component does the following:\n",
"* Creates a reference to a BigQuery table using the BigQuery client library\n",
"* Downloads the BigQuery table and converts it to a shuffled pandas DataFrame\n",
"* Exports the DataFrame to a CSV file"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RiQuMv4bmpuV"
},
"outputs": [],
"source": [
"@component(\n",
" packages_to_install=[\"google-cloud-bigquery[pandas]\", \"pyarrow\"],\n",
" base_image=\"python:3.10\",\n",
" output_component_file=\"create_dataset.yaml\",\n",
")\n",
"def get_dataframe(\n",
" project_id: str, bq_table: str, output_data_path: OutputPath(\"Dataset\")\n",
"):\n",
" from google.cloud import bigquery\n",
"\n",
" bqclient = bigquery.Client(project=project_id)\n",
" table = bigquery.TableReference.from_string(bq_table)\n",
" rows = bqclient.list_rows(table)\n",
" dataframe = rows.to_dataframe(\n",
" create_bqstorage_client=True,\n",
" )\n",
" dataframe = dataframe.sample(frac=1, random_state=2)\n",
" dataframe.to_csv(output_data_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Y06J7A7yU21t"
},
"source": [
"Next, create a component to train a scikit-learn model. This component does the following:\n",
"* Imports a CSV as a pandas DataFrame.\n",
"* Splits the DataFrame into train and test sets.\n",
"* Trains a scikit-learn model.\n",
"* Logs metrics from the model.\n",
"* Saves the model artifacts as a local `model.joblib` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "p5JBCBKyH-NC"
},
"outputs": [],
"source": [
"@component(\n",
" packages_to_install=[\"scikit-learn==1.2\", \"pandas\", \"joblib\", \"numpy==1.26.4\"],\n",
" base_image=\"python:3.10\",\n",
" output_component_file=\"beans_model_component.yaml\",\n",
")\n",
"def sklearn_train(\n",
" dataset: Input[Dataset], metrics: Output[Metrics], model: Output[Model]\n",
"):\n",
" import pandas as pd\n",
" from joblib import dump\n",
" from sklearn.model_selection import train_test_split\n",
" from sklearn.tree import DecisionTreeClassifier\n",
"\n",
" df = pd.read_csv(dataset.path)\n",
" labels = df.pop(\"Class\").tolist()\n",
" data = df.values.tolist()\n",
" x_train, x_test, y_train, y_test = train_test_split(data, labels)\n",
"\n",
" skmodel = DecisionTreeClassifier()\n",
" skmodel.fit(x_train, y_train)\n",
" score = skmodel.score(x_test, y_test)\n",
" print(\"accuracy is:\", score)\n",
"\n",
" metrics.log_metric(\"accuracy\", (score * 100.0))\n",
" metrics.log_metric(\"framework\", \"Scikit Learn\")\n",
" metrics.log_metric(\"dataset_size\", len(df))\n",
" dump(skmodel, model.path + \".joblib\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gaNNTFPaU7KT"
},
"source": [
"Finally, the last component takes the trained model from the previous step, uploads the model to Vertex AI, and deploys it to an endpoint:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VGq5QCoyIEWJ"
},
"outputs": [],
"source": [
"@component(\n",
" packages_to_install=[\"google-cloud-aiplatform\"],\n",
" base_image=\"python:3.10\",\n",
" output_component_file=\"beans_deploy_component.yaml\",\n",
")\n",
"def deploy_model(\n",
" model: Input[Model],\n",
" project: str,\n",
" region: str,\n",
" vertex_endpoint: Output[Artifact],\n",
" vertex_model: Output[Model],\n",
"):\n",
" from google.cloud import aiplatform\n",
"\n",
" aiplatform.init(project=project, location=region)\n",
"\n",
" deployed_model = aiplatform.Model.upload(\n",
" display_name=\"beans-model-pipeline\",\n",
" artifact_uri=model.uri.replace(\"model\", \"\"),\n",
" serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-2:latest\",\n",
" )\n",
" endpoint = deployed_model.deploy(machine_type=\"n1-standard-4\")\n",
"\n",
" # Save data to the output params\n",
" vertex_endpoint.uri = endpoint.resource_name\n",
" vertex_model.uri = deployed_model.resource_name"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UBXUgxgqA_GB"
},
"source": [
"### Define and compile the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "66odBYKrIN4q"
},
"outputs": [],
"source": [
"@dsl.pipeline(\n",
" # Default pipeline root. You can override it when submitting the pipeline.\n",
" pipeline_root=PIPELINE_ROOT,\n",
" # A name for the pipeline.\n",
" name=\"mlmd-pipeline\",\n",
")\n",
"def pipeline(\n",
" bq_table: str,\n",
" output_data_path: str,\n",
" project: str,\n",
" region: str,\n",
"):\n",
" dataset_task = get_dataframe(project, bq_table)\n",
"\n",
" model_task = sklearn_train(dataset_task.output)\n",
"\n",
" deploy_model(model=model_task.outputs[\"model\"], project=project, region=region)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "910541af051c"
},
"source": [
"The following generates a JSON file that is then used to run the pipeline:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "o_wnT10RJ7-W"
},
"outputs": [],
"source": [
"compiler.Compiler().compile(pipeline_func=pipeline, package_path=\"mlmd_pipeline.json\")"
]
},
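{
"cell_type": "markdown",
"metadata": {
"id": "inspect_compiled_spec"
},
"source": [
"Optionally, peek at the first few lines of the compiled spec to confirm that compilation succeeded. This is just a sanity check and isn't required for the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "inspect_compiled_spec_code"
},
"outputs": [],
"source": [
"# Optional sanity check: show the first lines of the compiled pipeline spec.\n",
"! head -20 mlmd_pipeline.json"
]
},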
{
"cell_type": "markdown",
"metadata": {
"id": "u-iTnzt3B6Z_"
},
"source": [
"### Initiate pipeline runs\n",
"\n",
"First, define a timestamp to use as your pipeline job IDs.\n",
"\n",
"Then for each run, create an instance of `PipelineJob` from the `pipeline_jobs` module.\n",
" For each instance provide the following details:\n",
"\n",
"* `display_name` : Human-readable name for the pipeline job.\n",
"* `template_path` : This specifies the path to the pipeline template file in JSON format, which contains the pipeline's configuration and structure created in the previous steps.\n",
"* `job_id` : This sets a unique identifier for the job.\n",
"* `parameter_values` : This dictionary contains key-value pairs for the parameters required by the pipeline which are metioned during pipeline definition.\n",
" * `bq_table` : Specifies the BigQuery table to use.\n",
" * `output_data_path` : Defines the path for the output data file.\n",
" * `project` : Specifies the Google Cloud project ID.\n",
" * `region` : Defines the region where the pipeline will run.\n",
"* enable_caching : When set to `True`, caching is enabled for the pipeline run. This lets the system reuse previous results, if the same job has been executed before with identical parameters, thereby saving time and resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "i2wnpu8_7JfV"
},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3d380ed72490"
},
"source": [
"Create a pipeline run using the smaller version beans dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ff4aee966c5f"
},
"outputs": [],
"source": [
"run1 = pipeline_jobs.PipelineJob(\n",
" display_name=\"mlmd-pipeline\",\n",
" template_path=\"mlmd_pipeline.json\",\n",
" job_id=\"mlmd-pipeline-small-{}\".format(TIMESTAMP),\n",
" parameter_values={\n",
" \"bq_table\": \"sara-vertex-demos.beans_demo.small_dataset\",\n",
" \"output_data_path\": \"data.csv\",\n",
" \"project\": PROJECT_ID,\n",
" \"region\": REGION,\n",
" },\n",
" enable_caching=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "555ac88a22cf"
},
"source": [
"Next, create another pipeline run using a larger version of the same dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3d9fcb6a4a9e"
},
"outputs": [],
"source": [
"run2 = pipeline_jobs.PipelineJob(\n",
" display_name=\"mlmd-pipeline\",\n",
" template_path=\"mlmd_pipeline.json\",\n",
" job_id=\"mlmd-pipeline-large-{}\".format(TIMESTAMP),\n",
" parameter_values={\n",
" \"bq_table\": \"sara-vertex-demos.beans_demo.large_dataset\",\n",
" \"output_data_path\": \"data.csv\",\n",
" \"project\": PROJECT_ID,\n",
" \"region\": REGION,\n",
" },\n",
" enable_caching=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5670722f7668"
},
"source": [
"Finally, kick off pipeline executions for both runs. It's best to do this in two separate notebook cells so you can see the output for each run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1f477f5565c6"
},
"outputs": [],
"source": [
"run1.submit()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6e682e41af78"
},
"source": [
"Then, kick off the second run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cb263e503ced"
},
"outputs": [],
"source": [
"run2.submit()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cc15017be48e"
},
"source": [
"After running this cell, there is a link to view each pipeline in the Google Cloud console. Open that link to get more details about your pipeline.\n",
"\n",
"**These pipeline runs will take 10-15 minutes to complete.**"
]
},
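{
"cell_type": "markdown",
"metadata": {
"id": "wait_for_pipeline_runs"
},
"source": [
"If you'd rather block the notebook until both runs finish instead of watching the console, the following optional cell is a minimal sketch that uses the SDK's `PipelineJob.wait()` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wait_for_pipeline_runs_code"
},
"outputs": [],
"source": [
"# Optional: block until both pipeline runs reach a terminal state.\n",
"# wait() raises an exception if a run fails, so errors surface here.\n",
"run1.wait()\n",
"run2.wait()"
]
},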
{
"cell_type": "markdown",
"metadata": {
"id": "jZLrJZTfL7tE"
},
"source": [
"## Comparing pipeline runs"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "A1PqKxlpOZa2"
},
"source": [
"Once both the pipelines run successfully, you're ready to take a closer look at pipeline metrics using the Vertex AI SDK for Python.\n",
"\n",
"For guidance on inspecting pipeline artifacts and metadata in the Google Cloud console, check out this codelab: [Understanding pipeline artifacts and lineage](https://codelabs.developers.google.com/vertex-mlmd-pipelines#5)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jbRf1WoH_vbY"
},
"source": [
"Use `aiplatform.get_pipeline_df()` method to retrieve the metadata for the last two runs of the pipeline. Then, load it into a Pandas DataFrame. \n",
"The `pipeline` parameter specifies the name of your pipeline as defined in the pipeline configuration, which in this case is *mlmd-pipeline*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "90d850cda34f"
},
"outputs": [],
"source": [
"df = aiplatform.get_pipeline_df(pipeline=\"mlmd-pipeline\")\n",
"print(df)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d23e2cb66265"
},
"source": [
"You’ve only executed the pipeline twice here, but you can imagine how many metrics you'd have with more executions. Next, create a custom visualization with matplotlib to see the relationship between the model accuracy and the amount of data used for training. Run the following to generate a graph:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5957415cc390"
},
"outputs": [],
"source": [
"plt.plot(df[\"metric.dataset_size\"], df[\"metric.accuracy\"], label=\"Accuracy\")\n",
"plt.title(\"Accuracy and dataset size\")\n",
"plt.legend(loc=4)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EYuYgqVCMKU1"
},
"source": [
"## Querying pipeline metrics"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4431b5d062f3"
},
"source": [
"In addition to creating a DataFrame of all pipeline metrics, you can programmatically query artifacts created in your ML system. From there you can create a custom dashboard or let others in your organizaiton get details on specific artifacts."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "995723757c5d"
},
"source": [
"### Getting all Model artifacts\n",
"\n",
"To query artifacts in this way, create a `MetadataServiceClient`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "r8orCj8iJuO1"
},
"outputs": [],
"source": [
"API_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n",
"metadata_client = aiplatform_v1beta1.MetadataServiceClient(\n",
" client_options={\"api_endpoint\": API_ENDPOINT}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e5aee9cdc5bd"
},
"source": [
"Next, make a `list_artifacts` request to that endpoint and pass a filter indicating which artifacts you'd like in your response. First, let's get all the artifacts in the project that are **models**. To do that, run the following in your notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "29260057ae40"
},
"outputs": [],
"source": [
"MODEL_FILTER = 'schema_title = \"system.Model\"'\n",
"artifact_request = aiplatform_v1beta1.ListArtifactsRequest(\n",
" parent=\"projects/{}/locations/{}/metadataStores/default\".format(PROJECT_ID, REGION),\n",
" filter=MODEL_FILTER,\n",
")\n",
"model_artifacts = metadata_client.list_artifacts(artifact_request)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dfb57f1b7833"
},
"source": [
"The resulting `model_artifacts` response contains an iterable object for each model artifact in your project, along with associated metadata for each model."
]
},
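{
"cell_type": "markdown",
"metadata": {
"id": "iterate_model_artifacts"
},
"source": [
"For example, you can loop over the response and print a few standard fields from each returned artifact:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iterate_model_artifacts_code"
},
"outputs": [],
"source": [
"# Print basic metadata for each model artifact in the project.\n",
"for artifact in model_artifacts:\n",
"    print(artifact.display_name, artifact.uri, artifact.create_time)"
]
},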
{
"cell_type": "markdown",
"metadata": {
"id": "WTHvPMweMlP1"
},
"source": [
"### Filtering objects and displaying in a DataFrame"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F19_5lw0MqXv"
},
"source": [
"Next, get all artifacts created after August 10, 2021 that are in `LIVE` state. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GmN9vE9pqqzt"
},
"outputs": [],
"source": [
"LIVE_FILTER = 'create_time > \"2021-08-10T00:00:00-00:00\" AND state = LIVE'\n",
"artifact_req = {\n",
" \"parent\": \"projects/{}/locations/{}/metadataStores/default\".format(\n",
" PROJECT_ID, REGION\n",
" ),\n",
" \"filter\": LIVE_FILTER,\n",
"}\n",
"live_artifacts = metadata_client.list_artifacts(artifact_req)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6bba2012b7f0"
},
"source": [
"Then, display the results in a DataFrame:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6bee5790cec4"
},
"outputs": [],
"source": [
"data = {\"uri\": [], \"createTime\": [], \"type\": []}\n",
"\n",
"for i in live_artifacts:\n",
" data[\"uri\"].append(i.uri)\n",
" data[\"createTime\"].append(i.create_time)\n",
" data[\"type\"].append(i.schema_title)\n",
"\n",
"df = pd.DataFrame.from_dict(data)\n",
"print(df)"
]
},
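{
"cell_type": "markdown",
"metadata": {
"id": "query_metrics_artifacts"
},
"source": [
"The same pattern works for other artifact types. As a sketch, the following cell queries the metrics artifacts logged by your pipeline runs; the `system.Metrics` schema title follows the same convention as the `system.Model` filter used above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "query_metrics_artifacts_code"
},
"outputs": [],
"source": [
"# Query all metrics artifacts in the project's default metadata store.\n",
"METRICS_FILTER = 'schema_title = \"system.Metrics\"'\n",
"metrics_req = {\n",
"    \"parent\": \"projects/{}/locations/{}/metadataStores/default\".format(\n",
"        PROJECT_ID, REGION\n",
"    ),\n",
"    \"filter\": METRICS_FILTER,\n",
"}\n",
"metrics_artifacts = metadata_client.list_artifacts(metrics_req)\n",
"\n",
"# Each artifact's metadata holds the logged values (e.g., accuracy).\n",
"for m in metrics_artifacts:\n",
"    print(m.display_name, m.metadata)"
]
},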
{
"cell_type": "markdown",
"metadata": {
"id": "TpV-iwP9qw9c"
},
"source": [
"## Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"If you don't want to delete the project, do the following to clean up the resources you used:\n",
"\n",
"* If you used Vertex AI Workbench notebooks to run this, stop or delete the notebook instance.\n",
"\n",
"* The pipeline runs you executed deployed endpoints in Vertex AI. Navigate to the [Google Cloud console](https://console.cloud.google.com/vertex-ai/endpoints) to delete those endpoints.\n",
"\n",
"* Delete the [Cloud Storage bucket](https://console.cloud.google.com/storage/browser/) you created.\n",
"\n",
"Alternatively, you can execute the below cell to clean up the resources used in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f2c21373a498"
},
"outputs": [],
"source": [
"# delete pipelines\n",
"try:\n",
" run1.delete()\n",
" run2.delete()\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# undeploy model from endpoints\n",
"endpoints = aiplatform.Endpoint.list(\n",
" filter='display_name=\"beans-model-pipeline_endpoint\"'\n",
")\n",
"for endpoint in endpoints:\n",
" deployed_models = endpoint.list_models()\n",
" for deployed_model in deployed_models:\n",
" endpoint.undeploy(deployed_model_id=deployed_model.id)\n",
" # delete endpoint\n",
" endpoint.delete()\n",
"\n",
"# delete model\n",
"model_ids = aiplatform.Model.list(filter='display_name=\"beans-model-pipeline\"')\n",
"for model_id in model_ids:\n",
" model = aiplatform.Model(model_name=model_id.resource_name)\n",
" model.delete()\n",
"\n",
"# delete locally generated files\n",
"! rm -rf beans_deploy_component.yaml beans_model_component.yaml create_dataset.yaml mlmd_pipeline.json\n",
"\n",
"# delete cloud storage bucket\n",
"delete_bucket = False # set True for deletion\n",
"if delete_bucket:\n",
" ! gsutil rm -rf {BUCKET_URI}"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "vertex-pipelines-ml-metadata.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}