{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "copyright" }, "outputs": [], "source": [ "# Copyright 2021 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "title:generic" }, "source": [ "# Vertex AI Pipelines: Lightweight Python function-based components, and component I/O\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/lightweight_functions_component_io_kfp.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fpipelines%2Flightweight_functions_component_io_kfp.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/lightweight_functions_component_io_kfp.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", "<a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/pipelines/lightweight_functions_component_io_kfp.ipynb\" target='_blank'>\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n", " </a>\n", " </td>\n", "</table>\n", "<br/><br/><br/>" ] }, { "cell_type": "markdown", "metadata": { "id": "overview:pipelines,lightweight" }, "source": [ "## Overview\n", "\n", "This notebooks shows how to use [the Kubeflow Pipelines (KFP) SDK](https://www.kubeflow.org/docs/components/pipelines/) to build [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines) that use lightweight Python function based components, as well as supporting component I/O using the KFP SDK.\n", "\n", "Learn more about [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction)." 
] }, { "cell_type": "markdown", "metadata": { "id": "objective:pipelines,lightweight" }, "source": [ "### Objective\n", "\n", "In this tutorial, you learn to use the KFP SDK to build lightweight Python function-based components, and then you learn to use Vertex AI Pipelines to execute the pipeline.\n", "\n", "This tutorial uses the following Google Cloud ML services:\n", "\n", "- Vertex AI Pipelines\n", "\n", "The steps performed include:\n", "\n", "- Build Python function-based KFP components.\n", "- Construct a KFP pipeline.\n", "- Pass *Artifacts* and *parameters* between components, both by path reference and by value.\n", "- Use the kfp.dsl.importer method.\n", "- Compile the KFP pipeline.\n", "- Execute the KFP pipeline using Vertex AI Pipelines" ] }, { "cell_type": "markdown", "metadata": { "id": "what_is:kfp,lightweight" }, "source": [ "### KFP Python function-based components\n", "\n", "A Kubeflow pipeline component is a self-contained set of code that performs one step in your ML workflow. A pipeline component is composed of:\n", "\n", "* The component code, which implements the logic need to perform a step in your ML workflow.\n", "* A component specification, which defines the following:\n", " * The component’s metadata, its name and description.\n", " * The component’s interface, the component’s inputs and outputs.\n", "* The component’s implementation, the Docker container image to run, how to pass inputs to your component code, and how to get the component’s outputs.\n", "\n", "Lightweight Python function-based components make it easier to iterate quickly by letting you build your component code as a Python function and generating the component specification for you. This notebook shows how to create Python function-based components for use in [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines).\n", "\n", "Python function-based components use the Kubeflow Pipelines SDK to handle the complexity of passing inputs into your component and passing your function’s outputs back to your pipeline.\n", "\n", "There are two categories of inputs/outputs supported in Python function-based components: *artifacts* and *parameters*.\n", "\n", "* Parameters are passed to your component by value and typically contain `int`, `float`, `bool`, or small `string` values.\n", "* Artifacts are passed to your component as a *reference* to a path, to which you can write a file or a subdirectory structure. In addition to the artifact’s data, you can also read and write the artifact’s metadata. This lets you record arbitrary key-value pairs for an artifact such as the accuracy of a trained model, and use metadata in downstream components – for example, you could use metadata to decide if a model is accurate enough to deploy for predictions." ] }, { "cell_type": "markdown", "metadata": { "id": "costs" }, "source": [ "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and\n", "[Cloud Storage pricing](https://cloud.google.com/storage/pricing) and use the \n", "[Pricing Calculator](https://cloud.google.com/products/calculator/)\n", "to generate a cost estimate based on your projected usage." 
] }, { "cell_type": "markdown", "metadata": { "id": "install_aip:mbsdk" }, "source": [ "## Get Started\n", "\n", "Install Vertex AI SDK for Python and other required packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "install_aip:mbsdk" }, "outputs": [], "source": [ "! pip3 install --upgrade --quiet google-cloud-aiplatform \\\n", " google-cloud-storage \\\n", " kfp \\\n", " \"numpy<2\" \\\n", " google-cloud-pipeline-components" ] }, { "cell_type": "markdown", "metadata": { "id": "gcp_authenticate" }, "source": [ "### Restart runtime (Colab only)\n", "Authenticate your environment on Google Colab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "21ad4dbb4a61" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " import IPython\n", "\n", " app = IPython.Application.instance()\n", " app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "56e219dbcb9a" }, "source": [ "### Authenticate your notebook environment (Colab only)\n", "Authenticate your environment on Google Colab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "c97be6a73155" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " from google.colab import auth\n", "\n", " auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "45b2362d4ce4" }, "source": [ "### Set Google Cloud project information\n", "Learn more about setting up a project and a development environment." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oM1iC_MfAts1" }, "outputs": [], "source": [ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n", "LOCATION = \"us-central1\" # @param {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "zgPO1eR3CYjk" }, "source": [ "### Create a Cloud Storage bucket\n", "\n", "Create a storage bucket to store intermediate artifacts such as datasets.\n", "\n", "- *{Note to notebook author: For any user-provided strings that need to be unique (like bucket names or model ID's), append \"-unique\" to the end so proper testing can occur}*" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MzGDU7TWdts_" }, "outputs": [], "source": [ "BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "create_bucket" }, "source": [ "**If your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NIq7R4HZCfIc" }, "outputs": [], "source": [ "! gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}" ] }, { "cell_type": "markdown", "metadata": { "id": "set_service_account" }, "source": [ "#### Service Account\n", "\n", "**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_service_account" }, "outputs": [], "source": [ "SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "autoset_service_account" }, "outputs": [], "source": [ "import sys\n", "\n", "IS_COLAB = \"google.colab\" in sys.modules\n", "if (\n", " SERVICE_ACCOUNT == \"\"\n", " or SERVICE_ACCOUNT is None\n", " or SERVICE_ACCOUNT == \"[your-service-account]\"\n", "):\n", " # Get your service account from gcloud\n", " if not IS_COLAB:\n", " shell_output = !gcloud auth list 2>/dev/null\n", " SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n", "\n", " if IS_COLAB:\n", " shell_output = ! gcloud projects describe $PROJECT_ID\n", " project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n", " SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n", "\n", " print(\"Service Account:\", SERVICE_ACCOUNT)" ] }, { "cell_type": "markdown", "metadata": { "id": "set_service_account:pipelines" }, "source": [ "#### Set service account access for Vertex AI Pipelines\n", "\n", "Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_service_account:pipelines" }, "outputs": [], "source": [ "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n", "\n", "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI" ] }, { "cell_type": "markdown", "metadata": { "id": "setup_vars" }, "source": [ "### Set up variables\n", "\n", "Next, set up some variables used throughout the tutorial.\n", "### Import libraries and define constants" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "import_aip:mbsdk" }, "outputs": [], "source": [ "from typing import NamedTuple\n", "\n", "import kfp\n", "from google.cloud import aiplatform\n", "from kfp import compiler, dsl\n", "from kfp.dsl import (Artifact, Dataset, Input, InputPath, Model, Output,\n", " OutputPath, component)" ] }, { "cell_type": "markdown", "metadata": { "id": "pipeline_constants" }, "source": [ "#### Vertex AI Pipelines constants\n", "\n", "Set up up the following constants for Vertex AI Pipelines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pipeline_constants" }, "outputs": [], "source": [ "PIPELINE_ROOT = \"{}/pipeline_root/shakespeare\".format(BUCKET_URI)" ] }, { "cell_type": "markdown", "metadata": { "id": "init_aip:mbsdk" }, "source": [ "## Initialize Vertex AI SDK for Python\n", "\n", "Initialize the Vertex AI SDK for Python for your project and corresponding bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "init_aip:mbsdk" }, "outputs": [], "source": [ "aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)" ] }, { "cell_type": "markdown", "metadata": { "id": "define_component:lightweight,preprocess" }, "source": [ "### Define Python function-based pipeline components\n", "\n", "In this tutorial, you define function-based components that consume parameters and produce (typed) Artifacts and parameters. 
"Functions can produce Artifacts in three ways:\n", "\n", "* Accept an output local path using `OutputPath`\n", "* Accept an `Output[...]` artifact annotation (for example, `Output[Dataset]`), which gives the function a metadata-rich handle to the output artifact\n", "* Return an `Artifact` (or `Dataset`, `Model`, `Metrics`, etc.) in a `NamedTuple`\n", "\n", "Each of these options for producing Artifacts is demonstrated below.\n", "\n", "#### Define preprocess component\n", "\n", "The first component definition, `preprocess`, shows a component that outputs two `Dataset` Artifacts, as well as an output parameter. (For this example, the datasets don't contain real data.)\n", "\n", "For the parameter output, you would typically use the approach shown here, using the `OutputPath` type, for \"larger\" data.\n", "For \"small data\", like a short string, it might be more convenient to use the `NamedTuple` function output as shown in the second component instead." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "define_component:lightweight,preprocess" }, "outputs": [], "source": [ "@component(base_image=\"python:3.9\")\n", "def preprocess(\n", "    # An input parameter of type string.\n", "    message: str,\n", "    # Use Output to get a metadata-rich handle to the output artifact\n", "    # of type `Dataset`.\n", "    output_dataset_one: Output[Dataset],\n", "    # A locally accessible filepath for another output artifact of type\n", "    # `Dataset`.\n", "    output_dataset_two_path: OutputPath(\"Dataset\"),\n", "    # A locally accessible filepath for an output parameter of type string.\n", "    output_parameter_path: OutputPath(str),\n", "):\n", "    \"\"\"'Mock' preprocessing step.\n", "\n", "    Writes the passed-in message to the two output Datasets and to the\n", "    output parameter.\n", "    \"\"\"\n", "    output_dataset_one.metadata[\"hello\"] = \"there\"\n", "    # Use the artifact's .path attribute to access a local file path for writing.\n", "    # You can also use .uri to access the Cloud Storage URI directly.\n", "    with open(output_dataset_one.path, \"w\") as f:\n", "        f.write(message)\n", "\n", "    # OutputPath simply passes the local file path of the output artifact\n", "    # to the function.\n", "    with open(output_dataset_two_path, \"w\") as f:\n", "        f.write(message)\n", "\n", "    with open(output_parameter_path, \"w\") as f:\n", "        f.write(message)" ] },
{ "cell_type": "markdown", "metadata": { "id": "define_component:lightweight,train" }, "source": [ "#### Define train component\n", "\n", "The second component definition, `train`, defines as inputs an `InputPath` of type `Dataset` and two `Input[Dataset]` artifacts, as well as other parameter inputs. It uses the `NamedTuple` format for the function output. As shown, these outputs can be Artifacts as well as parameters.\n", "\n", "Additionally, this component writes some metrics metadata to the `model` output Artifact. This information is displayed in the Cloud Console user interface when the pipeline runs."
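, "\n", "\n", "As a related aside (not used by the pipeline in this tutorial), a component can also surface metrics as a first-class `Metrics` artifact instead of generic artifact metadata. A minimal, hypothetical sketch:\n", "\n", "```python\n", "from kfp.dsl import Metrics, Output, component\n", "\n", "@component(base_image=\"python:3.9\")\n", "def report_metrics(accuracy: float, metrics: Output[Metrics]):\n", "    # log_metric records a scalar on the Metrics artifact; it appears in the\n", "    # pipeline run UI and in Vertex ML Metadata.\n", "    metrics.log_metric(\"accuracy\", accuracy)\n", "```"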
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "define_component:lightweight,train" }, "outputs": [], "source": [ "@component(\n", " base_image=\"python:3.9\", # Use a different base image.\n", ")\n", "def train(\n", " # An input parameter of type string.\n", " message: str,\n", " # Use InputPath to get a locally accessible path for the input artifact\n", " # of type `Dataset`.\n", " dataset_one_path: InputPath(\"Dataset\"),\n", " # Use InputArtifact to get a metadata-rich handle to the input artifact\n", " # of type `Dataset`.\n", " dataset_two: Input[Dataset],\n", " # Output artifact of type Model.\n", " imported_dataset: Input[Dataset],\n", " model: Output[Model],\n", " # An input parameter of type int with a default value.\n", " num_steps: int = 3,\n", " # Use NamedTuple to return either artifacts or parameters.\n", " # When returning artifacts like this, return the contents of\n", " # the artifact. The assumption here is that this return value\n", " # fits in memory.\n", ") -> NamedTuple(\n", " \"Outputs\",\n", " [\n", " (\"output_message\", str), # Return parameter.\n", " (\"generic_artifact\", Artifact), # Return generic Artifact.\n", " ],\n", "):\n", " \"\"\"'Mock' Training step.\n", " Combines the contents of dataset_one and dataset_two into the\n", " output Model.\n", " Constructs a new output_message consisting of message repeated num_steps times.\n", " \"\"\"\n", "\n", " # Directly access the passed in GCS URI as a local file (uses GCSFuse).\n", " with open(dataset_one_path) as input_file:\n", " dataset_one_contents = input_file.read()\n", "\n", " # dataset_two is an Artifact handle. Use dataset_two.path to get a\n", " # local file path (uses GCSFuse).\n", " # Alternately, use dataset_two.uri to access the GCS URI directly.\n", " with open(dataset_two.path) as input_file:\n", " dataset_two_contents = input_file.read()\n", "\n", " with open(model.path, \"w\") as f:\n", " f.write(\"My Model\")\n", "\n", " with open(imported_dataset.path) as f:\n", " data = f.read()\n", " print(\"Imported Dataset:\", data)\n", "\n", " # Use model.get() to get a Model artifact, which has a .metadata dictionary\n", " # to store arbitrary metadata for the output artifact. This metadata is\n", " # recorded in Managed Metadata and can be queried later. It also shows up\n", " # in the Google Cloud console.\n", " model.metadata[\"accuracy\"] = 0.9\n", " model.metadata[\"framework\"] = \"Tensorflow\"\n", " model.metadata[\"time_to_train_in_seconds\"] = 257\n", "\n", " artifact_contents = \"{}\\n{}\".format(dataset_one_contents, dataset_two_contents)\n", " output_message = \" \".join([message for _ in range(num_steps)])\n", " return (output_message, artifact_contents)" ] }, { "cell_type": "markdown", "metadata": { "id": "define_component:lightweight,read_artifact_input" }, "source": [ "#### Define read_artifact_input component\n", "\n", "Finally, you define a small component that takes as input the `generic_artifact` returned by the `train` component function, and reads and prints the Artifact's contents." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "define_component:lightweight,read_artifact_input" }, "outputs": [], "source": [ "@component(base_image=\"python:3.9\")\n", "def read_artifact_input(\n", " generic: Input[Artifact],\n", "):\n", " with open(generic.path) as input_file:\n", " generic_contents = input_file.read()\n", " print(f\"generic contents: {generic_contents}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "define_pipeline:kfp,importer" }, "source": [ "### Define a pipeline that uses your components and the Importer\n", "\n", "Next, define a pipeline that uses the components that were built in the previous sections, and also shows the use of the `kfp.dsl.importer`.\n", "\n", "This example uses the `importer` to create, in this case, a `Dataset` artifact from an existing URI.\n", "\n", "Note that the `train_task` step takes as inputs three of the outputs of the `preprocess_task` step, as well as the output of the `importer` step.\n", "In the \"train\" inputs we refer to the `preprocess` `output_parameter`, which gives us the output string directly.\n", "\n", "The `read_task` step takes as input the `train_task` `generic_artifact` output." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "define_pipeline:kfp,importer" }, "outputs": [], "source": [ "@dsl.pipeline(\n", " # Default pipeline root. You can override it when submitting the pipeline.\n", " pipeline_root=PIPELINE_ROOT,\n", " # A name for the pipeline. Use to determine the pipeline Context.\n", " name=\"metadata-pipeline-v2\",\n", ")\n", "def pipeline(message: str):\n", " importer = kfp.dsl.importer(\n", " artifact_uri=\"gs://ml-pipeline-playground/shakespeare1.txt\",\n", " artifact_class=Dataset,\n", " reimport=False,\n", " )\n", " preprocess_task = preprocess(message=message)\n", " train_task = train(\n", " dataset_one_path=preprocess_task.outputs[\"output_dataset_one\"],\n", " dataset_two=preprocess_task.outputs[\"output_dataset_two_path\"],\n", " imported_dataset=importer.output,\n", " message=preprocess_task.outputs[\"output_parameter_path\"],\n", " num_steps=5,\n", " )\n", " read_task = read_artifact_input( # noqa: F841\n", " generic=train_task.outputs[\"generic_artifact\"]\n", " )" ] }, { "cell_type": "markdown", "metadata": { "id": "compile_pipeline" }, "source": [ "## Compile the pipeline\n", "\n", "Next, compile the pipeline." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "compile_pipeline" }, "outputs": [], "source": [ "compiler.Compiler().compile(\n", " pipeline_func=pipeline, package_path=\"lightweight_pipeline.yaml\"\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "run_pipeline:lightweight" }, "source": [ "## Run the pipeline\n", "\n", "Next, run the pipeline." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "run_pipeline:lightweight" }, "outputs": [], "source": [ "DISPLAY_NAME = \"shakespeare\"\n", "\n", "job = aiplatform.PipelineJob(\n", " display_name=DISPLAY_NAME,\n", " template_path=\"lightweight_pipeline.yaml\",\n", " pipeline_root=PIPELINE_ROOT,\n", " parameter_values={\"message\": \"Hello, World\"},\n", " enable_caching=False,\n", ")\n", "\n", "job.run()" ] }, { "cell_type": "markdown", "metadata": { "id": "view_pipeline_run:lightweight" }, "source": [ "Click on the generated link to see your run in the Cloud Console.\n", "\n", "<!-- It should look something like this as it's running:\n", "\n", "<a href=\"https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png\" width=\"40%\"/></a> -->\n", "\n", "In the Google Cloud console, many of the pipeline DAG nodes expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).\n", "\n", "<a href=\"https://storage.googleapis.com/amy-jo/images/mp/artifact_io2.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/artifact_io2.png\" width=\"95%\"/></a>" ] }, { "cell_type": "markdown", "metadata": { "id": "2ba7c8f55afc" }, "source": [ "### Delete the pipeline job\n", "\n", "You can delete the pipeline job with the method `delete()`.job.delete()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "425adbf24044" }, "outputs": [], "source": [ "job.delete()" ] }, { "cell_type": "markdown", "metadata": { "id": "cleanup:pipelines" }, "source": [ "# Cleaning up\n", "\n", "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n", "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n", "\n", "Otherwise, you can delete the individual resources you created in this tutorial." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cleanup:pipelines" }, "outputs": [], "source": [ "delete_bucket = False\n", "\n", "if delete_bucket:\n", " ! gsutil rm -r $BUCKET_URI\n", "\n", "! rm lightweight_pipeline.yaml" ] } ], "metadata": { "colab": { "name": "lightweight_functions_component_io_kfp.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }