vertex_ai/06_formalization.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ur8xi4C7S06n", "tags": [] }, "outputs": [], "source": [ "# Copyright 2023 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "JAPoU8Sm5E6e" }, "source": [ "# Fraudfinder - ML Pipeline\n", "\n", "<table align=\"left\">\n", " <td>\n", " <a href=\"https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/fraudfinder/blob/main/vertex_ai/06_formalization.ipynb\">\n", " <img src=\"https://www.gstatic.com/cloud/images/navigation/vertex-ai.svg\" alt=\"Google Cloud Notebooks\">Open in Cloud Notebook\n", " </a>\n", " </td> \n", " <td>\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/fraudfinder/blob/main/vertex_ai/06_formalization.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Open in Colab\n", " </a>\n", " </td>\n", " <td>\n", " <a href=\"https://github.com/GoogleCloudPlatform/fraudfinder/blob/main/vertex_ai/06_formalization.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n", " View on GitHub\n", " </a>\n", " </td>\n", "</table>" ] }, { "cell_type": "markdown", "metadata": { "id": "tvgnzT1CKxrO", "tags": [] }, "source": [ "## Overview\n", "\n", "[Fraudfinder](https://github.com/googlecloudplatform/fraudfinder) is a series of labs on how to build a real-time fraud detection system on Google Cloud. Throughout the Fraudfinder labs, you will learn how to read historical bank transaction data stored in data warehouse, read from a live stream of new transactions, perform exploratory data analysis (EDA), do feature engineering, ingest features into a feature store, train a model using feature store, register your model in a model registry, evaluate your model, deploy your model to an endpoint, do real-time inference on your model with feature store, and monitor your model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Objective\n", "\n", "This notebook shows how to use Feature Store, Pipelines and Model Monitoring for building an end-to-end demo using both components defined in `google_cloud_pipeline_components` and custom components. 
\n", "\n", "This lab uses the following Google Cloud services and resources:\n", "\n", "- [Vertex AI](https://cloud.google.com/vertex-ai/)\n", "- [BigQuery](https://cloud.google.com/bigquery/)\n", "\n", "Steps performed in this notebook:\n", "\n", "* Create a Vetex AI Pipeline to orchestrate and automate the ML workflow" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Costs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* BigQuery\n", "\n", "Learn about [Vertex AI\n", "pricing](https://cloud.google.com/vertex-ai/pricing), [BigQuery pricing](https://cloud.google.com/bigquery/pricing) and use the [Pricing\n", "Calculator](https://cloud.google.com/products/calculator/)\n", "to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load configuration settings from the setup notebook\n", "\n", "Set the constants used in this notebook and load the config settings from the `00_environment_setup.ipynb` notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "GCP_PROJECTS = !gcloud config get-value project\n", "PROJECT_ID = GCP_PROJECTS[0]\n", "BUCKET_NAME = f\"{PROJECT_ID}-fraudfinder\"\n", "config = !gsutil cat gs://{BUCKET_NAME}/config/notebook_env.py\n", "print(config.n)\n", "exec(config.n)" ] }, { "cell_type": "markdown", "metadata": { "id": "XoEqT2Y4DJmf", "tags": [] }, "source": [ "### Import libraries and define constants" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Libraries\n", "Next you will import the libraries needed for this notebook. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that currently this notebook uses KFP SDK v1, whereas the environment includes KFP v2. As an interim solution, we will downlevel KFP and the Google Cloud Pipeline Components in order to use the v1 code here as-is. See the [KFP migration guide](https://www.kubeflow.org/docs/components/pipelines/v2/migration/) for more details of moving from v1 to v2. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "! 
pip install --upgrade 'google-cloud-pipeline-components==0.3.0'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pRUOFELefqf1", "tags": [] }, "outputs": [], "source": [ "# General\n", "import os\n", "import sys\n", "import random\n", "from datetime import datetime, timedelta\n", "import json\n", "\n", "# Vertex Pipelines\n", "from typing import NamedTuple\n", "import kfp\n", "from kfp.v2 import dsl\n", "from kfp.v2.dsl import (\n", " Artifact,\n", " Dataset,\n", " Input,\n", " InputPath,\n", " Model,\n", " Output,\n", " OutputPath,\n", " Metrics,\n", " ClassificationMetrics,\n", " Condition,\n", " component,\n", ")\n", "from kfp.v2 import compiler\n", "\n", "from google.cloud import aiplatform as vertex_ai\n", "from google_cloud_pipeline_components import aiplatform as vertex_ai_components\n", "from kfp.v2.google.client import AIPlatformClient as VertexAIClient" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "print(\"kfp version:\", kfp.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Variables" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Components variables\n", "BASE_IMAGE = \"python:3.7\"\n", "COMPONENTS_DIR = os.path.join(os.curdir, \"pipelines\", \"components\")\n", "INGEST_FEATURE_STORE = f\"{COMPONENTS_DIR}/ingest_feature_store_{ID}.yaml\"\n", "EVALUATE = f\"{COMPONENTS_DIR}/evaluate_{ID}.yaml\"\n", "\n", "# Pipeline variables\n", "PIPELINE_NAME = f\"fraud-finder-xgb-pipeline-{ID}\"\n", "PIPELINE_DIR = os.path.join(os.curdir, \"pipelines\")\n", "PIPELINE_ROOT = f\"gs://{BUCKET_NAME}/pipelines\"\n", "PIPELINE_PACKAGE_PATH = f\"{PIPELINE_DIR}/pipeline_{ID}.json\"\n", "\n", "# Feature Store component variables\n", "BQ_DATASET = \"tx\"\n", "READ_INSTANCES_TABLE = f\"ground_truth_{ID}\"\n", "READ_INSTANCES_URI = f\"bq://{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}\"\n", "\n", "# Dataset component variables\n", "DATASET_NAME = f\"fraud_finder_dataset_{ID}\"\n", "\n", "# Training component variables\n", "JOB_NAME = f\"fraudfinder-train-xgb-{ID}\"\n", "MODEL_NAME = f\"{MODEL_NAME}_xgb_pipeline_{ID}\"\n", "CONTAINER_URI = \"us-docker.pkg.dev/vertex-ai/training/xgboost-cpu.1-1:latest\"\n", "MODEL_SERVING_IMAGE_URI = (\n", " \"us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-1:latest\"\n", ")\n", "ARGS = json.dumps([\"--bucket\", f\"gs://{BUCKET_NAME}\"])\n", "IMAGE_REPOSITORY = f\"fraudfinder-{ID}\"\n", "IMAGE_NAME = \"dask-xgb-classificator\"\n", "IMAGE_TAG = \"v1\"\n", "IMAGE_URI = f\"us-central1-docker.pkg.dev/{PROJECT_ID}/{IMAGE_REPOSITORY}/{IMAGE_NAME}:{IMAGE_TAG}\" # TODO: get it from config\n", "\n", "# Evaluation component variables\n", "METRICS_URI = f\"gs://{BUCKET_NAME}/deliverables/metrics.json\"\n", "AVG_PR_THRESHOLD = 0.2\n", "AVG_PR_CONDITION = \"avg_pr_condition\"\n", "\n", "# Endpoint variables\n", "ENDPOINT_NAME = f\"{ENDPOINT_NAME}_xgb_pipeline_{ID}\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Initialize the Vertex AI SDK\n", "Initialize the Vertex AI SDK for Python for your project and corresponding bucket." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Vertex AI SDK\n", "vertex_ai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!gsutil ubla set on gs://{BUCKET_NAME}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create directories \n", "Create a directory for you pipeline and pipeline components. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!mkdir -p -m 777 $PIPELINE_DIR $COMPONENTS_DIR" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a end-to-end Pipeline and execute it on Vertex AI Pipelines.\n", "\n", "We will build a pipeline that you will execute using [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction). Vertex AI Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. Authoring ML Pipelines that run on Vertex AI pipelines can be done in two different ways:\n", "\n", "* [Tensorflow Extended](https://www.tensorflow.org/tfx/guide)\n", "* [Kubeflow Pipelines SDK](https://kubeflow-pipelines.readthedocs.io/en/1.8.13/)\n", "\n", "Based on your preference you can choose between the two options. This notebook will only focus on Kubeflow Pipelines.\n", "\n", "If you don't have familiarity in authoring pipelines in Vertex AI Pipelines, we suggest the following resources:\n", "* [Introduction to Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction)\n", "* [Build a Pipeline in Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/build-pipeline)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define Custom Components for your pipeline\n", "\n", "We will use a mix of prebuilt (Google Cloud Pipeline Components) and custom components in this notebook. The difference is:\n", "\n", "* Prebuilt components are official [Google Cloud Pipeline Components](https://cloud.google.com/vertex-ai/docs/pipelines/components-introduction)(GCPC). The GCPC Library provides a set of prebuilt components that are production quality, consistent, performant, and easy to use in Vertex AI Pipelines.\n", "* As you will build in the cell below, a data scientist or ML engineer typically authored the custom component. This means you have more control over the component (container) code. In this case, it's a Python-function-based component. You also have the option to build a component yourself by packaging code into a container.\n", "\n", "In the following two cells, you will build two custom components:\n", "\n", " *Feature Store component.\n", "\n", " *Evaluation component." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "#### Define feature store component\n", "\n", "Notice that the component assumes that containes the entities-timestamps \"query\" is already created." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Feature Store\n", "Next you will build a custom component using the [KFP SDK](https://kubeflow-pipelines.readthedocs.io/en/1.8.13/). Here you will take a Python function and create a component out of it. This component will take features from the Vertex AI Feature Store and output them on Google Cloud Storage (GCS). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "@component(\n", " output_component_file=INGEST_FEATURE_STORE,\n", " base_image=BASE_IMAGE,\n", " packages_to_install=[\"google-cloud-aiplatform==1.21.0\"],\n", ")\n", "def ingest_features_gcs(\n", " project_id: str,\n", " region: str,\n", " bucket_name: str,\n", " feature_store_id: str,\n", " read_instances_uri: str,\n", ") -> NamedTuple(\"Outputs\", [(\"snapshot_uri_paths\", str),],):\n", " # Libraries --------------------------------------------------------------------------------------------------------------------------\n", " from datetime import datetime\n", " import glob\n", " import urllib\n", " import json\n", "\n", " # Feature Store\n", " from google.cloud.aiplatform import Featurestore, EntityType, Feature\n", "\n", " # Variables --------------------------------------------------------------------------------------------------------------------------\n", " timestamp = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", " api_endpoint = region + \"-aiplatform.googleapis.com\"\n", " bucket = urllib.parse.urlsplit(bucket_name).netloc\n", " export_uri = (\n", " f\"{bucket_name}/data/snapshots/{timestamp}\" # format as new gsfuse requires\n", " )\n", " export_uri_path = f\"/gcs/{bucket}/data/snapshots/{timestamp}\"\n", " customer_entity = \"customer\"\n", " terminal_entity = \"terminal\"\n", " serving_feature_ids = {customer_entity: [\"*\"], terminal_entity: [\"*\"]}\n", "\n", " print(timestamp)\n", " print(bucket)\n", " print(export_uri)\n", " print(export_uri_path)\n", " print(customer_entity)\n", " print(terminal_entity)\n", " print(serving_feature_ids)\n", "\n", " # Main -------------------------------------------------------------------------------------------------------------------------------\n", "\n", " ## Define the feature store resource path\n", " feature_store_resource_path = (\n", " f\"projects/{project_id}/locations/{region}/featurestores/{feature_store_id}\"\n", " )\n", " print(\"Feature Store: \\t\", feature_store_resource_path)\n", "\n", " ## Run batch job request\n", " try:\n", " ff_feature_store = Featurestore(feature_store_resource_path)\n", " ff_feature_store.batch_serve_to_gcs(\n", " gcs_destination_output_uri_prefix=export_uri,\n", " gcs_destination_type=\"csv\",\n", " serving_feature_ids=serving_feature_ids,\n", " read_instances_uri=read_instances_uri,\n", " pass_through_fields=[\"tx_fraud\", \"tx_amount\"],\n", " )\n", " except Exception as error:\n", " print(error)\n", "\n", " # Store metadata\n", " snapshot_pattern = f\"{export_uri_path}/*.csv\"\n", " snapshot_files = glob.glob(snapshot_pattern)\n", " snapshot_files_fmt = [p.replace(\"/gcs/\", \"gs://\") for p in snapshot_files]\n", " snapshot_files_string = json.dumps(snapshot_files_fmt)\n", "\n", " component_outputs = NamedTuple(\n", " \"Outputs\",\n", " [\n", " (\"snapshot_uri_paths\", str),\n", " ],\n", " )\n", "\n", " print(snapshot_pattern)\n", " print(snapshot_files)\n", " print(snapshot_files_fmt)\n", " print(snapshot_files_string)\n", "\n", " return component_outputs(snapshot_files_string)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Define an evaluate custom component\n", "Next you will build a custom component that will evaluate our XGBoost model. This component will output `avg_precision_score` so that it can be used downstream for validating the model before deployment. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "@component(output_component_file=EVALUATE)\n", "def evaluate_model(\n", " model_in: Input[Artifact],\n", " metrics_uri: str,\n", " meta_metrics: Output[Metrics],\n", " graph_metrics: Output[ClassificationMetrics],\n", " model_out: Output[Artifact],\n", ") -> NamedTuple(\"Outputs\", [(\"metrics_thr\", float),],):\n", " # Libraries --------------------------------------------------------------------------------------------------------------------------\n", " import json\n", "\n", " # Variables --------------------------------------------------------------------------------------------------------------------------\n", " metrics_path = metrics_uri.replace(\"gs://\", \"/gcs/\")\n", " labels = [\"not fraud\", \"fraud\"]\n", "\n", " # Main -------------------------------------------------------------------------------------------------------------------------------\n", " with open(metrics_path, mode=\"r\") as json_file:\n", " metrics = json.load(json_file)\n", "\n", " ## metrics\n", " fpr = metrics[\"fpr\"]\n", " tpr = metrics[\"tpr\"]\n", " thrs = metrics[\"thrs\"]\n", " c_matrix = metrics[\"confusion_matrix\"]\n", " avg_precision_score = metrics[\"avg_precision_score\"]\n", " f1 = metrics[\"f1_score\"]\n", " lg_loss = metrics[\"log_loss\"]\n", " prec_score = metrics[\"precision_score\"]\n", " rec_score = metrics[\"recall_score\"]\n", "\n", " meta_metrics.log_metric(\"avg_precision_score\", avg_precision_score)\n", " meta_metrics.log_metric(\"f1_score\", f1)\n", " meta_metrics.log_metric(\"log_loss\", lg_loss)\n", " meta_metrics.log_metric(\"precision_score\", prec_score)\n", " meta_metrics.log_metric(\"recall_score\", rec_score)\n", " graph_metrics.log_roc_curve(fpr, tpr, thrs)\n", " graph_metrics.log_confusion_matrix(labels, c_matrix)\n", "\n", " ## model metadata\n", " model_framework = \"xgb.dask\"\n", " model_type = \"DaskXGBClassifier\"\n", " model_user = \"author\"\n", " model_function = \"classification\"\n", " model_out.metadata[\"framework\"] = model_framework\n", " model_out.metadata[\"type\"] = model_type\n", " model_out.metadata[\"model function\"] = model_function\n", " model_out.metadata[\"modified by\"] = model_user\n", "\n", " component_outputs = NamedTuple(\n", " \"Outputs\",\n", " [\n", " (\"metrics_thr\", float),\n", " ],\n", " )\n", "\n", " return component_outputs(float(avg_precision_score))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Author your pipeline\n", "Next you will author the pipeline using the KFP SDK. 
This pipeline consists of the following steps:\n", "\n", "* Ingest features\n", "* Create Vertex AI Dataset\n", "* Train XGBoost model\n", "* Evaluate model\n", "* Condition\n", "* Create endpoint\n", "* Deploy model into endpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "@dsl.pipeline(\n", " pipeline_root=PIPELINE_ROOT,\n", " name=PIPELINE_NAME,\n", ")\n", "def pipeline(\n", " project_id: str = PROJECT_ID,\n", " region: str = REGION,\n", " bucket_name: str = f\"gs://{BUCKET_NAME}\",\n", " feature_store_id: str = FEATURESTORE_ID,\n", " read_instances_uri: str = READ_INSTANCES_URI,\n", " replica_count: int = 1,\n", " machine_type: str = \"n1-standard-4\",\n", " train_split: float = 0.8,\n", " test_split: float = 0.1,\n", " val_split: float = 0.1,\n", " metrics_uri: str = METRICS_URI,\n", " thold: float = AVG_PR_THRESHOLD,\n", "):\n", " # Ingest data from featurestore\n", " ingest_features_op = ingest_features_gcs(\n", " project_id=project_id,\n", " region=region,\n", " bucket_name=bucket_name,\n", " feature_store_id=feature_store_id,\n", " read_instances_uri=read_instances_uri,\n", " )\n", "\n", " # create dataset\n", " dataset_create_op = vertex_ai_components.TabularDatasetCreateOp(\n", " project=project_id,\n", " display_name=DATASET_NAME,\n", " gcs_source=ingest_features_op.outputs[\"snapshot_uri_paths\"],\n", " ).after(ingest_features_op)\n", "\n", " # custom training job component - script\n", " train_model_op = vertex_ai_components.CustomContainerTrainingJobRunOp(\n", " display_name=JOB_NAME,\n", " model_display_name=MODEL_NAME,\n", " container_uri=IMAGE_URI,\n", " staging_bucket=bucket_name,\n", " dataset=dataset_create_op.outputs[\"dataset\"],\n", " base_output_dir=bucket_name,\n", " args=ARGS,\n", " replica_count=replica_count,\n", " machine_type=machine_type,\n", " training_fraction_split=train_split,\n", " validation_fraction_split=val_split,\n", " test_fraction_split=test_split,\n", " model_serving_container_image_uri=MODEL_SERVING_IMAGE_URI,\n", " project=project_id,\n", " location=region,\n", " ).after(dataset_create_op)\n", "\n", " # evaluate component\n", " evaluate_model_op = evaluate_model(\n", " model_in=train_model_op.outputs[\"model\"], metrics_uri=metrics_uri\n", " ).after(train_model_op)\n", "\n", " # if threshold on avg_precision_score\n", " with Condition(\n", " evaluate_model_op.outputs[\"metrics_thr\"] > thold, name=AVG_PR_CONDITION\n", " ):\n", " # create endpoint\n", " create_endpoint_op = vertex_ai_components.EndpointCreateOp(\n", " display_name=ENDPOINT_NAME, project=project_id\n", " ).after(evaluate_model_op)\n", "\n", " # deploy the model\n", " custom_model_deploy_op = vertex_ai_components.ModelDeployOp(\n", " model=train_model_op.outputs[\"model\"],\n", " endpoint=create_endpoint_op.outputs[\"endpoint\"],\n", " deployed_model_display_name=MODEL_NAME,\n", " dedicated_resources_machine_type=machine_type,\n", " dedicated_resources_min_replica_count=replica_count,\n", " ).after(create_endpoint_op)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Compile and run the pipeline\n", "After authoring the pipeline you can use the compiler to compile the pipeline. 
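\n", "\n", "The compiler serializes the pipeline function into a pipeline spec at `PIPELINE_PACKAGE_PATH`, which you then submit as the `template_path` of a `PipelineJob` in the cells below. If you later want to experiment with different pipeline parameters without recompiling, `PipelineJob` also accepts a `parameter_values` mapping that overrides the defaults declared in the pipeline function. A minimal sketch (the override value is only an example, not required for this lab):\n", "\n", "```python\n", "# Sketch: override a pipeline parameter at submission time (optional).\n", "job = vertex_ai.PipelineJob(\n", "    display_name=PIPELINE_NAME,\n", "    template_path=PIPELINE_PACKAGE_PATH,\n", "    pipeline_root=PIPELINE_ROOT,\n", "    parameter_values={\"thold\": 0.3},  # example: demand a higher average precision score\n", "    enable_caching=False,\n", ")\n", "job.run(sync=True)\n", "```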
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# compile the pipeline\n", "pipeline_compiler = compiler.Compiler()\n", "pipeline_compiler.compile(pipeline_func=pipeline, package_path=PIPELINE_PACKAGE_PATH)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next you can use the Vertex AI SDK to create a job on Vertex AI Pipelines. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# instantiate pipeline representation\n", "pipeline_job = vertex_ai.PipelineJob(\n", " display_name=PIPELINE_NAME,\n", " template_path=PIPELINE_PACKAGE_PATH,\n", " pipeline_root=PIPELINE_ROOT,\n", " enable_caching=False,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# submit the pipeline run (may take ~20 minutes for the first run)\n", "pipeline_job.run(sync=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### END\n", "\n", "Now you can go to the next notebook `07_deployment.ipynb` and explore deployment using Cloud Build" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "notebook_template.ipynb", "toc_visible": true }, "environment": { "kernel": "python3", "name": "tf2-cpu.2-11.m116", "type": "gcloud", "uri": "gcr.io/deeplearning-platform-release/tf2-cpu.2-11:m116" }, "kernelspec": { "display_name": "Python 3 (Local)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" }, "toc-autonumbering": false, "toc-showmarkdowntxt": true }, "nbformat": 4, "nbformat_minor": 4 }