ai-ml/vertex-ai-first-model-in-production/vertex-ai-first-model-deployed.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ur8xi4C7S06n" }, "outputs": [], "source": [ "# Copyright 2022 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "JAPoU8Sm5E6e" }, "source": [ "<table align=\"left\">\n", "\n", " <td>\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/bigquery_ml/bqml-online-prediction.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n", " </a>\n", " </td>\n", " <td>\n", " <a href=\"https://github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/bigquery_ml/bqml-online-prediction.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n", " View on GitHub\n", " </a>\n", " </td>\n", " <td>\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/bigquery_ml/bqml-online-prediction.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n", " Open in Vertex AI Workbench\n", " </a>\n", " </td> \n", "</table>" ] }, { "cell_type": "markdown", "metadata": { "id": "tvgnzT1CKxrO" }, "source": [ "## Overview\n", "\n", "This notebook is aimed at data analysts and data scientists who have data in BigQuery, want to train a model using BigQuery ML, register the model to Vertex AI Model Registry, and deploy it to an endpoint for real-time prediction. 
\n", "\n", "### Dataset\n", "\n", "The dataset, <a href=\"https://console.cloud.google.com/bigquery?project=bigquery-public-data&d=ga4_obfuscated_sample_ecommerce&p=bigquery-public-data&page=dataset\" target=\"_blank\">available publicly on BigQuery</a>, comes from obfuscated <a href=\"https://support.google.com/analytics/answer/10937659\" target=\"_blank\">Google Analytics 4 data</a> from the <a href=\"https://shop.googlemerchandisestore.com/\" target=\"_blank\">Google Merchandise Store</a>).\n", "\n", "### Objective\n", "\n", "In this tutorial, you will learn how to train and deploy a churn prediction model for real-time inference, with the data in BigQuery and model trained using BigQuery ML, registered to Vertex AI Model Registry, and deployed to an endpoint on Vertex AI for online predictions.\n", "\n", "This tutorial uses the following Google Cloud data analytics and ML services:\n", "\n", "- BigQuery\n", "- BigQuery ML\n", "- Vertex AI Model Registry\n", "- Vertex endpoints\n", "- Vertex AI Pipelines\n", "\n", "The steps performed include:\n", "\n", "- Using Python & SQL to query the public data in BigQuery\n", "- Preparing the data for modeling\n", "- Training a classification model using BigQuery ML and registering it to Vertex AI Model Registry\n", "- Inspecting the model on Vertex AI Model Registry\n", "- Deploying the model to an endpoint on Vertex AI\n", "- Formalize your ML workflow using Vertex AI Pipelines \n", "- Making sample online predictions to the model endpoint\n", "\n", "### Costs \n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* BigQuery\n", "* BigQuery ML\n", "* Vertex AI\n", "\n", "Learn about <a href=\"https://cloud.google.com/bigquery/pricing\" target=\"_blank\">BigQuery Pricing</a>, <a href=\"https://cloud.google.com/bigquery-ml/pricing\" target=\"_blank\">BigQuery ML pricing</a>, <a href=\"https://cloud.google.com/vertex-ai/pricing\" target=\"_blank\">Vertex AI\n", "pricing</a>, and use the <a href=\"https://cloud.google.com/products/calculator/\" target=\"_blank\">Pricing\n", "Calculator</a>\n", "to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "ze4-nDLfK4pw" }, "source": [ "### Set up your local development environment\n", "\n", "**If you are using Colab or Vertex AI Workbench Notebooks**, your environment already meets\n", "all the requirements to run this notebook. You can skip this step." ] }, { "cell_type": "markdown", "metadata": { "id": "gCuSR8GkAgzl" }, "source": [ "**Otherwise**, make sure your environment meets this notebook's requirements.\n", "You need the following:\n", "\n", "* The Google Cloud SDK\n", "* Git\n", "* Python 3\n", "* virtualenv\n", "* Jupyter notebook running in a virtual environment with Python 3\n", "\n", "The Google Cloud guide to <a href=\"https://cloud.google.com/python/setup\" target=\"_blank\">Setting up a Python development\n", "environment</a> and the <a href=\"https://jupyter.org/install\" target=\"_blank\">Jupyter\n", "installation guide</a> provide detailed instructions\n", "for meeting these requirements. The following steps provide a condensed set of\n", "instructions:\n", "\n", "1. <a href=\"https://cloud.google.com/sdk/docs/\" target=\"_blank\">Install and initialize the Cloud SDK.</a>\n", "\n", "1. <a href=\"https://cloud.google.com/python/setup#installing_python\" target=\"_blank\">Install Python 3.</a>\n", "\n", "1. 
<a href=\"https://cloud.google.com/python/setup#installing_and_using_virtualenv\" target=\"_blank\">Install\n", " virtualenv</a>\n", " and create a virtual environment that uses Python 3. Activate the virtual environment.\n", "\n", "1. To install Jupyter, run `pip3 install jupyter` on the\n", "command-line in a terminal shell.\n", "\n", "1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n", "\n", "1. Open this notebook in the Jupyter Notebook Dashboard." ] }, { "cell_type": "markdown", "metadata": { "id": "i7EUnXsZhAGF" }, "source": [ "### Install additional packages\n", "\n", "Install the following packages required to execute this notebook. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2b4ef9b72d43" }, "outputs": [], "source": [ "import os\n", "\n", "# The Vertex AI Workbench Notebook product has specific requirements\n", "IS_WORKBENCH_NOTEBOOK = os.getenv(\"DL_ANACONDA_HOME\")\n", "IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(\n", " \"/opt/deeplearning/metadata/env_version\"\n", ")\n", "\n", "# Vertex AI Notebook requires dependencies to be installed with '--user'\n", "USER_FLAG = \"\"\n", "if IS_WORKBENCH_NOTEBOOK:\n", " USER_FLAG = \"--user\"\n", "\n", "! pip3 install --upgrade google-cloud-aiplatform {USER_FLAG} -q google-cloud-bigquery db-dtypes\n", "! pip3 install --upgrade kfp {USER_FLAG} -q\n", "! pip3 install --upgrade google-cloud-pipeline-components {USER_FLAG} -q" ] }, { "cell_type": "markdown", "metadata": { "id": "hhq5zEbGg0XX" }, "source": [ "### Restart the kernel\n", "\n", "After you install the additional packages, you need to restart the notebook kernel so it can find the packages." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EzrelQZ22IZj" }, "outputs": [], "source": [ "# Automatically restart kernel after installs\n", "import os\n", "\n", "if not os.getenv(\"IS_TESTING\"):\n", " # Automatically restart kernel after installs\n", " import IPython\n", "\n", " app = IPython.Application.instance()\n", " app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "lWEdiXsJg0XY" }, "source": [ "## Before you begin" ] }, { "cell_type": "markdown", "metadata": { "id": "BF1j6f9HApxa" }, "source": [ "### Set up your Google Cloud project\n", "\n", "**The following steps are required, regardless of your notebook environment.**\n", "\n", "1. <a href=\"https://console.cloud.google.com/cloud-resource-manager\" target=\"_blank\">Select or create a Google Cloud project</a>. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n", "\n", "1. <a href=\"https://cloud.google.com/billing/docs/how-to/modify-project\" target=\"_blank\">Make sure that billing is enabled for your project</a>.\n", "\n", "1. <a href=\"https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com\" target=\"_blank\">Enable the Vertex AI API</a>.\n", "\n", "1. If you are running this notebook locally, you will need to install the <a href=\"https://cloud.google.com/sdk\" target=\"_blank\">Cloud SDK</a>.\n", "\n", "1. Enter your project ID in the cell below. Then run the cell to make sure the\n", "Cloud SDK uses the right project for all the commands in this notebook.\n", "\n", "**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands." 
] }, { "cell_type": "markdown", "metadata": { "id": "WReHDGG5g0XY" }, "source": [ "#### Set your project ID\n", "\n", "**If you don't know your project ID**, you may be able to get your project ID using `gcloud`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oM1iC_MfAts1" }, "outputs": [], "source": [ "PROJECT_ID = \"[YOUR-PROJECT-ID]\"\n", "\n", "# Get your Google Cloud project ID from gcloud\n", "import os\n", "\n", "if not os.getenv(\"IS_TESTING\"):\n", " shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n", " PROJECT_ID = shell_output[0]\n", " print(\"Project ID: \", PROJECT_ID)" ] }, { "cell_type": "markdown", "metadata": { "id": "qJYoRfYng0XZ" }, "source": [ "Otherwise, set your project ID here." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "riG_qUokg0XZ" }, "outputs": [], "source": [ "if PROJECT_ID == \"\" or PROJECT_ID is None:\n", " PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Project ID: \", PROJECT_ID)" ] }, { "cell_type": "markdown", "metadata": { "id": "region" }, "source": [ "#### Region\n", "\n", "You can also change the `REGION` variable, which is used for operations\n", "throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n", "\n", "- Americas: `us-central1`\n", "- Europe: `europe-west4`\n", "- Asia Pacific: `asia-east1`\n", "\n", "You might not be able to use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\n", "\n", "Learn more about <a href=\"https://cloud.google.com/vertex-ai/docs/general/locations\" target=\"_blank\">Vertex AI regions</a>." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "region" }, "outputs": [], "source": [ "REGION = \"[your-region]\" # @param {type: \"string\"}\n", "\n", "if REGION == \"[your-region]\":\n", " REGION = \"us-central1\"" ] }, { "cell_type": "markdown", "metadata": { "id": "06571eb4063b" }, "source": [ "#### UUID\n", "\n", "If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a uuid for each instance session, and append it onto the name of resources you create in this tutorial." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "697568e92bd6" }, "outputs": [], "source": [ "import random\n", "import string\n", "\n", "# Generate a uuid of a specifed length(default=8)\n", "def generate_uuid(length: int = 8) -> str:\n", " return \"\".join(random.choices(string.ascii_lowercase + string.digits, k=length))\n", "\n", "UUID = generate_uuid()" ] }, { "cell_type": "markdown", "metadata": { "id": "dr--iN2kAylZ" }, "source": [ "### Authenticate your Google Cloud account\n", "\n", "**If you are using Vertex AI Workbench Notebooks**, your environment is already\n", "authenticated. Skip this step." ] }, { "cell_type": "markdown", "metadata": { "id": "sBCra4QMA2wR", "tags": [] }, "source": [ "**If you are using Colab**, run the cell below and follow the instructions\n", "when prompted to authenticate your account via oAuth.\n", "\n", "**Otherwise**, follow these steps:\n", "\n", "1. 
In the Cloud Console, go to the <a href=\"https://console.cloud.google.com/apis/credentials/serviceaccountkey\" target=\"_blank\">**Create service account key** page</a>.\n", "\n", "2. Click **Create service account**.\n", "\n", "3. In the **Service account name** field, enter a name, and\n", "   click **Create**.\n", "\n", "4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\n", "into the filter box, and select\n", "   **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n", "\n", "5. Click **Create**. A JSON file that contains your key downloads to your\n", "local environment.\n", "\n", "6. Enter the path to your service account key as the\n", "`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "PyQmSRbKA8r-" }, "outputs": [], "source": [ "# If you are running this notebook in Colab, run this cell and follow the\n", "# instructions to authenticate your GCP account. This provides access to your\n", "# Cloud Storage bucket and lets you submit training jobs and prediction\n", "# requests.\n", "\n", "import os\n", "import sys\n", "\n", "IS_COLAB = \"google.colab\" in sys.modules\n", "\n", "# If on Vertex AI Workbench, then don't execute this code\n", "if not os.path.exists(\"/opt/deeplearning/metadata/env_version\") and not os.getenv(\n", "    \"DL_ANACONDA_HOME\"\n", "):\n", "    if IS_COLAB:\n", "        from google.colab import auth as google_auth\n", "\n", "        google_auth.authenticate_user()\n", "\n", "    # If you are running this notebook locally, replace the string below with the\n", "    # path to your service account key and run this cell to authenticate your GCP\n", "    # account.\n", "    elif not os.getenv(\"IS_TESTING\"):\n", "        %env GOOGLE_APPLICATION_CREDENTIALS ''" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "set_service_account" }, "outputs": [], "source": [ "SERVICE_ACCOUNT = \"[your-service-account]\"  # @param {type:\"string\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "autoset_service_account" }, "outputs": [], "source": [ "if (\n", "    SERVICE_ACCOUNT == \"\"\n", "    or SERVICE_ACCOUNT is None\n", "    or SERVICE_ACCOUNT == \"[your-service-account]\"\n", "):\n", "    # Get your service account from gcloud\n", "    if not IS_COLAB:\n", "        shell_output = !gcloud auth list 2>/dev/null\n", "        SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n", "\n", "    else:  # IS_COLAB\n", "        shell_output = ! gcloud projects describe $PROJECT_ID\n", "        project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n", "        SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n", "\n", "    print(\"Service Account:\", SERVICE_ACCOUNT)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a Cloud Storage bucket\n", "\n", "**The following steps are required, regardless of your notebook environment.**\n", "\n", "To submit a pipeline job using the Vertex AI SDK, you need a pipeline root directory; you use a Cloud Storage bucket for this.\n", "\n", "Set the name of your Cloud Storage bucket below. It must be unique across all\n", "Cloud Storage buckets."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n", "BUCKET_URI = f\"gs://{BUCKET_NAME}\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n", " BUCKET_NAME = PROJECT_ID + \"-vertex-\" + UUID\n", " BUCKET_URI = f\"gs://{BUCKET_NAME}\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Only run the next cell if you haven't create a bucket already. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! gsutil ls -al $BUCKET_URI" ] }, { "cell_type": "markdown", "metadata": { "id": "XoEqT2Y4DJmf" }, "source": [ "### Import libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pRUOFELefqf1" }, "outputs": [], "source": [ "import google.cloud.aiplatform as vertex_ai\n", "from google.cloud import bigquery\n", "import pandas as pd\n", "from typing import Union" ] }, { "cell_type": "markdown", "metadata": { "id": "init_aip:mbsdk,all" }, "source": [ "### Initialize Vertex AI and BigQuery SDKs for Python\n", "\n", "Initialize the Vertex AI SDK for Python for your project and corresponding bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "init_aip:mbsdk,all" }, "outputs": [], "source": [ "vertex_ai.init(project=PROJECT_ID, location='us-central1')" ] }, { "cell_type": "markdown", "metadata": { "id": "83859376c893" }, "source": [ "Create the BigQuery client." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0ab485806b17" }, "outputs": [], "source": [ "bq_client = bigquery.Client(project=PROJECT_ID)" ] }, { "cell_type": "markdown", "metadata": { "id": "f94734ac9312" }, "source": [ "Use a helper function for sending queries to BigQuery." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "e364dab1d353" }, "outputs": [], "source": [ "# Wrapper to use BigQuery client to run query/job, return job ID or result as DF\n", "def run_bq_query(sql: str) -> Union[str, pd.DataFrame]:\n", " \"\"\"\n", " Input: SQL query, as a string, to execute in BigQuery\n", " Returns the query results as a pandas DataFrame, or error, if any\n", " \"\"\"\n", "\n", " # Try dry run before executing query to catch any errors\n", " job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)\n", " bq_client.query(sql, job_config=job_config)\n", "\n", " # If dry run succeeds without errors, proceed to run query\n", " job_config = bigquery.QueryJobConfig()\n", " client_result = bq_client.query(sql, job_config=job_config)\n", "\n", " job_id = client_result.job_id\n", "\n", " # Wait for query/job to finish running. then get & return data frame\n", " df = client_result.result().to_arrow().to_pandas()\n", " print(f\"Finished job_id: {job_id}\")\n", " return df" ] }, { "cell_type": "markdown", "metadata": { "id": "a4a686de97f5" }, "source": [ "## BigQuery ML introduction\n", "\n", "BigQuery ML (BQML) provides the capability to train ML tabular models, such as classification, regression, forecasting, and matrix factorization, in BigQuery using SQL syntax directly. 
] }, { "cell_type": "markdown", "metadata": { "id": "a4a686de97f5" }, "source": [ "## BigQuery ML introduction\n", "\n", "BigQuery ML (BQML) provides the capability to train tabular ML models, such as classification, regression, forecasting, and matrix factorization models, directly in BigQuery using SQL syntax. BigQuery ML uses the scalable infrastructure of BigQuery, so you don't need to set up additional infrastructure for training or batch serving.\n", "\n", "Learn more in the <a href=\"https://cloud.google.com/bigquery-ml/docs\" target=\"_blank\">BigQuery ML documentation</a>." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "088e3b9577b3" }, "outputs": [], "source": [ "BQ_DATASET_NAME = f\"ga4_churnprediction_{UUID}\"\n", "\n", "sql_create_dataset = f\"\"\"CREATE SCHEMA IF NOT EXISTS {BQ_DATASET_NAME}\"\"\"\n", "\n", "print(sql_create_dataset)\n", "\n", "run_bq_query(sql_create_dataset)" ] }, { "cell_type": "markdown", "metadata": { "id": "13b6ce9f8d8b" }, "source": [ "### Inspect the pre-processed Google Analytics 4 data" ] }, { "cell_type": "markdown", "metadata": { "id": "49dd00d5fbe5" }, "source": [ "Inspect the data, which has been pre-processed from <a href=\"https://support.google.com/analytics/answer/10937659\" target=\"_blank\">Google Analytics 4 data from the Google Merchandise Store</a> so that it can be used for classification. For more information on how this data was prepared, read <a href=\"https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml\" target=\"_blank\">this blog post</a>.\n", "\n", "As seen below, each row represents a single user, and the columns represent their demographic features, their aggregated behavioral features in the first 24 hours of visiting the Google Merchandise Store, and the label (whether the user churned or returned any time after the first 24 hours)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sql_inspect = \"\"\"\n", "SELECT\n", "    *\n", "FROM\n", "    `bqmlpublic.demo_ga4churnprediction.training_data`\n", "LIMIT\n", "    100\n", "\"\"\"\n", "run_bq_query(sql_inspect)" ] }, { "cell_type": "markdown", "metadata": { "id": "02f304053600" }, "source": [ "### Train a classification model using BigQuery ML" ] }, { "cell_type": "markdown", "metadata": { "id": "566f3395f20b" }, "source": [ "The query below trains a logistic regression model using BigQuery ML. BigQuery resources are used to train the model.\n", "\n", "In the `OPTIONS` parameter:\n", "* with `model_registry=\"vertex_ai\"`, the BigQuery ML model is automatically <a href=\"https://cloud.google.com/vertex-ai/docs/model-registry/model-registry-bqml\" target=\"_blank\">registered to Vertex AI Model Registry</a>, which lets you view all of your registered models and their versions on Google Cloud in one place.\n", "\n", "* `vertex_ai_model_version_aliases` lets you set aliases that help you keep track of your model versions (<a href=\"https://cloud.google.com/vertex-ai/docs/model-registry/model-alias\" target=\"_blank\">documentation</a>)."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "414c45011c1f" }, "outputs": [], "source": [ "# this cell may take ~1 min to run\n", "BQML_MODEL_NAME = f\"bqml_model_churn_{UUID}\"\n", "\n", "sql_train_model_bqml = f\"\"\"\n", "CREATE OR REPLACE MODEL {BQ_DATASET_NAME}.{BQML_MODEL_NAME} \n", "OPTIONS(\n", " MODEL_TYPE=\"LOGISTIC_REG\",\n", " input_label_cols=[\"churned\"],\n", " model_registry=\"vertex_ai\",\n", " vertex_ai_model_version_aliases=['logistic_reg', 'experimental']\n", ") AS\n", "\n", "SELECT\n", " * EXCEPT(user_first_engagement, user_pseudo_id)\n", "FROM\n", " bqmlpublic.demo_ga4churnprediction.training_data\n", "\"\"\"\n", "\n", "print(sql_train_model_bqml)\n", "\n", "run_bq_query(sql_train_model_bqml)" ] }, { "cell_type": "markdown", "metadata": { "id": "a90e98c72a05" }, "source": [ "### Model evaluation" ] }, { "cell_type": "markdown", "metadata": { "id": "2aaaae772f67" }, "source": [ "With the model created, you can now evaluate the logistic regression model. Behind the scenes, BigQuery ML automatically <a href=\"https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create#data_split_method\" target=\"_blank\">split the data</a>, which makes it easier to quickly train and evaluate models." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "a1f8ac93d570" }, "outputs": [], "source": [ "sql_evaluate_model = f\"\"\"\n", "SELECT\n", " *\n", "FROM\n", " ML.EVALUATE(MODEL {BQ_DATASET_NAME}.{BQML_MODEL_NAME})\n", "\"\"\"\n", "\n", "print(sql_evaluate_model)\n", "\n", "run_bq_query(sql_evaluate_model)" ] }, { "cell_type": "markdown", "metadata": { "id": "d9f807a50f38" }, "source": [ "These metrics help you understand the performance of the model. \n", "\n", "There are various metrics for logistic regression and other model types (full list of metrics can be found in the <a href=\"https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-evaluate#mlevaluate_output\" target=\"_blank\">documentation</a>)." ] }, { "cell_type": "markdown", "metadata": { "id": "7e806ebc48a2" }, "source": [ "### Batch prediction (with Explainable AI)" ] }, { "cell_type": "markdown", "metadata": { "id": "d31605829283" }, "source": [ "Make a batch prediction in BigQuery ML on the original training data to check the probability of churn for each of the users, as seen in the `probability` column, with the predicted label under the `predicted_churn` column.\n", "\n", "<a href=\"https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-explain-predict\" target=\"_blank\">ML.EXPLAIN_PREDICT</a> has built-in <a href=\"https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-xai-overview\" target=\"_blank\">Explainable AI</a>. This allows you to see the top contributing features to each prediction and interpret how it was computed." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2d500fdbfb44" }, "outputs": [], "source": [ "sql_explain_predict = f\"\"\"\n", "SELECT\n", " *\n", "FROM\n", " ML.EXPLAIN_PREDICT(MODEL {BQ_DATASET_NAME}.{BQML_MODEL_NAME},\n", " (SELECT * FROM bqmlpublic.demo_ga4churnprediction.training_data LIMIT 100)\n", " )\n", "\"\"\"\n", "\n", "print(sql_explain_predict)\n", "\n", "run_bq_query(sql_explain_predict)" ] }, { "cell_type": "markdown", "metadata": { "id": "fa1f96c0f452" }, "source": [ "Since the `top_feature_attributions` is a nested column, you can unnest the array (<a href=\"https://cloud.google.com/bigquery/docs/reference/standard-sql/arrays\" target=\"_blank\">documentation</a>) into separate rows for each of the features. In other words, since ML.EXPLAIN_PREDICT provides the top 5 most important features, using `UNNEST` results in 5 rows per prediction:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "278b3441084b" }, "outputs": [], "source": [ "sql_explain_predict = f\"\"\"\n", "SELECT\n", " tfa.*,\n", " predicted_churned,\n", " probability,\n", " baseline_prediction_value,\n", " prediction_value,\n", " approximation_error,\n", " user_pseudo_id\n", "FROM\n", " ML.EXPLAIN_PREDICT(MODEL {BQ_DATASET_NAME}.{BQML_MODEL_NAME},\n", " (SELECT * FROM bqmlpublic.demo_ga4churnprediction.training_data LIMIT 100)\n", " ),\n", " UNNEST(top_feature_attributions) as tfa\n", "WHERE\n", " user_pseudo_id = \"7666337.2408476627\"\n", "\"\"\"\n", "\n", "print(sql_explain_predict)\n", "\n", "run_bq_query(sql_explain_predict)" ] }, { "cell_type": "markdown", "metadata": { "id": "dc0c1c1b03f9" }, "source": [ "### Inspect the model on Vertex AI Model Registry" ] }, { "cell_type": "markdown", "metadata": { "id": "0144d67a298e" }, "source": [ "When the model was trained in BigQuery ML, the line `model_registry=\"vertex_ai\"` registered the model to Vertex AI Model Registry automatically upon completion.\n", "\n", "You can view the model on the <a href=\"https://console.cloud.google.com/vertex-ai/models\" target=\"_blank\">Vertex AI Model Registry page</a>, or use the code below to check that it was successfully registered:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "b00664302839" }, "outputs": [], "source": [ "model = vertex_ai.Model(model_name=BQML_MODEL_NAME)\n", "\n", "print(model.gca_resource)" ] }, { "cell_type": "markdown", "metadata": { "id": "89455f708f54" }, "source": [ "### Deploy the model to an endpoint" ] }, { "cell_type": "markdown", "metadata": { "id": "b6120dcc1ff6" }, "source": [ "While BigQuery ML supports batch prediction with <a href=\"https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-predict\" target=\"_blank\">ML.PREDICT</a> and <a href=\"https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-explain-predict\" target=\"_blank\">ML.EXPLAIN_PREDICT</a>, BigQuery ML is not suitable for real-time predictions where you need low latency predictions with potentially high frequency of requests.\n", "\n", "In other words, deploying the BigQuery ML model to an endpoint enables you to do online predictions." ] }, { "cell_type": "markdown", "metadata": { "id": "e1ab7e1ac83c" }, "source": [ "#### Create a Vertex AI endpoint" ] }, { "cell_type": "markdown", "metadata": { "id": "0a61ea55f685" }, "source": [ "To deploy your model to an endpoint, you will first need to create an endpoint before you deploy the model to it." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "b7d80941cf30" }, "outputs": [], "source": [ "ENDPOINT_NAME = f\"{BQML_MODEL_NAME}-endpoint\"\n", "\n", "endpoint = vertex_ai.Endpoint.create(\n", " display_name=ENDPOINT_NAME,\n", " project=PROJECT_ID,\n", " location=REGION,\n", ")\n", "\n", "print(endpoint.display_name)\n", "print(endpoint.resource_name)" ] }, { "cell_type": "markdown", "metadata": { "id": "b58a104207d2" }, "source": [ "#### List endpoints" ] }, { "cell_type": "markdown", "metadata": { "id": "951ed1693f6b" }, "source": [ "List the endpoints to make sure it has successfully been created. (You can also view your endpoints on the <a href=\"https://console.cloud.google.com/vertex-ai/endpoints\" target=\"_blank\">Vertex AI Endpoints page</a>)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "c4d7707910bc" }, "outputs": [], "source": [ "endpoint.list()" ] }, { "cell_type": "markdown", "metadata": { "id": "ba0d40b26cfb" }, "source": [ "#### Deploy model to Vertex endpoint" ] }, { "cell_type": "markdown", "metadata": { "id": "6a90be5b77a2" }, "source": [ "With the new endpoint, you can now deploy your model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# deploying the model to the endpoint may take 10-15 minutes\n", "model.deploy(endpoint=endpoint)" ] }, { "cell_type": "markdown", "metadata": { "id": "c303d779477b" }, "source": [ "You can also check on the status of your model by visiting the <a href=\"https://console.cloud.google.com/vertex-ai/endpoints\" target=\"_blank\">Vertex AI Endpoints page</a>." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Formalization: Orchestrate, Deploy and Run your Vertex AI Pipeline. \n", "Now its time take the steps from the previous cells and formalize them into a Machine Learning Pipeline that can help with orchestration, automation and scaling. In the next cells we will:\n", "\n", "* Build a custom component that will upload the BigQuery ML into our Endpoint. \n", "* Define our end-to-end Pipeline using Kubeflow Pipelines. \n", "* Leverage pre-build and custom components in our Pipeline. \n", "* Compile and submit the Pipeline to Vertex AI Pipelines for a run. \n", "\n", "Below you can see the Vertex AI Pipeline execution you will visualize in the Cloud console.\n", "\n", "<img src=\"https://github.com/GoogleCloudPlatform/devrel-demos/blob/main/ai-ml/vertex-ai-first-model-in-production/img/vertex-ai-pipeline.png?raw=1\">" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First you need to define the constants that we will use for our pipeline. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "PIPELINE_ROOT = f\"{BUCKET_URI}/root\"\n", "PIPELINE_DISPLAY_NAME=\"bqml_model_churn_pipeline\"\n", "PUBLIC_DATASET=\"bqmlpublic.demo_ga4churnprediction.training_data\"\n", "BQML_MODEL_NAME_PIPELINE = f\"bqml_model_pipeline_churn_{UUID}\"\n", "ENDPOINT_NAME = f\"{BQML_MODEL_NAME_PIPELINE}-endpoint\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next you will import the libraries needed. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Formalization: Orchestrate, deploy, and run your Vertex AI pipeline\n", "Now it's time to take the steps from the previous cells and formalize them into a machine learning pipeline that helps with orchestration, automation, and scaling. In the next cells you:\n", "\n", "* Build a custom component that deploys the BigQuery ML model to the endpoint.\n", "* Define the end-to-end pipeline using Kubeflow Pipelines (KFP).\n", "* Use prebuilt and custom components in the pipeline.\n", "* Compile and submit the pipeline to Vertex AI Pipelines for a run.\n", "\n", "Below you can see the Vertex AI Pipelines execution graph that you will visualize in the Cloud console.\n", "\n", "<img src=\"https://github.com/GoogleCloudPlatform/devrel-demos/blob/main/ai-ml/vertex-ai-first-model-in-production/img/vertex-ai-pipeline.png?raw=1\">" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, define the constants used by the pipeline." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "PIPELINE_ROOT = f\"{BUCKET_URI}/root\"\n", "PIPELINE_DISPLAY_NAME = \"bqml_model_churn_pipeline\"\n", "PUBLIC_DATASET = \"bqmlpublic.demo_ga4churnprediction.training_data\"\n", "BQML_MODEL_NAME_PIPELINE = f\"bqml_model_pipeline_churn_{UUID}\"\n", "ENDPOINT_NAME = f\"{BQML_MODEL_NAME_PIPELINE}-endpoint\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, import the required libraries. You use KFP and prebuilt components from `google_cloud_pipeline_components`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# KFP and Google Cloud pipeline components\n", "from kfp import dsl\n", "from kfp.v2 import compiler\n", "from kfp.v2.dsl import Artifact, Input, component\n", "from google_cloud_pipeline_components.v1.bigquery import (\n", "    BigqueryCreateModelJobOp, BigqueryEvaluateModelJobOp)\n", "from google_cloud_pipeline_components.v1.endpoint import EndpointCreateOp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@component(\n", "    base_image=\"python:3.8-slim\",\n", "    packages_to_install=[\"google-cloud-aiplatform\"],\n", ")\n", "def upload_model_endpoint(\n", "    project: str,\n", "    location: str,\n", "    bq_model_name: str,\n", "    endpoint: Input[Artifact],\n", "):\n", "    \"\"\"Deploys the registered BigQuery ML model to the endpoint created earlier in the pipeline.\n", "\n", "    bq_model_name: a fully-qualified model resource name or model ID.\n", "        Example: \"projects/123/locations/us-central1/models/456\" or\n", "        \"456\" when project and location are initialized or passed.\n", "    \"\"\"\n", "\n", "    from google.cloud import aiplatform as vertex_ai\n", "\n", "    vertex_ai.init(project=project, location=location)\n", "\n", "    model = vertex_ai.Model(model_name=bq_model_name)\n", "\n", "    # Resolve the endpoint from the resource name stored in the\n", "    # endpoint artifact's metadata\n", "    target_endpoint = vertex_ai.Endpoint(endpoint.metadata[\"resourceName\"])\n", "\n", "    # deploy() blocks until the model is deployed\n", "    model.deploy(\n", "        endpoint=target_endpoint,\n", "        min_replica_count=1,\n", "        max_replica_count=1,\n", "    )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@dsl.pipeline(\n", "    name=\"bqml-pipeline-churn\",\n", "    description=\"Trains and deploys a BigQuery ML model to predict churn\",\n", "    pipeline_root=PIPELINE_ROOT,\n", ")\n", "def bqml_pipeline_churn(\n", "    model: str = BQML_MODEL_NAME_PIPELINE,\n", "    project_id: str = PROJECT_ID,\n", "    region: str = REGION,\n", "):\n", "    # The BigQuery jobs run in the US multi-region because the public\n", "    # dataset is stored there\n", "    bq_model_op = BigqueryCreateModelJobOp(\n", "        project=project_id,\n", "        location=\"US\",\n", "        query=f\"\"\"CREATE OR REPLACE MODEL `{BQ_DATASET_NAME}.{BQML_MODEL_NAME_PIPELINE}`\n", "        OPTIONS (\n", "            MODEL_TYPE='LOGISTIC_REG',\n", "            input_label_cols=['churned'],\n", "            model_registry='vertex_ai',\n", "            vertex_ai_model_version_aliases=['logistic_reg', 'experimental']\n", "        ) AS SELECT * EXCEPT(user_first_engagement, user_pseudo_id) FROM {PUBLIC_DATASET}\"\"\",\n", "    )\n", "\n", "    bq_evaluate_model_op = BigqueryEvaluateModelJobOp(\n", "        project=project_id, location=\"US\", model=bq_model_op.outputs[\"model\"]\n", "    )\n", "\n", "    endpoint_create_op = EndpointCreateOp(\n", "        project=project_id,\n", "        location=region,\n", "        display_name=ENDPOINT_NAME,\n", "    ).after(bq_evaluate_model_op)\n", "\n", "    _ = upload_model_endpoint(\n", "        project=project_id,\n", "        location=region,\n", "        bq_model_name=model,\n", "        endpoint=endpoint_create_op.outputs[\"endpoint\"],\n", "    )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "PACKAGE_PATH = \"bqml_model_churn.json\"\n", "\n", "compiler.Compiler().compile(\n", "    pipeline_func=bqml_pipeline_churn, package_path=PACKAGE_PATH\n", ")"
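] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now submit the compiled pipeline definition to Vertex AI Pipelines for execution. A full run (training, evaluation, endpoint creation, and model deployment) can take roughly 15-30 minutes, most of it in the deployment step."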
"metadata": {}, "outputs": [], "source": [ "job = vertex_ai.PipelineJob(\n", " display_name=PIPELINE_DISPLAY_NAME,\n", " template_path=PACKAGE_PATH,\n", " pipeline_root=PIPELINE_ROOT,\n", " enable_caching=True\n", ")\n", "\n", "print (job.run())\n", "\n", "! rm bqml_model_churn.json" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Make online predictions to the endpoint\n", "Using a sample of the training data, you can test the endpoint to make online predictions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "endpoint_name = vertex_ai.Endpoint.list(filter=f'display_name=\"{ENDPOINT_NAME}\"')\n", "endpoint_name\n", "\n", "print(endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_sample_requests_list = [\n", " {\n", " \"country\": \"Turkey\",\n", " \"operating_system\": \"Web\",\n", " \"language\": \"None\",\n", " \"cnt_user_engagement\": 28,\n", " \"cnt_page_view\": 37,\n", " \"cnt_view_item\": 6,\n", " \"cnt_view_promotion\": 15,\n", " \"cnt_select_promotion\": 4,\n", " \"cnt_add_to_cart\": 0,\n", " \"cnt_begin_checkout\": 0,\n", " \"cnt_add_shipping_info\": 0,\n", " \"cnt_add_payment_info\": 0,\n", " \"cnt_purchase\": 0,\n", " \"month\": 1,\n", " \"julianday\": 1,\n", " \"dayofweek\": 6,\n", " },\n", " {\n", " \"country\": \"Macao\",\n", " \"operating_system\": \"Web\",\n", " \"language\": \"None\",\n", " \"cnt_user_engagement\": 2,\n", " \"cnt_page_view\": 4,\n", " \"cnt_view_item\": 0,\n", " \"cnt_view_promotion\": 0,\n", " \"cnt_select_promotion\": 0,\n", " \"cnt_add_to_cart\": 0,\n", " \"cnt_begin_checkout\": 0,\n", " \"cnt_add_shipping_info\": 0,\n", " \"cnt_add_payment_info\": 0,\n", " \"cnt_purchase\": 0,\n", " \"month\": 1,\n", " \"julianday\": 16,\n", " \"dayofweek\": 7,\n", " },\n", "]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prediction = endpoint.predict(df_sample_requests_list)\n", "print(prediction)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can then extract the predictions from the prediction response" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prediction.predictions" ] }, { "cell_type": "markdown", "metadata": { "id": "TpV-iwP9qw9c" }, "source": [ "## Cleaning up\n", "\n", "To clean up all Google Cloud resources used in this project, you can <a href=\"https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects\" target=\"_blank\">delete the Google Cloud\n", "project</a> you used for the tutorial.\n", "\n", "Otherwise, you can delete the individual resources you created in this tutorial:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sx_vKniMq9ZX" }, "outputs": [], "source": [ "# Undeploy model from endpoint and delete endpoint\n", "endpoint.undeploy_all()\n", "endpoint.delete()\n", "\n", "# Delete BigQuery dataset, including the BigQuery ML model\n", "! 
] }, { "cell_type": "markdown", "metadata": { "id": "TpV-iwP9qw9c" }, "source": [ "## Cleaning up\n", "\n", "To clean up all Google Cloud resources used in this project, you can <a href=\"https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects\" target=\"_blank\">delete the Google Cloud project</a> you used for the tutorial.\n", "\n", "Otherwise, you can delete the individual resources you created in this tutorial:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sx_vKniMq9ZX" }, "outputs": [], "source": [ "# Undeploy the models and delete both endpoints: the one created manually\n", "# and the one created by the pipeline\n", "for display_name in [f\"{BQML_MODEL_NAME}-endpoint\", ENDPOINT_NAME]:\n", "    for ep in vertex_ai.Endpoint.list(filter=f'display_name=\"{display_name}\"'):\n", "        ep.undeploy_all()\n", "        ep.delete()\n", "\n", "# Delete the BigQuery dataset, including the BigQuery ML model\n", "! bq rm -r -f $PROJECT_ID:$BQ_DATASET_NAME\n", "\n", "# Delete the Cloud Storage bucket used as the pipeline root\n", "! gsutil -m rm -r $BUCKET_URI" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "bqml-online-prediction.ipynb", "toc_visible": true }, "environment": { "kernel": "python3", "name": "common-cpu.m102", "type": "gcloud", "uri": "gcr.io/deeplearning-platform-release/base-cpu:m102" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.12" } }, "nbformat": 4, "nbformat_minor": 4 }