{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "12cb1b47a1b7"
},
"outputs": [],
"source": [
"# Copyright 2022 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "78fad7a79180"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ Vertex AI Workbench user-managed notebooks is <a href=\"https://cloud.google.com/vertex-ai/docs/deprecations\">deprecated</a>. On January 30, 2025, support for user-managed notebooks will end and the ability to create user-managed notebooks instances will be removed. Existing instances will continue to function but patches, updates, and upgrades won't be available. To continue using Vertex AI Workbench, complete the steps on this page to <a href=\"https://cloud.google.com/vertex-ai/docs/workbench/user-managed/migrate-to-instances\">migrate your user-managed notebooks instances to Vertex AI Workbench instances.</a>⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "565d260c8eda"
},
"source": [
"# Inventory prediction on ecommerce data using Vertex AI\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/inventory-prediction/inventory_prediction.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fworkbench%2Finventory-prediction%2Finventory_prediction.ipynb\">\n",
" <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/workbench/inventory-prediction/inventory_prediction.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/inventory-prediction/inventory_prediction.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "962e636b5cee"
},
"source": [
"**_NOTE_**: This notebook has been tested in the following environment:\n",
"\n",
"* Python version = 3.9"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0c8480c6f627"
},
"source": [
"## Overview\n",
"\n",
"This notebook explores how to build a machine learning model for inventory prediction on an ecommerce dataset. This notebook includes steps for deploying the model on Vertex AI using the Vertex AI SDK and analyzing the deployed model using the What-If Tool. Learn more about [What-If Tool](https://pair-code.github.io/what-if-tool/).\n",
"\n",
"Learn more about [Vertex AI Workbench](https://cloud.google.com/vertex-ai/docs/workbench/introduction) and [Vertex AI Training](https://cloud.google.com/vertex-ai/docs/training/custom-training).\n",
"\n",
"**Note**: The What-IF tool widget is tested on Colab and Vertex AI workbench's managed instances. It may not work on user-managed instances."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c056e02ac509"
},
"source": [
"### Objective\n",
"\n",
"This tutorial shows you how to do exploratory data analysis, preprocess data, train model, evaluate model, deploy model, configure What-If Tool.\n",
"\n",
"This tutorial uses the following Google Cloud ML services and resources:\n",
"\n",
"- Vertex AI Model\n",
"- Vertex AI Endpoint\n",
"- Vertex Explainable AI\n",
"- Google Cloud Storage\n",
"- BigQuery\n",
"\n",
"The steps performed include:\n",
"\n",
"* Load the dataset from BigQuery using the \"BigQuery in Notebooks\" integration.\n",
"* Analyze the dataset.\n",
"* Preprocess the features in the dataset.\n",
"* Build a random forest classifier model that predicts whether a product is sold in the next 60 days.\n",
"* Evaluate the model.\n",
"* Deploy the model using Vertex AI.\n",
"* Configure and test with the What-If Tool."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "172228308dfb"
},
"source": [
"### Dataset\n",
"\n",
"The dataset used in this notebook consists of inventory data since 2018 for an ecommerce store. This dataset is publicly available as a BigQuery table named `looker-private-demo.ecomm.inventory_items`, which can be accessed by pinning the `looker-private-demo` project in BigQuery. The table consists of various fields related to ecommerce inventory items such as `id`, `product_id`, `cost`, when the item arrived at the store, and when it was sold. This notebook makes use of the following fields assuming their purpose is as described below:\n",
"\n",
"- `id`: The ID of the inventory item\n",
"- `product_id`: The ID of the product\n",
"- `created_at`: When the item arrived in the inventory/at the store\n",
"- `sold_at`: When the item was sold (*Null if still unsold*)\n",
"- `cost`: Cost at which the item was sold\n",
"- `product_category`: Category of the product\n",
"- `product_brand`: Brand of the product (dropped later as there are too many values)\n",
"- `product_retail_price`: Price of the product\n",
"- `product_department`: Department to which the product belonged to\n",
"- `product_distribution_center_id`: Which distribution center (an approximation of regions) the product was sold from\n",
"\n",
"The dataset is encoded to hide any private information. For example, ID numbers ranging from 1 to 10 are assigned to the distribution centers."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "12e1391884e6"
},
"source": [
"### Costs\n",
"\n",
"This tutorial uses the following billable components of Google Cloud:\n",
"\n",
"- Vertex AI\n",
"- BigQuery\n",
"- Cloud Storage\n",
"\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing), [BigQuery pricing](https://cloud.google.com/bigquery/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "61RBz8LLbxCR"
},
"source": [
"## Get started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "No17Cw5hgx12"
},
"source": [
"### Install Vertex AI SDK for Python and other required packages\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a1edbc2cf821"
},
"outputs": [],
"source": [
"! pip3 install --quiet --upgrade google-cloud-aiplatform \\\n",
" google-cloud-storage \\\n",
" seaborn \\\n",
" pandas \\\n",
" fsspec \\\n",
" witwidget \\\n",
" pyarrow \\\n",
" db-dtypes \\\n",
" gcsfs \\\n",
" matplotlib\n",
"\n",
"! pip3 install scikit-learn==1.2 protobuf==3.20.1"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R5Xep4W9lq-Z"
},
"source": [
"### Restart runtime (Colab only)\n",
"\n",
"To use the newly installed packages, you must restart the runtime on Google Colab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XRvKdaPDTznN"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SbmM4z7FOBpM"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dmWOrTJ3gx13"
},
"source": [
"### Authenticate your notebook environment (Colab only)\n",
"\n",
"Authenticate your environment on Google Colab.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NyKGtVQjgx13"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DF4l8DTdWgPY"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project. Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "be175254a715"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
"\n",
"# set the project id\n",
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e5755d1a554f"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d2de92accb67"
},
"outputs": [],
"source": [
"BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b72bfdf29dae"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a4453435d115"
},
"outputs": [],
"source": [
"! gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9c1d4f460b09"
},
"source": [
"### Import libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a36a786f4538"
},
"outputs": [],
"source": [
"import os\n",
"import pickle\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"import sklearn.metrics as metrics\n",
"from google.cloud import aiplatform, storage\n",
"from google.cloud.bigquery import Client\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import MinMaxScaler\n",
"from witwidget.notebook.visualization import WitConfigBuilder, WitWidget"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5abe40a9d335"
},
"source": [
"### Initialize Vertex AI SDK for Python\n",
"\n",
"Initialize the Vertex AI SDK for Python for your project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6a16f0d6e5a1"
},
"outputs": [],
"source": [
"aiplatform.init(project=PROJECT_ID, location=LOCATION, staging_bucket=BUCKET_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "34f2b9e8bc9a"
},
"source": [
"### Load the required data from BigQuery\n",
"\n",
"The following cell integrates with BigQuery data from the same project through the Vertex AI's \"BigQuery in Notebooks\" integration. It can run an SQL query as it would run in the BigQuery console. \n",
"\n",
"**Note:** This feature only works in a notebook running on a Vertex AI Workbench managed-notebook instance."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "13863517f8c8"
},
"source": [
"#@bigquery\n",
"SELECT \n",
" id,\n",
" product_id, \n",
" created_at,\n",
" sold_at,\n",
" cost,\n",
" product_category,\n",
" product_brand,\n",
" product_retail_price,\n",
" product_department,\n",
" product_distribution_center_id\n",
"FROM \n",
"looker-private-demo.ecomm.inventory_items"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "50ed21be5ab7"
},
"source": [
"After executing the above cell, clicking **Query and load as DataFrame** button adds the following python cell that loads the queried data into a pandas dataframe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e89e5832338a"
},
"outputs": [],
"source": [
"# The following two lines are only necessary to run once.\n",
"# Comment out otherwise for speed-up.\n",
"client = Client(project=PROJECT_ID)\n",
"\n",
"query = \"\"\"SELECT \n",
" id,\n",
" product_id, \n",
" created_at,\n",
" sold_at,\n",
" cost,\n",
" product_category,\n",
" product_brand,\n",
" product_retail_price,\n",
" product_department,\n",
" product_distribution_center_id\n",
"FROM \n",
"looker-private-demo.ecomm.inventory_items\"\"\"\n",
"job = client.query(query)\n",
"df = job.to_dataframe()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4f9fc651fa8e"
},
"source": [
"## Explore and clean the dataset\n",
"\n",
"Check the first five rows of the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "da9f53ab3559"
},
"outputs": [],
"source": [
"df.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5aa703b3f070"
},
"source": [
"Check the fields in the dataset and their data types and number of null values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b00164626bad"
},
"outputs": [],
"source": [
"df.info()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "730f6c2954f5"
},
"source": [
"Apart from the `sold_at` datetime field, there aren't any fields that consist of null values in the dataset. As you're dealing with the inventory-item data, it's absolutely plausible that there would be some items that haven't been sold yet and hence the null values.\n",
"\n",
"### Clean the datetime fields\n",
"Next, convert the date fields to a proper date format to process them in the next steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1ff70c9450ee"
},
"outputs": [],
"source": [
"# convert to proper date columns\n",
"df[\"created_at\"] = pd.to_datetime(df[\"created_at\"], format=\"%Y-%m-%d\")\n",
"df[\"sold_at\"] = pd.to_datetime(df[\"sold_at\"].dt.strftime(\"%Y-%m-%d\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "50fc83a65c71"
},
"source": [
"Check the date ranges."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "db6ac4523678"
},
"outputs": [],
"source": [
"# check the date ranges\n",
"print(\"Min-sold_at : \", df[\"sold_at\"].min())\n",
"print(\"Max-sold_at : \", df[\"sold_at\"].max())\n",
"\n",
"print(\"Min-created_at : \", df[\"created_at\"].min())\n",
"print(\"Max-created_at : \", df[\"created_at\"].max())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c8ec80d03032"
},
"source": [
"### Extract useful features\n",
"\n",
"Extract the month from the date field `created_at`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2694255a5bf2"
},
"outputs": [],
"source": [
"# calculate the month when the item has arrived\n",
"df[\"arrival_month\"] = df[\"created_at\"].dt.month"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5d9a610a5c10"
},
"source": [
"Calculate the average number of days a product is in inventory until it's sold."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5d4df4604949"
},
"outputs": [],
"source": [
"# calculate the number of days the item hasn't been sold.\n",
"df[\"shelf_days\"] = (df[\"sold_at\"] - df[\"created_at\"]).dt.days"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d276e789a9a1"
},
"source": [
"Calculate the discount percentages that apply to the products."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c143c79dc7d1"
},
"outputs": [],
"source": [
"# calculate the discount offered\n",
"df[\"discount_perc\"] = (df[\"product_retail_price\"] - df[\"cost\"]) / df[\n",
" \"product_retail_price\"\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d5d9ce9bb19b"
},
"source": [
"### Check the categorical fields\n",
"Check the unique products and their brands in the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ac2e5ca3912e"
},
"outputs": [],
"source": [
"# check total unique items\n",
"df[\"product_id\"].unique().shape, df[\"product_brand\"].unique().shape"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e667cd5fbb35"
},
"source": [
"The fields `product_id` and `product_brand` seem to have a lot of unique values. For the purpose of prediction, use `product_id` as the primary-key and `product_brand` is dropped as it has too many values/levels. \n",
"\n",
"Segregate the required numerical and categorical fields to analyze the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "042f5cd1ff2e"
},
"outputs": [],
"source": [
"categ_cols = [\n",
" \"product_category\",\n",
" \"product_department\",\n",
" \"product_distribution_center_id\",\n",
" \"arrival_month\",\n",
"]\n",
"num_cols = [\"cost\", \"product_retail_price\", \"discount_perc\", \"shelf_days\"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6fa0a2d8b5ca"
},
"source": [
"Check the count of individual categories for each categorical field."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "16da0524b2ed"
},
"outputs": [],
"source": [
"for i in categ_cols:\n",
" print(i, \" - \", df[i].unique().shape[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "268ed7b88e23"
},
"source": [
"Check the distribution of the numerical fields."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bf50a707aa90"
},
"outputs": [],
"source": [
"df[num_cols].describe().T"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8888f572c8d1"
},
"source": [
"### Visualize the data distributions\n",
"\n",
"Generate bar plots for categorical fields and histograms and box plots for numerical fields to check their distributions in the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "112fe37b915f"
},
"outputs": [],
"source": [
"for i in categ_cols:\n",
" df[i].value_counts(normalize=True).plot(kind=\"bar\")\n",
" plt.title(i)\n",
" plt.show()\n",
"\n",
"for i in num_cols:\n",
" _, ax = plt.subplots(1, 2, figsize=(10, 4))\n",
" df[i].plot(kind=\"box\", ax=ax[0])\n",
" df[i].plot(kind=\"hist\", ax=ax[1])\n",
" ax[0].set_title(i + \"-Boxplot\")\n",
" ax[1].set_title(i + \"-Histogram\")\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0250e32de345"
},
"source": [
"Most of the fields like discount, department, distribution center-id have a decent distribution. For the field `product_category`, there are some categories that don't constitute 2% of the dataset at least. Although there are outliers in some numerical fields, they are exempted from removing as there can be products that are expensive or belonging to a particular category that doesn't often see many sales. \n",
"\n",
"## Feature preprocessing\n",
"\n",
"Next, aggregate the data based on suitable categorical fields in the data and take the average number of days it took for the product to get sold. For a given `product_id`, there can be multiple item `id`'s in this dataset and you want to predict at the product level whether that particular product is going to be sold in the next couple of months. You're aggregating the data based on each of the product configurations present in this dataset like the price, cost, category and at which center it's sold. This way the model can predict whether a product with certain properties is going to be sold in the next couple of months.\n",
"\n",
"### Generate aggregate features\n",
"\n",
"For number of days a product got sold in, find the average of the `shelf_days` field."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8daf9985aa23"
},
"outputs": [],
"source": [
"groupby_cols = [\n",
" \"product_id\",\n",
" \"product_distribution_center_id\",\n",
" \"product_category\",\n",
" \"product_department\",\n",
" \"arrival_month\",\n",
" \"product_retail_price\",\n",
" \"cost\",\n",
" \"discount_perc\",\n",
"]\n",
"value_cols = [\"shelf_days\"]\n",
"\n",
"\n",
"df_prod = df[groupby_cols + value_cols].groupby(by=groupby_cols).mean().reset_index()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fbaf4c3f5a83"
},
"source": [
"Check the aggregated product level data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "74afffc4380a"
},
"outputs": [],
"source": [
"df_prod.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c9aafbab9c88"
},
"source": [
"Look for null values in the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "26523245c145"
},
"outputs": [],
"source": [
"df_prod.isna().sum() / df.shape[0]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bafab84172e7"
},
"source": [
"Only the `shelf_days` field has null values that correspond to the `product_id`'s that have no sold items. \n",
"\n",
"### Plot the data distribution\n",
"\n",
"Plot the distribution of the aggregated `shelf_days` field by generating a box plot."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "efe00052e1fd"
},
"outputs": [],
"source": [
"df_prod[\"shelf_days\"].plot(kind=\"box\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8acaa01b21bd"
},
"source": [
"Here, you can see that most of the products are sold within 60 days since they've arrived in the inventory/store. In this tutorial, you're going to train a machine learning model that predicts the probability of a product being sold within 60 days.\n",
"\n",
"### Encode the categorical fields\n",
"\n",
"Encode the the `shelf_days` field to generate the target field `sold_in_2mnt` indicating whether the product was sold in 60 days."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "383915875d38"
},
"outputs": [],
"source": [
"df_prod[\"sold_in_2mnt\"] = df_prod[\"shelf_days\"].apply(\n",
" lambda x: 1 if x >= 0 and x < 60 else 0\n",
")\n",
"df_prod[\"sold_in_2mnt\"].value_counts(normalize=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4dd380c38579"
},
"source": [
"Segregate the features into variables for model building."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "40d373151be5"
},
"outputs": [],
"source": [
"target = \"sold_in_2mnt\"\n",
"categ_cols = [\n",
" \"product_category\",\n",
" \"product_department\",\n",
" \"product_distribution_center_id\",\n",
" \"arrival_month\",\n",
"]\n",
"num_cols = [\"product_retail_price\", \"cost\", \"discount_perc\"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "87ce893f2d45"
},
"source": [
"Encode the `product_department` field."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bd68935d2a12"
},
"outputs": [],
"source": [
"df[\"product_deprtment\"] = (\n",
" df[\"product_department\"].apply(lambda x: 1 if x == \"Women\" else 0).value_counts()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6d23387828fd"
},
"source": [
"Encode the rest of the categorical fields for model building."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8bbdbfa3a4cf"
},
"outputs": [],
"source": [
"# Create dummy variables for each categ. variable\n",
"for i in categ_cols:\n",
" ml = pd.get_dummies(df_prod[i], prefix=i + \"_\", drop_first=True)\n",
" df_new = pd.concat([df_prod, ml], axis=1)\n",
"\n",
"df_new.drop(columns=categ_cols, inplace=True)\n",
"df_new.shape"
]
},
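{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, run a quick sanity check that the dummy columns used later for modeling exist. This is a minimal sketch based on the `arrival_month__` prefix generated above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sanity check: list the dummy columns generated for arrival_month\n",
"sorted(c for c in df_new.columns if c.startswith(\"arrival_month__\"))"
]
},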
{
"cell_type": "markdown",
"metadata": {
"id": "f8c85be588b4"
},
"source": [
"### Normalize the numerical fields\n",
"\n",
"Normalize the fields `product_retail_price` and `cost` to the 0-1 scale using Min-Max normalization technique."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0790352a2ce9"
},
"outputs": [],
"source": [
"scaler = MinMaxScaler()\n",
"scaler = scaler.fit(df_new[[\"product_retail_price\", \"cost\"]])\n",
"df_new[[\"product_retail_price_norm\", \"cost_norm\"]] = scaler.transform(\n",
" df_new[[\"product_retail_price\", \"cost\"]]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "86e444ca1e02"
},
"source": [
"## Train the model\n",
"\n",
"Collect the required fields from the dataframe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "360ab6634571"
},
"outputs": [],
"source": [
"cols = [\n",
" \"discount_perc\",\n",
" \"arrival_month__2\",\n",
" \"arrival_month__3\",\n",
" \"arrival_month__4\",\n",
" \"arrival_month__5\",\n",
" \"arrival_month__6\",\n",
" \"arrival_month__7\",\n",
" \"arrival_month__8\",\n",
" \"arrival_month__9\",\n",
" \"arrival_month__10\",\n",
" \"arrival_month__11\",\n",
" \"arrival_month__12\",\n",
" \"product_retail_price_norm\",\n",
" \"cost_norm\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "500051a07ae8"
},
"source": [
"Split the data into train(80%) and test(20%) sets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "44ab347a30da"
},
"outputs": [],
"source": [
"X = df_new[cols].copy()\n",
"y = df_new[target].copy()\n",
"train_X, test_X, train_y, test_y = train_test_split(\n",
" X, y, train_size=0.8, test_size=0.2, random_state=7\n",
")"
]
},
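{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, check that the class balance is similar across the two splits. As a minimal sketch, the mean of the binary (0/1) target gives the positive-class fraction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# positive-class fraction in each split\n",
"print(\"Train positive fraction:\", train_y.mean())\n",
"print(\"Test positive fraction:\", test_y.mean())"
]
},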
{
"cell_type": "markdown",
"metadata": {
"id": "c29820548054"
},
"source": [
"Create a [Random Forest classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) object and fit it on the training data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5b88ca090e82"
},
"outputs": [],
"source": [
"model = RandomForestClassifier(random_state=7, n_estimators=100)\n",
"model.fit(train_X[cols], train_y)"
]
},
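{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch of what the model learned, inspect the impurity-based feature importances that scikit-learn's `RandomForestClassifier` exposes through its `feature_importances_` attribute."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rank the features by impurity-based importance\n",
"importances = pd.Series(model.feature_importances_, index=cols)\n",
"print(importances.sort_values(ascending=False))"
]
},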
{
"cell_type": "markdown",
"metadata": {
"id": "ef4e662be2c3"
},
"source": [
"## Evaluate the model\n",
"\n",
"Predict on the test set and check the accuracy of the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "22f02b56dfb4"
},
"outputs": [],
"source": [
"pred_y = model.predict(test_X[cols])\n",
"\n",
"# Calculate the accuracy as our performance metric\n",
"accuracy = metrics.accuracy_score(test_y, pred_y)\n",
"print(\"Accuracy: \", accuracy)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6f8bd1bc40c8"
},
"source": [
"Generate the confusion-matrix on the test set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c6f38583e47b"
},
"outputs": [],
"source": [
"confusion = metrics.confusion_matrix(test_y, pred_y)\n",
"print(f\"Confusion matrix:\\n{confusion}\")\n",
"\n",
"print(\"\\nNormalized confusion matrix:\")\n",
"for row in confusion:\n",
" print(row / row.sum())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e5be657cf605"
},
"source": [
"The model performance can be stated in terms of specificity (True-negative rate) and sensitivity (True-positive rate). In the normalized confusion matrix, the top left value represents the True-negative rate and the bottom right value represents the True-positive rate.\n",
"\n",
"## Save the model to a Cloud Storage bucket\n",
"\n",
"Next, save the model to the created Cloud Storage bucket for deployment."
]
},
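{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a quick sketch: sklearn's confusion_matrix orders entries as [[tn, fp], [fn, tp]]\n",
"tn, fp, fn, tp = confusion.ravel()\n",
"print(\"Specificity (TNR):\", tn / (tn + fp))\n",
"print(\"Sensitivity (TPR):\", tp / (tp + fn))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Save the model to a Cloud Storage bucket\n",
"\n",
"Next, save the model to the created Cloud Storage bucket for deployment."
]
},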
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f133a2b37e72"
},
"outputs": [],
"source": [
"# save the trained model to a local file \"model.pkl\"\n",
"FILE_NAME = \"model.pkl\"\n",
"with open(FILE_NAME, \"wb\") as file:\n",
" pickle.dump(model, file)\n",
"\n",
"# Upload the saved model file to Cloud Storage\n",
"BLOB_PATH = \"inventory_prediction/\"\n",
"BLOB_NAME = os.path.join(BLOB_PATH, FILE_NAME)\n",
"\n",
"bucket = storage.Client().bucket(BUCKET_URI[5:])\n",
"\n",
"blob = bucket.blob(BLOB_NAME)\n",
"blob.upload_from_filename(FILE_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "870deb700dbe"
},
"source": [
"## Upload the model to Vertex AI\n",
"\n",
"Specify the following parameters to create a model in Vertex AI Model Registry:\n",
"\n",
"- `display_name`: The display name of the Model.\n",
"- `artifact_uri`: The path to the directory containing the Model artifact and any of its supporting files.\n",
"- `serving_container_image_uri`: The URI of the Model serving container.\n",
"\n",
"Learn more about [Vertex AI Model Registry](https://cloud.google.com/vertex-ai/docs/model-registry/introduction)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "abba6fb0e95e"
},
"outputs": [],
"source": [
"MODEL_DISPLAY_NAME = \"inventory-pred-model-unique\" # @param {type:\"string\"}\n",
"ARTIFACT_GCS_PATH = f\"{BUCKET_URI}/{BLOB_PATH}\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "abdf27ad5868"
},
"source": [
"Create a Vertex AI model resource.\n",
"\n",
"Ensure that the Sklearn's version for the serving container matches with the local version used for training the model. Learn more about the available [pre-built containers for Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers)."
]
},
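{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check (a minimal sketch), print the local scikit-learn version; it should be a `1.2.x` release matching the `sklearn-cpu.1-2` serving container used below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"# should print a 1.2.x version matching the serving container\n",
"print(sklearn.__version__)"
]
},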
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7f5056eebe27"
},
"outputs": [],
"source": [
"model = aiplatform.Model.upload(\n",
" display_name=MODEL_DISPLAY_NAME,\n",
" artifact_uri=ARTIFACT_GCS_PATH,\n",
" serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-2:latest\",\n",
")\n",
"\n",
"model.wait()\n",
"\n",
"print(\"Display name:\\n\", model.display_name)\n",
"print(\"Resource name:\\n\", model.resource_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b79b95b76f04"
},
"source": [
"## Create a Vertex AI Endpoint\n",
"\n",
"Set the display name for the endpoint."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3772df492ba8"
},
"outputs": [],
"source": [
"ENDPOINT_DISPLAY_NAME = \"inventory-pred-endpoint-unique\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7a7a33af9232"
},
"source": [
"Create an endpoint resource on Vertex AI."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "de9b0b9098f9"
},
"outputs": [],
"source": [
"endpoint = aiplatform.Endpoint.create(display_name=ENDPOINT_DISPLAY_NAME)\n",
"\n",
"print(\"Display name:\\n\", endpoint.display_name)\n",
"print(\"Resource name:\\n\", endpoint.resource_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8013ca32c1a3"
},
"source": [
"## Deploy the model to the created endpoint\n",
"\n",
"Specify the machine type needed for serving the deployed model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f70de6008667"
},
"outputs": [],
"source": [
"MACHINE_TYPE = \"n1-standard-2\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9a20c48a5cc9"
},
"source": [
"Deploy the model to the created endpoint."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7739e77d3a4a"
},
"outputs": [],
"source": [
"model.deploy(endpoint=endpoint, machine_type=MACHINE_TYPE)\n",
"\n",
"model.wait()\n",
"\n",
"print(\"Model display-name:\\n\", model.display_name)\n",
"print(\"Model resource-name:\\n\", model.resource_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6d4f008fd6cc"
},
"source": [
"List the models deployed to the endpoint and ensure that the inventory prediction model is listed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5ea220f20668"
},
"outputs": [],
"source": [
"endpoint.list_models()"
]
},
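{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before configuring the What-If Tool, you can sanity-check the deployment with a small online prediction request. This is a minimal sketch: it casts a couple of test rows to plain floats so they serialize cleanly as JSON instances."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# send two test rows to the deployed model\n",
"sample_instances = test_X[cols].head(2).astype(float).values.tolist()\n",
"print(endpoint.predict(instances=sample_instances).predictions)"
]
},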
{
"cell_type": "markdown",
"metadata": {
"id": "75cbc77cbcd9"
},
"source": [
"## What-If Tool\n",
"\n",
"The What-If Tool can be used to analyze the model predictions on test data. In this tutorial, the What-If Tool is configured and run on the model deployed on Vertex AI Endpoints in the previous steps.\n",
"\n",
"`WitConfigBuilder` provides the `set_ai_platform_model()` method to configure the What-If Tool with a model deployed as a version on Ai Platform models. This feature currently supports only Ai Platform but not Vertex AI models. Fortunately, there's also an option to pass a custom function for generating predictions through the `set_custom_predict_fn()` method where either the locally trained model or a function that returns predictions from a Vertex AI model can be passed.\n",
"\n",
"Learn more about [What-If Tool](https://pair-code.github.io/what-if-tool/get-started/).\n",
"\n",
"### Prepare test samples\n",
"\n",
"Save some samples from the test data for both the available classes (Fraud/not-Fraud) to analyze the model using the What-If Tool."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "81839e084a9e"
},
"outputs": [],
"source": [
"# collect some samples for each class-label from the test data\n",
"sample_size = 200\n",
"pos_samples = test_y[test_y == 1].sample(sample_size).index\n",
"neg_samples = test_y[test_y == 0].sample(sample_size).index\n",
"test_samples_y = pd.concat([test_y.loc[pos_samples], test_y.loc[neg_samples]])\n",
"test_samples_X = test_X.loc[test_samples_y.index].copy()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8974bf829a5f"
},
"source": [
"### Run the What-If Tool on the deployed Vertex AI model\n",
"\n",
"Define a function to fetch the predictions from the deployed model and run it on the created test data configuring the What-If tool."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1cf06d79a585"
},
"outputs": [],
"source": [
"# configure the target and class-labels\n",
"TARGET_FEATURE = target\n",
"LABEL_VOCAB = [\"not-sold\", \"sold\"]\n",
"\n",
"# function to return predictions from the deployed Model\n",
"\n",
"\n",
"def endpoint_predict_sample(instances: list):\n",
" prediction = endpoint.predict(instances=instances)\n",
" preds = [[1 - i, i] for i in prediction.predictions]\n",
" return preds\n",
"\n",
"\n",
"# Combine the features and labels into one array for the What-If Tool\n",
"test_examples = np.hstack(\n",
" (test_samples_X.to_numpy(), test_samples_y.to_numpy().reshape(-1, 1))\n",
")\n",
"\n",
"# Configure the WIT with the prediction function\n",
"config_builder = (\n",
" WitConfigBuilder(test_examples.tolist(), test_samples_X.columns.tolist() + [target])\n",
" .set_custom_predict_fn(endpoint_predict_sample)\n",
" .set_target_feature(TARGET_FEATURE)\n",
" .set_label_vocab(LABEL_VOCAB)\n",
")\n",
"\n",
"# run the WIT-widget\n",
"WitWidget(config_builder, height=800)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6638bb7caaeb"
},
"source": [
"### Understanding the What-If tool\n",
"\n",
"In the **Datapoint editor** tab, you can highlight a dot in the result set and ask the What-If Tool to pick the \"nearest counterfactual\". This is a row of data closest to the row of data you selected but with the opposite outcome. Features in the left-hand table are editable and can show what tweaks are needed to get a particular row of data to flip from one outcome to another. For example, altering the *discount_percentage* feature would show how it impacts the prediction. \n",
"\n",
"<img src=\"images/Datapoint_editor.png\">\n",
"\n",
"Under the **Performance & Fairness** tab, you can slice the prediction results by a second variable. This allows digging deeper and understanding how different segments of the data react to the model's predictions. For example, in the following image, the higher the *discount_percentage*, the lesser the false negatives and the lower the *discount_percentage*, the higher the false positives. \n",
"\n",
"<img src=\"images/Performance_and_fairness.png\">\n",
"\n",
"The **Features** tab in the end provides you an intuitive and interactive way to understand the features present in the data. Similar to the exploratory data analysis steps performed in this notebook, the What-If Tool provides a visual and statistical description on the features.\n",
"\n",
"<img src=\"images/features.PNG\">"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d17d23fab0b1"
},
"source": [
"## Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial:\n",
"\n",
"- Vertex AI Endpoint\n",
"- Vertex AI Model\n",
"- Cloud Storage bucket (set `delete_bucket` to *True* for deletion)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "481638c98a10"
},
"outputs": [],
"source": [
"# Undeploy the model\n",
"endpoint.undeploy_all()\n",
"\n",
"# Delete the endpoint\n",
"endpoint.delete()\n",
"\n",
"# Delete the model\n",
"model.delete()\n",
"\n",
"# Delete locally generated files\n",
"! rm -rf model.pkl\n",
"\n",
"# Set this to true only if you'd like to delete your bucket\n",
"delete_bucket = False\n",
"\n",
"if delete_bucket:\n",
" ! gsutil -m rm -r $BUCKET_URI"
]
}
],
"metadata": {
"colab": {
"name": "inventory_prediction.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}