{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ur8xi4C7S06n" }, "outputs": [], "source": [ "# Copyright 2022 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "JAPoU8Sm5E6e" }, "source": [ "# Vertex AI: Track parameters and metrics for locally trained models\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/comparing_local_trained_models.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fcomparing_local_trained_models.ipynb\">\n", " <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td> \n", " <td style=\"text-align: center\">\n", " <a 
href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/experiments/comparing_local_trained_models.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/comparing_local_trained_models.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</table>" ] }, { "cell_type": "markdown", "metadata": { "id": "7c5ad9693bbb" }, "source": [ "## Overview\n", "\n", "As a data scientist, you may start running model experiments locally in your notebook. Depending on the framework you use, you need to track parameters, training time series, and evaluation metrics so that you can explain the modeling approach you have chosen.\n", "\n", "Learn more about [Vertex AI Experiments](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments)."
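] }, { "cell_type": "markdown", "metadata": { "id": "eval_metrics_sketch" }, "source": [ "As a small, locally runnable illustration of the evaluation metrics tracked in this notebook, the mean absolute error (MAE) and mean squared error (MSE) can be computed directly with NumPy. The values below are made up for demonstration only:\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Hypothetical ground-truth and predicted MPG values\n", "y_true = np.array([18.0, 25.0, 30.0])\n", "y_pred = np.array([17.0, 26.5, 29.0])\n", "\n", "mae = float(np.mean(np.abs(y_true - y_pred)))  # mean absolute error\n", "mse = float(np.mean((y_true - y_pred) ** 2))   # mean squared error\n", "```\n", "\n", "These are the same quantities that Keras reports as `mae` and `mse` during training and that get logged to Vertex AI Experiments later in this tutorial."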
] }, { "cell_type": "markdown", "metadata": { "id": "209510393d45" }, "source": [ "### Objective\n", "\n", "In this tutorial, you learn how to use Vertex AI Experiments to compare and evaluate model experiments.\n", "\n", "This tutorial uses the following Google Cloud ML services and resources:\n", "\n", "- Vertex AI Workbench\n", "- Vertex AI Experiments\n", "\n", "The steps performed include:\n", "\n", "- Log the model parameters\n", "- Log the loss and metrics on every epoch to TensorBoard\n", "- Log the evaluation metrics\n" ] }, { "cell_type": "markdown", "metadata": { "id": "6f14730ed9af" }, "source": [ "### Dataset\n", "\n", "In this notebook, you train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg).\n" ] }, { "cell_type": "markdown", "metadata": { "id": "tvgnzT1CKxrO" }, "source": [ "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Vertex AI TensorBoard\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI\n", "pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n", "pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n", "Calculator](https://cloud.google.com/products/calculator/)\n", "to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "61RBz8LLbxCR" }, "source": [ "## Get started" ] }, { "cell_type": "markdown", "metadata": { "id": "No17Cw5hgx12" }, "source": [ "### Install Vertex AI SDK for Python and other required packages\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IaYsrh0Tc17L" }, "outputs": [], "source": [ "! 
pip3 install --upgrade tensorflow==2.8 \\\n", " protobuf==3.20.3 \\\n", " google-cloud-aiplatform \\\n", " matplotlib \\\n", " pandas \\\n", " 'numpy<2' -q --no-warn-conflicts" ] }, { "cell_type": "markdown", "metadata": { "id": "R5Xep4W9lq-Z" }, "source": [ "### Restart runtime (Colab only)\n", "\n", "To use the newly installed packages, you must restart the runtime on Google Colab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XRvKdaPDTznN" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " import IPython\n", "\n", " app = IPython.Application.instance()\n", " app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "SbmM4z7FOBpM" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n", "</div>\n" ] }, { "cell_type": "markdown", "metadata": { "id": "dmWOrTJ3gx13" }, "source": [ "### Authenticate your notebook environment (Colab only)\n", "\n", "Authenticate your environment on Google Colab.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NyKGtVQjgx13" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", "\n", " from google.colab import auth\n", "\n", " auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "DF4l8DTdWgPY" }, "source": [ "### Set Google Cloud project information and initialize Vertex AI SDK for Python\n", "\n", "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com). Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)." 
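] }, { "cell_type": "markdown", "metadata": { "id": "config_sanity_check" }, "source": [ "Before calling any cloud APIs, it can help to sanity-check the configuration values locally. The following is a minimal sketch with placeholder values, not an official validation step:\n", "\n", "```python\n", "import re\n", "\n", "# Placeholder values; replace with your own project ID and region\n", "PROJECT_ID = \"my-project\"\n", "LOCATION = \"us-central1\"\n", "\n", "assert PROJECT_ID and PROJECT_ID != \"[your-project-id]\", \"Set PROJECT_ID first\"\n", "# Vertex AI regions look like 'us-central1' or 'europe-west4'\n", "assert re.fullmatch(r\"[a-z]+-[a-z]+\\\\d+\", LOCATION), \"Unexpected region format\"\n", "```"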
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Nqwi-5ufWp_B" }, "outputs": [], "source": [ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n", "LOCATION = \"us-central1\" # @param {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "bucket:mbsdk" }, "source": [ "### Create a Cloud Storage bucket\n", "\n", "Create a storage bucket to store intermediate artifacts such as datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bucket" }, "outputs": [], "source": [ "BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "create_bucket" }, "source": [ "**If your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "create_bucket" }, "outputs": [], "source": [ "! gsutil mb -l $LOCATION -p $PROJECT_ID $BUCKET_URI" ] }, { "cell_type": "markdown", "metadata": { "id": "XoEqT2Y4DJmf" }, "source": [ "### Import libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pRUOFELefqf1" }, "outputs": [], "source": [ "import uuid\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "from google.cloud import aiplatform as vertex_ai\n", "from tensorflow.python.keras import Sequential, layers\n", "from tensorflow.python.keras.utils import data_utils" ] }, { "cell_type": "markdown", "metadata": { "id": "xtXZWmYqJ1bh" }, "source": [ "### Define constants" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JIOrI-hoJ46P" }, "outputs": [], "source": [ "EXPERIMENT_NAME = \"[your-experiment-name]\" # @param {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "jWQLXXNVN4Lv" }, "source": [ "If EXPERIMENT_NAME is not set, set a default one below:" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": { "id": "Q1QInYWOKsmo" }, "outputs": [], "source": [ "if EXPERIMENT_NAME == \"[your-experiment-name]\" or EXPERIMENT_NAME is None:\n", " EXPERIMENT_NAME = f\"my-experiment-{uuid.uuid1()}\"" ] }, { "cell_type": "markdown", "metadata": { "id": "O8XJZB3gR8eL" }, "source": [ "### Initialize Vertex AI SDK for Python\n", "\n", "Initialize the Vertex AI SDK for Python for your project and corresponding bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "o_wnT10RJ7-W" }, "outputs": [], "source": [ "vertex_ai.init(project=PROJECT_ID, location=LOCATION, staging_bucket=BUCKET_URI)" ] }, { "cell_type": "markdown", "metadata": { "id": "bba904753a2c" }, "source": [ "### Create a Vertex AI Experiment with backing Vertex AI TensorBoard\n", "\n", "You can create a Vertex AI Experiment by using the `init()` method. This automatically gets or creates the default Vertex AI TensorBoard and associates it with your Experiment.\n", "\n", "Learn more in the [Vertex AI TensorBoard overview](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "a600e376bf37" }, "outputs": [], "source": [ "vertex_ai.init(experiment=EXPERIMENT_NAME)" ] }, { "cell_type": "markdown", "metadata": { "id": "12ab31563365" }, "source": [ "## Training with Vertex AI Experiments\n", "\n", "Vertex AI enables users to track the steps (for example, preprocessing, training) of an experiment run, and to track the inputs (for example, algorithm, parameters, datasets) and outputs (for example, models, checkpoints, metrics) of those steps.\n", "\n", "To better understand how parameters and metrics are stored and organized, the following concepts are explained:\n", "\n", "1. **Experiments** describe a context that groups your runs and the artifacts you create into a logical session. For example, in this notebook you create an Experiment and log data to that experiment.\n", "\n", "1. 
**Run** represents a single path that you executed while performing an experiment. A run includes the artifacts that you used as inputs or outputs, and the parameters that you used in this execution. An Experiment can contain multiple runs.\n", "\n", "You can use the Vertex AI SDK for Python to track the metrics and parameters of models trained locally across several experiment runs.\n", "\n", "In the following example, you train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bfe39f7b7f16" }, "outputs": [], "source": [ "# Helpers ----------------------------------------------------------------------\n", "\n", "\n", "def read_data(uri):\n", " \"\"\"\n", " Read data\n", " Args:\n", " uri: path to data\n", " Returns:\n", " pandas dataframe\n", " \"\"\"\n", " dataset_path = data_utils.get_file(\"auto-mpg.data\", uri)\n", " column_names = [\n", " \"MPG\",\n", " \"Cylinders\",\n", " \"Displacement\",\n", " \"Horsepower\",\n", " \"Weight\",\n", " \"Acceleration\",\n", " \"Model Year\",\n", " \"Origin\",\n", " ]\n", " raw_dataset = pd.read_csv(\n", " dataset_path,\n", " names=column_names,\n", " na_values=\"?\",\n", " comment=\"\\t\",\n", " sep=\" \",\n", " skipinitialspace=True,\n", " )\n", " dataset = raw_dataset.dropna()\n", " dataset[\"Origin\"] = dataset[\"Origin\"].map(\n", " lambda x: {1: \"USA\", 2: \"Europe\", 3: \"Japan\"}.get(x)\n", " )\n", " dataset = pd.get_dummies(dataset, prefix=\"\", prefix_sep=\"\", dtype=float)\n", " return dataset\n", "\n", "\n", "def train_test_split(dataset, split_frac=0.8, random_state=0):\n", " \"\"\"\n", " Split data into train and test\n", " Args:\n", " dataset: pandas dataframe\n", " split_frac: fraction of data to use for training\n", " random_state: random seed\n", 
" Returns:\n", " train and test dataframes\n", " \"\"\"\n", " train_dataset = dataset.sample(frac=split_frac, random_state=random_state)\n", " test_dataset = dataset.drop(train_dataset.index)\n", " train_labels = train_dataset.pop(\"MPG\")\n", " test_labels = test_dataset.pop(\"MPG\")\n", "\n", " return train_dataset, test_dataset, train_labels, test_labels\n", "\n", "\n", "def normalize_dataset(train_dataset, test_dataset):\n", " \"\"\"\n", " Normalize data\n", " Args:\n", " train_dataset: pandas dataframe\n", " test_dataset: pandas dataframe\n", "\n", " Returns:\n", " normalized train and test dataframes\n", " \"\"\"\n", " train_stats = train_dataset.describe()\n", " train_stats = train_stats.transpose()\n", "\n", " def norm(x):\n", " return (x - train_stats[\"mean\"]) / train_stats[\"std\"]\n", "\n", " normed_train_data = norm(train_dataset)\n", " normed_test_data = norm(test_dataset)\n", "\n", " return normed_train_data, normed_test_data\n", "\n", "\n", "def build_model(num_units, dropout_rate):\n", " \"\"\"\n", " Build model\n", " Args:\n", " num_units: number of units in hidden layer\n", " dropout_rate: dropout rate\n", " Returns:\n", " compiled model\n", " \"\"\"\n", " model = Sequential(\n", " [\n", " layers.Dense(\n", " num_units,\n", " activation=\"relu\",\n", " input_shape=[9],\n", " ),\n", " layers.Dropout(rate=dropout_rate),\n", " layers.Dense(num_units, activation=\"relu\"),\n", " layers.Dense(1),\n", " ]\n", " )\n", "\n", " model.compile(loss=\"mse\", optimizer=\"adam\", metrics=[\"mae\", \"mse\"])\n", " return model\n", "\n", "\n", "def train(\n", " model,\n", " train_data,\n", " train_labels,\n", " validation_split=0.2,\n", " epochs=10,\n", "):\n", " \"\"\"\n", " Train model\n", " Args:\n", " model: compiled model\n", " train_data: pandas dataframe\n", " train_labels: pandas dataframe\n", " validation_split: fraction of data to use for validation\n", " epochs: number of epochs to train for\n", " Returns:\n", " history\n", " \"\"\"\n", " history = model.fit(\n", " train_data, 
train_labels, epochs=epochs, validation_split=validation_split\n", " )\n", "\n", " return history" ] }, { "cell_type": "markdown", "metadata": { "id": "u-iTnzt3B6Z_" }, "source": [ "### Run experiment and evaluate experiment runs\n", "\n", "You define several experiment configurations, run the experiments, and track them in Vertex AI Experiments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "i2wnpu8_7JfV" }, "outputs": [], "source": [ "# Define experiment parameters\n", "parameters = [\n", " {\"num_units\": 16, \"dropout_rate\": 0.1, \"epochs\": 3},\n", " {\"num_units\": 16, \"dropout_rate\": 0.1, \"epochs\": 10},\n", " {\"num_units\": 16, \"dropout_rate\": 0.2, \"epochs\": 10},\n", " {\"num_units\": 32, \"dropout_rate\": 0.1, \"epochs\": 10},\n", " {\"num_units\": 32, \"dropout_rate\": 0.2, \"epochs\": 10},\n", "]\n", "\n", "# Read data\n", "dataset = read_data(\n", " \"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data\"\n", ")\n", "\n", "# Split data\n", "train_dataset, test_dataset, train_labels, test_labels = train_test_split(dataset)\n", "\n", "# Normalize data\n", "normed_train_data, normed_test_data = normalize_dataset(train_dataset, test_dataset)\n", "\n", "# Run experiments\n", "for i, params in enumerate(parameters):\n", "\n", " # Initialize Vertex AI Experiment run\n", " vertex_ai.start_run(run=f\"auto-mpg-local-run-{i}\")\n", "\n", " # Log training parameters\n", " vertex_ai.log_params(params)\n", "\n", " # Build model\n", " model = build_model(\n", " num_units=params[\"num_units\"], dropout_rate=params[\"dropout_rate\"]\n", " )\n", "\n", " # Train model\n", " history = train(\n", " model,\n", " normed_train_data,\n", " train_labels,\n", " epochs=params[\"epochs\"],\n", " )\n", "\n", " # Log additional parameters\n", " vertex_ai.log_params(history.params)\n", "\n", " # Log metrics per epoch\n", " for idx in range(0, history.params[\"epochs\"]):\n", " vertex_ai.log_time_series_metrics(\n", " {\n", " \"train_mae\": history.history[\"mae\"][idx],\n", " \"train_mse\": history.history[\"mse\"][idx],\n", " }\n", " )\n", "\n", " # Log final metrics\n", " loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)\n", " if np.isnan(loss):\n", " loss = 0\n", " if np.isnan(mae):\n", " mae = 0\n", " if np.isnan(mse):\n", " mse = 0\n", " vertex_ai.log_metrics({\"eval_loss\": loss, \"eval_mae\": mae, \"eval_mse\": mse})\n", "\n", " vertex_ai.end_run()" ] }, { "cell_type": "markdown", "metadata": { "id": "jZLrJZTfL7tE" }, "source": [ "### Extract parameters and metrics into a dataframe for analysis\n", "\n", "You can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jbRf1WoH_vbY" }, "outputs": [], "source": [ "experiment_df = vertex_ai.get_experiment_df()\n", "experiment_df.T" ] }, { "cell_type": "markdown", "metadata": { "id": "EYuYgqVCMKU1" }, "source": [ "### Visualizing an experiment's parameters and metrics\n", "\n", "You use a parallel coordinates plot to visualize the experiment's parameters and metrics." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "r8orCj8iJuO1" }, "outputs": [], "source": [ "plt.rcParams[\"figure.figsize\"] = [15, 5]\n", "\n", "ax = pd.plotting.parallel_coordinates(\n", " experiment_df.reset_index(level=0),\n", " \"run_name\",\n", " cols=[\n", " \"param.num_units\",\n", " \"param.dropout_rate\",\n", " \"param.epochs\",\n", " \"metric.eval_loss\",\n", " \"metric.eval_mse\",\n", " \"metric.eval_mae\",\n", " ],\n", " color=[\"blue\", \"green\", \"pink\", \"red\"],\n", ")\n", "ax.set_yscale(\"symlog\")\n", "ax.legend(bbox_to_anchor=(1.0, 0.5))" ] }, { "cell_type": "markdown", "metadata": { "id": "WTHvPMweMlP1" }, "source": [ "## Visualizing experiments in the Cloud Console\n", "\n", "Run the following to get the URL of Vertex AI Experiments for your project.\n" ] }, { "cell_type": 
"code", "execution_count": null, "metadata": { "id": "GmN9vE9pqqzt" }, "outputs": [], "source": [ "print(\"Vertex AI Experiments:\")\n", "print(\n", " f\"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}\"\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "TpV-iwP9qw9c" }, "source": [ "## Cleaning up\n", "\n", "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n", "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n", "\n", "Otherwise, you can delete the individual resources you created in this tutorial." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "659e822f49c1" }, "outputs": [], "source": [ "# Delete experiment\n", "exp = vertex_ai.Experiment(EXPERIMENT_NAME)\n", "backing_tensorboard = exp.get_backing_tensorboard_resource()\n", "exp.delete(delete_backing_tensorboard_runs=True)\n", "\n", "# Delete TensorBoard\n", "delete_tensorboard = False # Set True for deletion\n", "\n", "if delete_tensorboard:\n", " backing_tensorboard.delete()\n", "\n", "# Delete Cloud Storage objects that were created\n", "delete_bucket = False # Set True for deletion\n", "\n", "if delete_bucket:\n", " ! gsutil rm -rf {BUCKET_URI}" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "comparing_local_trained_models.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }