{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "copyright"
},
"outputs": [],
"source": [
"# Copyright 2022 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "title"
},
"source": [
"# Vertex AI SDK: AutoML tabular forecasting model for batch prediction\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_tabular_forecasting_batch.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fautoml%2Fsdk_automl_tabular_forecasting_batch.ipynb\">\n",
" <img width=\"32px\" src=\"https://cloud.google.com/ml-engine/images/colab-enterprise-logo-32px.png\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/automl/sdk_automl_tabular_forecasting_batch.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_tabular_forecasting_batch.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "overview:automl"
},
"source": [
"## Overview\n",
"\n",
"\n",
"This tutorial demonstrates how to use the Vertex AI SDK to create tabular forecasting models and generate batch prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model.\n",
"\n",
"Learn more about [Forecasting with AutoML](https://cloud.google.com/vertex-ai/docs/tabular-data/forecasting/overview)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "02b9af111927"
},
"source": [
"### Objective\n",
"\n",
"In this tutorial, you learn how to create an AutoML tabular forecasting model from a Python script, and then generate batch prediction using the Vertex AI SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console.\n",
"\n",
"This tutorial uses the following Google Cloud ML services:\n",
"\n",
"- AutoML Training\n",
"- Vertex AI batch prediction\n",
"- Vertex AI model resource\n",
"\n",
"The steps performed include:\n",
"\n",
"- Create a Vertex AI dataset resource.\n",
"- Train an AutoML tabular forecasting model resource.\n",
"- Obtain the evaluation metrics for the model resource.\n",
"- Make a batch prediction."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dataset:covid,forecast"
},
"source": [
"### Dataset\n",
"\n",
"The dataset used for this tutorial is a time series dataset containing samples drawn from the Iowa Liquor Retail Sales dataset. Data is made available by the Iowa Department of Commerce. It's provided under the Creative Commons Zero v1.0 Universal license. For more details, see: https://console.cloud.google.com/marketplace/product/iowa-department-of-commerce/iowa-liquor-sales. This dataset doesn't require any feature engineering. The version of the dataset used in this tutorial is stored in BigQuery."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "costs"
},
"source": [
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "61RBz8LLbxCR"
},
"source": [
"## Get started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "No17Cw5hgx12"
},
"source": [
"### Install Vertex AI SDK for Python and other required packages\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "install_aip:mbsdk"
},
"outputs": [],
"source": [
"! pip3 install --upgrade --quiet google-cloud-aiplatform"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R5Xep4W9lq-Z"
},
"source": [
"### Restart runtime (Colab only)\n",
"\n",
"To use the newly installed packages, you must restart the runtime on Google Colab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XRvKdaPDTznN"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SbmM4z7FOBpM"
},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️</b>\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dmWOrTJ3gx13"
},
"source": [
"### Authenticate your notebook environment (Colab only)\n",
"\n",
"Authenticate your environment on Google Colab.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NyKGtVQjgx13"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
"\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DF4l8DTdWgPY"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project. Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_project_id"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bucket:mbsdk"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bucket"
},
"outputs": [],
"source": [
"BUCKET_URI = f\"gs://your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_bucket"
},
"source": [
"**If your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_bucket"
},
"outputs": [],
"source": [
"! gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "setup_vars"
},
"source": [
"### Import libraries and define constants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "import_aip:mbsdk"
},
"outputs": [],
"source": [
"import urllib\n",
"\n",
"from google.cloud import aiplatform, bigquery"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "init_aip:mbsdk"
},
"source": [
"## Initialize Vertex AI SDK for Python\n",
"\n",
"Initialize the Vertex AI SDK for Python for your project and corresponding bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "init_aip:mbsdk"
},
"outputs": [],
"source": [
"aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tutorial_start:automl"
},
"source": [
"## Tutorial\n",
"\n",
"Now you're ready to begin creating your own AutoML tabular forecasting model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "import_file:u_dataset,csv"
},
"source": [
"#### Location of BigQuery training data.\n",
"\n",
"Now set the variable `TRAINING_DATASET_BQ_PATH` to the location of the BigQuery table. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "import_file:covid,csv,forecast"
},
"outputs": [],
"source": [
"TRAINING_DATASET_BQ_PATH = (\n",
" \"bq://bigquery-public-data:iowa_liquor_sales_forecasting.2020_sales_train\"\n",
")"
]
},
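{
"cell_type": "markdown",
"metadata": {
"id": "preview_training_data"
},
"source": [
"Before creating the dataset resource, you can optionally preview a few rows of the training table. The following cell is a minimal sketch: it assumes your project can query the public dataset through the BigQuery client library and that `pandas` (with `db-dtypes`) is available for `to_dataframe()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "preview_training_data_code"
},
"outputs": [],
"source": [
"# Optional sanity check: preview a few rows of the public training table.\n",
"preview_client = bigquery.Client(project=PROJECT_ID)\n",
"preview_sql = (\n",
"    \"SELECT date, store_name, city, zip_code, county, sale_dollars \"\n",
"    \"FROM `bigquery-public-data.iowa_liquor_sales_forecasting.2020_sales_train` \"\n",
"    \"LIMIT 5\"\n",
")\n",
"preview_client.query(preview_sql).to_dataframe()"
]
},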
{
"cell_type": "markdown",
"metadata": {
"id": "create_dataset:tabular,forecast"
},
"source": [
"### Create the Dataset\n",
"\n",
"Next, create the dataset resource by using the `create` method of the `TimeSeriesDataset` class, which takes the following parameters:\n",
"\n",
"- `display_name`: The human readable name for the dataset resource.\n",
"- `gcs_source`: A list of one or more dataset index files to import the data items into the dataset resource.\n",
"- `bq_source`: Alternatively, import data items from a BigQuery table into the dataset resource.\n",
"\n",
"This operation may take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_dataset:tabular,forecast"
},
"outputs": [],
"source": [
"dataset = aiplatform.TimeSeriesDataset.create(\n",
" display_name=\"iowa_liquor_sales_train\",\n",
" bq_source=[TRAINING_DATASET_BQ_PATH],\n",
")\n",
"\n",
"time_column = \"date\"\n",
"time_series_identifier_column = \"store_name\"\n",
"target_column = \"sale_dollars\"\n",
"\n",
"print(dataset.resource_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "set_transformations:covid"
},
"outputs": [],
"source": [
"COLUMN_SPECS = {\n",
" time_column: \"timestamp\",\n",
" target_column: \"numeric\",\n",
" \"city\": \"categorical\",\n",
" \"zip_code\": \"categorical\",\n",
" \"county\": \"categorical\",\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "create_automl_pipeline:tabular,forecast"
},
"source": [
"### Create and run training job\n",
"\n",
"To train an AutoML model, you perform two steps: \n",
"1) Create a training job.\n",
"2) Specify your training parameters and run the job.\n",
"\n",
"#### Create training job\n",
"\n",
"An AutoML training job is created using the `AutoMLForecastingTrainingJob` class, with the following parameters:\n",
"\n",
"- `display_name`: The human readable name for the training job resource.\n",
"- `column_transformations`: (Optional): Transformations to apply to the input columns\n",
"- `optimization_objective`: The optimization objective to minimize or maximize.\n",
" - `minimize-rmse`\n",
" - `minimize-mae`\n",
" - `minimize-rmsle`\n",
"\n",
"The instantiated object is the job for the training pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "create_automl_pipeline:tabular,forecast"
},
"outputs": [],
"source": [
"MODEL_DISPLAY_NAME = \"iowa-liquor-sales-forecast-model\"\n",
"\n",
"training_job = aiplatform.AutoMLForecastingTrainingJob(\n",
" display_name=MODEL_DISPLAY_NAME,\n",
" optimization_objective=\"minimize-rmse\",\n",
" column_specs=COLUMN_SPECS,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "run_automl_pipeline:forecast"
},
"source": [
"#### Run the training pipeline\n",
"\n",
"Next, start the training job by invoking the `run` method, with the following parameters:\n",
"\n",
"- `dataset`: The dataset resource to train the model.\n",
"- `target_column`: The column in the dataset that contains the values the model is trying to forecast.\n",
"- `time_column`: Time-series column for the forecast model.\n",
"- `time_series_identifier_column`: ID column for the time-series column.\n",
"- `available_at_forecast_columns`: List of columns that are available at the time of forecasting.\n",
"- `unavailable_at_forecast_columns`: List of columns that aren't available at the time of forecasting.\n",
"- `time_series_attribute_columns`: Columns that contain attributes or metadata related to the time series data, such as \"city,\" \"zip_code,\" and \"county\" in this example. These attributes can help the model understand the context of the time series.\n",
"- `forecast_horizon`: It determines how far into the future you want to predict, representing the number of time steps ahead for which the model generates predictions.\n",
"- `context_window`: The number of historical time steps the model uses as context for making predictions. A context window of 30 means the model uses data from the past 30 time steps to forecast future values.\n",
"- `data_granularity_unit`: The unit of time used for granularity in the data, such as \"day\" or \"hour.\" This specifies the time interval between data points.\n",
"- `data_granularity_count`: The count of the granularity unit. For example, a data_granularity_count of 1 with a `data_granularity_unit` of \"day\" means each data point represents one day.\n",
"- `weight_column`: This parameter lets you assign different weights to different data points in your training set.\n",
"- `budget_milli_node_hours`: Maximum training time specified in unit of millihours (1000 = hour).\n",
"- `model_display_name`: The human readable name for the trained model.\n",
"- `predefined_split_column_name`: The name of a column used to specify predefined splits for training and evaluation. If not used, it’s set to `None`.\n",
"\n",
"The `run` method when completed returns the model resource.\n",
"\n",
"The execution of the training pipeline may take up to one hour."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "run_automl_pipeline:forecast"
},
"outputs": [],
"source": [
"model = training_job.run(\n",
" dataset=dataset,\n",
" target_column=target_column,\n",
" time_column=time_column,\n",
" time_series_identifier_column=time_series_identifier_column,\n",
" available_at_forecast_columns=[time_column],\n",
" unavailable_at_forecast_columns=[target_column],\n",
" time_series_attribute_columns=[\"city\", \"zip_code\", \"county\"],\n",
" forecast_horizon=30,\n",
" context_window=30,\n",
" data_granularity_unit=\"day\",\n",
" data_granularity_count=1,\n",
" weight_column=None,\n",
" budget_milli_node_hours=1000,\n",
" model_display_name=MODEL_DISPLAY_NAME,\n",
" predefined_split_column_name=None,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "evaluate_the_model:mbsdk"
},
"source": [
"## Review model evaluation scores\n",
"\n",
"Once your model training is complete, you can examine the evaluation scores to assess the model performance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "evaluate_the_model:mbsdk"
},
"outputs": [],
"source": [
"model_evaluations = model.list_model_evaluations()\n",
"\n",
"for model_evaluation in model_evaluations:\n",
" print(model_evaluation.to_dict())"
]
},
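{
"cell_type": "markdown",
"metadata": {
"id": "evaluate_the_model_metrics"
},
"source": [
"The full evaluation payload is verbose. As a minimal sketch, the following cell pulls out a few headline regression metrics, assuming the serialized evaluation exposes them under the `metrics` key with names such as `rootMeanSquaredError` (these key names can vary by SDK version; check the dictionary printed above if they don't match)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "evaluate_the_model_metrics_code"
},
"outputs": [],
"source": [
"# Extract a few headline metrics from the first evaluation (assumed key\n",
"# names; inspect the printed dictionary above if these come back as None).\n",
"metrics = model_evaluations[0].to_dict().get(\"metrics\", {})\n",
"print(\"RMSE:\", metrics.get(\"rootMeanSquaredError\"))\n",
"print(\"MAE:\", metrics.get(\"meanAbsoluteError\"))\n",
"print(\"MAPE:\", metrics.get(\"meanAbsolutePercentageError\"))"
]
},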
{
"cell_type": "markdown",
"metadata": {
"id": "make_prediction"
},
"source": [
"## Send a batch prediction request\n",
"\n",
"Send a batch prediction to your deployed model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "batch_request:mbsdk,both_csv"
},
"source": [
"### Make the batch prediction request\n",
"\n",
"Now that your Model resource is trained, you can make a batch prediction by invoking the `batch_predict()` method using a BigQuery source and destination, with the following parameters:\n",
"\n",
"- `job_display_name`: The human readable name for the batch prediction job.\n",
"- `bigquery_source`: BigQuery URI to a table, up to 2000 characters long. For example: `bq://projectId.bqDatasetId.bqTableId`\n",
"- `bigquery_destination_prefix`: The BigQuery dataset or table for storing the batch prediction results.\n",
"- `instances_format`: The format for the input instances. Since a BigQuery source is used here, this should be set to `bigquery`.\n",
"- `predictions_format`: The format for the output predictions, `bigquery` is used here to output to a BigQuery table.\n",
"- `generate_explanations`: Set to `True` to generate explanations.\n",
"- `sync`: Set **True** to wait until the completion of the job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2c9d935c6ab9"
},
"outputs": [],
"source": [
"batch_predict_bq_output_dataset_name = \"iowa_liquor_sales_predictions\"\n",
"batch_predict_bq_output_dataset_path = \"{}.{}\".format(\n",
" PROJECT_ID, batch_predict_bq_output_dataset_name\n",
")\n",
"batch_predict_bq_output_uri_prefix = \"bq://{}.{}\".format(\n",
" PROJECT_ID, batch_predict_bq_output_dataset_name\n",
")\n",
"# Must be the same location as batch_predict_bq_input_uri\n",
"client = bigquery.Client(project=PROJECT_ID)\n",
"bq_dataset_id = bigquery.Dataset(batch_predict_bq_output_dataset_path)\n",
"dataset_location = \"US\" # @param {type : \"string\"}\n",
"bq_dataset_id.location = dataset_location\n",
"# delete any existing dataset\n",
"try:\n",
" client.delete_dataset(bq_dataset_id, delete_contents=True)\n",
"except Exception as e:\n",
" print(e)\n",
"bq_dataset = client.create_dataset(bq_dataset_id)\n",
"print(\n",
" \"Created bigquery dataset {} in {}\".format(\n",
" batch_predict_bq_output_dataset_path, dataset_location\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "99b7a9287ba6"
},
"source": [
"For AutoML models, manual scaling can be adjusted by setting both min and max nodes i.e., `starting_replica_count` and `max_replica_count` as the same value(in this example, set to 1). The node count can be increased or decreased as required by the load\n",
" \n",
"The `batch_predict` method can export predictions either to BigQuery or GCS. In this example, the predictions are exported to BigQuery"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "batch_request:mbsdk,both_csv"
},
"outputs": [],
"source": [
"PREDICTION_DATASET_BQ_PATH = (\n",
" \"bq://bigquery-public-data:iowa_liquor_sales_forecasting.2021_sales_predict\"\n",
")\n",
"\n",
"batch_prediction_job = model.batch_predict(\n",
" job_display_name=\"iowa_liquor_sales_forecasting_predictions\",\n",
" bigquery_source=PREDICTION_DATASET_BQ_PATH,\n",
" instances_format=\"bigquery\",\n",
" bigquery_destination_prefix=batch_predict_bq_output_uri_prefix,\n",
" predictions_format=\"bigquery\",\n",
" generate_explanation=True,\n",
" sync=False,\n",
")\n",
"\n",
"print(batch_prediction_job)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "batch_request_wait:mbsdk"
},
"source": [
"### Wait for completion of batch prediction job\n",
"\n",
"Next, wait for the batch job to complete. Alternatively, you can set the `sync` parameter to `True` in the `batch_predict()` method to wait until the batch prediction job is completed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "batch_request_wait:mbsdk"
},
"outputs": [],
"source": [
"batch_prediction_job.wait()"
]
},
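{
"cell_type": "markdown",
"metadata": {
"id": "batch_request_state"
},
"source": [
"Optionally, you can confirm that the job reached a successful terminal state before reading its results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "batch_request_state_code"
},
"outputs": [],
"source": [
"# Check the job's terminal state and resource name after waiting.\n",
"print(\"Job state:\", batch_prediction_job.state)\n",
"print(\"Resource name:\", batch_prediction_job.resource_name)"
]
},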
{
"cell_type": "markdown",
"metadata": {
"id": "get_batch_prediction:mbsdk,forecast"
},
"source": [
"### Get the predictions and explanations\n",
"\n",
"Next, get the results from the completed batch prediction job and print them out. Each result row includes the prediction and explanation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "get_batch_prediction:mbsdk,forecast"
},
"outputs": [],
"source": [
"for row in batch_prediction_job.iter_outputs():\n",
" print(row)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "78080aa9088e"
},
"source": [
"### Visualize the forecasts\n",
"\n",
"Lastly, follow the given link to visualize the generated forecasts in [Data Studio](https://support.google.com/datastudio/answer/6283323?hl=en).\n",
"The code block included in this section dynamically generates a Data Studio link that specifies the template, the location of the forecasts, and the query to generate the chart. The data is populated from the forecasts generated using BigQuery options where the destination dataset is `batch_predict_bq_output_dataset_path`.\n",
"\n",
"You can inspect the used template at https://datastudio.google.com/c/u/0/reporting/067f70d2-8cd6-4a4c-a099-292acd1053e8. This was created by Google specifically to view forecasting predictions.\n",
"\n",
"**Note:** The Data Studio dashboard can only show the charts properly when the `batch_predict` job is run successfully using the BigQuery options."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f82f00be2160"
},
"outputs": [],
"source": [
"tables = client.list_tables(batch_predict_bq_output_dataset_path)\n",
"\n",
"prediction_table_id = \"\"\n",
"for table in tables:\n",
" if (\n",
" table.table_id.startswith(\"predictions_\")\n",
" and table.table_id > prediction_table_id\n",
" ):\n",
" prediction_table_id = table.table_id\n",
"batch_predict_bq_output_uri = \"{}.{}\".format(\n",
" batch_predict_bq_output_dataset_path, prediction_table_id\n",
")\n",
"\n",
"\n",
"def _sanitize_bq_uri(bq_uri):\n",
" if bq_uri.startswith(\"bq://\"):\n",
" bq_uri = bq_uri[5:]\n",
" return bq_uri.replace(\":\", \".\")\n",
"\n",
"\n",
"def get_data_studio_link(\n",
" batch_prediction_bq_input_uri,\n",
" batch_prediction_bq_output_uri,\n",
" time_column,\n",
" time_series_identifier_column,\n",
" target_column,\n",
"):\n",
" batch_prediction_bq_input_uri = _sanitize_bq_uri(batch_prediction_bq_input_uri)\n",
" batch_prediction_bq_output_uri = _sanitize_bq_uri(batch_prediction_bq_output_uri)\n",
" base_url = \"https://datastudio.google.com/c/u/0/reporting\"\n",
" query = (\n",
" \"SELECT \\\\n\"\n",
" \" CAST(input.{} as DATETIME) timestamp_col,\\\\n\"\n",
" \" CAST(input.{} as STRING) time_series_identifier_col,\\\\n\"\n",
" \" CAST(input.{} as NUMERIC) historical_values,\\\\n\"\n",
" \" CAST(predicted_{}.value as NUMERIC) predicted_values,\\\\n\"\n",
" \" * \\\\n\"\n",
" \"FROM `{}` input\\\\n\"\n",
" \"LEFT JOIN `{}` output\\\\n\"\n",
" \"ON\\\\n\"\n",
" \"CAST(input.{} as DATETIME) = CAST(output.{} as DATETIME)\\\\n\"\n",
" \"AND CAST(input.{} as STRING) = CAST(output.{} as STRING)\"\n",
" )\n",
" query = query.format(\n",
" time_column,\n",
" time_series_identifier_column,\n",
" target_column,\n",
" target_column,\n",
" batch_prediction_bq_input_uri,\n",
" batch_prediction_bq_output_uri,\n",
" time_column,\n",
" time_column,\n",
" time_series_identifier_column,\n",
" time_series_identifier_column,\n",
" )\n",
" params = {\n",
" \"templateId\": \"067f70d2-8cd6-4a4c-a099-292acd1053e8\",\n",
" \"ds0.connector\": \"BIG_QUERY\",\n",
" \"ds0.projectId\": PROJECT_ID,\n",
" \"ds0.billingProjectId\": PROJECT_ID,\n",
" \"ds0.type\": \"CUSTOM_QUERY\",\n",
" \"ds0.sql\": query,\n",
" }\n",
" params_str_parts = []\n",
" for k, v in params.items():\n",
" params_str_parts.append('\"{}\":\"{}\"'.format(k, v))\n",
" params_str = \"\".join([\"{\", \",\".join(params_str_parts), \"}\"])\n",
" return \"{}?{}\".format(base_url, urllib.parse.urlencode({\"params\": params_str}))\n",
"\n",
"\n",
"print(\n",
" get_data_studio_link(\n",
" PREDICTION_DATASET_BQ_PATH,\n",
" batch_predict_bq_output_uri,\n",
" time_column,\n",
" time_series_identifier_column,\n",
" target_column,\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cleanup:mbsdk"
},
"source": [
"# Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial:\n",
"\n",
"- Dataset\n",
"- AutoML Training Job\n",
"- Model\n",
"- Batch Prediction Job\n",
"- Cloud Storage Bucket"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cleanup:mbsdk"
},
"outputs": [],
"source": [
"# Delete dataset\n",
"dataset.delete()\n",
"\n",
"# Training job\n",
"training_job.delete()\n",
"\n",
"# Delete model\n",
"model.delete()\n",
"\n",
"# Delete batch prediction job\n",
"batch_prediction_job.delete()\n",
"\n",
"# Delete the dataset\n",
"try:\n",
" client.delete_dataset(bq_dataset_id, delete_contents=True, not_found_ok=True)\n",
"except Exception as e:\n",
" print(e)\n",
"\n",
"# Set this to true only if you'd like to delete your bucket\n",
"delete_bucket = False # set True for deletion\n",
"\n",
"if delete_bucket:\n",
" ! gsutil rm -r $BUCKET_URI"
]
}
],
"metadata": {
"colab": {
"name": "sdk_automl_tabular_forecasting_batch.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}