{
"cells": [
{
"cell_type": "code",
"id": "lvNpj26L7bbY67xWsvugZull",
"metadata": {
"tags": [],
"id": "lvNpj26L7bbY67xWsvugZull"
},
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# CleanSight (Part 3): Predictions of bus stop related events\n",
"*powered by BigQuery and Gemini*"
],
"metadata": {
"id": "g4xlhet9alul"
},
"id": "g4xlhet9alul"
},
{
"cell_type": "markdown",
"source": [
"<table align=\"left\">\n",
"<td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/data-to-ai/blob/main/notebooks/part_3_time_series_forecasting.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
"</td>\n",
"<td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fdata-to-ai%2Fmain%2Fnotebooks%2Fpart_3_time_series_forecasting.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
"</td>\n",
"<td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/data-to-ai/main/notebooks/part_3_time_series_forecasting.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
" </a>\n",
"</td>\n",
"<td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/bigquery/import?url=https://github.com/GoogleCloudPlatform/data-to-ai/blob/main/notebooks/part_3_time_series_forecasting.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/bigquery/v1/32px.svg\" alt=\"BigQuery Studio logo\"><br> Open in BigQuery Studio\n",
" </a>\n",
"</td>\n",
"<td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/data-to-ai/blob/main/notebooks/part_3_time_series_forecasting.ipynb\">\n",
" <img width=\"32px\" src=\"https://upload.wikimedia.org/wikipedia/commons/9/91/Octicons-mark-github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
"</table>"
],
"metadata": {
"id": "1tE-f4hma-QX"
},
"id": "1tE-f4hma-QX"
},
{
"cell_type": "markdown",
"source": [
"# Overview\n",
"\n",
"This notebook is a continuation of the demo of BigQuery capabilities in a fictional project called **CleanSight**. We'll explore how advanced time-series predictions can be combined with data produced using multimodal analysis.\n",
"\n",
"This notebook will highlight:\n",
"\n",
"* Forecasting the expected number of riders at a particular bus stop using BigQuery's support for multiple time series, with both univariate and multivariate models.\n",
"* Using the WeatherNext Graph BigQuery dataset to create time-series models and to forecast based on predicted weather.\n",
"\n",
"Let's get started!"
],
"metadata": {
"id": "M6SimqgxbcmM"
},
"id": "M6SimqgxbcmM"
},
{
"cell_type": "markdown",
"source": [
"# Prerequisites"
],
"metadata": {
"id": "hLx6XOQbj-NP"
},
"id": "hLx6XOQbj-NP"
},
{
"cell_type": "markdown",
"source": [
"The sample code below assumes you have access to a [WeatherNext](https://deepmind.google/technologies/weathernext/#access-weathernext) BigQuery dataset.\n",
"\n",
"We will be using the WeatherNext Graph dataset. For details on how to get access, and for the terms and conditions of using this data, refer to [the dataset-specific documentation](https://console.cloud.google.com/bigquery/analytics-hub/discovery/projects/gcp-public-data-weathernext/locations/us/dataExchanges/weathernext_19397e1bcb7/listings/weathernext_graph_forecasts_19398be87ec). Note: you may need to open this link in a new tab if you get an error related to a missing project.\n",
"\n",
"If you don't have access to the WeatherNext data, you can still forecast ridership using the univariate model."
],
"metadata": {
"id": "l40wdqdEkIlC"
},
"id": "l40wdqdEkIlC"
},
{
"cell_type": "markdown",
"source": [
"## WeatherNext Graph dataset"
],
"metadata": {
"id": "ACfseMUMlvmv"
},
"id": "ACfseMUMlvmv"
},
{
"cell_type": "markdown",
"source": [
"Once you get access, a linked dataset named `weathernext_graph_forecasts` will be created in your Google Cloud project.\n",
"\n",
"The dataset will contain a table with a numeric name. This table is updated 4 times a day with newly forecast data. The data is forecast for a particular rectangular geographic area. Forecasts are produced at 6-hour intervals and include a number of data points: expected temperature at different levels above ground, wind, humidity, etc."
],
"metadata": {
"id": "9Wy1OQNpl3cA"
},
"id": "9Wy1OQNpl3cA"
},
{
"cell_type": "markdown",
"source": [
"## Getting started"
],
"metadata": {
"id": "TJNbPGBEgQKC"
},
"id": "TJNbPGBEgQKC"
},
{
"cell_type": "markdown",
"source": [
"Let's first create some environment variables, including your Google Cloud project ID and the region to deploy resources into.\n",
"\n",
"This notebook assumes that the WeatherNext dataset was created in the same project. If you would like to use a different project, some of the SQL statements will need to be modified to include the project ID of the project which hosts the WeatherNext dataset."
],
"metadata": {
"id": "ERSjkLhkgRu9"
},
"id": "ERSjkLhkgRu9"
},
{
"cell_type": "code",
"source": [
"PROJECT_ID = \"\" # @param {type:\"string\"}\n",
"REGION = \"us-central1\" # @param {type:\"string\"}\n",
"\n",
"if PROJECT_ID == \"\":\n",
" PROJECT_ID = !gcloud config get-value project\n",
" PROJECT_ID = PROJECT_ID[0]"
],
"metadata": {
"id": "UbBMzAcsgaOW"
},
"id": "UbBMzAcsgaOW",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Enable necessary APIs"
],
"metadata": {
"id": "HjmqxYLZgs5_"
},
"id": "HjmqxYLZgs5_"
},
{
"cell_type": "code",
"source": [
"!gcloud services enable --project {PROJECT_ID} \\\n",
" aiplatform.googleapis.com"
],
"metadata": {
"id": "T5kAeQ5vgvs0"
},
"id": "T5kAeQ5vgvs0",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Install packages"
],
"metadata": {
"id": "mm46iiNJg-Oj"
},
"id": "mm46iiNJg-Oj"
},
{
"cell_type": "code",
"source": [
"%pip install --upgrade --user --quiet \\\n",
" google-cloud-aiplatform \\\n",
" google-cloud-bigquery"
],
"metadata": {
"id": "PWQI4caghEpz"
},
"id": "PWQI4caghEpz",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Create visualization helper functions\n"
],
"metadata": {
"id": "7yLx_qY8ZHG3"
},
"id": "7yLx_qY8ZHG3"
},
{
"cell_type": "code",
"source": [
"import matplotlib.pyplot as plt\n",
"import pandas as pd\n",
"\n",
"def display_columns_as_rows(data):\n",
" display_data = data.transpose()\n",
" styler = display_data.style\n",
" styler.set_table_styles([\n",
" {'selector': 'th.col_heading', 'props': 'text-align: right;'},\n",
" {'selector': 'th.row_heading', 'props': 'text-align: right;'},\n",
" {'selector': 'td', 'props': 'text-align: right;'},\n",
" ], overwrite=False)\n",
" display(styler)\n",
"\n",
"def plot_historical_and_forecast(title,\n",
" historical_data,\n",
" timestamp_col_name,\n",
" data_col_name,\n",
" temperature_col_name=None,\n",
" percipitation_col_name=None,\n",
" forecast_output=None,\n",
" forecast_temperature=None,\n",
" actual=None):\n",
"\n",
" historical_data = historical_data.sort_values(timestamp_col_name)\n",
"\n",
" display_weather_graphs = temperature_col_name is not None\n",
"\n",
" figure = plt.figure(figsize=(20, 14 if display_weather_graphs else 6))\n",
" # plt.xlabel('Date and time')\n",
"\n",
" if display_weather_graphs:\n",
" plt.tick_params(left = False, bottom = False, labelleft = False, labelbottom = False)\n",
"\n",
" temperature_axis = plt.subplot(312)\n",
" plt.ylabel('Temperature (K)')\n",
" plt.plot(historical_data[timestamp_col_name], historical_data[temperature_col_name], color='orange', label='Temperature')\n",
" plt.legend(loc = 'upper center', prop={'size': 14})\n",
" temperature_axis.tick_params(bottom = False, labelbottom = False)\n",
"\n",
"    precipitation_axis = plt.subplot(313, sharex = temperature_axis)\n",
"    plt.ylabel('Precipitation (mm)')\n",
"    plt.plot(historical_data[timestamp_col_name], historical_data[percipitation_col_name], color='green', label = 'Precipitation')\n",
" plt.legend(loc = 'upper center', prop={'size': 14})\n",
"\n",
" main_axis = plt.subplot(311, sharex = temperature_axis)\n",
" main_axis.tick_params(bottom = False, labelbottom = False)\n",
"\n",
" # Plot the input historical data\n",
" plt.ylabel('Number of riders')\n",
" plt.plot(historical_data[timestamp_col_name], historical_data[data_col_name], label = 'Historical')\n",
"\n",
"\n",
" if forecast_output is not None:\n",
" forecast_output = forecast_output.sort_values('forecast_timestamp')\n",
" forecast_output['forecast_timestamp'] = pd.to_datetime(forecast_output['forecast_timestamp'])\n",
" x_data = forecast_output['forecast_timestamp']\n",
" y_data = forecast_output['forecast_value']\n",
" confidence_level = forecast_output['confidence_level'].iloc[0] * 100\n",
" low_CI = forecast_output['prediction_interval_lower_bound']\n",
" upper_CI = forecast_output['prediction_interval_upper_bound']\n",
" # Plot the forecast data\n",
" plt.plot(x_data, y_data, alpha = 1, label = 'Forecast', linestyle='--')\n",
" # Shade the confidence interval\n",
"    plt.fill_between(x_data, low_CI, upper_CI, color = '#539caf', alpha = 0.4,\n",
"                     label = f'{confidence_level}% confidence interval')\n",
"\n",
" # Plot actual data\n",
" if actual is not None:\n",
" actual = actual.sort_values(timestamp_col_name)\n",
" plt.plot(actual[timestamp_col_name], actual[data_col_name], label = 'Actual', linestyle='--')\n",
"\n",
" # Display title, legend\n",
" plt.title(f'{title}', fontsize=20)\n",
" plt.legend(loc = 'upper center', prop={'size': 14})"
],
"metadata": {
"id": "AQmMdxj0ZLOe"
},
"id": "AQmMdxj0ZLOe",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Prepare the weather data to be used during the model creation"
],
"metadata": {
"id": "Bez0o0Za8O1a"
},
"id": "Bez0o0Za8O1a"
},
{
"cell_type": "markdown",
"source": [
"Let's create a BigQuery dataset which will contain several views to simplify access to the linked WeatherNext dataset."
],
"metadata": {
"id": "qLcAc-rz9TAb"
},
"id": "qLcAc-rz9TAb"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"DECLARE latest_init_time TIMESTAMP;\n",
"\n",
"-- Find out the latest partition and the name of the table which contains the forecast.\n",
"DECLARE weather_metadata DEFAULT (SELECT\n",
" STRUCT(table_name, STRUCT (partition_id AS id, SUBSTRING(partition_id, 0, 4) AS year,\n",
" SUBSTRING(partition_id, 5, 2) AS month,\n",
" SUBSTRING(partition_id, 7, 2) AS day) AS part) FROM\n",
" `weathernext_graph_forecasts.INFORMATION_SCHEMA.PARTITIONS` WHERE partition_id != '__NULL__' ORDER BY partition_id DESC LIMIT 1);\n",
"\n",
"CREATE SCHEMA IF NOT EXISTS weathernext_graph_derived OPTIONS (description = 'WeatherNext Forecasts - Derived', location = \"US\");\n",
"\n",
"EXECUTE IMMEDIATE FORMAT(\"CREATE VIEW IF NOT EXISTS `weathernext_graph_derived.forecast` AS SELECT * FROM `weathernext_graph_forecasts.%s`\",\n",
" weather_metadata.table_name);\n",
"\n",
"EXECUTE IMMEDIATE FORMAT(\"\"\"\n",
" SELECT MAX(init_time)\n",
" FROM `weathernext_graph_derived.forecast` WHERE init_time >= TIMESTAMP('%s-%s-%s 00:00:00')\n",
" \"\"\", weather_metadata.part.year, weather_metadata.part.month, weather_metadata.part.day)\n",
" INTO latest_init_time;\n",
"\n",
"EXECUTE IMMEDIATE FORMAT(\"\"\"\n",
"CREATE OR REPLACE VIEW `weathernext_graph_derived.latest_forecast` AS SELECT * FROM `weathernext_graph_derived.forecast` WHERE init_time = TIMESTAMP('%s')\n",
"\"\"\", STRING(latest_init_time));\n"
],
"metadata": {
"id": "86b8mZGy-FE0"
},
"id": "86b8mZGy-FE0",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We now have a dataset named `weathernext_graph_derived`. This dataset has two views:\n",
"* `forecast`, which is just an alias for the forecast table in the linked dataset\n",
"* `latest_forecast`, which selects only the latest forecast\n",
"\n",
"Consider using the `latest_forecast` view to explore the contents of the dataset, because it implements partition pruning of the source table, which is the cost-effective way to access partitioned tables.\n",
"\n",
"You can rerun this cell as many times as you need to update the `latest_forecast` view."
],
"metadata": {
"id": "SmqZm7qK-7oQ"
},
"id": "SmqZm7qK-7oQ"
},
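{
"cell_type": "markdown",
"source": [
"For example, here's an illustrative query (using the `init_time`, `geography`, and `forecast` columns of the WeatherNext Graph table) to take a quick look at the latest forecast:\n",
"\n",
"```sql\n",
"SELECT init_time, geography, ARRAY_LENGTH(forecast) AS num_forecast_points\n",
"  FROM `weathernext_graph_derived.latest_forecast`\n",
"  LIMIT 10\n",
"```"
],
"metadata": {
"id": "latestForecastSampleQueryMd"
},
"id": "latestForecastSampleQueryMd"
},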
{
"cell_type": "markdown",
"source": [
"Here's the definition of the `latest_forecast` view:"
],
"metadata": {
"id": "ooNHuuXjBW-d"
},
"id": "ooNHuuXjBW-d"
},
{
"cell_type": "code",
"source": [
"!bq show --format=prettyjson '{PROJECT_ID}:weathernext_graph_derived.latest_forecast'"
],
"metadata": {
"id": "wtQXaCTzBcvY"
},
"id": "wtQXaCTzBcvY",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now, we will extract the historical forecasts for the time period over which we plan to train the model. We are going to use two weather data points, temperature and precipitation, for forecasting."
],
"metadata": {
"id": "V7IgYVVbGxlo"
},
"id": "V7IgYVVbGxlo"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- We pick a short period, 1 month, to show how to train the model. A much larger time period\n",
"-- can be used in production.\n",
"DECLARE historic_data_start_time DEFAULT TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 31 DAY);\n",
"DECLARE historic_data_end_time DEFAULT CURRENT_TIMESTAMP;\n",
"\n",
"-- We use zip codes to identify the area we would like to forecast, but you can define\n",
"-- the area to cover in a number of different ways.\n",
"DECLARE zipcodes_to_cover DEFAULT [\"10001\", \"10002\"];\n",
"DECLARE geo_area_to_cover DEFAULT (\n",
" WITH zip_areas AS (\n",
" SELECT zip_code_geom as area\n",
" FROM `bigquery-public-data.geo_us_boundaries.zip_codes`\n",
" WHERE zip_code in UNNEST(zipcodes_to_cover))\n",
" SELECT ST_UNION(ARRAY_AGG(area)) as combined_area FROM zip_areas);\n",
"\n",
"-- Drop the existing table in case we need to re-create the forecast\n",
"DROP TABLE IF EXISTS weathernext_graph_derived.historical_local_forecast;\n",
"\n",
"CREATE TABLE weathernext_graph_derived.historical_local_forecast AS (\n",
"WITH\n",
"all_forecast_points AS (\n",
" SELECT\n",
" geography,\n",
" geography_polygon,\n",
" forecast_point.time as time,\n",
" -- The model sometimes produces very small negative precipitation numbers; here we normalize the data\n",
" GREATEST(forecast_point.total_precipitation_6hr, 0.) as total_precipitation_6hr,\n",
" forecast_point.`2m_temperature` as `2m_temperature`,\n",
" -- Order the forecast points for same area in reverse chronological order of the forecast to pick the latest\n",
" ROW_NUMBER() OVER (\n",
" PARTITION BY ST_ASBINARY(geography),forecast_point.time\n",
" ORDER BY init_time DESC) as row_number,\n",
" ST_ASBINARY(geography) as location_id,\n",
" FROM weathernext_graph_derived.forecast, UNNEST(forecast.forecast) as forecast_point\n",
" WHERE\n",
" -- Select enough data to cover the required time period\n",
" forecast.init_time BETWEEN TIMESTAMP_SUB(historic_data_start_time, INTERVAL 12 HOUR) AND historic_data_end_time AND\n",
" forecast_point.time BETWEEN TIMESTAMP_SUB(historic_data_start_time, INTERVAL 6 HOUR) AND historic_data_end_time AND\n",
" -- Select all the forecasts that can be relevant\n",
" ST_INTERSECTS(geo_area_to_cover, forecast.geography_polygon)\n",
")\n",
"SELECT\n",
" geography,\n",
" geography_polygon,\n",
" STRUCT(\n",
" time,\n",
" total_precipitation_6hr,\n",
" `2m_temperature`,\n",
" -- We also get the previous forecast for this location in order to make subsequent SQLs simpler\n",
" LAG(`2m_temperature`) OVER(PARTITION BY location_id ORDER BY time) as prev_2m_temperature\n",
" ) as forecast\n",
" FROM all_forecast_points\n",
" WHERE row_number = 1\n",
");\n"
],
"metadata": {
"id": "WlP9NFkkHZe3"
},
"id": "WlP9NFkkHZe3",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We now have a dataset of fairly accurate weather forecasts for a given area, spaced 6 hours apart:"
],
"metadata": {
"id": "kPRQCXcXJsZm"
},
"id": "kPRQCXcXJsZm"
},
{
"cell_type": "code",
"source": [
"%%bigquery historical_forecast --project {PROJECT_ID}\n",
"\n",
"SELECT geography, forecast.time, forecast.total_precipitation_6hr, forecast.`2m_temperature`, forecast.`prev_2m_temperature`\n",
" FROM weathernext_graph_derived.historical_local_forecast\n",
" WHERE forecast.prev_2m_temperature IS NOT NULL\n",
" ORDER BY ST_ASBINARY(geography), forecast.time LIMIT 20\n"
],
"metadata": {
"id": "HCGPs2ZbJ8lU"
},
"id": "HCGPs2ZbJ8lU",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"display(historical_forecast)"
],
"metadata": {
"id": "5Db6tiWJLAQD"
},
"id": "5Db6tiWJLAQD",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Move extracted weather data to the location of bus stop data"
],
"metadata": {
"id": "TyrOjU-eDFU1"
},
"id": "TyrOjU-eDFU1"
},
{
"cell_type": "markdown",
"source": [
"If your bus stop data resides in the \"US\" BigQuery location, there is no need to do anything, because the weather and the bus stop data are co-located and can be joined in the same query. Otherwise, you will need to copy the extracted weather data. You can do that using the [cross-regional table copy](https://cloud.google.com/bigquery/docs/managing-tables#copy_tables_across_regions) capability of BigQuery."
],
"metadata": {
"id": "T0NKOfN6DWxM"
},
"id": "T0NKOfN6DWxM"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID} --location {REGION}\n",
"\n",
"CREATE SCHEMA IF NOT EXISTS multimodal;\n",
"\n",
"DROP TABLE IF EXISTS multimodal.historical_local_forecast;"
],
"metadata": {
"id": "0VjEdNVGGJ9d"
},
"id": "0VjEdNVGGJ9d",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"! bq cp -f -n '{PROJECT_ID}:weathernext_graph_derived.historical_local_forecast' '{PROJECT_ID}:multimodal.historical_local_forecast'"
],
"metadata": {
"id": "y2XwaNPOFUqd"
},
"id": "y2XwaNPOFUqd",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Generate ridership data\n",
"\n",
"Let's assume that bus ridership depends on a number of factors - time of day, day of week, temperature and precipitation. We will generate a synthetic data set of bus ridership for several bus stops."
],
"metadata": {
"id": "4ZeA3dCQPpVE"
},
"id": "4ZeA3dCQPpVE"
},
{
"cell_type": "markdown",
"source": [
"### Create ridership table"
],
"metadata": {
"id": "OeEJB9VBT7-m"
},
"id": "OeEJB9VBT7-m"
},
{
"cell_type": "code",
"source": [
"%%bigquery\n",
"\n",
"DROP TABLE IF EXISTS `multimodal.ridership`;\n",
"\n",
"CREATE TABLE `multimodal.ridership`\n",
" (bus_stop_id STRING, event_ts TIMESTAMP, temperature FLOAT64, total_precipitation_6hr FLOAT64, num_riders INT64)\n",
" CLUSTER BY bus_stop_id;"
],
"metadata": {
"id": "9H2YWpgnUCRw"
},
"id": "9H2YWpgnUCRw",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Generate ridership events\n"
],
"metadata": {
"id": "aX6ZPXPwUGXE"
},
"id": "aX6ZPXPwUGXE"
},
{
"cell_type": "markdown",
"source": [
"We will generate synthetic ridership events using a simple approach: generate an array of timestamps in the past and an array with a couple of bus stops and some metadata (location, typical number of passengers, times of day and week when these bus stops are typically busy, etc.). We will use real weather prediction data, based on the stop location and the time point, to determine the likely temperature and precipitation. A temporary function will take all the metadata and the weather data and generate the number of riders."
],
"metadata": {
"id": "Ozvfu5_V1807"
},
"id": "Ozvfu5_V1807"
},
{
"cell_type": "markdown",
"source": [
"Let's define a SQL function which approximates the temperature at a particular point in time. This function uses linear interpolation between two forecast temperatures 6 hours apart. This is not a perfect approximation formula for temperatures and can be replaced with a more sophisticated one if needed.\n",
"\n",
"We will use this function both to generate the synthetic data and to do the actual forecasting:"
],
"metadata": {
"id": "Me3fliKA3dUc"
},
"id": "Me3fliKA3dUc"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- Approximate the temperature. This assumes the previous temperature was forecast 6 hours earlier.\n",
"CREATE OR REPLACE FUNCTION multimodal.temperature_approx(\n",
" forecast STRUCT<\n",
" time TIMESTAMP,\n",
"    total_precipitation_6hr FLOAT64,\n",
" `2m_temperature` FLOAT64,\n",
" prev_2m_temperature FLOAT64 >, event_ts TIMESTAMP) AS (\n",
" IF(forecast.prev_2m_temperature IS NULL,\n",
" forecast.`2m_temperature`, -- There is no previous temperature; use the current one\n",
" forecast.prev_2m_temperature +\n",
" (forecast.`2m_temperature` - forecast.prev_2m_temperature) -- temperature span, potentially negative\n",
" * (TIMESTAMP_DIFF(event_ts, TIMESTAMP_SUB(forecast.time, INTERVAL 6 HOUR), MINUTE)) -- number of minutes between the previous period and the time point\n",
" / (6 * 60) -- number of minutes in 6 hours\n",
" )\n",
" );"
],
"metadata": {
"id": "LboBZR0t3t3M"
},
"id": "LboBZR0t3t3M",
"execution_count": null,
"outputs": []
},
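{
"cell_type": "markdown",
"source": [
"As a quick sanity check of the interpolation (with made-up values), an event timestamp halfway between the previous forecast point and the current one should yield the midpoint temperature - here, 285 K between 280 K and 290 K:\n",
"\n",
"```sql\n",
"SELECT multimodal.temperature_approx(\n",
"  STRUCT(TIMESTAMP '2025-01-01 12:00:00' AS time,\n",
"         0.0 AS total_precipitation_6hr,\n",
"         290.0 AS `2m_temperature`,\n",
"         280.0 AS prev_2m_temperature),\n",
"  TIMESTAMP '2025-01-01 09:00:00') AS approx_temperature\n",
"```"
],
"metadata": {
"id": "temperatureApproxSanityCheckMd"
},
"id": "temperatureApproxSanityCheckMd"
},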
{
"cell_type": "markdown",
"source": [
"We will also create a table with a couple of fictitious bus stops - their IDs, locations, and some metadata."
],
"metadata": {
"id": "lJdmnF4FBRXY"
},
"id": "lJdmnF4FBRXY"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- Two bus stops in New York, NY, USA\n",
"DECLARE bus_stop_location_1 DEFAULT ST_GEOGPOINT(-73.98886258282087, 40.745073789633736);\n",
"DECLARE bus_stop_location_2 DEFAULT ST_GEOGPOINT( -73.9899701866148, 40.714129256307956);\n",
"\n",
"CREATE OR REPLACE TABLE multimodal.bus_stops AS (\n",
"SELECT bus_stop_id, location, base_number_of_riders, busy_in_morning, busy_in_evening, busy_on_weekend FROM UNNEST([\n",
" STRUCT(\"bus-stop-1\" as bus_stop_id, bus_stop_location_1 as location, 5 as base_number_of_riders, false AS busy_in_morning, true AS busy_in_evening, true AS busy_on_weekend),\n",
" STRUCT(\"bus-stop-2\" as bus_stop_id, bus_stop_location_2 as location, 10 as base_number_of_riders, true AS busy_in_morning, false AS busy_in_evening, false AS busy_on_weekend)])\n",
");\n"
],
"metadata": {
"id": "VbnrddOPBicD"
},
"id": "VbnrddOPBicD",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now, let's generate the ridership data:"
],
"metadata": {
"id": "l3_qIHUL4Pcr"
},
"id": "l3_qIHUL4Pcr"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"DECLARE time_zone DEFAULT \"America/New_York\";\n",
"\n",
"DECLARE end_ts DEFAULT CURRENT_TIMESTAMP();\n",
"-- One month worth of data\n",
"DECLARE start_ts DEFAULT TIMESTAMP_SUB(end_ts, INTERVAL 31 DAY);\n",
"\n",
"\n",
"-- This function generates the number of riders for a given time point\n",
"-- It uses several different factors and adds some variance.\n",
"CREATE TEMP FUNCTION generate_number_of_riders(\n",
" base_number_of_riders INT64,\n",
" busy_in_morning BOOL,\n",
" busy_in_evening BOOL,\n",
" busy_on_weekend BOOL,\n",
" temperature FLOAT64,\n",
" precipitation FLOAT64,\n",
" event_ts TIMESTAMP)\n",
"AS (\n",
" CAST(\n",
" base_number_of_riders\n",
" *\n",
" -- Multiplier based on the temperature (in Kelvin)\n",
" CASE\n",
" -- Less than -3C/26F\n",
" WHEN temperature < 270 THEN .7\n",
" -- More than 32C/90F\n",
" WHEN temperature > 305 THEN .4\n",
" ELSE 1\n",
" END\n",
" *\n",
" -- Multiplier based on precipitation.\n",
" -- The higher the precipitation the fewer the riders\n",
" 1/(1 + precipitation)\n",
" *\n",
" -- Multiplier based on the day of week\n",
" CASE\n",
" -- 1 - Sunday\n",
" WHEN EXTRACT(DAYOFWEEK FROM event_ts) BETWEEN 2 AND 6 THEN 2\n",
" ELSE IF(busy_on_weekend, 3, 1)\n",
" END\n",
" *\n",
" -- Multiplier based on the time of the day\n",
" CASE\n",
" -- No riders at night\n",
" WHEN EXTRACT(HOUR FROM event_ts) BETWEEN 0 AND 6\n",
" THEN 0\n",
" -- Morning peak hours\n",
" WHEN EXTRACT(HOUR FROM event_ts) BETWEEN 7 AND 9\n",
" THEN IF(busy_in_morning, 1.5, 1.3)\n",
" -- Evening peak hours\n",
" WHEN EXTRACT(HOUR FROM event_ts) BETWEEN 15 AND 18\n",
" THEN IF(busy_in_evening, 1.5, 1.3)\n",
" -- Otherwise just the base number of riders\n",
" ELSE 1\n",
" END\n",
" -- Add 20% variance\n",
" * (.9 + (RAND()/5))\n",
" AS INT64\n",
" )\n",
");\n",
"\n",
"\n",
"INSERT INTO multimodal.ridership (bus_stop_id, event_ts, temperature, total_precipitation_6hr, num_riders)\n",
"(\n",
"WITH\n",
"event_timestamps AS (\n",
" SELECT TIMESTAMP(DATETIME(event_ts_in_utc, time_zone)) event_ts FROM\n",
" UNNEST(GENERATE_TIMESTAMP_ARRAY(start_ts, end_ts, INTERVAL 5 MINUTE)) as event_ts_in_utc\n",
"),\n",
"bus_stops_and_event_timestamps AS (\n",
" -- Cartesian join of the bus stops and time points\n",
" SELECT bus_stops.*, event_ts FROM multimodal.bus_stops, event_timestamps\n",
"),\n",
"events_and_weather AS (\n",
" SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" base_number_of_riders,\n",
" busy_in_morning,\n",
" busy_in_evening,\n",
" busy_on_weekend,\n",
" weather.forecast,\n",
" multimodal.temperature_approx(weather.forecast, event_ts) as temperature,\n",
" FROM bus_stops_and_event_timestamps events, multimodal.historical_local_forecast weather\n",
" WHERE ST_COVERS(weather.geography_polygon, events.location) AND\n",
" event_ts BETWEEN TIMESTAMP_SUB( weather.forecast.time, INTERVAL 6 HOUR) AND weather.forecast.time\n",
")\n",
" SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" temperature,\n",
" forecast.total_precipitation_6hr as total_precipitation_6hr,\n",
" generate_number_of_riders(\n",
" base_number_of_riders,\n",
" busy_in_morning,\n",
" busy_in_evening,\n",
" busy_on_weekend,\n",
" temperature,\n",
" forecast.total_precipitation_6hr,\n",
" event_ts) num_riders\n",
" FROM events_and_weather\n",
");"
],
"metadata": {
"id": "IouW45IIQvmm"
},
"id": "IouW45IIQvmm",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Visualize generated data"
],
"metadata": {
"id": "3yMyeDdVUXHY"
},
"id": "3yMyeDdVUXHY"
},
{
"cell_type": "markdown",
"source": [
"Let's take a look at the last 20 days of generated data. The first bus stop graph also shows the temperature and precipitation values."
],
"metadata": {
"id": "ue5rOAnkJ_IE"
},
"id": "ue5rOAnkJ_IE"
},
{
"cell_type": "code",
"source": [
"%%bigquery ridership_history\n",
"\n",
"SELECT bus_stop_id, event_ts, num_riders, temperature, total_precipitation_6hr\n",
" FROM `multimodal.ridership`\n",
" WHERE event_ts > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 20 DAY)"
],
"metadata": {
"id": "9CtOQ1mBZR3M"
},
"id": "9CtOQ1mBZR3M",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"bus_stop_list = list(ridership_history.bus_stop_id.unique())\n",
"bus_stop_list.sort()\n",
"\n",
"first_stop = True\n",
"for bus_stop_id in bus_stop_list:\n",
"\n",
" historical_data = ridership_history[ridership_history.bus_stop_id==bus_stop_id]\n",
" plot_historical_and_forecast(historical_data = historical_data,\n",
" timestamp_col_name = \"event_ts\",\n",
" data_col_name = \"num_riders\",\n",
" temperature_col_name=\"temperature\" if first_stop else None,\n",
" percipitation_col_name=\"total_precipitation_6hr\" if first_stop else None,\n",
" title = bus_stop_id)\n",
"\n",
" first_stop = False"
],
"metadata": {
"id": "GWVIBjf8Z2pK"
},
"id": "GWVIBjf8Z2pK",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Forecast bus ridership"
],
"metadata": {
"id": "hfMnFM06jIyJ"
},
"id": "hfMnFM06jIyJ"
},
{
"cell_type": "markdown",
"source": [
"Let's see how two built-in BigQuery models, ARIMA_PLUS and ARIMA_PLUS_XREG, can be used for time-series forecasting."
],
"metadata": {
"id": "ejbEK15cZa7I"
},
"id": "ejbEK15cZa7I"
},
{
"cell_type": "markdown",
"source": [
"## Univariate forecasting using the ARIMA_PLUS model"
],
"metadata": {
"id": "ssQ8c1nqZvEc"
},
"id": "ssQ8c1nqZvEc"
},
{
"cell_type": "markdown",
"source": [
"This model is trained purely on the target time series itself, without any additional features, hence it is a \"univariate\" model.\n",
"\n"
],
"metadata": {
"id": "myjZHzNVY02I"
},
"id": "myjZHzNVY02I"
},
{
"cell_type": "markdown",
"source": [
"### Train the model"
],
"metadata": {
"id": "17SuJiPZ5XsH"
},
"id": "17SuJiPZ5XsH"
},
{
"cell_type": "markdown",
"source": [
"The CREATE MODEL statement used to train the ARIMA_PLUS model has the usual time-series options: `time_series_data_col` to identify which data to forecast and `time_series_timestamp_col` to identify the column which contains the time point.\n",
"\n",
"It also has the `time_series_id_col` option. This option names the column that identifies a unique time series within the training data. In our case, after the training is done there will be two separate models - one for \"bus-stop-1\" and another for \"bus-stop-2\". Hundreds of thousands of time-series models can be created using a single CREATE MODEL statement."
],
"metadata": {
"id": "cOs-QLAe5bUP"
},
"id": "cOs-QLAe5bUP"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"CREATE OR REPLACE MODEL `multimodal.ridership_arima_plus`\n",
"OPTIONS(\n",
" model_type = 'ARIMA_PLUS',\n",
" time_series_id_col = 'bus_stop_id',\n",
" time_series_data_col = 'num_riders',\n",
" time_series_timestamp_col = 'event_ts'\n",
")\n",
"AS SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" num_riders\n",
"FROM `multimodal.ridership`;"
],
"metadata": {
"id": "xiriX7sZY_5p"
},
"id": "xiriX7sZY_5p",
"execution_count": null,
"outputs": []
},
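{
"cell_type": "markdown",
"source": [
"Optionally, you can inspect how well the models fit the training data using the [ML.ARIMA_EVALUATE](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-arima-evaluate) function, which returns per-time-series statistics such as the chosen ARIMA parameters, AIC, and variance:\n",
"\n",
"```sql\n",
"SELECT *\n",
"  FROM ML.ARIMA_EVALUATE(MODEL `multimodal.ridership_arima_plus`)\n",
"```"
],
"metadata": {
"id": "arimaEvaluateSketchMd"
},
"id": "arimaEvaluateSketchMd"
},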
{
"cell_type": "markdown",
"source": [
"### Forecast ridership"
],
"metadata": {
"id": "ZuhrdnEV5rYn"
},
"id": "ZuhrdnEV5rYn"
},
{
"cell_type": "markdown",
"source": [
"Let's forecast using this model. This is done by calling the table-valued function (TVF) ML.FORECAST with a reference to the model trained in the previous step. No additional data is needed to forecast. The second parameter to the function controls how many data points past the end of the training data to produce (the \"horizon\") and the confidence level used for the prediction intervals. For details, refer to the [ML.FORECAST documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-forecast)."
],
"metadata": {
"id": "geVFxNo-avF8"
},
"id": "geVFxNo-avF8"
},
{
"cell_type": "code",
"source": [
"%%bigquery arima_plus_forecast --project {PROJECT_ID}\n",
"\n",
"SELECT\n",
" *\n",
"FROM\n",
" ML.FORECAST (\n",
" model `multimodal.ridership_arima_plus`,\n",
" STRUCT (1000 AS horizon, 0.8 AS confidence_level));"
],
"metadata": {
"id": "oGJFowYfa2V8"
},
"id": "oGJFowYfa2V8",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Here's the data returned by the forecast function:"
],
"metadata": {
"id": "nb7jedUbc8wC"
},
"id": "nb7jedUbc8wC"
},
{
"cell_type": "code",
"source": [
"display(arima_plus_forecast)"
],
"metadata": {
"id": "ktrbu9eBc1DZ"
},
"id": "ktrbu9eBc1DZ",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"#### Visualize the forecast"
],
"metadata": {
"id": "oJnF-fzF55Dq"
},
"id": "oJnF-fzF55Dq"
},
{
"cell_type": "markdown",
"source": [
"The forecast closely follows the previous ridership pattern:"
],
"metadata": {
"id": "uypUNS9ybV_d"
},
"id": "uypUNS9ybV_d"
},
{
"cell_type": "code",
"source": [
"bus_stop_list = list(ridership_history.bus_stop_id.unique())\n",
"bus_stop_list.sort()\n",
"\n",
"for bus_stop_id in bus_stop_list:\n",
"\n",
" historical_data = ridership_history[ridership_history.bus_stop_id==bus_stop_id]\n",
" forecast_data = arima_plus_forecast[arima_plus_forecast.bus_stop_id==bus_stop_id]\n",
" plot_historical_and_forecast(historical_data = historical_data,\n",
" timestamp_col_name = \"event_ts\",\n",
" data_col_name = \"num_riders\",\n",
" forecast_output = forecast_data,\n",
" title = bus_stop_id)"
],
"metadata": {
"id": "6lllJZdRblqT"
},
"id": "6lllJZdRblqT",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Multivariate forecasting using the ARIMA_PLUS_XREG model"
],
"metadata": {
"id": "6F7BaDwQaECx"
},
"id": "6F7BaDwQaECx"
},
{
"cell_type": "markdown",
"source": [
"The ARIMA_PLUS_XREG model is called a \"multivariate\" model because, in addition to the time points, it uses extra features provided for each time point to capture their effect on the time series. In our case, these features are the temperature and precipitation."
],
"metadata": {
"id": "Vbk1qmXOmb9p"
},
"id": "Vbk1qmXOmb9p"
},
{
"cell_type": "markdown",
"source": [
"### Train the model"
],
"metadata": {
"id": "yzogKPO25C0H"
},
"id": "yzogKPO25C0H"
},
{
"cell_type": "markdown",
"source": [
"The same CREATE MODEL statement is used to train this model. Many options, e.g., `time_series_data_col`, `time_series_timestamp_col`, and `time_series_id_col`, have the same meaning as for the ARIMA_PLUS model.\n",
"\n",
"The main difference is that the ARIMA_PLUS_XREG model treats all columns other than those identified by the options above as feature columns and uses linear regression to calculate covariate weights.\n",
"\n",
"For details on the additional options, an explanation of the training process, and best practices when training and using the model, refer to the BigQuery documentation on [the CREATE MODEL statement for ARIMA_PLUS_XREG models](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-create-multivariate-time-series)."
],
"metadata": {
"id": "oqGe3Wzm5HV_"
},
"id": "oqGe3Wzm5HV_"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"CREATE OR REPLACE MODEL `multimodal.ridership_arima_plus_xreg`\n",
"OPTIONS(\n",
" model_type = 'ARIMA_PLUS_XREG',\n",
" time_series_id_col = 'bus_stop_id',\n",
" time_series_data_col = 'num_riders',\n",
" time_series_timestamp_col = 'event_ts'\n",
")\n",
"AS SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" num_riders,\n",
" -- Two additional features that can affect the forecasting\n",
" temperature,\n",
" total_precipitation_6hr\n",
"FROM `multimodal.ridership`;"
],
"metadata": {
"id": "xM1y3GXljNwn"
},
"id": "xM1y3GXljNwn",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Forecast ridership"
],
"metadata": {
"id": "O4g_N5GukpKD"
},
"id": "O4g_N5GukpKD"
},
{
"cell_type": "markdown",
"source": [
"Forecasting with the ARIMA_PLUS model was straightforward, but for the XREG model we also need to provide the expected feature values, so we first need to get the weather forecast data ready."
],
"metadata": {
"id": "R279u7DGKVXs"
},
"id": "R279u7DGKVXs"
},
{
"cell_type": "markdown",
"source": [
"#### Get the latest weather forecast\n"
],
"metadata": {
"id": "IH05QFf3M1za"
},
"id": "IH05QFf3M1za"
},
{
"cell_type": "markdown",
"source": [
"We have already prepared the historical forecast data. Now we are going to extract the latest, most accurate forecast into a separate table."
],
"metadata": {
"id": "GpA2HwFhNu4z"
},
"id": "GpA2HwFhNu4z"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- We are going to use the same geo area as the one that we used for historical data generation\n",
"\n",
"-- We use zip codes to identify the area we would like to forecast, but you can define\n",
"-- the area to cover in a number of different ways.\n",
"DECLARE zipcodes_to_cover DEFAULT [\"10001\", \"10002\"];\n",
"DECLARE geo_area_to_cover DEFAULT (\n",
" WITH zip_areas AS (\n",
" SELECT zip_code_geom as area\n",
" FROM `bigquery-public-data.geo_us_boundaries.zip_codes`\n",
" WHERE zip_code in UNNEST(zipcodes_to_cover))\n",
" SELECT ST_UNION(ARRAY_AGG(area)) as combined_area FROM zip_areas);\n",
"\n",
"DROP TABLE IF EXISTS weathernext_graph_derived.latest_local_forecast;\n",
"\n",
"CREATE TABLE weathernext_graph_derived.latest_local_forecast AS (\n",
" SELECT\n",
" geography,\n",
" geography_polygon,\n",
" STRUCT(\n",
" forecast_point.time as time,\n",
" GREATEST(forecast_point.total_precipitation_6hr, 0.) as total_precipitation_6hr,\n",
" forecast_point.`2m_temperature` as `2m_temperature`,\n",
" LAG(`2m_temperature`) OVER(PARTITION BY ST_ASBINARY(geography) ORDER BY time) as prev_2m_temperature) as forecast\n",
" -- \"latest_forecast\" is a view we generated in a previous step.\n",
" -- A new forecast is created every 6 hours. To update the view to point to the latest forecast - rerun the block.\n",
" FROM weathernext_graph_derived.latest_forecast, UNNEST(latest_forecast.forecast) as forecast_point\n",
" WHERE ST_INTERSECTS(geo_area_to_cover, latest_forecast.geography_polygon)\n",
");\n"
],
"metadata": {
"id": "ag1RJVO9N8Id"
},
"id": "ag1RJVO9N8Id",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's see what's in that table:"
],
"metadata": {
"id": "ZGN_FvGFyhnu"
},
"id": "ZGN_FvGFyhnu"
},
{
"cell_type": "code",
"source": [
"%%bigquery local_forecast --project {PROJECT_ID}\n",
"\n",
"SELECT geography, forecast.time, forecast.total_precipitation_6hr, forecast.`2m_temperature`, forecast.`prev_2m_temperature`\n",
" FROM weathernext_graph_derived.latest_local_forecast\n",
" WHERE forecast.prev_2m_temperature IS NOT NULL\n",
" ORDER BY ST_ASBINARY(geography), forecast.time LIMIT 20"
],
"metadata": {
"id": "ROyUVU_dyqPx"
},
"id": "ROyUVU_dyqPx",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"display(local_forecast)"
],
"metadata": {
"id": "qDPRYN53zn5L"
},
"id": "qDPRYN53zn5L",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's move the latest forecast data to the location of the model:"
],
"metadata": {
"id": "0mFPcZuA0FnK"
},
"id": "0mFPcZuA0FnK"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"DROP TABLE IF EXISTS multimodal.latest_local_forecast;"
],
"metadata": {
"id": "-8L08fxQK9Rk"
},
"id": "-8L08fxQK9Rk",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"! bq cp -f -n '{PROJECT_ID}:weathernext_graph_derived.latest_local_forecast' '{PROJECT_ID}:multimodal.latest_local_forecast'"
],
"metadata": {
"id": "yGb6uc-B06-b"
},
"id": "yGb6uc-B06-b",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"#### Run the time-series forecast"
],
"metadata": {
"id": "RhQ_QkGxNjbg"
},
"id": "RhQ_QkGxNjbg"
},
{
"cell_type": "markdown",
"source": [
"If we were using the ARIMA_PLUS model, we could simply run the ML.FORECAST function to get time-series predictions. But ARIMA_PLUS_XREG assumes that additional features affect the forecast, so you must provide the expected feature values to the ML.FORECAST function.\n",
"\n",
"We already have most of the parts to prepare the features. Let's first test the feature preparation SQL before running the model.\n",
"\n"
],
"metadata": {
"id": "VeOl0UzUHWve"
},
"id": "VeOl0UzUHWve"
},
{
"cell_type": "code",
"source": [
"%%bigquery expected_features --project {PROJECT_ID}\n",
"\n",
"DECLARE time_zone DEFAULT \"America/New_York\";\n",
"\n",
"-- Forecast from now...\n",
"DECLARE start_ts DEFAULT CURRENT_TIMESTAMP();\n",
"-- to 5 days forward\n",
"DECLARE end_ts DEFAULT TIMESTAMP_ADD(start_ts, INTERVAL 5 DAY);\n",
"\n",
"WITH\n",
"event_timestamps AS (\n",
" SELECT TIMESTAMP(DATETIME(event_ts_in_utc, time_zone)) event_ts FROM\n",
" UNNEST(GENERATE_TIMESTAMP_ARRAY(start_ts, end_ts, INTERVAL 5 MINUTE)) as event_ts_in_utc\n",
"),\n",
"bus_stops_and_event_timestamps AS (\n",
" -- Cartesian join of the bus stops and time points\n",
" -- We only need the bus stop ids and locations, not all the meta data\n",
" SELECT bus_stops.bus_stop_id, bus_stops.location, event_ts\n",
" FROM multimodal.bus_stops, event_timestamps\n",
"),\n",
"events_and_weather AS (\n",
" SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" weather.forecast,\n",
" multimodal.temperature_approx(weather.forecast, event_ts) as temperature,\n",
" -- we are getting the latest forecast data, not historical\n",
" FROM bus_stops_and_event_timestamps events, multimodal.latest_local_forecast weather\n",
" WHERE ST_COVERS(weather.geography_polygon, events.location) AND\n",
" event_ts BETWEEN TIMESTAMP_SUB(weather.forecast.time, INTERVAL 6 HOUR) AND weather.forecast.time\n",
")\n",
"SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" -- the two features used by the model\n",
" temperature,\n",
" forecast.total_precipitation_6hr as total_precipitation_6hr,\n",
" FROM events_and_weather\n",
" -- we are going to drop these clauses later, this is just to help with visualization\n",
" ORDER by bus_stop_id, event_ts\n",
" LIMIT 50;"
],
"metadata": {
"id": "ekoHAClRID0o"
},
"id": "ekoHAClRID0o",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's see what these features look like:"
],
"metadata": {
"id": "Z4pdlEhjJDxr"
},
"id": "Z4pdlEhjJDxr"
},
{
"cell_type": "code",
"source": [
"display(expected_features)"
],
"metadata": {
"id": "_lsEtN5VJIYt"
},
"id": "_lsEtN5VJIYt",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"OK, the features look correct. One nuance: you might notice that the temperature values are unchanged for some earlier event timestamps. This is because our latest forecast table doesn't have temperature values for earlier forecasts, and the temperature approximation function just takes the current value if there is no earlier one. We could recover this data by looking at the previous forecast, but that would result in a more complex SQL statement.\n",
"\n",
"Let's run the forecast. We will embed most of the feature preparation SQL in the forecasting function. Alternatively, we could have prepared the features, saved them to a table, and used the whole table as the feature input to the forecast function."
],
"metadata": {
"id": "qjNikfTFJvK-"
},
"id": "qjNikfTFJvK-"
},
{
"cell_type": "code",
"source": [
"%%bigquery ridership_forecast --location {REGION}\n",
"\n",
"DECLARE time_zone DEFAULT \"America/New_York\";\n",
"\n",
"-- Forecast from now...\n",
"DECLARE start_ts DEFAULT CURRENT_TIMESTAMP();\n",
"-- to 5 days forward\n",
"DECLARE end_ts DEFAULT TIMESTAMP_ADD(start_ts, INTERVAL 5 DAY);\n",
"\n",
"SELECT\n",
" *\n",
"FROM\n",
" ML.FORECAST (\n",
" model `multimodal.ridership_arima_plus_xreg`,\n",
" STRUCT (1000 AS horizon, 0.8 AS confidence_level),\n",
" (\n",
"WITH\n",
"event_timestamps AS (\n",
" SELECT TIMESTAMP(DATETIME(event_ts_in_utc, time_zone)) event_ts FROM\n",
" UNNEST(GENERATE_TIMESTAMP_ARRAY(start_ts, end_ts, INTERVAL 5 MINUTE)) as event_ts_in_utc\n",
"),\n",
"bus_stops_and_event_timestamps AS (\n",
" -- Cartesian join of the bus stops and time points\n",
" -- We only need the bus stop ids and locations, not all the meta data\n",
" SELECT bus_stops.bus_stop_id, bus_stops.location, event_ts FROM multimodal.bus_stops, event_timestamps\n",
"),\n",
"events_and_weather AS (\n",
" SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" weather.forecast,\n",
" multimodal.temperature_approx(weather.forecast, event_ts) as temperature,\n",
" -- we are getting the latest forecast data, not historical\n",
" FROM bus_stops_and_event_timestamps events, multimodal.latest_local_forecast weather\n",
" WHERE ST_COVERS(weather.geography_polygon, events.location) AND\n",
" event_ts BETWEEN TIMESTAMP_SUB( weather.forecast.time, INTERVAL 6 HOUR) AND weather.forecast.time\n",
")\n",
"SELECT\n",
" bus_stop_id,\n",
" event_ts,\n",
" -- the two features used by the model\n",
" temperature,\n",
" forecast.total_precipitation_6hr as total_precipitation_6hr,\n",
" FROM events_and_weather )\n",
" );"
],
"metadata": {
"id": "KcS2GPMqktw5"
},
"id": "KcS2GPMqktw5",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"#### Visualize the forecast"
],
"metadata": {
"id": "Qjh7K1_HlXwr"
},
"id": "Qjh7K1_HlXwr"
},
{
"cell_type": "markdown",
"source": [
"Here's what the ML.FORECAST function returns:"
],
"metadata": {
"id": "1vPfTPSuVQnW"
},
"id": "1vPfTPSuVQnW"
},
{
"cell_type": "code",
"source": [
"display(ridership_forecast)"
],
"metadata": {
"id": "Bix0T7drJF_E"
},
"id": "Bix0T7drJF_E",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's visualize the forecast:"
],
"metadata": {
"id": "tleV9kMlVjKU"
},
"id": "tleV9kMlVjKU"
},
{
"cell_type": "code",
"source": [
"\n",
"bus_stop_list = list(ridership_history.bus_stop_id.unique())\n",
"bus_stop_list.sort()\n",
"\n",
"for bus_stop_id in bus_stop_list:\n",
"\n",
" historical_data = ridership_history[ridership_history.bus_stop_id==bus_stop_id]\n",
" forecast_data = ridership_forecast[ridership_forecast.bus_stop_id==bus_stop_id]\n",
" plot_historical_and_forecast(historical_data = historical_data,\n",
" timestamp_col_name = \"event_ts\",\n",
" data_col_name = \"num_riders\",\n",
" forecast_output = forecast_data,\n",
" title = bus_stop_id)"
],
"metadata": {
"id": "MI2t8RfHlbpT"
},
"id": "MI2t8RfHlbpT",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Selecting the right model"
],
"metadata": {
"id": "hM5NAO7mowoT"
},
"id": "hM5NAO7mowoT"
},
{
"cell_type": "markdown",
"source": [
"In the previous sections we saw the mechanics of creating and using two time-series forecasting models. The univariate model is simpler to create and simpler to use for forecasting; the multivariate model can give more accurate forecasts. In a real production implementation it would be important to capture the prediction results and compare them with the actual outcomes. This would help you decide whether building the multivariate model is worth it, and fine-tune other model parameters: the training data period, additional features, the effect of holidays and seasonality, etc."
],
"metadata": {
"id": "GysbMt0ho5Ff"
},
"id": "GysbMt0ho5Ff"
},
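{
"cell_type": "markdown",
"source": [
"As a sketch of how such a comparison could look: assuming the forecasts produced earlier are logged to a hypothetical `multimodal.ridership_forecast_log` table (with the standard ML.FORECAST output columns) and observed ridership lands in a hypothetical `multimodal.ridership_actuals` table, the mean absolute error per bus stop could be computed like this:"
],
"metadata": {
"id": "fcCompareSketchMd01"
},
"id": "fcCompareSketchMd01"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- Sketch only: both tables are hypothetical and would need to be populated\n",
"-- by your pipeline (logged forecasts and observed ridership).\n",
"SELECT\n",
"  f.bus_stop_id,\n",
"  AVG(ABS(f.forecast_value - a.num_riders)) AS mean_abs_error\n",
"FROM multimodal.ridership_forecast_log f\n",
"JOIN multimodal.ridership_actuals a\n",
"  ON a.bus_stop_id = f.bus_stop_id\n",
"  AND a.event_ts = f.forecast_timestamp\n",
"GROUP BY f.bus_stop_id\n",
"ORDER BY f.bus_stop_id;"
],
"metadata": {
"id": "fcCompareSketchCode01"
},
"id": "fcCompareSketchCode01",
"execution_count": null,
"outputs": []
},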
{
"cell_type": "markdown",
"source": [
"Part of the model selection process is model evaluation and forecast explanation."
],
"metadata": {
"id": "Ebv39XrfO6o-"
},
"id": "Ebv39XrfO6o-"
},
{
"cell_type": "markdown",
"source": [
"## Model evaluation"
],
"metadata": {
"id": "kMarPZ2DP4qf"
},
"id": "kMarPZ2DP4qf"
},
{
"cell_type": "code",
"source": [
"%%bigquery arima_plus_model_evaluation --project {PROJECT_ID}\n",
"\n",
"SELECT\n",
" *\n",
"FROM\n",
" ML.ARIMA_EVALUATE(MODEL multimodal.ridership_arima_plus, STRUCT(FALSE AS show_all_candidate_models))"
],
"metadata": {
"id": "AqfIvpT5Ps4E"
},
"id": "AqfIvpT5Ps4E",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"display_columns_as_rows(arima_plus_model_evaluation)"
],
"metadata": {
"id": "0FoUfb7WUlDO"
},
"id": "0FoUfb7WUlDO",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can see that there are multiple time-series models under the hood, one for each bus stop. For details on how to interpret the output of the ML.ARIMA_EVALUATE function, refer to the [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-arima-evaluate)."
],
"metadata": {
"id": "xcCjLpKHXB7a"
},
"id": "xcCjLpKHXB7a"
},
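{
"cell_type": "markdown",
"source": [
"ML.ARIMA_EVALUATE can also list every candidate model considered during the auto-ARIMA search. As a sketch, the query below ranks the candidates for each bus stop by AIC (lower is better):"
],
"metadata": {
"id": "arimaCandidatesMd01"
},
"id": "arimaCandidatesMd01"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- Sketch: show all candidate (p, d, q) models evaluated during training,\n",
"-- ordered by AIC (lower is better) within each bus stop.\n",
"SELECT\n",
"  bus_stop_id,\n",
"  non_seasonal_p,\n",
"  non_seasonal_d,\n",
"  non_seasonal_q,\n",
"  has_drift,\n",
"  AIC\n",
"FROM ML.ARIMA_EVALUATE(\n",
"  MODEL multimodal.ridership_arima_plus,\n",
"  STRUCT(TRUE AS show_all_candidate_models))\n",
"ORDER BY bus_stop_id, AIC;"
],
"metadata": {
"id": "arimaCandidatesCode01"
},
"id": "arimaCandidatesCode01",
"execution_count": null,
"outputs": []
},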
{
"cell_type": "markdown",
"source": [
"## Forecast explanation"
],
"metadata": {
"id": "KNodZZ8IQCiD"
},
"id": "KNodZZ8IQCiD"
},
{
"cell_type": "markdown",
"source": [
"To explain the forecast, call ML.EXPLAIN_FORECAST with the same parameters used for the ML.FORECAST function:"
],
"metadata": {
"id": "K2jgJeqZQHJl"
},
"id": "K2jgJeqZQHJl"
},
{
"cell_type": "code",
"source": [
"%%bigquery arima_plus_forecast_explanation --project {PROJECT_ID}\n",
"\n",
"SELECT *\n",
" FROM ML.EXPLAIN_FORECAST(\n",
" MODEL multimodal.ridership_arima_plus,\n",
" STRUCT(300 AS horizon, 0.8 AS confidence_level))"
],
"metadata": {
"id": "vUCLXzghQd3A"
},
"id": "vUCLXzghQd3A",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The output of the function contains two types of records: the ones that were used for training and the actual forecast.\n",
"\n",
"Here are a couple of the historical records:"
],
"metadata": {
"id": "ihstT3O6EM_G"
},
"id": "ihstT3O6EM_G"
},
{
"cell_type": "code",
"source": [
"data_to_show = arima_plus_forecast_explanation[\n",
" (arima_plus_forecast_explanation['bus_stop_id'] == 'bus-stop-1') &\n",
" (arima_plus_forecast_explanation['time_series_type'] == 'history')]\n",
"\n",
"\n",
"display_columns_as_rows(data_to_show.head(2))"
],
"metadata": {
"id": "VEMbJ4Q4ROzK"
},
"id": "VEMbJ4Q4ROzK",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"And here are some of the forecast records:"
],
"metadata": {
"id": "8QXl2XjWEsx9"
},
"id": "8QXl2XjWEsx9"
},
{
"cell_type": "code",
"source": [
"data_to_show = arima_plus_forecast_explanation[\n",
" (arima_plus_forecast_explanation['bus_stop_id'] == 'bus-stop-1') &\n",
" (arima_plus_forecast_explanation['time_series_type'] == 'forecast')]\n",
"\n",
"\n",
"display_columns_as_rows(data_to_show.head(2))"
],
"metadata": {
"id": "4jzn6HjYExXk"
},
"id": "4jzn6HjYExXk",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"For details on how to interpret the output of the function refer to the [BigQuery documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-explain-forecast)."
],
"metadata": {
"id": "tDBHRk4uThp6"
},
"id": "tDBHRk4uThp6"
},
{
"cell_type": "markdown",
"source": [
"# Conclusion"
],
"metadata": {
"id": "6LW15mttL7QA"
},
"id": "6LW15mttL7QA"
},
{
"cell_type": "markdown",
"source": [
"We showed how you can use two different time-series forecasting models available in BigQuery.\n",
"\n",
"We also showed how the WeatherNext Graph dataset can be used to get historical and future weather forecasts.\n",
"\n",
"There are multiple use cases for these time-series forecasts. For example, an AI agent can use the results of the ridership forecast as an input when deciding how to perform a task like this: \"In the next week, schedule the repair of bus stop #2 during the time least impactful to passengers\"."
],
"metadata": {
"id": "CcuS8-5QWgPP"
},
"id": "CcuS8-5QWgPP"
}
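,
{
"cell_type": "markdown",
"source": [
"As a sketch of that last use case, the query below uses the ARIMA_PLUS forecast to pick, for each bus stop, the forecast time point with the lowest expected ridership as a candidate maintenance window (column names follow the standard ML.FORECAST output):"
],
"metadata": {
"id": "maintWindowSketchMd01"
},
"id": "maintWindowSketchMd01"
},
{
"cell_type": "code",
"source": [
"%%bigquery --project {PROJECT_ID}\n",
"\n",
"-- Sketch: for each bus stop, keep the forecast row with the lowest\n",
"-- expected ridership as a candidate maintenance window.\n",
"SELECT\n",
"  bus_stop_id,\n",
"  forecast_timestamp,\n",
"  forecast_value\n",
"FROM ML.FORECAST(\n",
"  MODEL `multimodal.ridership_arima_plus`,\n",
"  STRUCT(1000 AS horizon, 0.8 AS confidence_level))\n",
"QUALIFY ROW_NUMBER() OVER (\n",
"  PARTITION BY bus_stop_id ORDER BY forecast_value) = 1;"
],
"metadata": {
"id": "maintWindowSketchCode01"
},
"id": "maintWindowSketchCode01",
"execution_count": null,
"outputs": []
}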
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
},
"colab": {
"provenance": [],
"name": "predictions.ipynb",
"toc_visible": true
}
},
"nbformat": 4,
"nbformat_minor": 5
}