colab-enterprise/Campaign-Performance-Forecasting-TimesFM.ipynb (839 lines of code) (raw):

{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "u2eMxVsHcUpt" }, "source": [ "### <font color='#4285f4'>Overview</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "xELYIAeekMSH" }, "source": [ "TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for univariate time-series forecasting.\n", "\n", "TimesFM is a State of the Art LLM that can predict times series data where no pre-training is required. This notebook will show a basic example so customers can then incorporate into their analytics. This is the best place to start with TimesFM before diving into the other notebooks.\n", "\n", "Process Flow:\n", "\n", "1. Gather 2 weeks of sales data\n", "2. Gather any dynamic covariates\n", " * a. Categorical - categories that affect our sales (e.g. day of the week, if a marketing campaign was in effect)\n", " * b. Numerical - numeric values that affect our sales (e.g. temperature)\n", "\n", " For any dynamic covariates we need to add 7 more values since we want to predict the next 7 days of sales. We will know if we are running a marketing campaign and we can check the weather forecast for the future temperature.\n", "3. Gather any static covariates\n", " * a. Categorical - categories that affect our sales (e.g. the menu item we are selling)\n", " * b. Numerical - numeric values that affect our sales (e.g. the price of the menu item)\n", "4. Specify the frequency\n", " * a. 0 for high frequency (default), 1 for medium, and 2 for low.\n", "5. Load the model\n", "6. Run the prediction\n", "\n", "Notes:\n", "\n", "- TimesFM allows you to perform time series forcasting with pretraining a model.\n", "- Run this on a e2-standard-8 machine with 250 GB of disk.\n", "- A GPU is **not** required for testing purposes in this notebook.\n", "- You can deploy TimesFM via the Vertex Model Garden **utilizing a GPU**.\n", "\n", "Cost:\n", "* Low: Using TimesFM locally\n", "* Medium: Remember to stop your Colab Enterprise Notebook Runtime\n", "\n", "Author:\n", "* Adam Paternostro" ] }, { "cell_type": "markdown", "metadata": { "id": "XnryEuWs1L3g" }, "source": [ "**About TimesFM**\n", "- View GitHub repository: [Link](https://github.com/google-research/timesfm/)\n", "- TimesFM is a \"State-of-the-Art Large Language Models\". These are the most advanced and powerful language models currently available.\n", "- TimesFM supports univariate time-series forecasting\n", " - This is like trying to predict what the temperature will be tomorrow, based only on the past temperature data. We're not looking at other factors like rainfall or humidity, just the temperature itself. It's like saying, \"Based on how the temperature has changed in the past, what's my best guess for tomorrow?\"\n", "- TimesFM also supports Covariate/Multivariate support\n", " - Now imagine we want to improve our predictions by considering other factors that might influence the temperature, like rainfall or humidity. That's where covariate/multivariate support comes in. It's like saying, \"Okay, I know past temperatures are important, but what if I also looked at past rainfall and humidity to make my prediction even better?\"\n", " - Covariates: These are the additional factors (like rainfall and humidity) that we think might influence the thing we're trying to predict. They're like sidekicks helping us make a more informed guess.\n", " - Multivariate: This just means we're now dealing with multiple variables (temperature, rainfall, humidity) instead of just one.\n", " So with covariate/multivariate support, our forecasting model gets smarter. It can learn how changes in rainfall or humidity tend to affect the temperature, and use that information to make more accurate predictions. It's like having a team of experts working together to solve a puzzle, instead of just one person trying to figure it out alone.\n", " - Notebook with covariates ([link](https://github.com/google-research/timesfm/blob/master/notebooks/covariates.ipynb))\n", "- Decoder-only patched-transformer architecture\n", " - This is getting into the technical nuts and bolts of how the model is built.\n", " - \"Transformer\" is a type of neural network architecture that has become very popular for language tasks.\n", " - \"Decoder-only\" means it's specifically designed for generating text (like in translation or writing tasks), not for understanding and analyzing existing text.\n", " - \"Patched\" likely refers to some modifications made to the basic transformer design to make it more efficient or better suited to the specific task.\n", "- Can handle different context and horizon length\n", " - \"Context\" refers to the surrounding information the model uses to make predictions. So, this model can work with varying amounts of context, from short sentences to longer passages.\n", " - \"Horizon length\" is how far into the future the model is trying to predict. This design can handle both short-term and longer-term predictions.\n", "- Fast inference due to patching\n", " - \"Inference\" is the process of using the model to make predictions.\n", " - The \"patching\" mentioned earlier helps make this process faster. This is important for real-world applications where you need quick responses.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### <font color='#4285f4'>Video Walkthrough</font>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[![Video](https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Videos/adam-paternostro-video.png)](https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Videos/Campaign-Performance-Forecasting-TimesFM.mp4)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import HTML\n", "\n", "HTML(\"\"\"\n", "<video width=\"800\" height=\"600\" controls>\n", " <source src=\"https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Videos/Campaign-Performance-Forecasting-TimesFM.mp4\" type=\"video/mp4\">\n", " Your browser does not support the video tag.\n", "</video>\n", "\"\"\")" ] }, { "cell_type": "markdown", "metadata": { "id": "zGv9RMvk60SC" }, "source": [ "### <font color='#4285f4'>License</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "jKMvg3Zu65ir" }, "source": [ "```\n", "# Copyright 2024 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "```" ] }, { "cell_type": "markdown", "metadata": { "id": "5OSlvfBidakD" }, "source": [ "### <font color='#4285f4'>Deploy TimesFM</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "G3qHbG1QdeeV" }, "source": [ "1. Open Vertex Model Garden\n", " - https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/timesfm\n", "2. Click the Deploy button\n", "3. Select\n", " - Resource Id: google/timesfm-v20240828\n", " - Model Name: (leave default - name does not matter) \n", " - Endpoint name: (leave default - name does not matter)\n", " - Region: us-central1 (if you change you need to change the **Initialize** variables below)\n", " - Machine spec: (leave default - n1-standard-8)\n", "4. Click Deploy\n", "5. Wait 20 minutes\n", "6. Open Vertex Model Registry\n", " - https://console.cloud.google.com/vertex-ai/models\n", "7. Click on the model name\n", "8. Click on the model name under \"Deploy your model\"\n", "9. Click on \"Sample Request\" (at the top)\n", "10. Copy the ```ENDPOINT_ID=\"000000000000000000\"```\n", "11. Update the variable endpoint_id in the **Initialize** code below.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### TimesFM Deployment Video" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[![TimesFM Deployment Video](https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Videos/adam-paternostro-video.png)](https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Videos/Campaign-Performance-Forecasting-TimesFM-Install.mp4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import HTML\n", "\n", "HTML(\"\"\"\n", "<h2>Deploying TimesFM to a Vertex AI Endpoint Instructions</h2>\n", "<video width=\"800\" height=\"600\" controls>\n", " <source src=\"https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Videos/Campaign-Performance-Forecasting-TimesFM-Install.mp4\" type=\"video/mp4\">\n", " Your browser does not support the video tag.\n", "</video>\n", "\"\"\")" ] }, { "cell_type": "markdown", "metadata": { "id": "SfPcM-Vw1L3g" }, "source": [ "### <font color='#4285f4'>Pip installs</font>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ieaRyjdaBxWK" }, "outputs": [], "source": [ "# PIP Installs\n", "import sys\n", "\n", "# https://PLACEHOLDER.com/index.html\n", "\n", "# For better performance and production, deploy to Vertex AI endpoint with GPU\n", "# This takes about 5 minutes to install and you will need to reset your runtime\n", "# !{sys.executable} -m pip install timesfm <- THERE ARE TOO MANY DEPENDENCIES TO EASILY TO THIS IN COLAB" ] }, { "cell_type": "markdown", "metadata": { "id": "efaJ07zhVI_N" }, "source": [ "### <font color='#4285f4'>Initialize</font>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tuKLZfCbVNic" }, "outputs": [], "source": [ "import IPython.display\n", "import google.auth\n", "import requests\n", "import json\n", "import uuid\n", "import base64\n", "import os\n", "import cv2\n", "import random\n", "import time\n", "import datetime\n", "import base64\n", "import random" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oZc-kjKAVNY1" }, "outputs": [], "source": [ "# Set these (run this cell to verify the output)\n", "\n", "endpoint_id=\"000000000000000000\" # <- YOU MUST SET THIS !!!!\n", "\n", "bigquery_location = \"${bigquery_location}\"\n", "region = \"${region}\"\n", "location = \"${location}\"\n", "storage_account = \"${chocolate_ai_bucket}\"\n", "\n", "# Get the current date and time\n", "now = datetime.datetime.now()\n", "\n", "# Format the date and time as desired\n", "formatted_date = now.strftime(\"%Y-%m-%d-%H-%M\")\n", "\n", "\n", "# Get some values using gcloud\n", "project_id = !(gcloud config get-value project)\n", "user = !(gcloud auth list --filter=status:ACTIVE --format=\"value(account)\")\n", "\n", "if len(project_id) != 1:\n", " raise RuntimeError(f\"project_id is not set: {project_id}\")\n", "project_id = project_id[0]\n", "\n", "if len(user) != 1:\n", " raise RuntimeError(f\"user is not set: {user}\")\n", "user = user[0]\n", "\n", "print(f\"project_id = {project_id}\")\n", "print(f\"user = {user}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "5tCZETkqUvoo" }, "source": [ "### <font color='#4285f4'>Helper Methods</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "uvNzLg_jU1EZ" }, "source": [ "#### restAPIHelper\n", "Calls the Google Cloud REST API using the current users credentials." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "omDBd355U2vi" }, "outputs": [], "source": [ "def restAPIHelper(url: str, http_verb: str, request_body: str) -> str:\n", " \"\"\"Calls the Google Cloud REST API passing in the current users credentials\"\"\"\n", "\n", " import requests\n", " import google.auth\n", " import json\n", "\n", " # Get an access token based upon the current user\n", " creds, project = google.auth.default()\n", " auth_req = google.auth.transport.requests.Request()\n", " creds.refresh(auth_req)\n", " access_token=creds.token\n", "\n", " headers = {\n", " \"Content-Type\" : \"application/json\",\n", " \"Authorization\" : \"Bearer \" + access_token\n", " }\n", "\n", " if http_verb == \"GET\":\n", " response = requests.get(url, headers=headers)\n", " elif http_verb == \"POST\":\n", " response = requests.post(url, json=request_body, headers=headers)\n", " elif http_verb == \"PUT\":\n", " response = requests.put(url, json=request_body, headers=headers)\n", " elif http_verb == \"PATCH\":\n", " response = requests.patch(url, json=request_body, headers=headers)\n", " elif http_verb == \"DELETE\":\n", " response = requests.delete(url, headers=headers)\n", " else:\n", " raise RuntimeError(f\"Unknown HTTP verb: {http_verb}\")\n", "\n", " if response.status_code == 200:\n", " return json.loads(response.content)\n", " #image_data = json.loads(response.content)[\"predictions\"][0][\"bytesBase64Encoded\"]\n", " else:\n", " error = f\"Error restAPIHelper -> ' Status: '{response.status_code}' Text: '{response.text}'\"\n", " raise RuntimeError(error)" ] }, { "cell_type": "markdown", "metadata": { "id": "ewl3wFMBU3jB" }, "source": [ "#### timesFMInference\n", "Calls TimesFM Vertex Model Endpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FjpDTDR6U36h" }, "outputs": [], "source": [ "def timesFMInference(project_number, endpoint_id, payload):\n", " url = f\"https://{location}-aiplatform.googleapis.com/v1/projects/{project_number}/locations/{location}/endpoints/{endpoint_id}:predict\"\n", " # print(f\"url: {url}\")\n", " response = restAPIHelper(url, http_verb=\"POST\", request_body=payload)\n", " # print(f\"response: {response}\")\n", " return response" ] }, { "cell_type": "markdown", "metadata": { "id": "ydB-qC_dVt0k" }, "source": [ "#### getProjectNumber\n", "Gets the project number from a project id" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "PEEEjUEOVs1H" }, "outputs": [], "source": [ "def getProjectNumber(project_id):\n", " \"\"\"Batch activates service apis\"\"\"\n", "\n", " url = f\"https://cloudresourcemanager.googleapis.com/v1/projects/{project_id}\"\n", " json_result = restAPIHelper(url, \"GET\", None)\n", " print(f\"getProjectNumber (GET) json_result: {json_result}\")\n", "\n", " project_number = json_result[\"projectNumber\"]\n", " return project_number" ] }, { "cell_type": "markdown", "metadata": { "id": "2d24Ai71MQLE" }, "source": [ "### <font color='#4285f4'>Tutorial: Sales Forecast (Marketing Campaign, Day of Week, Temperature)</font>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "BO7wjTAoOhpG" }, "outputs": [], "source": [ "# We want to predict our chocolate ai sales based upon past sales, past marketing campaigns and the temperature.\n", "\n", "# Let's view our example data.\n", "# We have 2 weeks of existings sales data (columns C through P)\n", "# For the sales data we know:\n", "# 1. If a marketing campaign was taking place\n", "# 2. The day of the week (maybe weekends are busier?)\n", "# 3. The temperature (maywe we sell more on cold days that hot?)\n", "# We have the price and item name which are \"static\"\n", "\n", "# We want to predict the next 1 week of sales data\n", "# We need to provide if we will be running a marketing campaign, the temperature (so get the next weeks weather data) and the day of the week\n", "# We can then run our prediction\n", "\n", "from IPython.display import Image\n", "Image(url='https://storage.googleapis.com/data-analytics-golden-demo/chocolate-ai/v1/Artifacts/TimesFM.png', width=1200)" ] }, { "cell_type": "markdown", "metadata": { "id": "WyZCO8gNj6jW" }, "source": [ "### <font color='#4285f4'>Configure TimesFM</font>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "au6UXu381L3h" }, "outputs": [], "source": [ "# import timesfm" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Nt2AW7gpNbLE" }, "outputs": [], "source": [ "context_len = 512\n", "horizon_len = 7 # Predict next 7 days, this could be 128 without requiring compute (129 would be a step up). This is more of the max horizon len.\n", "input_patch_len = 32\n", "output_patch_len = 128\n", "num_layers = 20\n", "model_dims = 1280\n", "timesfm_backend = \"cpu\" # cpu, gpu or cuda\n", "xreg_mode = \"xreg + timesfm\"\n", "\n", "# from jax._src import config\n", "# config.update(\"jax_platforms\", timesfm_backend)\n", "\n", "# model = timesfm.TimesFm(\n", "# context_len=context_len,\n", "# horizon_len=horizon_len,\n", "# input_patch_len=input_patch_len,\n", "# output_patch_len=output_patch_len,\n", "# num_layers=num_layers,\n", "# model_dims=model_dims,\n", "# backend=timesfm_backend,\n", "# )\n", "\n", "# Load the model\n", "# This can produce \"ERROR:absl:For checkpoint version > 1.0, we require users to provide\", you can ignore that\n", "# model.load_from_checkpoint(repo_id=\"google/timesfm-1.0-200m\")" ] }, { "cell_type": "markdown", "metadata": { "id": "ovlvCGbZj2_x" }, "source": [ "### <font color='#4285f4'>Run the Prediction</font>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2mZAuyhGMo1-" }, "outputs": [], "source": [ "# This is our sales data, 2 weeks of data\n", "# Array size: 14 elements in the inner array since we have 2 week a sales data\n", "inputs = [[100,105,125,133,145,107,156,101,106,105,105,104,136,165]]\n", "\n", "# These are our categorical covariates (additional factors that we think might influence the thing we're trying to predict).\n", "# Here we consider the day of the week and if a marketing campaign was in progress\n", "# Array size: 21 elements in the inner array since we have 2 week a sales data and are predicting 7 more days. We need to provide the future 7 days of data.\n", "dynamic_categorical_covariates = {\n", " \"day_of_week\": [[1,2,3,4,5,6,7,1,2,3,4,5,6,7,1,2,3,4,5,6,7]],\n", " \"marketing_campaign\": [[\"N\",\"N\",\"Y\",\"Y\",\"Y\",\"N\",\"N\",\"N\",\"N\",\"N\",\"N\",\"N\",\"Y\",\"N\",\"N\",\"Y\",\"N\",\"N\",\"N\",\"N\",\"N\"]]\n", "}\n", "\n", "# These are our numerical covariates (additional numeric factors, just like the categories, but numbers)\n", "# Here we consider the temperature of the day\n", "# Array size: 21 elements in the inner array since we have 2 week a sales data and are predicting 7 more days. We need to provide the future 7 days of data.\n", "dynamic_numerical_covariates = {\n", " \"temperature\": [[90,90,90,90,90,90,100,90,90,90,90,90,90,100,90,90,90,90,90,100,90]]\n", "}\n", "\n", "# These are our static covariates (additional factors that we think are fixed, like the price of the product)\n", "# Here we consider the price of the product\n", "# Array size: 1 element in the inner array since we have 1 static covariate for the entire prediction\n", "static_numerical_covariates = {\n", " \"price\": [7.95]\n", "}\n", "\n", "# These are our static categorical covariates (additional factors that we think are fixed)\n", "# Here we consider the menu item\n", "# Array size: 1 element in the inner array since we have 1 static covariate for the entire prediction\n", "static_categorical_covariates = {\n", " \"menu_item\" : [\"cafe-mocha\"]\n", "}\n", "\n", "# frequency of each context time series. 0 for high frequency (default), 1 for medium, and 2 for low.\n", "# Array size: 1 element in the inner array since we have 1 frequency for the entire prediction\n", "frequency = [0]\n", "\n", "# model_forecast, xreg_forecast = model.forecast_with_covariates(\n", "# inputs=inputs,\n", "# dynamic_categorical_covariates=dynamic_categorical_covariates,\n", "# dynamic_numerical_covariates=dynamic_numerical_covariates,\n", "# static_numerical_covariates=static_numerical_covariates,\n", "# static_categorical_covariates=static_categorical_covariates,\n", "# freq=frequency,\n", "# xreg_mode=\"xreg + timesfm\", # default\n", "# ridge=0.0,\n", "# force_on_cpu=False,\n", "# normalize_xreg_target_per_input=True, # default\n", "# )\n", "\n", "# See the next 7 days of forecasted values\n", "# model_forecast[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8vNG0pwvRZjo" }, "outputs": [], "source": [ "# Create the JSON payload to the Vertex Model Endpoint\n", "# You can have multiple array elements under \"instances\" and TimesFM will create a prediction for each\n", "# The array elements from above are placed in the payload\n", "payload = {\n", " \"instances\": [\n", " {\n", " \"input\": inputs[0],\n", " \"freq\": frequency[0],\n", " \"horizon\": horizon_len,\n", " \"dynamic_numerical_covariates\": {\n", " \"temperature\": dynamic_numerical_covariates[\"temperature\"][0]\n", " },\n", " \"dynamic_categorical_covariates\": {\n", " \"day_of_week\": dynamic_categorical_covariates[\"day_of_week\"][0],\n", " \"marketing_campaign\": dynamic_categorical_covariates[\"marketing_campaign\"][0]\n", " },\n", " \"static_numerical_covariates\": {\n", " \"price\": static_numerical_covariates[\"price\"][0]\n", " },\n", " \"static_categorical_covariates\": {\n", " \"menu_item\": static_categorical_covariates[\"menu_item\"][0]\n", " },\n", " \"xreg_kwargs\": {\n", " \"xreg_mode\" : xreg_mode\n", " }\n", " }\n", " ]\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vPBiCltjWfd4" }, "outputs": [], "source": [ "# Get the project number in order to call teh endpoint\n", "project_number = getProjectNumber(project_id)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5wE1frYDXREm" }, "outputs": [], "source": [ "# Calls TimeFM to make a prediction\n", "times_fm_inference = timesFMInference(project_number, endpoint_id, payload)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1zpXfDsoXV_U" }, "outputs": [], "source": [ "# Create an array of forecasted elements\n", "model_forecast = [times_fm_inference[\"predictions\"][0][\"point_forecast\"]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qE_oUrIEXje_" }, "outputs": [], "source": [ "# View our prediction\n", "model_forecast" ] }, { "cell_type": "markdown", "metadata": { "id": "MFsRl_rL1L3h" }, "source": [ "### <font color='#4285f4'>Visualize the results</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "m2rNGUnzD8e3" }, "source": [ "- The light blue is a past known sales data\n", "- The dark blue is our predicted data by TimesFM\n", "- The labels that are bolded are days we are running a marketing campaign\n", "- The yellow line is our temperature\n", "\n", "**Prediction**\n", "- We predict higher sales on the 16th and 20th\n", " - The 16th is a marketing campaign (the label is bolded)\n", " - The 20th is a high temperature day (the line spikes)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4igTKeR-FgTL" }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "# Data\n", "sales = inputs[0]\n", "predicted_sales = model_forecast[0]\n", "marketing_campaigns = dynamic_categorical_covariates[\"marketing_campaign\"]\n", "temperature = dynamic_numerical_covariates[\"temperature\"][0]\n", "\n", "# Create x-axis values\n", "all_days = list(range(1, 22)) # Days as integers from 1 to 21\n", "days = list(range(1, 15)) # Adjust range if needed\n", "days_predicted = list(range(15, 22))\n", "\n", "# Create the plot\n", "fig, ax1 = plt.subplots(figsize=(12, 6))\n", "\n", "# Plot sales as a bar chart\n", "ax1.bar(days, sales, color='#{:02x}{:02x}{:02x}'.format(92, 200, 243), label='Sales')\n", "\n", "# Plot predicted sales as a bar chart\n", "ax1.bar(days_predicted, predicted_sales, color='#{:02x}{:02x}{:02x}'.format(53, 106, 228), label='Predicted Sales')\n", "\n", "# Set x-axis ticks and labels for all days\n", "ax1.set_xticks(all_days)\n", "ax1.set_xticklabels(all_days)\n", "\n", "ax1.set_xlabel('Days (bold means marketing campaign)')\n", "ax1.set_ylabel('Sales', color=\"black\")\n", "ax1.tick_params(axis='y', labelcolor='#{:02x}{:02x}{:02x}'.format(53, 106, 228))\n", "\n", "# Create a second y-axis for temperature\n", "ax2 = ax1.twinx()\n", "ax2.plot(all_days, temperature, color='#{:02x}{:02x}{:02x}'.format(176, 202, 78), linestyle = '--', alpha = 1)\n", "ax2.set_ylabel('Temperature', color=\"black\")\n", "ax2.tick_params(axis='y', labelcolor='#{:02x}{:02x}{:02x}'.format(176, 202, 78))\n", "\n", "# Add marketing campaign indicators (bold the \"days\" when we had a marketing campaign)\n", "for i, campaign in enumerate(marketing_campaigns[0]):\n", " if campaign == 'Y':\n", " ax1.get_xticklabels()[i].set_weight('bold')\n", "\n", "# Add legend\n", "lines, labels = ax1.get_legend_handles_labels()\n", "lines2, labels2 = ax2.get_legend_handles_labels()\n", "ax1.legend(lines + lines2, labels + labels2)\n", "\n", "plt.title('Predicted Sales by TimesFM')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "id": "YWLRoH_R1L3h" }, "source": [ "### <font color='#4285f4'>Clean Up</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "W4r27FxbizbT" }, "source": [ "**To save on costs:**\n", "\n", "1. Open Vertex Model Registry\n", " - https://console.cloud.google.com/vertex-ai/models\n", "2. Click on the model name\n", "3. Under \"Deploy your model\" click the 3 dots and select \"Undeploy Model\"\n", "4. Under \"Deploy your model\" click the 3 dots and select \"Delete Endpoint\"\n", "5. Go back one screen and click the 3 dots and select \"Delete Model\"\n" ] }, { "cell_type": "markdown", "metadata": { "id": "ZTwvp1BB1L3h" }, "source": [ "### <font color='#4285f4'>Reference Links</font>" ] }, { "cell_type": "markdown", "metadata": { "id": "bH8-Vz-X1L3h" }, "source": [ "- [GitHub](https://github.com/google-research/timesfm)\n", "- [Vertex AI Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/timesfm)" ] } ], "metadata": { "colab": { "collapsed_sections": [ "u2eMxVsHcUpt", "zGv9RMvk60SC", "SfPcM-Vw1L3g", "efaJ07zhVI_N", "5tCZETkqUvoo", "uvNzLg_jU1EZ", "ewl3wFMBU3jB", "ydB-qC_dVt0k", "2d24Ai71MQLE", "WyZCO8gNj6jW", "ovlvCGbZj2_x", "YWLRoH_R1L3h", "ZTwvp1BB1L3h" ], "name": "Campaign-Performance-Forecasting-TimesFM", "private_outputs": true, "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }