notebooks/part_2_large_scale_understanding.ipynb

{ "cells": [ { "cell_type": "code", "id": "y5uylk8B8TegnfJxTcMPklE2", "metadata": { "tags": [], "id": "y5uylk8B8TegnfJxTcMPklE2" }, "source": [ "# Copyright 2025 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not ue this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "# CleanSight (Part 2): Large-scale multimodal understanding\n", "\n", "<table align=\"left\">\n", "<td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/data-to-ai/blob/main/notebooks/part_2_large_scale_understanding.ipynb\">\n", " <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", "</td>\n", "<td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fdata-to-ai%2Fmain%2Fnotebooks%2Fpart_2_large_scale_understanding.ipynb\">\n", " <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", "</td>\n", "<td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/data-to-ai/main/notebooks/part_2_large_scale_understanding.ipynb\">\n", " <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n", " </a>\n", "</td>\n", "<td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/bigquery/import?url=https://github.com/GoogleCloudPlatform/data-to-ai/blob/main/notebooks/part_2_large_scale_understanding.ipynb\">\n", " <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/bigquery/v1/32px.svg\" alt=\"BigQuery Studio logo\"><br> Open in BigQuery Studio\n", " </a>\n", "</td>\n", "<td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/data-to-ai/blob/main/notebooks/part_2_large_scale_understanding.ipynb\">\n", " <img width=\"32px\" src=\"https://upload.wikimedia.org/wikipedia/commons/9/91/Octicons-mark-github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", "</table>" ], "metadata": { "id": "yumqITSFiffv" }, "id": "yumqITSFiffv" }, { "cell_type": "markdown", "source": [ "## Note\n", "\n", "Because this notebook involves large amounts of data (thousands of images), **some of the steps may take several minutes to complete**. If you are demonstrating these capabilities in a live setting, you may want to pre-run some of these steps.\n", "\n", "## Overview\n", "\n", "This notebook is the second part of the CleanSight example application. 
Whereas Part 1 represents an operational system that ingests and processes bus stop images as they arrive, this notebook focuses on the large-scale AI capabilities available once a large number of images has been collected.\n", "\n", "Moreover, this notebook will compare the different **embedding models** and **vector search** methods to help you determine which is right for your use case.\n", "\n", "If you have not already, run the Part 1 notebook which will set up some required infrastructure and resources. Where possible, the same buckets, connections, and other resources are re-used from the Part 1 notebook." ], "metadata": { "id": "ktNVdhAwkEjP" }, "id": "ktNVdhAwkEjP" }, { "cell_type": "code", "source": [ "%pip install --upgrade --user google-cloud-aiplatform" ], "metadata": { "id": "h_KvYlGg5DzS" }, "id": "h_KvYlGg5DzS", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "from IPython.display import HTML, display\n", "import pandas as pd\n", "\n", "PROJECT_ID = \"<your project>\" # @param {type:\"string\"}\n", "REGION = \"us-central1\" # @param {type:\"string\"}\n", "BQ_DATASET = \"multimodal\"\n", "\n", "if PROJECT_ID == \"<your project>\":\n", " PROJECT_ID = !gcloud config get-value project\n", " PROJECT_ID = PROJECT_ID[0]\n", "\n", "BUCKET_NAME = f\"{PROJECT_ID}-multimodal\"\n", "SOURCE_PATH = f\"gs://{BUCKET_NAME}/sources\"\n", "TARGET_PATH = f\"gs://{BUCKET_NAME}/target\"\n", "USER_AGENT = \"cloud-solutions/data-to-ai-nb-usage-v1\"\n", "\n", "PROJECT_ID" ], "metadata": { "id": "ZJl4dZ6FurrZ" }, "id": "ZJl4dZ6FurrZ", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "def preview_image(url):\n", " if pd.notna(url):\n", " return f'<img src=\"{url}\" style=\"width:400px; height:auto; transition: transform 0.25s ease; border: 1px solid black;\" onmouseover=\"this.style.transform=\\'scale(1.5)\\';\" onmouseout=\"this.style.transform=\\'scale(1.0)\\';\">'\n", " else:\n", " return None" ], "metadata": { "id": "akXiqbzSaPGX" }, "id": "akXiqbzSaPGX", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "## 1. Import full sample collection and extend schema\n", "\n", "This step will copy over 5,000 bus stop images into your bucket, and will take 3-5 minutes. This is a collection of synthetic images based on real bus stop photos. They have been edited in an automated process using Gemini and Imagen to produce our example dataset." ], "metadata": { "id": "Z00JYZJpr0QC" }, "id": "Z00JYZJpr0QC" }, { "cell_type": "code", "source": [ "!gcloud storage cp -r gs://bus-stops-open-access/edited-images/* {TARGET_PATH}" ], "metadata": { "collapsed": true, "id": "IgrqaYJZuWV4" }, "id": "IgrqaYJZuWV4", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Additionally, we want to analyze and store a few additional details about each bus stop for future search and analysis, and so we'll extend the `image_reports` table." ], "metadata": { "id": "L299q0U_a-uj" }, "id": "L299q0U_a-uj" }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "ALTER TABLE `multimodal.image_reports` ADD COLUMN cleanliness_description STRING;\n", "ALTER TABLE `multimodal.image_reports` ADD COLUMN safety_description STRING;" ], "metadata": { "id": "vithtzMiaHF6" }, "id": "vithtzMiaHF6", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "## 2. 
Generate image reports" ], "metadata": { "id": "vJC_sSpK3DVj" }, "id": "vJC_sSpK3DVj" }, { "cell_type": "markdown", "source": [ "Now let's revisit the table `image_reports`. For each image in the object table `objects`, we want to **enrich** the image record with a generic text description, a safety rating, a cleanliness rating, and descriptions of the safety/cleanliness observations for later retrieval and analysis.\n", "\n", "The Part 1 notebook showed how to perform this inside BigQuery. Here in Part 2, the following code section will show how you would do this in Python using [controlled generation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output)." ], "metadata": { "id": "d-Kfuj0o1S5s" }, "id": "d-Kfuj0o1S5s" }, { "cell_type": "code", "source": [ "import vertexai\n", "from vertexai import generative_models\n", "from vertexai.generative_models import GenerationConfig, GenerativeModel, Part, Image" ], "metadata": { "id": "dnM0oWfz2_EG" }, "id": "dnM0oWfz2_EG", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "image_reports_prompt_text = \"\"\"\n", " You are given an image of a bus stop. Analyze the image and provide the following:\n", " - a brief, generic description of the bus stop and its surroundings\n", " - count the number of people visible in the image\n", " - a rating of the cleanliness on a scale of 1 to 3 inclusive, where 3 indicates perfectly clean, and 1 indicates dirty and/or poor overall condition.\n", " - a rating of the safety of the bus stop on a scale of 1 to 3 inclusive, where 3 indicates no safety concerns, and 1 indicates unsafe conditions.\n", " - briefly describe cleanliness of the bus stop and its overall condition\n", " - briefly describe any safety concerns apparent in the image, such as low-hanging power lines, snow or ice, or vehicles in the loading area.\n", "\"\"\"\n", "\n", "response_schema = {\n", " 'type': 'object',\n", " 'properties': {\n", " 'description': {\n", " 'type': 'string'\n", " },\n", " 'number_of_people': {\n", " 'type': 'integer'\n", " },\n", " 'cleanliness_level': {\n", " 'type': 'integer'\n", " },\n", " 'safety_level': {\n", " 'type': 'integer'\n", " },\n", " 'cleanliness_description': {\n", " 'type': 'string'\n", " },\n", " 'safety_description': {\n", " 'type': 'string'\n", " }\n", " }\n", "}\n", "generation_config = GenerationConfig(\n", " candidate_count=1,\n", " max_output_tokens=1024,\n", " response_mime_type='application/json',\n", " response_schema=response_schema\n", ")\n", "\n", "model = GenerativeModel(model_name='gemini-2.0-flash-lite-001', generation_config=generation_config)" ], "metadata": { "id": "gSDkQfX83UFW" }, "id": "gSDkQfX83UFW", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "from google.cloud import bigquery, storage\n", "from google.api_core.client_info import ClientInfo\n", "import time, json, pprint\n", "\n", "client = bigquery.Client(client_info=ClientInfo(user_agent=USER_AGENT))\n", "\n", "ot_sql = f'select * from `multimodal.objects` order by updated desc limit 1'\n", "query_job = client.query(ot_sql)\n", "rows = query_job.result()\n", "\n", "for i, image in enumerate(rows):\n", " # including small delay to avoid potential quota issues\n", " time.sleep(3)\n", "\n", " image_metadata = { m['name']:m['value'] for m in image.metadata }\n", " if 'image_gen_prompt' in image_metadata:\n", " image_metadata.pop('image_gen_prompt')\n", " if 'source_image_uri' in image_metadata:\n", " image_metadata.pop('source_image_uri')\n", 
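" # (metadata describing how the synthetic image was generated is not part of the report\n", " # we want to build; the remaining entries, such as image_id and event_date, are merged\n", " # into the generated report row further below)\n",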
"\n", " prompt_image = Part.from_uri(image.uri, image.content_type)\n", "\n", " prompt = [image_reports_prompt_text, prompt_image]\n", " try:\n", " response = model.generate_content(prompt)\n", " json_response = json.loads(response.text)\n", "\n", " bq_row = { **json_response, **image_metadata }\n", "\n", " # the Gemini response schema and the object metadata mostly match our table,\n", " # but we still need to rename and/or remove a couple things\n", " bq_row['updated'] = bq_row.pop('event_date') + \" 00:00\"\n", " bq_row['report_id'] = bq_row.pop('image_id')\n", " bq_row['uri'] = image.uri\n", "\n", " pprint.pp(bq_row)\n", "\n", " # if you're using a script like this to generate data, this is where you\n", " # might insert the synthetic record into the BQ table.\n", " #client.insert_rows_json(client.get_table('multimodal.image_reports'), [bq_row])\n", "\n", " except Exception as e:\n", " print(e)\n" ], "metadata": { "id": "mnGIWSqv3gcq", "collapsed": true }, "id": "mnGIWSqv3gcq", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Of course**, we're working with over 5,000 images of bus stops and don't have time here to generate descriptions and ratings for all of them in this demo. So, we've pre-generated the `bus_stops` and `image_reports` tables that you can load in directly.\n", "\n", "A new table in this notebook, `bus_stops`, represents the physical bus stop with an address and a geographic location. Each record in `image_reports` is associated with a `bus_stop`." ], "metadata": { "id": "FBGqU4eTroVz" }, "id": "FBGqU4eTroVz" }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "LOAD DATA OVERWRITE `multimodal.bus_stops`\n", "FROM FILES (\n", " format = 'NEWLINE_DELIMITED_JSON',\n", " json_extension = 'GEOJSON',\n", " uris = ['gs://bus-stops-open-access/loader-data/bus_stops000000000000.json']);\n", "\n", "LOAD DATA OVERWRITE `multimodal.image_reports`\n", "FROM FILES (\n", " format = 'JSON',\n", " uris = ['gs://bus-stops-open-access/loader-data/image_reports_g15.json']);\n", "\n", "-- match the URIs in the sample data to those in our local object table\n", "UPDATE `multimodal.image_reports` report\n", "SET uri = obj.uri\n", "FROM `multimodal.objects` obj\n", "WHERE report_id = (select value from unnest(metadata) where name='image_id');\n", "\n", "SELECT count(*) from `multimodal.image_reports`;" ], "metadata": { "id": "qH-R3dQZsA-5" }, "id": "qH-R3dQZsA-5", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "# 3. Create text and multimodal embeddings\n", "\n", "Here we are reprising the \"Semantic Similar Search\" section of the Part 1 notebook -- except this time, we have enough data needed to build a proper BigQuery vector index!\n", "\n", "Again we'll start by generating embeddings for each report description, and store them in the `image_reports_vector_db` table.\n", "\n", "This step will take approximately 60 seconds to complete on our 5,000+ image table." 
], "metadata": { "id": "ByNWlGJrvLc2" }, "id": "ByNWlGJrvLc2" }, { "cell_type": "code", "source": [ "%%bigquery\n", "CREATE OR REPLACE TABLE `multimodal.image_reports_vector_db` AS (\n", "SELECT\n", " report_id, uri, bus_stop_id, content as description,\n", " cleanliness_level, safety_level,\n", " ml_generate_embedding_result AS embedding,\n", " ml_generate_embedding_status AS status\n", "FROM\n", " ML.GENERATE_EMBEDDING(\n", " MODEL `multimodal.text_embedding_model`,\n", " (SELECT * EXCEPT(description), description as content FROM `multimodal.image_reports` WHERE description IS NOT NULL),\n", " STRUCT('SEMANTIC_SIMILARITY' as task_type)\n", " )\n", ");" ], "metadata": { "id": "WmSq3RJWvjY7" }, "id": "WmSq3RJWvjY7", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "CREATE VECTOR INDEX reports_text_index ON `multimodal.image_reports_vector_db`(embedding)\n", "STORING (report_id, uri, bus_stop_id, description, cleanliness_level, safety_level)\n", "OPTIONS (index_type = 'IVF', distance_type = 'COSINE')" ], "metadata": { "id": "twwzEWDRvW7Q" }, "id": "twwzEWDRvW7Q", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Now, for something a little different...**\n", "\n", "Instead of 1) generating text descriptions and then 2) generating text embeddings, we can generate **multimodal** embeddings directly from the object table using the [`multimodalembedding`](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/multimodal-embeddings-api) model.\n", "\n", "Let's do this, and then we can compare and contrast searches between the two approaches." ], "metadata": { "id": "tBIhoFMz16Hl" }, "id": "tBIhoFMz16Hl" }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "CREATE OR REPLACE MODEL `multimodal.mm_embedding_model`\n", "REMOTE WITH CONNECTION `us-central1.multimodal`\n", "OPTIONS ( endpoint = 'multimodalembedding@001')" ], "metadata": { "id": "bEwpUdbG149k" }, "id": "bEwpUdbG149k", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "**NOTICE** the `LIMIT 1` at the end of the following query. This is an example for how you would generate multimodal embeddings directly from an object table. Since there are 5000+ images in our object table, this query could take a long time (5-10 minutes).\n", "\n", "If you want to see it run for yourself, remove the `LIMIT` and run it. Otherwise, run this cell as-is and proceed to the next cell to load a pre-made table." 
], "metadata": { "id": "G-M-EvYB5lSX" }, "id": "G-M-EvYB5lSX" }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "CREATE OR REPLACE TABLE `multimodal.image_reports_vector_mm_db` AS\n", "SELECT * FROM\n", " ML.GENERATE_EMBEDDING(\n", " MODEL `multimodal.mm_embedding_model`,\n", " (SELECT * FROM `multimodal.objects`\n", " LIMIT 1)\n", " )" ], "metadata": { "id": "KgpdTFQN4zJz" }, "id": "KgpdTFQN4zJz", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "LOAD DATA OVERWRITE `multimodal.image_reports_vector_mm_db`\n", "FROM FILES (\n", " format = 'JSON',\n", " uris = ['gs://bus-stops-open-access/loader-data/image_reports_vector_mm_db.json']);\n", "\n", "-- match the URIs in the sample data to those in our local image_reports table\n", "UPDATE `multimodal.image_reports_vector_mm_db` vdb\n", "SET uri = reports.uri\n", "FROM (select report_id, uri from `multimodal.image_reports` group by report_id, uri) reports\n", "WHERE report_id = (select value from unnest(vdb.metadata) where name='image_id');\n", "\n", "CREATE OR REPLACE VECTOR INDEX reports_text_index ON `multimodal.image_reports_vector_mm_db`(ml_generate_embedding_result)\n", "OPTIONS (index_type = 'IVF', distance_type = 'COSINE');" ], "metadata": { "id": "E0UbdNaT_mZe" }, "id": "E0UbdNaT_mZe", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "#4. Using text and multimodal embeddings for search and analysis\n", "\n", "Let's recap where we stand currently:\n", "1. We created the `image_reports` table, which contains generated descriptions of the images from the objects table (`multimodal.objects`)\n", "2. We then created `image_reports_vector_db` from embeddings generated from those descriptions.\n", "3. Separately, we created the `image_reports_vector_mm_db` by generating multimodal embeddings directly from the object table.\n", "\n", "In the previous notebook, we focused on the use case of a transit agency who wants to maintain the cleanliness of its bus stops. Here, we'll explore anadditional use case that leverages the increased volume of data we have available." ], "metadata": { "id": "S-71V-7eADSi" }, "id": "S-71V-7eADSi" }, { "cell_type": "markdown", "source": [ "## Use case: Ad Verification\n", "\n", "As a marketer, if I buy an ad on a bus stop, I want to verify that it was actually displayed according to my requirements during the time period I expect.\n", "\n", "The transit agency collecting these images should be able to quickly prove to their marketing partner that their ad was indeed shown to riders at the bus stop.\n", "\n", "### Base Identification\n", "\n", "As an example of this, the query below searches for all bus stops that display an advertisement for Burger King. The marketer can then check these timestamped images against their ad buys to verify that the ad was in service during the contracted period." 
], "metadata": { "id": "jY-DoPTGd-cP" }, "id": "jY-DoPTGd-cP" }, { "cell_type": "code", "source": [ "%%bigquery df1\n", "\n", "SELECT\n", " base.updated,\n", " CONCAT(\"https://storage.mtls.cloud.google.com/\", SPLIT(base.uri, \"gs://\")[OFFSET(1)]) AS url,\n", "FROM\n", " VECTOR_SEARCH(\n", " TABLE `multimodal.image_reports_vector_mm_db`,\n", " 'ml_generate_embedding_result',\n", " (SELECT * FROM ML.GENERATE_EMBEDDING(\n", " MODEL `multimodal.mm_embedding_model`,\n", " (SELECT 'bus stop has an ad for Burger King' AS content)\n", " )),\n", " 'ml_generate_embedding_result',\n", " top_k => 3);" ], "metadata": { "id": "fRMG7LiXFAvn" }, "id": "fRMG7LiXFAvn", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Before we look at the results, it's worth taking a step back to note just how ***fast*** that query ran! The generated embeddings combined with the `VECTOR INDEX` enable us to search over vast collections of varied and complex imagery in no time at all." ], "metadata": { "id": "npv8Om7Dcq6w" }, "id": "npv8Om7Dcq6w" }, { "cell_type": "code", "source": [ "df1['image'] = df1['url'].apply(preview_image)\n", "HTML(df1[['updated', 'image']].to_html(escape=False))" ], "metadata": { "id": "eiuQsGWFYfiR" }, "id": "eiuQsGWFYfiR", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "### Quality Verification\n", "\n", "Using `VECTOR_SEARCH`, we're able to identify which bus stops are displaying the queried advertisement. Now, let's take this one step further.\n", "\n", "Mere identification may not be enough to prove to an advertiser that their ad is displayed in the proper manner. What if the ad is damaged? What if it is being displayed in poor conditions, such as in a bus stop that is dirty or unsafe?\n", "\n", "As an advertiser, I want to make sure my product or service is being marketed under the best possible conditions. So, let's try to search the previous resultset for potential issues.\n", "\n" ], "metadata": { "id": "hyo03_i2h1Sj" }, "id": "hyo03_i2h1Sj" }, { "cell_type": "code", "source": [ "%%bigquery df2\n", "\n", "SELECT\n", " base.updated,\n", " CONCAT(\"https://storage.mtls.cloud.google.com/\", SPLIT(base.uri, \"gs://\")[OFFSET(1)]) AS url,\n", "FROM\n", " VECTOR_SEARCH(\n", " TABLE `multimodal.image_reports_vector_mm_db`,\n", " 'ml_generate_embedding_result',\n", " (SELECT * FROM ML.GENERATE_EMBEDDING(\n", " MODEL `multimodal.mm_embedding_model`,\n", " (SELECT 'bus stop has an ad for Burger King AND ALSO contains cleanliness issue such as litter or trash' AS content)\n", " )),\n", " 'ml_generate_embedding_result',\n", " top_k => 1);" ], "metadata": { "id": "G3KHDdVTryB-" }, "id": "G3KHDdVTryB-", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "df2['image'] = df2['url'].apply(preview_image)\n", "HTML(df2[['updated', 'image']].to_html(escape=False))" ], "metadata": { "id": "n3XFAYy6sARR" }, "id": "n3XFAYy6sARR", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Clearly, such an unclean bus stop would not meet the requirements of a reasonable advertiser. In a real business setting, the advertiser would have grounds to request that the transit agency invest in cleaning up the dirty areas. This would result in a **win-win** for everyone -- the advertiser, the transit agency, and bus riders!" 
], "metadata": { "id": "AZislxSWvoIS" }, "id": "AZislxSWvoIS" }, { "cell_type": "markdown", "source": [ "# Use Case: Weather Report\n", "\n", "Ok, great -- we can verify that ads are running, and we can assess the quality of those ad displays.\n", "\n", "Another factor that affects the economics of advertising and transit ridership is of course, **the weather**! As a transit agency, I want to understand how weather affects the usage of my bus stops.\n", "\n", "All weather events have two identifiable components: a **time** and a **location**.\n", "\n", "In our data model, we can match up weather events using `image_reports.updated` as our time, and `bus_stop.geom`.\n", "\n", "For the next section, we'll be using a historic storms dataset from NOAA that is available in the BigQuery public datasets program. In order to proceed, let's copy that table into our local BigQuery dataset." ], "metadata": { "id": "ziuSAZy29uyz" }, "id": "ziuSAZy29uyz" }, { "cell_type": "code", "source": [ "!bq cp -f -n bigquery-public-data:noaa_historic_severe_storms.storms_2024 $PROJECT_ID:multimodal.storms" ], "metadata": { "id": "MW5ExsabIbVR" }, "id": "MW5ExsabIbVR", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Let's first do some exploration. Run the following query to see which bus stops have been most affected by severe weather events in the past year." ], "metadata": { "id": "io6hFoxAfv-1" }, "id": "io6hFoxAfv-1" }, { "cell_type": "code", "source": [ "%%bigquery\n", "\n", "SELECT\n", " bus_stop_id,\n", " count(*) as occurrences\n", " FROM `multimodal.bus_stops` stops\n", "\n", "INNER JOIN `multimodal.storms` storms\n", " ON (\n", " ST_INTERSECTS(stops.geometry, ST_BUFFER(storms.event_point, 5000))\n", " )\n", "group by bus_stop_id\n", "order by occurrences desc" ], "metadata": { "id": "ioftsms0fvkm" }, "id": "ioftsms0fvkm", "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "Looks like just about every bus stop experience some kind of severe weather event -- but some are definitely more affected than others!\n", "\n", "An understanding of which bus stations are most affected by weather can be useful for scheduling maintenance and making sure that the station and all the attendant infrastructure are in good working order." ], "metadata": { "id": "7V56WZYxgDF0" }, "id": "7V56WZYxgDF0" }, { "cell_type": "markdown", "source": [ "To observe particular instances of severe weather at that bus stop over time, we can revisit `VECTOR_SEARCH` and combine it with our `storm` table to corroborate recorded weather events with observed images.\n", "\n", "The following is an example query that combines the vector search and the spatio-temporal join against historical weather events; you can play around with this query and modify for your own use case." 
], "metadata": { "id": "mR3zeZiNoKUl" }, "id": "mR3zeZiNoKUl" }, { "cell_type": "code", "source": [ "%%bigquery df3\n", "\n", "WITH severe_weather_reports AS (\n", " SELECT\n", " base.updated,\n", " base.uri,\n", " distance,\n", " CONCAT(\"https://storage.mtls.cloud.google.com/\", SPLIT(base.uri, \"gs://\")[OFFSET(1)]) AS url\n", " FROM\n", " VECTOR_SEARCH(\n", " TABLE `multimodal.image_reports_vector_mm_db`,\n", " 'ml_generate_embedding_result',\n", " (SELECT * FROM ML.GENERATE_EMBEDDING(\n", " MODEL `multimodal.mm_embedding_model`,\n", " (SELECT 'bus stop appears affected by snow, wind, hail, or is otherwise damaged in some way' AS content)\n", " )),\n", " 'ml_generate_embedding_result',\n", " top_k => 5)\n", ")\n", "SELECT\n", " distinct(reports.bus_stop_id) as bus_stop_id,\n", " reports.updated,\n", " event_type,\n", " url,\n", " distance\n", "FROM `multimodal.image_reports` reports\n", "\n", "INNER JOIN severe_weather_reports\n", " ON (severe_weather_reports.uri = reports.uri)\n", "\n", "INNER JOIN `multimodal.bus_stops` stops\n", " ON (stops.bus_stop_id = reports.bus_stop_id)\n", "\n", "INNER JOIN `multimodal.storms` storms\n", " ON (\n", " ST_INTERSECTS(stops.geometry, ST_BUFFER(storms.event_point, 10000))\n", " )\n", "ORDER BY distance ASC\n", "LIMIT 3" ], "metadata": { "id": "Obqfb7C5gezO" }, "id": "Obqfb7C5gezO", "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "df3['image'] = df3['url'].apply(preview_image)\n", "HTML(df3[['bus_stop_id', 'updated', 'event_type', 'image']].to_html(escape=False))" ], "metadata": { "id": "G2a8FI3UgqLj" }, "id": "G2a8FI3UgqLj", "execution_count": null, "outputs": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.10" }, "colab": { "provenance": [], "name": "part_2_large_scale_understanding.ipynb", "private_outputs": true } }, "nbformat": 4, "nbformat_minor": 5 }