notebooks/community/model_garden/model_garden_camp_zipnerf_gradio.ipynb

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "-e1HpvsDh34Q" }, "outputs": [], "source": [ "# Copyright 2024 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "L5o1Ggr5h34U" }, "source": [ "# Vertex AI Model Garden - CamP ZipNeRF (Jax) Gradio Notebook\n", "<table><tbody><tr>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_camp_zipnerf_gradio.ipynb\">\n", " <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_camp_zipnerf_gradio.ipynb\">\n", " <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</tr></tbody></table>" ] }, { "cell_type": "markdown", "metadata": { "id": "U-SERmqUh34V" }, "source": [ "**_NOTE_**: This notebook has been tested in the following environment:\n", "\n", "* Python version = 3.9" ] }, { "cell_type": "markdown", "metadata": { "id": "V6QmW0Doh34W" }, "source": [ "## Overview\n", "This notebook launches a Gradio application based on the [jax implementation](https://github.com/jonbarron/camp_zipnerf) of [CamP: Camera Preconditioning for Neural Radiance Fields](https://camp-nerf.github.io/). The application is designed for training and rendering Neural Radiance Fields (NeRFs) more efficiently in jax. CamP addresses some of the limitations of traditional NeRF techniques, which, while powerful for creating detailed 3D models from 2D images, can be computationally intensive and slow.\n", "\n", "The goal of the Gradio interface is to provide a truly user-friendly experience for users with limited knowledge of Google Cloud. This ensures that anyone can easily access and leverage the powerful capabilities of CamP without needing extensive technical expertise." 
] }, { "cell_type": "markdown", "metadata": { "id": "vkSMThcKh34W" }, "source": [ "## Objective\n", "\n", "In this tutorial, you will learn how to:\n", "\n", "- Use [COLMAP](https://colmap.github.io/) to perform Structure from Motion (SfM), a technique that estimates the three-dimensional structure of a scene from a series of two-dimensional images.\n", "- Calibrate, train and render NERF scenes using [Vertex AI custom jobs](https://cloud.google.com/vertex-ai/docs/samples/aiplatform-create-custom-job-sample).\n", "- Render a video along a custom camera path using a series of keyframe photos.\n", "\n", "This tutorial uses the following Google Cloud ML services and resources:\n", "\n", "- Vertex AI Training\n", "- Vertex AI Custom Job\n", "\n", "Additionally, we provide a comprehensive **pipeline** that automates the entire process by running all three jobs (SfM, calibration/training, and rendering) in a Directed Acyclic Graph (DAG) fashion. This pipeline ensures efficient and sequential execution of each step, streamlining the workflow and minimizing manual intervention." ] }, { "cell_type": "markdown", "metadata": { "id": "myi4N60Xh34W" }, "source": [ "## Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "vofRExleAA8k" }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "qayv5ifRh34Y" }, "outputs": [], "source": [ "# @title Setup Google Cloud project and prepare the dependencies\n", "\n", "# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n", "# @markdown 2. [Optional] [Create a Cloud Storage bucket](https://cloud.google.com/storage/docs/creating-buckets) for storing\n", "# @markdown experiment outputs. Set the BUCKET_URI for the experiment environment. The specified Cloud Storage bucket (`BUCKET_URI`)\n", "# @markdown should be located in the same region as where the notebook was launched. Note that a multi-region bucket (eg. \"us\") is\n", "# @markdown not considered a match for a single region covered by the multi-region range (eg. \"us-central1\").\n", "# @markdown If not set, a unique GCS bucket will be created instead.\n", "\n", "! pip3 install --upgrade gradio==4.29.0\n", "! pip3 install --upgrade pandas==2.2.1\n", "! pip3 install --upgrade opencv-python==4.10.0.84\n", "# Uninstall nest-asyncio and uvloop as a workaround to https://github.com/gradio-app/gradio/issues/8238#issuecomment-2101066984\n", "! pip3 uninstall --yes nest-asyncio uvloop\n", "! pip3 install google-cloud-bigquery==3.24.0\n", "! pip3 install kfp==2.7.0\n", "! pip3 install google-cloud-pipeline-components==2.14.1\n", "! pip3 install --upgrade oauth2client==1.4.2\n", "! pip3 install six==1.16.0\n", "! 
pip3 install moviepy==1.0.3\n", "\n", "\n", "import os\n", "from datetime import datetime\n", "\n", "from google.cloud import aiplatform\n", "\n", "# Get the default cloud project id.\n", "PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n", "\n", "# Get the default region for launching jobs.\n", "REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n", "\n", "# Enable the Vertex AI API and Compute Engine API, if not already.\n", "print(\"Enabling Vertex AI and Compute Engine API.\")\n", "! gcloud services enable aiplatform.googleapis.com compute.googleapis.com\n", "\n", "# Cloud Storage bucket for storing the experiment artifacts.\n", "# A unique GCS bucket will be created for the purpose of this notebook. If you\n", "# prefer using your own GCS bucket, please change the value yourself below.\n", "now = datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n", "BUCKET_URI = \"\" # @param {type: \"string\"}\n", "\n", "assert BUCKET_URI.startswith(\"gs://\"), \"BUCKET_URI must start with `gs://`.\"\n", "if BUCKET_URI is None or BUCKET_URI.strip() == \"\" or BUCKET_URI == \"gs://\":\n", " # Create a unique GCS bucket for this notebook, if not specified by the user\n", " BUCKET_URI = f\"gs://{PROJECT_ID}-tmp-{now}\"\n", " ! gsutil mb -l {REGION} {BUCKET_URI}\n", "else:\n", " BUCKET_NAME = \"/\".join(BUCKET_URI.split(\"/\")[:3])\n", " shell_output = ! gsutil ls -Lb {BUCKET_NAME} | grep \"Location constraint:\" | sed \"s/Location constraint://\"\n", " bucket_region = shell_output[0].strip().lower()\n", " if bucket_region != REGION:\n", " raise ValueError(\n", " \"Bucket region %s is different from notebook region %s\"\n", " % (bucket_region, REGION)\n", " )\n", "\n", "print(f\"Using this GCS Bucket: {BUCKET_URI}\")\n", "\n", "# Set up the default SERVICE_ACCOUNT.\n", "shell_output = ! gcloud projects describe $PROJECT_ID\n", "project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n", "SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n", "SERVICE_ACCOUNT_CC = (\n", " f\"service-{project_number}@gcp-sa-aiplatform-cc.iam.gserviceaccount.com\"\n", ")\n", "\n", "print(\"Using this default Service Account:\", SERVICE_ACCOUNT)\n", "\n", "BUCKET_NAME = \"/\".join(BUCKET_URI.split(\"/\")[:3])\n", "# Provision permissions to the two SERVICE_ACCOUNTs with the GCS bucket\n", "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.admin\n", "\n", "staging_bucket = os.path.join(BUCKET_URI, \"zipnerf_staging\")\n", "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=staging_bucket)\n", "\n", "PROJECT_NUMBER = project_number\n", "\n", "# The pre-built calibration docker image.\n", "CALIBRATION_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-cloudnerf-calibrate:latest\"\n", "# The pre-built training docker image.\n", "TRAINING_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/jax-cloudnerf-train:latest\"\n", "# The pre-built rendering docker image.\n", "RENDERING_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/jax-cloudnerf-render:latest\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "OBhvKerXh34a" }, "outputs": [], "source": [ "# @title BigQuery Setup for CamP ZipNeRF Experiment Tracking\n", "\n", "# @markdown The app leverages [BigQuery](https://cloud.google.com/bigquery) to create interactive dataframes that persistently tracks the lifecycle of NeRF experiments. 
"\n", "# @markdown Each user is assigned a unique dataset name in BigQuery, generated based on their bucket name.\n", "\n", "from datetime import datetime\n", "\n", "from google.cloud import bigquery\n", "\n", "# Initialize the BigQuery client\n", "client = bigquery.Client()\n", "\n", "# PROJECT_ID and BUCKET_URI are reused from the setup cell above.\n", "bucket_name = BUCKET_URI.replace(\"gs://\", \"\").replace(\"-\", \"_\")\n", "DATASET_NAME = f\"nerf_gradio_app_data_{bucket_name}\"\n", "\n", "# Define dataset and table IDs\n", "dataset_id = f\"{PROJECT_ID}.{DATASET_NAME}\"\n", "table_ids = {\n", " \"colmap_data\": f\"{dataset_id}.colmap_data\",\n", " \"training_data\": f\"{dataset_id}.training_data\",\n", " \"rendering_data\": f\"{dataset_id}.rendering_data\",\n", "}\n", "\n", "# Create dataset\n", "dataset = bigquery.Dataset(dataset_id)\n", "dataset = client.create_dataset(dataset, exists_ok=True)\n", "print(f\"Created app interface dataset {client.project}.{dataset.dataset_id}\")\n", "# Define schemas\n", "schemas = {\n", " \"colmap_data\": [\n", " bigquery.SchemaField(\"Job_Status\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Pipeline_State\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Pipeline_Resource_Name\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Experiment_ID\", \"STRING\", mode=\"REQUIRED\"),\n", " bigquery.SchemaField(\"Colmap_Job_ID\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Scene_Name\", \"STRING\", mode=\"REQUIRED\"),\n", " bigquery.SchemaField(\"Image_Count\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Created_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Start_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"End_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Matcher_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Camera_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Video_Frame_FPS\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Max_Num_Features\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Use_Hierarchical_Mapper\", \"BOOL\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"GCS_Dataset_Path\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"GCS_Experiment_Path\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Machine_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Accelerator_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Accelerator_Count\", \"INT64\", mode=\"NULLABLE\"),\n", " ],\n", " \"training_data\": [\n", " bigquery.SchemaField(\"Job_Status\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Pipeline_State\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Pipeline_Resource_Name\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Experiment_ID\", \"STRING\", mode=\"REQUIRED\"),\n", " bigquery.SchemaField(\"Training_Job_ID\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Training_Job_Name\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Colmap_Job_ID\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Scene_Name\", \"STRING\", mode=\"REQUIRED\"),\n", " bigquery.SchemaField(\"Image_Count\", \"INT64\", mode=\"NULLABLE\"),\n", " 
bigquery.SchemaField(\"Created_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Start_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"End_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Training_Factor\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Training_Max_Steps\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"ZipNeRF_Gin_Config\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"CamP_Gin_Config\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"GCS_Experiment_Path\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Machine_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Accelerator_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Accelerator_Count\", \"INT64\", mode=\"NULLABLE\"),\n", " ],\n", " \"rendering_data\": [\n", " bigquery.SchemaField(\"Job_Status\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Pipeline_State\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Pipeline_Resource_Name\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Experiment_ID\", \"STRING\", mode=\"REQUIRED\"),\n", " bigquery.SchemaField(\"Rendering_Job_ID\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Rendering_Job_Name\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Training_Job_ID\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Training_Job_Name\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Colmap_Job_ID\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Scene_Name\", \"STRING\", mode=\"REQUIRED\"),\n", " bigquery.SchemaField(\"Image_Count\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Created_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Start_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"End_Time\", \"TIMESTAMP\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_Factor\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_Resolution_Width\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_Resolution_Height\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_FPS\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"ZipNeRF_Gin_Config\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"CamP_Gin_Config\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"GCS_Keyframes_File\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"GCS_Render_Path_File\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_Camtype\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_Focal\", \"FLOAT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Render_Path_Frames\", \"INT64\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"GCS_Experiment_Path\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Machine_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Accelerator_Type\", \"STRING\", mode=\"NULLABLE\"),\n", " bigquery.SchemaField(\"Accelerator_Count\", \"INT64\", mode=\"NULLABLE\"),\n", " ],\n", "}\n", "\n", "# Create tables\n", "for table_name, schema in schemas.items():\n", " table_id = table_ids[table_name]\n", " table = bigquery.Table(table_id, schema=schema)\n", " table = client.create_table(table, exists_ok=True)\n", " print(f\"Created table 
{table.project}.{table.dataset_id}.{table.table_id}\")\n", "\n", "\n", "def insert_or_update_row(row, table_id, primary_key=\"Experiment_ID\"):\n", " columns = \", \".join(row.keys())\n", " values = \", \".join([f\"@{key}\" for key in row.keys()])\n", " updates = \", \".join(\n", " [f\"T.{key} = S.{key}\" for key in row.keys() if key != primary_key]\n", " )\n", "\n", " query = f\"\"\"\n", " MERGE `{table_id}` T\n", " USING (SELECT {', '.join([f\"{value} AS {key}\" for key, value in zip(row.keys(), values.split(', '))])}) S\n", " ON T.{primary_key} = S.{primary_key}\n", " WHEN MATCHED THEN\n", " UPDATE SET {updates}\n", " WHEN NOT MATCHED THEN\n", " INSERT ({columns})\n", " VALUES ({values})\n", " \"\"\"\n", "\n", " query_parameters = [\n", " bigquery.ScalarQueryParameter(\n", " key,\n", " (\n", " \"STRING\"\n", " if key\n", " in [\n", " \"Job_Status\",\n", " \"Pipeline_State\",\n", " \"Pipeline_Resource_Name\",\n", " \"Experiment_ID\",\n", " \"Colmap_Job_ID\",\n", " \"Scene_Name\",\n", " \"Training_Job_ID\",\n", " \"Rendering_Job_ID\",\n", " \"Matcher_Type\",\n", " \"Camera_Type\",\n", " \"ZipNeRF_Gin_Config\",\n", " \"CamP_Gin_Config\",\n", " \"GCS_Keyframes_File\",\n", " \"GCS_Render_Path_File\",\n", " \"Render_Camtype\",\n", " \"GCS_Dataset_Path\",\n", " \"GCS_Experiment_Path\",\n", " \"Training_Job_Name\",\n", " \"Rendering_Job_Name\",\n", " \"Machine_Type\",\n", " \"Accelerator_Type\",\n", " ]\n", " else (\n", " \"INT64\"\n", " if key\n", " in [\n", " \"Image_Count\",\n", " \"Video_Frame_FPS\",\n", " \"Max_Num_Features\",\n", " \"Training_Factor\",\n", " \"Training_Max_Steps\",\n", " \"Render_Factor\",\n", " \"Render_Resolution_Width\",\n", " \"Render_Resolution_Height\",\n", " \"Render_FPS\",\n", " \"Render_Path_Frames\",\n", " \"Accelerator_Count\",\n", " ]\n", " else (\n", " \"BOOL\"\n", " if key == \"Use_Hierarchical_Mapper\"\n", " else \"FLOAT64\"\n", " if key == \"Render_Focal\"\n", " else \"TIMESTAMP\"\n", " )\n", " )\n", " ),\n", " value,\n", " )\n", " for key, value in row.items()\n", " ]\n", "\n", " job_config = bigquery.QueryJobConfig(query_parameters=query_parameters)\n", "\n", " query_job = client.query(query, job_config=job_config)\n", " query_job.result() # Wait for the job to complete\n", " print(f\"Inserted/Updated row with {primary_key}: {row[primary_key]}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "j1evctm2h34g" }, "outputs": [], "source": [ "# @title Gradio App Utility Functions\n", "\n", "\n", "import concurrent.futures\n", "import glob\n", "import hashlib\n", "import logging\n", "import mimetypes\n", "import os\n", "import re\n", "import shutil\n", "import threading\n", "import time\n", "from datetime import datetime\n", "from typing import List\n", "\n", "import cv2\n", "import gradio as gr\n", "import numpy as np\n", "import pandas as pd\n", "from google.cloud import aiplatform, bigquery, storage\n", "from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp\n", "from google_cloud_pipeline_components.v1.vertex_notification_email import \\\n", " VertexNotificationEmailOp\n", "from kfp import compiler, dsl\n", "from moviepy.editor import VideoFileClip\n", "from PIL import Image\n", "\n", "GCS_URI_PREFIX = \"gs://\"\n", "\n", "JOB_STATE_MAPPING = {\n", " 0: \"NOT STARTED\",\n", " 1: \"QUEUED\",\n", " 2: \"PENDING\",\n", " 3: \"RUNNING\",\n", " 4: \"SUCCEEDED\",\n", " 5: \"FAILED\",\n", " 6: \"CANCELLING\",\n", " 7: \"CANCELLED\",\n", " 8: \"PAUSED\",\n", " 9: \"EXPIRED\",\n", "}\n", "\n", "MATCHER_MAPPING = 
{\n", " \"Exhaustive Matcher\": \"exhaustive\",\n", " \"Sequential Matcher\": \"sequential\",\n", " \"Spatial Matcher\": \"spatial\",\n", " \"Transitive Matcher\": \"transitive\",\n", " \"Vocab Tree Matcher\": \"vocab_tree\",\n", "}\n", "IMAGE_EXTENSIONS = (\".png\", \".jpg\", \".jpeg\", \".gif\", \".bmp\")\n", "GCS_API_ENDPOINT = \"https://storage.cloud.google.com/\"\n", "\n", "# Configure logging\n", "logging.basicConfig(\n", " level=logging.INFO, format=\"%(asctime)s - %(levelname)s - %(message)s\"\n", ")\n", "\n", "# Track unique experiments\n", "unique_experiments = set()\n", "\n", "# Define dataset and table IDs\n", "dataset_id = f\"{PROJECT_ID}.{DATASET_NAME}\"\n", "table_ids = {\n", " \"colmap_data\": f\"{dataset_id}.colmap_data\",\n", " \"training_data\": f\"{dataset_id}.training_data\",\n", " \"rendering_data\": f\"{dataset_id}.rendering_data\",\n", "}\n", "\n", "colmap_table_id = table_ids[\"colmap_data\"]\n", "training_table_id = table_ids[\"training_data\"]\n", "rendering_table_id = table_ids[\"rendering_data\"]\n", "\n", "\n", "def generate_short_unique_identifier():\n", " current_time = str(time.time()).encode(\"utf-8\")\n", " hash_object = hashlib.sha256(current_time)\n", " short_hash = hash_object.hexdigest()[:7]\n", " return short_hash\n", "\n", "\n", "def extract_dataset_id(input_string):\n", " # Define the regex pattern to match the dataset ID\n", " pattern = r\"dataset_\\d{8}_\\d{6}\"\n", "\n", " # Check if the input string itself matches the pattern\n", " if re.fullmatch(pattern, input_string):\n", " return input_string\n", "\n", " # Search for the pattern in the input string\n", " match = re.search(pattern, input_string)\n", "\n", " # If a match is found, return the matched string, otherwise return None\n", " if match:\n", " return match.group(0)\n", " else:\n", " return None\n", "\n", "\n", "def get_job_name_with_datetime(prefix: str) -> str:\n", " return prefix + datetime.now().strftime(\"_%Y%m%d_%H%M%S\")\n", "\n", "\n", "def get_vertex_ai_job_status(job_id: str) -> str:\n", " job = aiplatform.CustomJob.get(job_id)\n", " return job.state\n", "\n", "\n", "def get_vertex_ai_training_job_link(job_id, project_number, location=\"us-central1\"):\n", " base_url = \"https://console.cloud.google.com/ai/platform/locations\"\n", " link = f\"{base_url}/{location}/training/{job_id}?project={project_number}\"\n", " return link\n", "\n", "\n", "def get_vertex_ai_pipeline_run_link(\n", " pipeline_run_id, project_number, location=\"us-central1\"\n", "):\n", " base_url = \"https://console.cloud.google.com/vertex-ai/locations\"\n", " link = f\"{base_url}/{location}/pipelines/runs/{pipeline_run_id}?project={project_number}\"\n", " return link\n", "\n", "\n", "def get_vertex_ai_pipeline_job_status(job_id: str) -> str:\n", " job = aiplatform.PipelineJob.get(job_id)\n", " return job.state\n", "\n", "\n", "def get_bucket_and_blob_name(filepath: str) -> tuple:\n", " gs_suffix = filepath.split(\"gs://\", 1)[1]\n", " return tuple(gs_suffix.split(\"/\", 1))\n", "\n", "\n", "def get_bucket_and_blob_name_https(filepath: str) -> tuple:\n", " gs_suffix = filepath.split(\"https://\", 1)[1]\n", " return tuple(gs_suffix.split(\"/\", 1))\n", "\n", "\n", "def get_bigquery_client():\n", " return bigquery.Client()\n", "\n", "\n", "def is_gcs_path(input_path: str) -> bool:\n", " \"\"\"Checks if the input path is a Google Cloud Storage (GCS) path.\n", "\n", " Args:\n", " input_path: The input path to be checked.\n", "\n", " Returns:\n", " True if the input path is a GCS path, False otherwise.\n", " 
\"\"\"\n", " return input_path is not None and input_path.startswith(GCS_URI_PREFIX)\n", "\n", "\n", "def create_pending_video(\n", " pending_message=\"PENDING\",\n", " output_path=\"pending_video.mp4\",\n", " width=1280,\n", " height=720,\n", " duration=5,\n", " fps=30,\n", "):\n", " # Define the codec and create VideoWriter object\n", " fourcc = cv2.VideoWriter_fourcc(*\"mp4v\")\n", " out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))\n", "\n", " # Define text properties\n", " font = cv2.FONT_HERSHEY_SIMPLEX\n", " font_scale = 5\n", " font_thickness = 10\n", " text_size = cv2.getTextSize(pending_message, font, font_scale, font_thickness)[0]\n", "\n", " # Calculate the center position\n", " text_x = (width - text_size[0]) // 2\n", " text_y = (height + text_size[1]) // 2\n", "\n", " # Create frames and write them to the video\n", " total_frames = duration * fps\n", " for _ in range(total_frames):\n", " frame = np.zeros((height, width, 3), dtype=np.uint8)\n", " cv2.putText(\n", " frame,\n", " pending_message,\n", " (text_x, text_y),\n", " font,\n", " font_scale,\n", " (255, 255, 255),\n", " font_thickness,\n", " lineType=cv2.LINE_AA,\n", " )\n", " out.write(frame)\n", "\n", " # Release everything if job is finished\n", " out.release()\n", " print(f\"Created pending video at {output_path}\")\n", " return output_path\n", "\n", "\n", "def list_mp4_files(bucket_name, folder_path, job_status):\n", " client = storage.Client()\n", "\n", " # Get all the blobs (files) with .mp4 extension in the specified folder\n", " blobs = client.list_blobs(bucket_name, prefix=folder_path)\n", "\n", " mp4_files = []\n", " try:\n", " for blob in blobs:\n", " if blob.name.endswith(\".mp4\"):\n", " mp4_files.append(blob.name)\n", " else:\n", " mp4_files.append(create_pending_video(job_status, \"pending_video.mp4\"))\n", " except IndexError:\n", " mp4_files.append(create_pending_video(job_status, \"pending_video.mp4\"))\n", "\n", " return mp4_files\n", "\n", "\n", "def list_folders(bucket_name, folder_path):\n", " client = storage.Client()\n", "\n", " # Get all the blobs (files and folders) in the specified folder\n", " blobs = client.list_blobs(bucket_name, prefix=folder_path)\n", "\n", " folders = set()\n", " for blob in blobs:\n", " # Extract the folder path from each blob's name\n", " folder = blob.name.rsplit(\"/\", 1)[0]\n", " folders.add(folder)\n", "\n", " return list(folders)\n", "\n", "\n", "def download_gcs_file_to_local_dir(gcs_uri: str, local_dir: str):\n", " \"\"\"Download a gcs file to a local dir.\n", "\n", " Args:\n", " gcs_uri: A string of file path on GCS.\n", " local_dir: A string of local directory.\n", " \"\"\"\n", " if not is_gcs_path(gcs_uri):\n", " raise ValueError(f\"{gcs_uri} is not a GCS path starting with {GCS_URI_PREFIX}.\")\n", " filename = os.path.basename(gcs_uri)\n", " download_gcs_file_to_local(gcs_uri, os.path.join(local_dir, filename))\n", "\n", "\n", "def download_gcs_file_to_local(gcs_uri: str, local_path: str):\n", " \"\"\"Download a gcs file to a local path.\n", "\n", " Args:\n", " gcs_uri: A string of file path on GCS.\n", " local_path: A string of local file path.\n", " \"\"\"\n", " if not is_gcs_path(gcs_uri):\n", " raise ValueError(f\"{gcs_uri} is not a GCS path starting with {GCS_URI_PREFIX}.\")\n", " client = storage.Client()\n", " os.makedirs(os.path.dirname(local_path), exist_ok=True)\n", " with open(local_path, \"wb\") as f:\n", " client.download_blob_to_file(gcs_uri, f)\n", "\n", "\n", "def list_gcs_bucket_contents() -> dict:\n", " client = 
storage.Client()\n", " bucket_name = get_bucket_and_blob_name(BUCKET_NAME)[0]\n", " bucket = client.get_bucket(bucket_name)\n", " blobs = bucket.list_blobs()\n", " folder_counts = {}\n", "\n", " for blob in blobs:\n", " folder = blob.name.split(\"/\")[0]\n", " folder_counts[folder] = folder_counts.get(folder, 0) + 1\n", "\n", " return folder_counts\n", "\n", "\n", "def upload_local_dir_to_gcs(\n", " scene_name: str,\n", " local_dir_path: str,\n", " gcs_dir_path: str,\n", " progress=gr.Progress(),\n", " table_id: str = \"\",\n", "):\n", " total_files = len(glob.glob(local_dir_path + \"/**\"))\n", " completed_files = 0\n", "\n", " bucket_name = get_bucket_and_blob_name(BUCKET_NAME)[0]\n", " scene_name = scene_name + \"_\" + generate_short_unique_identifier()\n", " client = storage.Client()\n", " bucket = client.get_bucket(bucket_name)\n", " dataset_folder_name = os.path.basename(gcs_dir_path)\n", "\n", " for local_file in glob.glob(local_dir_path + \"/**\"):\n", " if os.path.isfile(local_file):\n", " filename = local_file[1 + len(local_dir_path) :]\n", " gcs_file_path = os.path.join(gcs_dir_path, filename)\n", " _, blob_name = get_bucket_and_blob_name_https(gcs_file_path)\n", " blob = bucket.blob(blob_name)\n", " blob.upload_from_filename(local_file)\n", " completed_files += 1\n", " progress(completed_files / total_files)\n", " print(f\"Copied {local_file} to {gcs_file_path}.\")\n", "\n", " folder_counts = list_gcs_bucket_contents()\n", " gcs_dataset_path = os.path.join(BUCKET_NAME, dataset_folder_name)\n", " gcs_experiment_path = gcs_dataset_path.replace(\"dataset\", \"experiment\")\n", " row_data = {\n", " \"Experiment_ID\": dataset_folder_name,\n", " \"Scene_Name\": scene_name,\n", " \"Job_Status\": \"Images Uploaded\",\n", " \"Colmap_Job_ID\": \"\",\n", " \"Image_Count\": folder_counts[dataset_folder_name],\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Start_Time\": None,\n", " \"End_Time\": None,\n", " \"GCS_Dataset_Path\": gcs_dataset_path,\n", " \"GCS_Experiment_Path\": gcs_experiment_path,\n", " }\n", "\n", " insert_or_update_row(row_data, table_id)\n", "\n", " return f\"Images uploaded successfully to {gcs_dir_path}.\"\n", "\n", "\n", "def fetch_job_times(job_id: str) -> tuple:\n", " job = aiplatform.CustomJob.get(job_id)\n", " return job.create_time, job.start_time, job.end_time\n", "\n", "\n", "def fetch_pipeline_job_times(job_id: str) -> tuple:\n", " pipeline_job = aiplatform.PipelineJob.get(job_id)\n", " while True:\n", " try:\n", " # Explore other attributes of the pipeline job\n", " pipeline_job_dict = pipeline_job.to_dict()\n", " create_time = pipeline_job_dict[\"createTime\"].replace(\"Z\", \"\")\n", " start_time = pipeline_job_dict[\"startTime\"].replace(\"Z\", \"\")\n", " update_time = pipeline_job_dict[\"updateTime\"].replace(\"Z\", \"\")\n", " break\n", " except KeyError:\n", " time.sleep(10)\n", "\n", " return create_time, start_time, update_time\n", "\n", "\n", "def list_bq_folder_contents_colmap(table_id: str) -> dict:\n", " client = get_bigquery_client()\n", " query = f\"\"\"\n", " SELECT Experiment_ID, Scene_Name, Job_Status, Image_Count, Colmap_Job_ID, Created_Time, Start_Time, End_Time,\n", " Matcher_Type, Camera_Type, Video_Frame_FPS, Max_Num_Features, Use_Hierarchical_Mapper, GCS_Dataset_Path, GCS_Experiment_Path\n", " FROM `{table_id}`\n", " ORDER BY Experiment_ID\n", " \"\"\"\n", " query_job = client.query(query)\n", " results = query_job.result()\n", "\n", " folder_counts = {}\n", "\n", " def process_row(row):\n", " if 
\"Images Uploaded\" in row.Job_Status or \"NOT STARTED\" in row.Job_Status:\n", " job_status = 0\n", " elif \"nerf-pipeline\" in row.Colmap_Job_ID:\n", " job_status = get_vertex_ai_pipeline_job_status(row.Colmap_Job_ID)\n", " elif \"MANUAL\" in row.Colmap_Job_ID:\n", " job_status = 4\n", " else:\n", " job_status = get_vertex_ai_job_status(row.Colmap_Job_ID)\n", "\n", " updated_job_status = JOB_STATE_MAPPING[int(job_status)]\n", " folder_counts[row.Scene_Name] = {\n", " \"Experiment ID\": row.Experiment_ID,\n", " \"Job Status\": updated_job_status,\n", " \"Image Count\": row.Image_Count,\n", " \"Colmap Job ID\": row.Colmap_Job_ID,\n", " \"Created Time\": (\n", " row.Created_Time.strftime(\"%Y-%m-%d %H:%M:%S\")\n", " if row.Created_Time\n", " else None\n", " ),\n", " \"Start Time\": (\n", " row.Start_Time.strftime(\"%Y-%m-%d %H:%M:%S\") if row.Start_Time else None\n", " ),\n", " \"End Time\": (\n", " row.End_Time.strftime(\"%Y-%m-%d %H:%M:%S\") if row.End_Time else None\n", " ),\n", " \"Matcher Type\": row.Matcher_Type,\n", " \"Camera Type\": row.Camera_Type,\n", " \"Video Frame FPS\": row.Video_Frame_FPS,\n", " \"Max Num Features\": row.Max_Num_Features,\n", " \"Use Hierarchical Mapper\": row.Use_Hierarchical_Mapper,\n", " \"GCS Dataset Path\": row.GCS_Dataset_Path,\n", " \"GCS Experiment Path\": row.GCS_Experiment_Path,\n", " }\n", "\n", " with concurrent.futures.ThreadPoolExecutor() as executor:\n", " executor.map(process_row, results)\n", "\n", " return folder_counts\n", "\n", "\n", "def list_bq_folder_contents_training(table_id: str) -> dict:\n", " client = get_bigquery_client()\n", " query = f\"\"\"\n", " SELECT Experiment_ID, Scene_Name, Job_Status, Image_Count, Colmap_Job_ID, Training_Job_ID, Training_Job_Name, Created_Time, Start_Time, End_Time,\n", " Training_Factor, Training_Max_Steps, ZipNeRF_Gin_Config, CamP_Gin_Config, GCS_Experiment_Path\n", " FROM `{table_id}`\n", " ORDER BY Experiment_ID\n", " \"\"\"\n", " query_job = client.query(query)\n", " results = query_job.result()\n", "\n", " folder_counts = {}\n", "\n", " def process_row(row):\n", " if \"NOT STARTED\" in row.Job_Status:\n", " job_status = 0\n", " elif \"nerf-pipeline\" in row.Training_Job_ID:\n", " job_status = get_vertex_ai_pipeline_job_status(row.Training_Job_ID)\n", " else:\n", " job_status = get_vertex_ai_job_status(row.Training_Job_ID)\n", " updated_job_status = JOB_STATE_MAPPING[int(job_status)]\n", " folder_counts[row.Scene_Name] = {\n", " \"Experiment ID\": row.Experiment_ID,\n", " \"Job Status\": updated_job_status,\n", " \"Image Count\": row.Image_Count,\n", " \"Colmap Job ID\": row.Colmap_Job_ID,\n", " \"Training Job ID\": row.Training_Job_ID,\n", " \"Training Job Name\": row.Training_Job_Name,\n", " \"Created Time\": (\n", " row.Created_Time.strftime(\"%Y-%m-%d %H:%M:%S\")\n", " if row.Created_Time\n", " else None\n", " ),\n", " \"Start Time\": (\n", " row.Start_Time.strftime(\"%Y-%m-%d %H:%M:%S\") if row.Start_Time else None\n", " ),\n", " \"End Time\": (\n", " row.End_Time.strftime(\"%Y-%m-%d %H:%M:%S\") if row.End_Time else None\n", " ),\n", " \"Training Factor\": row.Training_Factor,\n", " \"Training Max Steps\": row.Training_Max_Steps,\n", " \"ZipNeRF Gin Config\": row.ZipNeRF_Gin_Config,\n", " \"CamP Gin Config\": row.CamP_Gin_Config,\n", " \"GCS Experiment Path\": row.GCS_Experiment_Path,\n", " }\n", "\n", " with concurrent.futures.ThreadPoolExecutor() as executor:\n", " executor.map(process_row, results)\n", "\n", " return folder_counts\n", "\n", "\n", "def 
list_bq_folder_contents_rendering(table_id: str) -> dict:\n", " client = get_bigquery_client()\n", " query = f\"\"\"\n", " SELECT Experiment_ID, Scene_Name, Job_Status, Image_Count, Colmap_Job_ID, Training_Job_ID, Training_Job_Name, Rendering_Job_ID, Rendering_Job_Name, Created_Time, Start_Time, End_Time,\n", " Render_Factor, Render_Resolution_Width, Render_Resolution_Height, Render_FPS, ZipNeRF_Gin_Config, CamP_Gin_Config,\n", " GCS_Keyframes_File, GCS_Render_Path_File, Render_Camtype, Render_Focal, Render_Path_Frames, GCS_Experiment_Path\n", " FROM `{table_id}`\n", " ORDER BY Experiment_ID\n", " \"\"\"\n", " query_job = client.query(query)\n", " results = query_job.result()\n", "\n", " folder_counts = {}\n", "\n", " def process_row(row):\n", " if \"NOT STARTED\" in row.Job_Status:\n", " job_status = 0\n", " elif \"nerf-pipeline\" in row.Rendering_Job_ID:\n", " job_status = get_vertex_ai_pipeline_job_status(row.Rendering_Job_ID)\n", " else:\n", " job_status = get_vertex_ai_job_status(row.Rendering_Job_ID)\n", " updated_job_status = JOB_STATE_MAPPING[int(job_status)]\n", " folder_counts[row.Scene_Name] = {\n", " \"Experiment ID\": row.Experiment_ID,\n", " \"Job Status\": updated_job_status,\n", " \"Image Count\": row.Image_Count,\n", " \"Colmap Job ID\": row.Colmap_Job_ID,\n", " \"Training Job ID\": row.Training_Job_ID,\n", " \"Training Job Name\": row.Training_Job_Name,\n", " \"Rendering Job ID\": row.Rendering_Job_ID,\n", " \"Rendering Job Name\": row.Rendering_Job_Name,\n", " \"Created Time\": (\n", " row.Created_Time.strftime(\"%Y-%m-%d %H:%M:%S\")\n", " if row.Created_Time\n", " else None\n", " ),\n", " \"Start Time\": (\n", " row.Start_Time.strftime(\"%Y-%m-%d %H:%M:%S\") if row.Start_Time else None\n", " ),\n", " \"End Time\": (\n", " row.End_Time.strftime(\"%Y-%m-%d %H:%M:%S\") if row.End_Time else None\n", " ),\n", " \"Render Factor\": row.Render_Factor,\n", " \"Render Resolution Width\": row.Render_Resolution_Width,\n", " \"Render Resolution Height\": row.Render_Resolution_Height,\n", " \"Render FPS\": row.Render_FPS,\n", " \"ZipNeRF Gin Config\": row.ZipNeRF_Gin_Config,\n", " \"CamP Gin Config\": row.CamP_Gin_Config,\n", " \"GCS Keyframes File\": row.GCS_Keyframes_File,\n", " \"GCS Render Path File\": row.GCS_Render_Path_File,\n", " \"Render Camtype\": row.Render_Camtype,\n", " \"Render Focal\": row.Render_Focal,\n", " \"Render Path Frames\": row.Render_Path_Frames,\n", " \"GCS Experiment Path\": row.GCS_Experiment_Path,\n", " }\n", "\n", " with concurrent.futures.ThreadPoolExecutor() as executor:\n", " executor.map(process_row, results)\n", "\n", " return folder_counts\n", "\n", "\n", "def get_bq_folders_dataframe_colmap(table_id: str) -> pd.DataFrame:\n", " try:\n", " folder_counts = list_bq_folder_contents_colmap(table_id)\n", " data = [\n", " {\n", " \"Experiment ID\": info[\"Experiment ID\"],\n", " \"Scene Name\": scene,\n", " \"Job Status\": info[\"Job Status\"],\n", " \"Image Count\": info[\"Image Count\"],\n", " \"Colmap Job ID\": info[\"Colmap Job ID\"],\n", " \"Matcher Type\": info[\"Matcher Type\"],\n", " \"Camera Type\": info[\"Camera Type\"],\n", " \"Video Frame FPS\": info[\"Video Frame FPS\"],\n", " \"Max Num Features\": info[\"Max Num Features\"],\n", " \"Use Hierarchical Mapper\": info[\"Use Hierarchical Mapper\"],\n", " \"GCS Dataset Path\": info[\"GCS Dataset Path\"],\n", " \"GCS Experiment Path\": info[\"GCS Experiment Path\"],\n", " }\n", " for scene, info in folder_counts.items()\n", " ]\n", " return pd.DataFrame(data).sort_values(by=\"Experiment 
ID\").reset_index(drop=True)\n", " except Exception as e:\n", " logging.info(f\"Exception encountered in {e}.\", exc_info=True)\n", " return pd.DataFrame()\n", "\n", "\n", "def get_bq_folders_dataframe_training(table_id: str) -> pd.DataFrame:\n", " try:\n", " folder_counts = list_bq_folder_contents_training(table_id)\n", " data = [\n", " {\n", " \"Experiment ID\": info[\"Experiment ID\"],\n", " \"Scene Name\": scene,\n", " \"Job Status\": info[\"Job Status\"],\n", " \"Image Count\": info[\"Image Count\"],\n", " \"Colmap Job ID\": info[\"Colmap Job ID\"],\n", " \"Training Job ID\": info[\"Training Job ID\"],\n", " \"Training Job Name\": info[\"Training Job Name\"],\n", " \"Training Factor\": info[\"Training Factor\"],\n", " \"Training Max Steps\": info[\"Training Max Steps\"],\n", " \"ZipNeRF Gin Config\": info[\"ZipNeRF Gin Config\"],\n", " \"CamP Gin Config\": info[\"CamP Gin Config\"],\n", " \"GCS Experiment Path\": info[\"GCS Experiment Path\"],\n", " }\n", " for scene, info in folder_counts.items()\n", " ]\n", " return pd.DataFrame(data).sort_values(by=\"Experiment ID\").reset_index(drop=True)\n", " except Exception as e:\n", " logging.info(f\"Exception encountered in {e}.\", exc_info=True)\n", " return pd.DataFrame()\n", "\n", "\n", "def get_bq_folders_dataframe_rendering(table_id: str) -> pd.DataFrame:\n", " try:\n", " folder_counts = list_bq_folder_contents_rendering(table_id)\n", " data = [\n", " {\n", " \"Experiment ID\": info[\"Experiment ID\"],\n", " \"Scene Name\": scene,\n", " \"Job Status\": info[\"Job Status\"],\n", " \"Image Count\": info[\"Image Count\"],\n", " \"Colmap Job ID\": info[\"Colmap Job ID\"],\n", " \"Training Job ID\": info[\"Training Job ID\"],\n", " \"Training Job Name\": info[\"Training Job Name\"],\n", " \"Rendering Job ID\": info[\"Rendering Job ID\"],\n", " \"Rendering Job Name\": info[\"Rendering Job Name\"],\n", " \"Render Factor\": info[\"Render Factor\"],\n", " \"Render Resolution Width\": info[\"Render Resolution Width\"],\n", " \"Render Resolution Height\": info[\"Render Resolution Height\"],\n", " \"Render FPS\": info[\"Render FPS\"],\n", " \"ZipNeRF Gin Config\": info[\"ZipNeRF Gin Config\"],\n", " \"CamP Gin Config\": info[\"CamP Gin Config\"],\n", " \"GCS Keyframes File\": info[\"GCS Keyframes File\"],\n", " \"GCS Render Path File\": info[\"GCS Render Path File\"],\n", " \"Render Camtype\": info[\"Render Camtype\"],\n", " \"Render Focal\": info[\"Render Focal\"],\n", " \"Render Path Frames\": info[\"Render Path Frames\"],\n", " \"GCS Experiment Path\": info[\"GCS Experiment Path\"],\n", " }\n", " for scene, info in folder_counts.items()\n", " ]\n", " return pd.DataFrame(data).sort_values(by=\"Experiment ID\").reset_index(drop=True)\n", " except Exception as e:\n", " logging.info(f\"Exception encountered in {e}.\", exc_info=True)\n", " return pd.DataFrame()\n", "\n", "\n", "def upload_to_bq_colmap_table(\n", " experiment_id: str,\n", " scene_name: str,\n", " job_status: str,\n", " image_count: int,\n", " colmap_job_id: str,\n", " created_time: str,\n", " start_time: str,\n", " end_time: str,\n", " matcher_type: str,\n", " camera_type: str,\n", " video_frame_fps: int,\n", " max_num_features: int,\n", " use_hierarchical_mapper: bool,\n", " gcs_dataset_path: str,\n", " gcs_experiment_path: str,\n", " table_id: str,\n", "):\n", " row_data = {\n", " \"Experiment_ID\": experiment_id,\n", " \"Scene_Name\": scene_name,\n", " \"Job_Status\": job_status,\n", " \"Image_Count\": int(image_count),\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " 
\"Created_Time\": (\n", " datetime.strptime(created_time, \"%Y-%m-%d %H:%M:%S\")\n", " if created_time\n", " else None\n", " ),\n", " \"Start_Time\": (\n", " datetime.strptime(start_time, \"%Y-%m-%d %H:%M:%S\") if start_time else None\n", " ),\n", " \"End_Time\": (\n", " datetime.strptime(end_time, \"%Y-%m-%d %H:%M:%S\") if end_time else None\n", " ),\n", " \"Matcher_Type\": matcher_type,\n", " \"Camera_Type\": camera_type,\n", " \"Video_Frame_FPS\": int(video_frame_fps),\n", " \"Max_Num_Features\": int(max_num_features),\n", " \"Use_Hierarchical_Mapper\": use_hierarchical_mapper,\n", " \"GCS_Dataset_Path\": gcs_dataset_path,\n", " \"GCS_Experiment_Path\": gcs_experiment_path,\n", " }\n", " insert_or_update_row(row_data, table_id)\n", "\n", "\n", "def upload_to_bq_training_table(\n", " experiment_id: str,\n", " scene_name: str,\n", " job_status: str,\n", " image_count: int,\n", " colmap_job_id: str,\n", " training_job_id: str,\n", " training_job_name: str,\n", " created_time: str,\n", " start_time: str,\n", " end_time: str,\n", " training_factor: int,\n", " training_max_steps: int,\n", " zipnerf_gin_config: str,\n", " camp_gin_config: str,\n", " gcs_experiment_path: str,\n", " table_id: str,\n", "):\n", " row_data = {\n", " \"Experiment_ID\": experiment_id,\n", " \"Scene_Name\": scene_name,\n", " \"Job_Status\": job_status,\n", " \"Image_Count\": int(image_count),\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": training_job_id,\n", " \"Training_Job_Name\": training_job_name,\n", " \"Created_Time\": (\n", " datetime.strptime(created_time, \"%Y-%m-%d %H:%M:%S\")\n", " if created_time\n", " else None\n", " ),\n", " \"Start_Time\": (\n", " datetime.strptime(start_time, \"%Y-%m-%d %H:%M:%S\") if start_time else None\n", " ),\n", " \"End_Time\": (\n", " datetime.strptime(end_time, \"%Y-%m-%d %H:%M:%S\") if end_time else None\n", " ),\n", " \"Training_Factor\": int(training_factor),\n", " \"Training_Max_Steps\": int(training_max_steps),\n", " \"ZipNeRF_Gin_Config\": zipnerf_gin_config,\n", " \"CamP_Gin_Config\": camp_gin_config,\n", " \"GCS_Experiment_Path\": gcs_experiment_path,\n", " }\n", " insert_or_update_row(row_data, table_id)\n", "\n", "\n", "def upload_to_bq_rendering_table(\n", " experiment_id: str,\n", " scene_name: str,\n", " job_status: str,\n", " image_count: int,\n", " colmap_job_id: str,\n", " training_job_id: str,\n", " training_job_name: str,\n", " rendering_job_id: str,\n", " rendering_job_name: str,\n", " created_time: str,\n", " start_time: str,\n", " end_time: str,\n", " render_factor: int,\n", " render_resolution_width: int,\n", " render_resolution_height: int,\n", " render_fps: int,\n", " zipnerf_gin_config: str,\n", " camp_gin_config: str,\n", " gcs_keyframes_file: str,\n", " gcs_render_path_file: str,\n", " render_camtype: str,\n", " render_focal: float,\n", " render_path_frames: int,\n", " gcs_experiment_path: str,\n", " table_id: str,\n", "):\n", " row_data = {\n", " \"Experiment_ID\": experiment_id,\n", " \"Scene_Name\": scene_name,\n", " \"Job_Status\": job_status,\n", " \"Image_Count\": int(image_count),\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": training_job_id,\n", " \"Training_Job_Name\": training_job_name,\n", " \"Rendering_Job_ID\": rendering_job_id,\n", " \"Rendering_Job_Name\": rendering_job_name,\n", " \"Created_Time\": (\n", " datetime.strptime(created_time, \"%Y-%m-%d %H:%M:%S\")\n", " if created_time\n", " else None\n", " ),\n", " \"Start_Time\": (\n", " datetime.strptime(start_time, \"%Y-%m-%d %H:%M:%S\") if 
start_time else None\n", " ),\n", " \"End_Time\": (\n", " datetime.strptime(end_time, \"%Y-%m-%d %H:%M:%S\") if end_time else None\n", " ),\n", " \"Render_Factor\": int(render_factor),\n", " \"Render_Resolution_Width\": int(render_resolution_width),\n", " \"Render_Resolution_Height\": int(render_resolution_height),\n", " \"Render_FPS\": int(render_fps),\n", " \"ZipNeRF_Gin_Config\": zipnerf_gin_config,\n", " \"CamP_Gin_Config\": camp_gin_config,\n", " \"GCS_Keyframes_File\": gcs_keyframes_file,\n", " \"GCS_Render_Path_File\": gcs_render_path_file,\n", " \"Render_Camtype\": render_camtype,\n", " \"Render_Focal\": 1,\n", " \"Render_Path_Frames\": int(render_path_frames),\n", " \"GCS_Experiment_Path\": gcs_experiment_path,\n", " }\n", " insert_or_update_row(row_data, table_id)\n", "\n", "\n", "def update_job_info(job_id, table_id, scene_name, experiment_id):\n", " while True:\n", " job_status = get_vertex_ai_job_status(job_id)\n", " create_time, start_time, end_time = fetch_job_times(job_id)\n", " row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[job_status],\n", " \"Experiment_ID\": experiment_id,\n", " \"Scene_Name\": scene_name,\n", " \"Created_Time\": create_time,\n", " \"Start_Time\": start_time,\n", " \"End_Time\": end_time,\n", " }\n", " insert_or_update_row(row_data, table_id)\n", " if int(job_status) in [0, 4, 5, 7, 8, 9]:\n", " break\n", " time.sleep(30)\n", "\n", "\n", "def update_pipeline_job_info(job_id, table_id, scene_name, experiment_id, mode=1):\n", " while True:\n", " job_status = get_vertex_ai_pipeline_job_status(job_id)\n", " create_time, start_time, end_time = fetch_pipeline_job_times(job_id)\n", " row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[job_status],\n", " \"Experiment_ID\": experiment_id,\n", " \"Scene_Name\": scene_name,\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Start_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " }\n", " if mode == 1:\n", " row_data[\"Colmap_Job_ID\"] = job_id\n", " elif mode == 2:\n", " row_data[\"Colmap_Job_ID\"] = job_id\n", " row_data[\"Training_Job_ID\"] = job_id\n", " else:\n", " row_data[\"Colmap_Job_ID\"] = job_id\n", " row_data[\"Training_Job_ID\"] = job_id\n", " row_data[\"Rendering_Job_ID\"] = job_id\n", " insert_or_update_row(row_data, table_id)\n", " if int(job_status) in [0, 4, 5, 7, 8, 9]:\n", " row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[job_status],\n", " \"Experiment_ID\": experiment_id,\n", " \"Scene_Name\": scene_name,\n", " \"End_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " }\n", " insert_or_update_row(row_data, table_id)\n", " break\n", " time.sleep(30)\n", "\n", "\n", "# Function to delete selected row in BigQuery and GCS\n", "def delete_experiment_row(selected_row, table_id):\n", " client = get_bigquery_client()\n", "\n", " # Delete row from BigQuery\n", " query = f\"\"\"\n", " DELETE FROM `{table_id}`\n", " WHERE Experiment_ID = '{selected_row}'\n", " \"\"\"\n", " query_job = client.query(query)\n", " query_job.result()\n", "\n", " return f\"Experiment {selected_row} deleted successfully.\"\n", "\n", "\n", "def create_pipeline(\n", " calibration_job_name: str,\n", " training_job_name: str,\n", " rendering_job_name: str,\n", " calibration_worker_pool_specs,\n", " training_worker_pool_specs,\n", " rendering_worker_pool_specs,\n", " notification_emails=None,\n", "):\n", " @dsl.pipeline\n", " def nerf_pipeline():\n", " notify_email_task = VertexNotificationEmailOp(\n", " recipients=notification_emails\n", " ).set_display_name(\"Notify 
email\")\n", " with dsl.ExitHandler(notify_email_task, name=\"CamP ZipNeRF Pipeline\"):\n", " camera_pose_task = CustomTrainingJobOp(\n", " display_name=calibration_job_name,\n", " worker_pool_specs=calibration_worker_pool_specs,\n", " )\n", " camera_pose_task.set_display_name(calibration_job_name)\n", "\n", " train_zipnerf_task = CustomTrainingJobOp(\n", " display_name=training_job_name,\n", " worker_pool_specs=training_worker_pool_specs,\n", " ).after(camera_pose_task)\n", " train_zipnerf_task.set_display_name(training_job_name)\n", "\n", " render_zipnerf_task = CustomTrainingJobOp(\n", " display_name=rendering_job_name,\n", " worker_pool_specs=rendering_worker_pool_specs,\n", " ).after(train_zipnerf_task)\n", " render_zipnerf_task.set_display_name(rendering_job_name)\n", "\n", " pipeline_name = \"lightweight_pipeline.yaml\"\n", " compiler.Compiler().compile(pipeline_func=nerf_pipeline, package_path=pipeline_name)\n", " return pipeline_name\n", "\n", "\n", "def run_pipeline(pipeline_name: str):\n", " PIPELINE_ROOT = (\n", " f\"{BUCKET_NAME}/pipeline_root/{datetime.now().strftime('%Y%m%d_%H%M%S')}\"\n", " )\n", " DISPLAY_NAME = get_job_name_with_datetime(\n", " \"NeRFGradio_\" + generate_short_unique_identifier()\n", " )\n", " job = aiplatform.PipelineJob(\n", " display_name=DISPLAY_NAME,\n", " template_path=pipeline_name,\n", " pipeline_root=PIPELINE_ROOT,\n", " )\n", " job.run(sync=False)\n", " return job" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "N28ef3_mLoiL" }, "outputs": [], "source": [ "# @title Pose Estimation Workshop\n", "\n", "\n", "def create_pose_estimation_workshop():\n", " def get_worker_pool_specs(docker_uri, args, machine_type, accelerator_type):\n", " return [\n", " {\n", " \"machine_spec\": {\n", " \"machine_type\": machine_type,\n", " \"accelerator_type\": accelerator_type,\n", " \"accelerator_count\": 8,\n", " },\n", " \"replica_count\": 1,\n", " \"container_spec\": {\n", " \"image_uri\": docker_uri,\n", " \"args\": args,\n", " },\n", " }\n", " ]\n", "\n", " def extract_frames_from_video(video_path: str, dest_dir: str, frame_rate: int = 1):\n", " clip = VideoFileClip(video_path)\n", " duration = clip.duration\n", " total_frames = int(duration * frame_rate)\n", "\n", " for t in range(total_frames):\n", " frame = clip.get_frame(t / frame_rate)\n", " frame_image = Image.fromarray(frame)\n", " frame_image.save(\n", " os.path.join(dest_dir, f\"frame_{t + 1}.jpg\"), format=\"JPEG\", quality=100\n", " )\n", "\n", " def prepare_instance_images(\n", " scene_name: str,\n", " experiment_name: str,\n", " file_collection: List[gr.File],\n", " frame_rate=1,\n", " progress=gr.Progress(),\n", " ):\n", " if not file_collection:\n", " raise gr.Error(\"Please provide a few valid instance images first!\")\n", "\n", " local_tmp_dir = \"/tmp/instance_images\"\n", " if os.path.exists(local_tmp_dir):\n", " shutil.rmtree(local_tmp_dir)\n", " os.makedirs(local_tmp_dir, exist_ok=True)\n", "\n", " total_files = len(file_collection)\n", " completed_files = 0\n", "\n", " for i, file_temp in enumerate(file_collection, start=1):\n", " file_ext = os.path.splitext(file_temp.name)[1].lower()\n", "\n", " if file_ext in [\n", " \".jpg\",\n", " \".jpeg\",\n", " \".png\",\n", " \".bmp\",\n", " \".gif\",\n", " ]: # Image file extensions\n", " image = Image.open(file_temp.name).convert(\"RGB\")\n", " image.save(\n", " os.path.join(local_tmp_dir, f\"image_{i}.jpg\"),\n", " format=\"JPEG\",\n", " quality=100,\n", " )\n", " elif file_ext in [\".mp4\", 
\".avi\", \".mov\", \".mkv\"]: # Video file extensions\n", " extract_frames_from_video(file_temp.name, local_tmp_dir, frame_rate)\n", "\n", " completed_files += 1\n", " progress(completed_files / total_files)\n", "\n", " instant_image_dir = os.path.join(GCS_API_ENDPOINT, experiment_name)\n", " upload_local_dir_to_gcs(\n", " scene_name, local_tmp_dir, instant_image_dir, table_id=colmap_table_id\n", " )\n", "\n", " def prepare_instance_images_from_gcs(\n", " scene_name: str, experiment_name: str, gcs_folder: str, progress=gr.Progress()\n", " ):\n", "\n", " gcs_folder_path = gcs_folder.replace(\"gs://\", \"\")\n", " path_bucket_name = gcs_folder_path.split(\"/\")[0]\n", " folder_path = gcs_folder_path.replace(path_bucket_name + \"/\", \"\")\n", " output_experiment_name = experiment_name.replace(\"dataset\", \"experiment\")\n", " gcs_experiment_path = os.path.join(BUCKET_NAME, output_experiment_name)\n", "\n", " client = storage.Client()\n", " bucket = client.get_bucket(path_bucket_name)\n", " blobs = bucket.list_blobs(prefix=folder_path)\n", "\n", " # Initialize a set to track unique image names\n", " unique_images = set()\n", " unique_videos = set()\n", "\n", " # Loop through the blobs and add unique image names to the set\n", " for blob in blobs:\n", " # Get the MIME type of the file\n", " mime_type, _ = mimetypes.guess_type(blob.name)\n", " image_name = blob.name.split(\"/\")[-1]\n", " if mime_type and mime_type.startswith(\"image\"):\n", " # Extract the image name (without the folder path)\n", " unique_images.add(image_name)\n", " else:\n", " unique_videos.add(image_name)\n", "\n", " # Return the count of unique images\n", " file_count = len(unique_images) + len(unique_videos)\n", "\n", " scene_name = scene_name + \"_\" + generate_short_unique_identifier()\n", "\n", " row_data = {\n", " \"Experiment_ID\": experiment_name,\n", " \"Scene_Name\": scene_name,\n", " \"Job_Status\": \"Images Uploaded\",\n", " \"Colmap_Job_ID\": \"\",\n", " \"Image_Count\": file_count,\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Start_Time\": None,\n", " \"End_Time\": None,\n", " \"GCS_Dataset_Path\": gcs_folder,\n", " \"GCS_Experiment_Path\": gcs_experiment_path,\n", " }\n", " insert_or_update_row(row_data, colmap_table_id)\n", " return f\"{gcs_folder} dataset successfully inserted to database.\"\n", "\n", " def upload_images_to_gcs(\n", " scene_name,\n", " experiment_name,\n", " file_collection,\n", " gcs_folder,\n", " frame_rate=1,\n", " progress=gr.Progress(),\n", " ):\n", " if gcs_folder:\n", " prepare_instance_images_from_gcs(\n", " scene_name, experiment_name, gcs_folder, progress\n", " )\n", " else:\n", " prepare_instance_images(\n", " scene_name, experiment_name, file_collection, frame_rate, progress\n", " )\n", " return get_bq_folders_dataframe_colmap(colmap_table_id)\n", "\n", " def start_colmap(\n", " selected_row,\n", " matcher_dropdown,\n", " camera_dropdown,\n", " video_frame_fps,\n", " max_num_features,\n", " mapper_dropdown,\n", " machine_dropdown,\n", " ):\n", " print(\"Launching colmap...\")\n", " selected_row = extract_dataset_id(selected_row)\n", " folders_df = get_bq_folders_dataframe_colmap(colmap_table_id)\n", " gcs_dataset_path = folders_df[folders_df[\"Experiment ID\"] == selected_row][\n", " \"GCS Dataset Path\"\n", " ].iloc[0]\n", "\n", " input_image_folder = f\"{BUCKET_NAME}/{selected_row}\"\n", " output_dir = input_image_folder.replace(\"dataset\", \"experiment\")\n", " data_calibration_job_name = get_job_name_with_datetime(\n", " 
\"cloudnerf_gradio_colmap\"\n", " )\n", " unique_experiments.add(selected_row)\n", "\n", " machine_type = (\n", " \"n1-highmem-64\"\n", " if machine_dropdown == \"NVIDIA_TESLA_V100\"\n", " else \"a2-highgpu-8g\"\n", " )\n", " accelerator_type = machine_dropdown\n", " accelerator_count = 8\n", "\n", " colmap_args = [\n", " \"-use_gpu\",\n", " \"1\",\n", " \"-gcs_dataset_path\",\n", " gcs_dataset_path,\n", " \"-gcs_experiment_path\",\n", " output_dir,\n", " \"-camera\",\n", " camera_dropdown,\n", " \"-matching_strategy\",\n", " MATCHER_MAPPING[matcher_dropdown],\n", " \"-max_num_features\",\n", " str(max_num_features),\n", " \"-use_hierarchical_mapper\",\n", " str(int(mapper_dropdown)),\n", " \"-fps\",\n", " str(video_frame_fps),\n", " ]\n", "\n", " worker_pool_specs = get_worker_pool_specs(\n", " CALIBRATION_DOCKER_URI, colmap_args, machine_type, accelerator_type\n", " )\n", "\n", " data_calibration_custom_job = aiplatform.CustomJob(\n", " display_name=data_calibration_job_name,\n", " project=PROJECT_ID,\n", " worker_pool_specs=worker_pool_specs,\n", " staging_bucket=staging_bucket,\n", " )\n", "\n", " data_calibration_custom_job.run(sync=False)\n", "\n", " scene_name = folders_df[folders_df[\"Experiment ID\"] == selected_row][\n", " \"Scene Name\"\n", " ].iloc[0]\n", " image_count = int(\n", " folders_df[folders_df[\"Experiment ID\"] == selected_row][\"Image Count\"].iloc[\n", " 0\n", " ]\n", " )\n", "\n", " colmap_row_data = {\n", " \"Job_Status\": \"RUNNING\",\n", " \"Experiment_ID\": selected_row,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": \"STARTING\",\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Matcher_Type\": MATCHER_MAPPING[matcher_dropdown],\n", " \"Camera_Type\": camera_dropdown,\n", " \"Video_Frame_FPS\": video_frame_fps,\n", " \"Max_Num_Features\": max_num_features,\n", " \"Use_Hierarchical_Mapper\": mapper_dropdown,\n", " \"GCS_Dataset_Path\": gcs_dataset_path,\n", " \"GCS_Experiment_Path\": output_dir,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " training_row_data = {\n", " \"Job_Status\": \"NOT STARTED\",\n", " \"Experiment_ID\": selected_row,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": \"STARTING\",\n", " \"Training_Job_ID\": \"NOT STARTED\",\n", " \"Training_Job_Name\": \"\",\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"GCS_Experiment_Path\": output_dir,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " insert_or_update_row(colmap_row_data, colmap_table_id)\n", " insert_or_update_row(training_row_data, training_table_id)\n", "\n", " while True:\n", " try:\n", " if data_calibration_custom_job.resource_name:\n", " colmap_job_id = data_calibration_custom_job.resource_name.split(\n", " \"/\"\n", " )[-1]\n", " colmap_job_status = get_vertex_ai_job_status(colmap_job_id)\n", " colmap_status_thread = threading.Thread(\n", " target=update_job_info,\n", " args=(\n", " str(colmap_job_id),\n", " colmap_table_id,\n", " scene_name,\n", " selected_row,\n", " ),\n", " )\n", " colmap_status_thread.start()\n", " colmap_row_data.update(\n", " {\n", " \"Job_Status\": JOB_STATE_MAPPING[int(colmap_job_status)],\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Start_Time\": 
datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n",
"                        }\n",
"                    )\n",
"                    training_row_data.update(\n",
"                        {\n",
"                            \"Colmap_Job_ID\": colmap_job_id,\n",
"                        }\n",
"                    )\n",
"                    insert_or_update_row(colmap_row_data, colmap_table_id)\n",
"                    insert_or_update_row(training_row_data, training_table_id)\n",
"                    break\n",
"            except RuntimeError:\n",
"                pass\n",
"        return get_bq_folders_dataframe_colmap(colmap_table_id)\n",
"\n",
"    camp_zipnerf_tip_text = \"\"\"\n",
"    1. Upload images or videos.\n",
"    2. Set parameters for COLMAP.\n",
"    3. Select dataset in the table.\n",
"    4. Click the **Run Colmap** button.\n",
"    5. After the COLMAP job starts, check the job status at\n",
"    [Vertex Custom Training](https://console.cloud.google.com/vertex-ai/training/custom-jobs).\n",
"    \"\"\"\n",
"\n",
"    def create_dropdown(label, options, default):\n",
"        return gr.Dropdown(options, label=label, interactive=True, value=default)\n",
"\n",
"    def create_textbox(label, default, visible=True):\n",
"        return gr.Textbox(\n",
"            label=label, value=default, lines=1, interactive=True, visible=visible\n",
"        )\n",
"\n",
"    def create_number(label, default):\n",
"        return gr.Number(label=label, value=default, interactive=True)\n",
"\n",
"    with gr.Blocks() as demo:\n",
"\n",
"        def update_experiment_and_dataset_names(dataset_name, experiment_path):\n",
"            datetime_suffix = datetime.now().strftime(\"_%Y%m%d_%H%M%S\")\n",
"            if datetime_suffix not in dataset_name:\n",
"                dataset_name = \"dataset\" + datetime_suffix\n",
"            if datetime_suffix not in experiment_path:\n",
"                experiment_path = os.path.join(\n",
"                    BUCKET_NAME, \"experiment\" + datetime_suffix\n",
"                )\n",
"            return dataset_name, experiment_path\n",
"\n",
"        def update_names(scene_name):\n",
"            dataset_name, experiment_path = update_experiment_and_dataset_names(\n",
"                scene_name, \"\"\n",
"            )\n",
"            return dataset_name, experiment_path\n",
"\n",
"        def delete_selected_experiment(selected_value):\n",
"            selected_value = extract_dataset_id(selected_value)\n",
"            delete_experiment_row(selected_value, colmap_table_id)\n",
"            delete_experiment_row(selected_value, training_table_id)\n",
"            delete_experiment_row(selected_value, rendering_table_id)\n",
"            return get_bq_folders_dataframe_colmap(colmap_table_id)\n",
"\n",
"        def on_row_select(folders_df, evt: gr.SelectData):\n",
"            row_index = evt.index[0]\n",
"            if 0 <= row_index < len(folders_df):\n",
"                selected_value = folders_df.iloc[row_index, 0]\n",
"                colmap_job_id = folders_df[\n",
"                    folders_df[\"Experiment ID\"] == selected_value\n",
"                ][\"Colmap Job ID\"].iloc[0]\n",
"                job_status = folders_df[folders_df[\"Experiment ID\"] == selected_value][\n",
"                    \"Job Status\"\n",
"                ].iloc[0]\n",
"                colmap_job_link = selected_value\n",
"                if job_status not in [\"NOT STARTED\"]:\n",
"                    if \"nerf-pipeline\" in colmap_job_id:\n",
"                        colmap_job_link = get_vertex_ai_pipeline_run_link(\n",
"                            colmap_job_id, PROJECT_NUMBER, REGION\n",
"                        )\n",
"                        colmap_job_link = f\"[{selected_value}]({colmap_job_link})\"\n",
"                    elif colmap_job_id == \"MANUAL\":\n",
"                        pass\n",
"                    else:\n",
"                        colmap_job_link = get_vertex_ai_training_job_link(\n",
"                            colmap_job_id, PROJECT_NUMBER, REGION\n",
"                        )\n",
"                        colmap_job_link = f\"[{selected_value}]({colmap_job_link})\"\n",
"                return colmap_job_link, gr.update(visible=True), gr.update(visible=True)\n",
"            else:\n",
"                return \"\", gr.update(visible=False), gr.update(visible=True)\n",
"\n",
"        with gr.Accordion(\"How To Use\", open=False):\n",
"            gr.Markdown(camp_zipnerf_tip_text)\n",
"\n",
"        with gr.Accordion(\"Datasets\", open=True):\n",
"            folders_dataframe = gr.Dataframe(\n",
"                value=get_bq_folders_dataframe_colmap(colmap_table_id),\n",
"                interactive=False,\n",
"            )\n",
"            selected_folder = gr.Markdown()\n",
"            with gr.Row(equal_height=True):\n",
"                run_colmap_button = gr.Button(\"RUN COLMAP\", visible=False)\n",
"                delete_button = gr.Button(\"DELETE\", visible=False)\n",
"            folders_dataframe.select(\n",
"                on_row_select,\n",
"                inputs=[folders_dataframe],\n",
"                outputs=[selected_folder, run_colmap_button, delete_button],\n",
"            )\n",
"\n",
"        with gr.Row(equal_height=True):\n",
"            with gr.Column():\n",
"                gr.Markdown(\"### UPLOAD NEW DATASET\")\n",
"                scene_name = create_textbox(\"Scene Name\", \"\")\n",
"                experiment_name = create_textbox(\n",
"                    \"Experiment Name\",\n",
"                    get_job_name_with_datetime(\"dataset\"),\n",
"                    visible=False,\n",
"                )\n",
"                output_dir = create_textbox(\n",
"                    \"Output Directory\",\n",
"                    get_job_name_with_datetime(\"experiment\"),\n",
"                    visible=False,\n",
"                )\n",
"                file_collection = gr.File(\n",
"                    label=\"Upload the images or video for your NeRF.\",\n",
"                    file_types=[\"image\", \"video\"],\n",
"                    file_count=\"multiple\",\n",
"                    interactive=True,\n",
"                    visible=True,\n",
"                )\n",
"                gcs_folder = create_textbox(\"GCS Folder with images or video\", \"\")\n",
"                video_frame_fps = create_number(\"Video Frame Extraction FPS\", 4)\n",
"                upload_images_button = gr.Button(\n",
"                    \"Upload Scene to GCS\", variant=\"primary\"\n",
"                )\n",
"                _ = gr.Markdown(visible=False)\n",
"\n",
"            with gr.Column():\n",
"                gr.Markdown(\"### SET COLMAP SETTINGS\")\n",
"                matcher_dropdown = create_dropdown(\n",
"                    \"Choose a Matching Algorithm\",\n",
"                    [\n",
"                        \"Exhaustive Matcher\",\n",
"                        \"Sequential Matcher\",\n",
"                        \"Spatial Matcher\",\n",
"                        \"Transitive Matcher\",\n",
"                        \"Vocab Tree Matcher\",\n",
"                    ],\n",
"                    \"Exhaustive Matcher\",\n",
"                )\n",
"                camera_dropdown = create_dropdown(\n",
"                    \"Type of Camera Used for Capture\",\n",
"                    [\"OPENCV\", \"OPENCV_FISHEYE\"],\n",
"                    \"OPENCV\",\n",
"                )\n",
"                machine_dropdown = create_dropdown(\n",
"                    \"Select Machine Type\",\n",
"                    [\"NVIDIA_TESLA_V100\", \"NVIDIA_TESLA_A100\"],\n",
"                    \"NVIDIA_TESLA_V100\",\n",
"                )\n",
"                with gr.Accordion(\"Advanced Settings\", open=False):\n",
"                    max_num_features = create_number(\n",
"                        \"Maximum Number of SIFT Features\", 8192\n",
"                    )\n",
"                    mapper_dropdown = create_dropdown(\n",
"                        \"Use Hierarchical Mapper\", [False, True], False\n",
"                    )\n",
"\n",
"        ws_table_id = create_textbox(\n",
"            \"Workspace Table ID\", colmap_table_id, visible=False\n",
"        )\n",
"\n",
"        upload_images_button.click(\n",
"            upload_images_to_gcs,\n",
"            inputs=[\n",
"                scene_name,\n",
"                experiment_name,\n",
"                file_collection,\n",
"                gcs_folder,\n",
"                video_frame_fps,\n",
"            ],\n",
"            outputs=[folders_dataframe],\n",
"            show_progress=True,\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
"        run_colmap_button.click(\n",
"            start_colmap,\n",
"            inputs=[\n",
"                selected_folder,\n",
"                matcher_dropdown,\n",
"                camera_dropdown,\n",
"                video_frame_fps,\n",
"                max_num_features,\n",
"                mapper_dropdown,\n",
"                machine_dropdown,\n",
"            ],\n",
"            outputs=[folders_dataframe],\n",
"            show_progress=True,\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
"        delete_button.click(\n",
"            delete_selected_experiment,\n",
"            inputs=[selected_folder],\n",
"            outputs=[folders_dataframe],\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
"        demo.load(\n",
"            get_bq_folders_dataframe_colmap,\n",
"            inputs=[ws_table_id],\n",
"            outputs=[folders_dataframe],\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
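"        # Keep the hidden dataset/experiment names in sync with the scene name.\n",
"        scene_name.change(\n",
"            update_names, inputs=[scene_name], 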
outputs=[experiment_name, output_dir]\n", " )\n", "\n", " experiment_name.change(\n", " update_experiment_and_dataset_names,\n", " inputs=[experiment_name, output_dir],\n", " outputs=[experiment_name, output_dir],\n", " )\n", "\n", " output_dir.change(\n", " update_experiment_and_dataset_names,\n", " inputs=[experiment_name, output_dir],\n", " outputs=[experiment_name, output_dir],\n", " )\n", "\n", " demo.load(\n", " update_experiment_and_dataset_names,\n", " inputs=[experiment_name, output_dir],\n", " outputs=[experiment_name, output_dir],\n", " )\n", "\n", " return demo, folders_dataframe" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "a0LCCIDpLtWj" }, "outputs": [], "source": [ "# @title Training Workshop\n", "\n", "\n", "def create_training_workshop():\n", " def get_worker_pool_specs(docker_uri, args, machine_type, accelerator_type):\n", " return [\n", " {\n", " \"machine_spec\": {\n", " \"machine_type\": machine_type,\n", " \"accelerator_type\": accelerator_type,\n", " \"accelerator_count\": 8,\n", " },\n", " \"replica_count\": 1,\n", " \"container_spec\": {\n", " \"image_uri\": docker_uri,\n", " \"args\": args,\n", " },\n", " }\n", " ]\n", "\n", " def update_colmap_dataset(selected_row, colmap_gcs_folder):\n", " print(\"Updating pre-processed colmap dataset...\")\n", " training_df = get_bq_folders_dataframe_training(training_table_id)\n", " selected_row = extract_dataset_id(selected_row)\n", " filtered_df = training_df[training_df[\"Experiment ID\"] == selected_row]\n", " if not filtered_df.empty:\n", " scene_name = filtered_df[\"Scene Name\"].iloc[0]\n", " colmap_row_data = {\n", " \"Job_Status\": \"SUCCEEDED\",\n", " \"Scene_Name\": scene_name,\n", " \"Experiment_ID\": selected_row,\n", " \"Colmap_Job_ID\": \"MANUAL\",\n", " \"GCS_Experiment_Path\": colmap_gcs_folder,\n", " \"End_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " }\n", "\n", " training_row_data = {\n", " \"Experiment_ID\": selected_row,\n", " \"Scene_Name\": scene_name,\n", " \"Colmap_Job_ID\": \"MANUAL\",\n", " \"GCS_Experiment_Path\": colmap_gcs_folder,\n", " }\n", "\n", " insert_or_update_row(colmap_row_data, colmap_table_id)\n", " insert_or_update_row(training_row_data, training_table_id)\n", " _ = get_bq_folders_dataframe_colmap(colmap_table_id)\n", " return get_bq_folders_dataframe_training(training_table_id)\n", "\n", " def start_training(\n", " selected_folder,\n", " training_factor,\n", " training_max_steps,\n", " zipnerf_gin_config,\n", " camp_gin_config,\n", " machine_dropdown,\n", " ):\n", " print(\"Launching training...\")\n", " selected_folder = extract_dataset_id(selected_folder)\n", " colmap_df = get_bq_folders_dataframe_colmap(colmap_table_id)\n", " output_colmap_dir_gcs = colmap_df[\n", " colmap_df[\"Experiment ID\"] == selected_folder\n", " ][\"GCS Experiment Path\"].iloc[0]\n", " data_training_job_name = get_job_name_with_datetime(\"cloudnerf_gradio_training\")\n", " unique_experiments.add(selected_folder)\n", "\n", " colmap_job_status = colmap_df[colmap_df[\"Experiment ID\"] == selected_folder][\n", " \"Job Status\"\n", " ].iloc[0]\n", " if colmap_job_status != \"SUCCEEDED\":\n", " gr.Warning(\"Please wait until the colmap job is finished.\")\n", " return get_bq_folders_dataframe_training(training_table_id)\n", "\n", " machine_type = (\n", " \"n1-highmem-64\"\n", " if machine_dropdown == \"NVIDIA_TESLA_V100\"\n", " else \"a2-highgpu-8g\"\n", " )\n", " accelerator_type = machine_dropdown\n", " accelerator_count = 8\n", "\n", " 
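# Flag-style arguments parsed by the training container's entrypoint.\n",
"        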
training_args = [\n", " \"-training_job_name\",\n", " data_training_job_name,\n", " \"-gcs_experiment_path\",\n", " output_colmap_dir_gcs,\n", " \"-factor\",\n", " str(training_factor),\n", " \"-max_steps\",\n", " str(training_max_steps),\n", " \"-gin_config_zipnerf\",\n", " zipnerf_gin_config,\n", " \"-gin_config_camp\",\n", " camp_gin_config,\n", " ]\n", "\n", " worker_pool_specs = get_worker_pool_specs(\n", " TRAINING_DOCKER_URI, training_args, machine_type, accelerator_type\n", " )\n", "\n", " data_training_custom_job = aiplatform.CustomJob(\n", " display_name=data_training_job_name,\n", " project=PROJECT_ID,\n", " worker_pool_specs=worker_pool_specs,\n", " staging_bucket=staging_bucket,\n", " )\n", "\n", " data_training_custom_job.run(sync=False)\n", "\n", " filtered_df = colmap_df[colmap_df[\"Experiment ID\"] == selected_folder]\n", "\n", " if not filtered_df.empty:\n", " colmap_job_id = filtered_df[\"Colmap Job ID\"].iloc[0]\n", " else:\n", " colmap_job_id = \"PENDING\"\n", "\n", " if not filtered_df.empty:\n", " colmap_job_status = filtered_df[\"Job Status\"].iloc[0]\n", " else:\n", " colmap_job_status = \"PENDING\"\n", "\n", " if not filtered_df.empty:\n", " scene_name = filtered_df[\"Scene Name\"].iloc[0]\n", " else:\n", " scene_name = \"NOT FOUND\"\n", "\n", " if not filtered_df.empty:\n", " image_count = int(filtered_df[\"Image Count\"].iloc[0])\n", " else:\n", " image_count = 0\n", "\n", " colmap_row_data = {\n", " \"Job_Status\": colmap_job_status,\n", " \"Experiment_ID\": selected_folder,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"End_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " }\n", "\n", " training_row_data = {\n", " \"Job_Status\": \"RUNNING\",\n", " \"Experiment_ID\": selected_folder,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": \"STARTING\",\n", " \"Training_Job_Name\": data_training_job_name,\n", " \"Start_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Training_Factor\": training_factor,\n", " \"Training_Max_Steps\": training_max_steps,\n", " \"ZipNeRF_Gin_Config\": zipnerf_gin_config,\n", " \"CamP_Gin_Config\": camp_gin_config,\n", " \"GCS_Experiment_Path\": output_colmap_dir_gcs,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " rendering_row_data = {\n", " \"Job_Status\": \"NOT STARTED\",\n", " \"Experiment_ID\": selected_folder,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": \"STARTING\",\n", " \"Training_Job_Name\": data_training_job_name,\n", " \"Rendering_Job_ID\": \"NOT STARTED\",\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"GCS_Experiment_Path\": output_colmap_dir_gcs,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " insert_or_update_row(colmap_row_data, colmap_table_id)\n", " insert_or_update_row(training_row_data, training_table_id)\n", " insert_or_update_row(rendering_row_data, rendering_table_id)\n", "\n", " while True:\n", " try:\n", " if data_training_custom_job.resource_name:\n", " training_job_id = data_training_custom_job.resource_name.split(\"/\")[\n", " -1\n", " ]\n", " training_job_status = 
get_vertex_ai_job_status(training_job_id)\n",
"                    training_status_thread = threading.Thread(\n",
"                        target=update_job_info,\n",
"                        args=(\n",
"                            str(training_job_id),\n",
"                            training_table_id,\n",
"                            scene_name,\n",
"                            selected_folder,\n",
"                        ),\n",
"                    )\n",
"                    training_status_thread.start()\n",
"                    training_row_data.update(\n",
"                        {\n",
"                            \"Job_Status\": JOB_STATE_MAPPING[int(training_job_status)],\n",
"                            \"Training_Job_ID\": training_job_id,\n",
"                        }\n",
"                    )\n",
"                    rendering_row_data.update(\n",
"                        {\n",
"                            \"Training_Job_ID\": training_job_id,\n",
"                        }\n",
"                    )\n",
"                    insert_or_update_row(training_row_data, training_table_id)\n",
"                    insert_or_update_row(rendering_row_data, rendering_table_id)\n",
"                    break\n",
"            except RuntimeError:\n",
"                pass\n",
"\n",
"        return get_bq_folders_dataframe_training(training_table_id)\n",
"\n",
"    training_tip_text = \"\"\"\n",
"    1. Upload images or videos in the Pose Estimation tab.\n",
"    2. Set parameters for training.\n",
"    3. Select dataset in the table.\n",
"    4. Click the **Run Training** button.\n",
"    5. After the training job starts, check the job status at\n",
"    [Vertex Custom Training](https://console.cloud.google.com/vertex-ai/training/custom-jobs).\n",
"    \"\"\"\n",
"\n",
"    def create_dropdown(label, options, default):\n",
"        return gr.Dropdown(options, label=label, interactive=True, value=default)\n",
"\n",
"    def create_textbox(label, default, visible=True):\n",
"        return gr.Textbox(\n",
"            label=label, value=default, lines=1, interactive=True, visible=visible\n",
"        )\n",
"\n",
"    def create_number(label, default):\n",
"        return gr.Number(label=label, value=default, interactive=True)\n",
"\n",
"    with gr.Blocks() as demo:\n",
"\n",
"        with gr.Accordion(\"How To Use\", open=False):\n",
"            gr.Markdown(training_tip_text)\n",
"\n",
"        def on_row_select(training_df, evt: gr.SelectData):\n",
"            row_index = evt.index[0]\n",
"            if 0 <= row_index < len(training_df):\n",
"                selected_value = training_df.iloc[row_index, 0]\n",
"                training_job_id = training_df[\n",
"                    training_df[\"Experiment ID\"] == selected_value\n",
"                ][\"Training Job ID\"].iloc[0]\n",
"                job_status = training_df[\n",
"                    training_df[\"Experiment ID\"] == selected_value\n",
"                ][\"Job Status\"].iloc[0]\n",
"                training_job_link = selected_value\n",
"                if job_status not in [\"NOT STARTED\"]:\n",
"                    if \"nerf-pipeline\" in training_job_id:\n",
"                        training_job_link = get_vertex_ai_pipeline_run_link(\n",
"                            training_job_id, PROJECT_NUMBER, REGION\n",
"                        )\n",
"                        training_job_link = f\"[{selected_value}]({training_job_link})\"\n",
"                    elif training_job_id == \"MANUAL\":\n",
"                        pass\n",
"                    else:\n",
"                        training_job_link = get_vertex_ai_training_job_link(\n",
"                            training_job_id, PROJECT_NUMBER, REGION\n",
"                        )\n",
"                        training_job_link = f\"[{selected_value}]({training_job_link})\"\n",
"                return training_job_link, gr.update(visible=True)\n",
"            else:\n",
"                return \"\", gr.update(visible=False)\n",
"\n",
"        with gr.Accordion(\"Colmap Datasets\", open=True):\n",
"            training_dataframe = gr.Dataframe(\n",
"                value=get_bq_folders_dataframe_training(training_table_id),\n",
"                interactive=False,\n",
"            )\n",
"            selected_folder = gr.Markdown()\n",
"            run_training_button = gr.Button(\"RUN TRAINING\", visible=False)\n",
"            training_dataframe.select(\n",
"                on_row_select,\n",
"                inputs=[training_dataframe],\n",
"                outputs=[selected_folder, run_training_button],\n",
"            )\n",
"\n",
"        with gr.Row(equal_height=True):\n",
"            with gr.Column():\n",
"                gr.Markdown(\"### SET PROCESSED COLMAP DATASET\")\n",
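"                # Point this at an existing COLMAP output folder on GCS to skip pose estimation.\n",
"                colmap_gcs_folder = create_textbox(\"Enter GCS Experiment 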
Folder\", \"\")\n", " set_processed_colmap_button = gr.Button(\n", " \"Set Processed COLMAP Data\", variant=\"primary\"\n", " )\n", " with gr.Column():\n", " gr.Markdown(\"### SET TRAINING PARAMETERS\")\n", " training_factor = create_dropdown(\"Downscaling Factor\", [0, 2, 4, 8], 4)\n", " training_max_steps = create_number(\n", " \"Maximum Number of Training Steps\", 25000\n", " )\n", " machine_dropdown = create_dropdown(\n", " \"Select Machine Type\",\n", " [\"NVIDIA_TESLA_V100\", \"NVIDIA_TESLA_A100\"],\n", " \"NVIDIA_TESLA_V100\",\n", " )\n", " with gr.Accordion(\"Advanced Settings\", open=False):\n", " zipnerf_gin_config = create_textbox(\n", " \"The ZipNeRF .gin Configuration File\",\n", " \"configs/zipnerf/360_aglo128.gin\",\n", " )\n", " camp_gin_config = create_textbox(\n", " \"The CamP .gin Configuration File\",\n", " \"configs/camp/camera_optim.gin\",\n", " )\n", "\n", " ws_table_id = create_textbox(\n", " \"Workspace Table ID\", training_table_id, visible=False\n", " )\n", "\n", " run_training_button.click(\n", " start_training,\n", " inputs=[\n", " selected_folder,\n", " training_factor,\n", " training_max_steps,\n", " zipnerf_gin_config,\n", " camp_gin_config,\n", " machine_dropdown,\n", " ],\n", " outputs=[training_dataframe],\n", " show_progress=True,\n", " concurrency_limit=10,\n", " )\n", "\n", " set_processed_colmap_button.click(\n", " update_colmap_dataset,\n", " inputs=[selected_folder, colmap_gcs_folder],\n", " outputs=[training_dataframe],\n", " show_progress=True,\n", " concurrency_limit=10,\n", " )\n", "\n", " demo.load(\n", " get_bq_folders_dataframe_training,\n", " inputs=[ws_table_id],\n", " outputs=[training_dataframe],\n", " concurrency_limit=10,\n", " )\n", "\n", " return demo, training_dataframe" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "pEUE531QLtOP" }, "outputs": [], "source": [ "# @title Rendering Workshop\n", "\n", "\n", "def create_rendering_workshop():\n", " def get_worker_pool_specs(docker_uri, args, machine_type, accelerator_type):\n", " return [\n", " {\n", " \"machine_spec\": {\n", " \"machine_type\": machine_type,\n", " \"accelerator_type\": accelerator_type,\n", " \"accelerator_count\": 8,\n", " },\n", " \"replica_count\": 1,\n", " \"container_spec\": {\n", " \"image_uri\": docker_uri,\n", " \"args\": args,\n", " },\n", " }\n", " ]\n", "\n", " def start_rendering(\n", " selected_folder,\n", " gcs_keyframes_file,\n", " render_factor,\n", " render_resolution_width,\n", " render_resolution_height,\n", " render_fps,\n", " zipnerf_gin_config,\n", " camp_gin_config,\n", " gcs_render_path_file,\n", " render_camtype,\n", " render_path_frames,\n", " machine_dropdown,\n", " ):\n", " print(\"Launching rendering...\")\n", " rendering_df = get_bq_folders_dataframe_rendering(rendering_table_id)\n", " training_df = get_bq_folders_dataframe_training(training_table_id)\n", "\n", " training_job_status = training_df[\n", " training_df[\"Experiment ID\"] == selected_folder\n", " ][\"Job Status\"].iloc[0]\n", " if training_job_status != \"SUCCEEDED\":\n", " gr.Warning(\"Please wait until the training job is finished.\")\n", " return rendering_df\n", "\n", " selected_folder = extract_dataset_id(selected_folder)\n", " output_colmap_dir_gcs = rendering_df[\n", " rendering_df[\"Experiment ID\"] == selected_folder\n", " ][\"GCS Experiment Path\"].iloc[0]\n", " training_job_name = rendering_df[\n", " rendering_df[\"Experiment ID\"] == selected_folder\n", " ][\"Training Job Name\"].iloc[0]\n", "\n", " 
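# Create a timestamped display name for the rendering job; the trained\n",
"        # checkpoint is located through the training job name looked up above.\n",
"        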
data_rendering_job_name = get_job_name_with_datetime(\n", " \"cloudnerf_gradio_rendering\"\n", " )\n", " unique_experiments.add(selected_folder)\n", " video_resolution = f\"({render_resolution_width}, {render_resolution_height})\"\n", "\n", " machine_type = (\n", " \"n1-highmem-64\"\n", " if machine_dropdown == \"NVIDIA_TESLA_V100\"\n", " else \"a2-highgpu-8g\"\n", " )\n", " accelerator_type = machine_dropdown\n", " accelerator_count = 8\n", "\n", " rendering_args = [\n", " \"-rendering_job_name\",\n", " data_rendering_job_name,\n", " \"-training_job_name\",\n", " training_job_name,\n", " \"-gcs_experiment_path\",\n", " output_colmap_dir_gcs,\n", " \"-render_resolution\",\n", " video_resolution,\n", " \"-render_video_fps\",\n", " str(render_fps),\n", " \"-factor\",\n", " str(render_factor),\n", " \"-gin_config_zipnerf\",\n", " zipnerf_gin_config,\n", " \"-gin_config_camp\",\n", " camp_gin_config,\n", " ]\n", " if gcs_keyframes_file:\n", " rendering_args.append(\"-gcs_keyframes_file\")\n", " rendering_args.append(gcs_keyframes_file)\n", " if render_camtype:\n", " rendering_args.append(\"-render_camtype\")\n", " rendering_args.append(render_camtype)\n", " if gcs_render_path_file and not gcs_keyframes_file:\n", " rendering_args.append(\"-gcs_render_path_file\")\n", " rendering_args.append(gcs_render_path_file)\n", "\n", " worker_pool_specs = get_worker_pool_specs(\n", " RENDERING_DOCKER_URI, rendering_args, machine_type, accelerator_type\n", " )\n", "\n", " data_rendering_custom_job = aiplatform.CustomJob(\n", " display_name=data_rendering_job_name,\n", " project=PROJECT_ID,\n", " worker_pool_specs=worker_pool_specs,\n", " staging_bucket=staging_bucket,\n", " )\n", "\n", " data_rendering_custom_job.run(sync=False)\n", "\n", " filtered_df = training_df[training_df[\"Experiment ID\"] == selected_folder]\n", "\n", " if not filtered_df.empty:\n", " colmap_job_id = filtered_df[\"Colmap Job ID\"].iloc[0]\n", " else:\n", " colmap_job_id = \"PENDING\"\n", "\n", " if not filtered_df.empty:\n", " training_job_id = filtered_df[\"Training Job ID\"].iloc[0]\n", " else:\n", " training_job_id = \"PENDING\"\n", "\n", " if not filtered_df.empty:\n", " training_job_status = filtered_df[\"Job Status\"].iloc[0]\n", " else:\n", " training_job_status = \"PENDING\"\n", "\n", " if not filtered_df.empty:\n", " scene_name = filtered_df[\"Scene Name\"].iloc[0]\n", " else:\n", " scene_name = \"NOT FOUND\"\n", "\n", " if not filtered_df.empty:\n", " image_count = int(filtered_df[\"Image Count\"].iloc[0])\n", " else:\n", " image_count = 0\n", "\n", " training_row_data = {\n", " \"Job_Status\": training_job_status,\n", " \"Experiment_ID\": selected_folder,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": training_job_id,\n", " \"Training_Job_Name\": training_job_name,\n", " \"End_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"GCS_Experiment_Path\": output_colmap_dir_gcs,\n", " }\n", "\n", " rendering_row_data = {\n", " \"Job_Status\": \"RUNNING\",\n", " \"Experiment_ID\": selected_folder,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": training_job_id,\n", " \"Training_Job_Name\": training_job_name,\n", " \"Rendering_Job_ID\": \"STARTING\",\n", " \"Rendering_Job_Name\": data_rendering_job_name,\n", " \"Start_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Render_Factor\": render_factor,\n", " 
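# Persist every rendering parameter so the run can be reproduced later.\n",
"            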
\"Render_Resolution_Width\": render_resolution_width,\n", " \"Render_Resolution_Height\": render_resolution_height,\n", " \"Render_FPS\": render_fps,\n", " \"ZipNeRF_Gin_Config\": zipnerf_gin_config,\n", " \"CamP_Gin_Config\": camp_gin_config,\n", " \"GCS_Keyframes_File\": gcs_keyframes_file,\n", " \"GCS_Render_Path_File\": gcs_render_path_file,\n", " \"Render_Camtype\": render_camtype,\n", " \"Render_Path_Frames\": render_path_frames,\n", " \"GCS_Experiment_Path\": output_colmap_dir_gcs,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " insert_or_update_row(training_row_data, training_table_id)\n", " insert_or_update_row(rendering_row_data, rendering_table_id)\n", "\n", " while True:\n", " try:\n", " if data_rendering_custom_job.resource_name:\n", " rendering_job_id = data_rendering_custom_job.resource_name.split(\n", " \"/\"\n", " )[-1]\n", " rendering_job_status = get_vertex_ai_job_status(rendering_job_id)\n", " rendering_status_thread = threading.Thread(\n", " target=update_job_info,\n", " args=(\n", " str(rendering_job_id),\n", " rendering_table_id,\n", " scene_name,\n", " selected_folder,\n", " ),\n", " )\n", " rendering_status_thread.start()\n", " row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[int(rendering_job_status)],\n", " \"Experiment_ID\": selected_folder,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": colmap_job_id,\n", " \"Training_Job_ID\": training_job_id,\n", " \"Training_Job_Name\": training_job_name,\n", " \"Rendering_Job_ID\": rendering_job_id,\n", " \"Rendering_Job_Name\": data_rendering_job_name,\n", " \"GCS_Experiment_Path\": output_colmap_dir_gcs,\n", " }\n", " insert_or_update_row(row_data, rendering_table_id)\n", " break\n", " except RuntimeError:\n", " pass\n", "\n", " return get_bq_folders_dataframe_rendering(rendering_table_id)\n", "\n", " rendering_tip_text = \"\"\"\n", " 1. Upload images or videos in the Pose Estimation tab.\n", " 2. Set parameters for rendering.\n", " 3. Select dataset in the table.\n", " 4. Click the **Run Rendering** button.\n", " 5. 
After the rendering job starts, check the job status at\n",
"    [Vertex Custom Training](https://console.cloud.google.com/vertex-ai/training/custom-jobs).\n",
"    \"\"\"\n",
"\n",
"    def create_dropdown(label, options, default):\n",
"        return gr.Dropdown(options, label=label, interactive=True, value=default)\n",
"\n",
"    def create_textbox(label, default, visible=True):\n",
"        return gr.Textbox(\n",
"            label=label, value=default, lines=1, interactive=True, visible=visible\n",
"        )\n",
"\n",
"    def create_number(label, default):\n",
"        return gr.Number(label=label, value=default, interactive=True)\n",
"\n",
"    def selected_video_output(selected_value):\n",
"        rendering_df = get_bq_folders_dataframe_rendering(rendering_table_id)\n",
"        selected_value = extract_dataset_id(selected_value)\n",
"        gcs_experiment_path = rendering_df[\n",
"            rendering_df[\"Experiment ID\"] == selected_value\n",
"        ][\"GCS Experiment Path\"].iloc[0]\n",
"        rendering_job_name = rendering_df[\n",
"            rendering_df[\"Experiment ID\"] == selected_value\n",
"        ][\"Rendering Job Name\"].iloc[0]\n",
"        rendering_job_status = rendering_df[\n",
"            rendering_df[\"Experiment ID\"] == selected_value\n",
"        ][\"Job Status\"].iloc[0]\n",
"        experiment_path_no_prefix = gcs_experiment_path.replace(\"gs://\", \"\")\n",
"        bucket_name = experiment_path_no_prefix.split(\"/\")[0]\n",
"        folder_path = os.path.join(*experiment_path_no_prefix.split(\"/\")[1:])\n",
"        if rendering_job_status == \"SUCCEEDED\":\n",
"            gcs_video_path = os.path.join(\n",
"                folder_path, \"render\", rendering_job_name, \"path_videos\", \"videos\"\n",
"            )\n",
"            try:\n",
"                color_video_path = list_mp4_files(\n",
"                    bucket_name, gcs_video_path, rendering_job_status\n",
"                )[1]\n",
"                remote_gcs_video_path = os.path.join(bucket_name, color_video_path)\n",
"                remote_gcs_video_path = \"gs://\" + remote_gcs_video_path\n",
"                download_gcs_file_to_local_dir(remote_gcs_video_path, \"/tmp/\")\n",
"                video_filename = os.path.basename(remote_gcs_video_path)\n",
"                temp_video_file_path = f\"/tmp/{video_filename}\"\n",
"            except IndexError:\n",
"                temp_video_file_path = create_pending_video(\n",
"                    rendering_job_status, \"/tmp/pending.mp4\"\n",
"                )\n",
"        else:\n",
"            # Fall back to a placeholder clip so the player always has a file.\n",
"            temp_video_file_path = create_pending_video(\n",
"                rendering_job_status, \"/tmp/pending.mp4\"\n",
"            )\n",
"        return temp_video_file_path\n",
"\n",
"    def on_row_select(rendering_df, evt: gr.SelectData):\n",
"        row_index = evt.index[0]\n",
"        if 0 <= row_index < len(rendering_df):\n",
"            selected_value = rendering_df.iloc[row_index, 0]\n",
"            rendering_job_status = rendering_df[\n",
"                rendering_df[\"Experiment ID\"] == selected_value\n",
"            ][\"Job Status\"].iloc[0]\n",
"            # Pre-download the rendered video so the player responds quickly.\n",
"            _ = (\n",
"                selected_video_output(selected_value)\n",
"                if rendering_job_status == \"SUCCEEDED\"\n",
"                else None\n",
"            )\n",
"            selected_video.visible = rendering_job_status == \"SUCCEEDED\"\n",
"            rendering_job_id = rendering_df[\n",
"                rendering_df[\"Experiment ID\"] == selected_value\n",
"            ][\"Rendering Job ID\"].iloc[0]\n",
"            rendering_job_link = selected_value\n",
"            if rendering_job_status not in [\"NOT STARTED\"]:\n",
"                if rendering_job_id == \"MANUAL\":\n",
"                    pass\n",
"                elif \"nerf-pipeline\" in rendering_job_id:\n",
"                    rendering_job_link = get_vertex_ai_pipeline_run_link(\n",
"                        rendering_job_id, PROJECT_NUMBER, REGION\n",
"                    )\n",
"                    rendering_job_link = f\"[{selected_value}]({rendering_job_link})\"\n",
"                else:\n",
"                    rendering_job_link = get_vertex_ai_training_job_link(\n",
"                        rendering_job_id, PROJECT_NUMBER, REGION\n",
"                    )\n",
"                    rendering_job_link = f\"[{selected_value}]({rendering_job_link})\"\n",
"            return selected_value, rendering_job_link, gr.update(visible=True)\n",
"        else:\n",
"            return (\n",
"                gr.update(visible=False),\n",
"                gr.update(visible=False),\n",
"                gr.update(visible=False),\n",
"            )\n",
"\n",
"    with gr.Blocks() as demo:\n",
"        with gr.Accordion(\"How To Use\", open=False):\n",
"            gr.Markdown(rendering_tip_text)\n",
"\n",
"        with gr.Accordion(\"Trained Checkpoints\", open=True):\n",
"            rendering_dataframe = gr.Dataframe(\n",
"                value=get_bq_folders_dataframe_rendering(rendering_table_id),\n",
"                interactive=False,\n",
"            )\n",
"            run_rendering_button = gr.Button(\"RUN RENDERING\", visible=False)\n",
"            selected_value = gr.Textbox(visible=False)\n",
"            selected_value_link = gr.Markdown()\n",
"            selected_video = gr.Interface(\n",
"                fn=selected_video_output,\n",
"                inputs=[selected_value],\n",
"                outputs=\"video\",\n",
"                submit_btn=\"Play\",\n",
"                visible=False,\n",
"            )\n",
"            rendering_dataframe.select(\n",
"                on_row_select,\n",
"                inputs=[rendering_dataframe],\n",
"                outputs=[selected_value, selected_value_link, run_rendering_button],\n",
"            )\n",
"\n",
"        with gr.Row(equal_height=True):\n",
"            with gr.Column():\n",
"                gr.Markdown(\"### SET RENDERING PARAMETERS\")\n",
"                gcs_keyframes_file = create_textbox(\"GCS Keyframe File\", \"\")\n",
"                render_factor = create_dropdown(\"Downscaling Factor\", [0, 2, 4, 8], 4)\n",
"                render_resolution_width = create_number(\"Video Resolution Width\", 1280)\n",
"                render_resolution_height = create_number(\"Video Resolution Height\", 720)\n",
"                render_fps = create_number(\"Video FPS\", 30)\n",
"                machine_dropdown = create_dropdown(\n",
"                    \"Select Machine Type\",\n",
"                    [\"NVIDIA_TESLA_V100\", \"NVIDIA_TESLA_A100\"],\n",
"                    \"NVIDIA_TESLA_V100\",\n",
"                )\n",
"                with gr.Accordion(\"Advanced Settings\", open=False):\n",
"                    zipnerf_gin_config = create_textbox(\n",
"                        \"The ZipNeRF .gin Configuration File\",\n",
"                        \"configs/zipnerf/360_aglo128.gin\",\n",
"                    )\n",
"                    camp_gin_config = create_textbox(\n",
"                        \"The CamP .gin Configuration File\",\n",
"                        \"configs/camp/camera_optim.gin\",\n",
"                    )\n",
"                    gcs_render_path_file = create_textbox(\n",
"                        \"The GCS Render Path .npy File\", \"\"\n",
"                    )\n",
"                    render_camtype = create_textbox(\n",
"                        \"The Render Camera Type\", \"perspective\"\n",
"                    )\n",
"                    render_path_frames = create_number(\n",
"                        \"The Number of Frames along the Render Path\", 120\n",
"                    )\n",
"\n",
"        ws_table_id = create_textbox(\n",
"            \"Workspace Table ID\", rendering_table_id, visible=False\n",
"        )\n",
"\n",
"        run_rendering_button.click(\n",
"            start_rendering,\n",
"            inputs=[\n",
"                selected_value,\n",
"                gcs_keyframes_file,\n",
"                render_factor,\n",
"                render_resolution_width,\n",
"                render_resolution_height,\n",
"                render_fps,\n",
"                zipnerf_gin_config,\n",
"                camp_gin_config,\n",
"                gcs_render_path_file,\n",
"                render_camtype,\n",
"                render_path_frames,\n",
"                machine_dropdown,\n",
"            ],\n",
"            outputs=[rendering_dataframe],\n",
"            show_progress=True,\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
"        demo.load(\n",
"            get_bq_folders_dataframe_rendering,\n",
"            inputs=[ws_table_id],\n",
"            outputs=[rendering_dataframe],\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
"    return demo, rendering_dataframe" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "tvPGALYMLtEo" }, "outputs": [], "source": [ "# @title Pipeline Workshop\n",
"\n",
"\n",
"def create_pipeline_workshop():\n",
"    def get_worker_pool_specs(docker_uri, args, machine_type, accelerator_type):\n",
"        return [\n",
"            {\n",
"                \"machine_spec\": {\n",
"                    \"machine_type\": machine_type,\n",
"                    \"accelerator_type\": accelerator_type,\n",
"                    \"accelerator_count\": 8,\n",
"                },\n",
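"                # One replica: a single machine with all eight accelerators attached.\n",
"                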
\"replica_count\": 1,\n", " \"container_spec\": {\n", " \"image_uri\": docker_uri,\n", " \"args\": args,\n", " },\n", " }\n", " ]\n", "\n", " def process_emails(email_input):\n", " # Split the input string by commas and strip any whitespace\n", " email_list = [email.strip() for email in email_input.split(\",\")]\n", " # You can add further validation here if needed\n", " return email_list\n", "\n", " def start_pipeline(\n", " selected_row,\n", " matcher_dropdown,\n", " camera_dropdown,\n", " video_frame_fps,\n", " max_num_features,\n", " mapper_dropdown,\n", " training_factor,\n", " training_max_steps,\n", " zipnerf_gin_config,\n", " camp_gin_config,\n", " gcs_keyframes_file,\n", " render_factor,\n", " render_resolution_width,\n", " render_resolution_height,\n", " render_fps,\n", " gcs_render_path_file,\n", " render_camtype,\n", " render_path_frames,\n", " machine_dropdown,\n", " email_list_input,\n", " ):\n", "\n", " print(\"Launching pipeline...\")\n", "\n", " selected_row = extract_dataset_id(selected_row)\n", " folders_df = get_bq_folders_dataframe_colmap(colmap_table_id)\n", "\n", " if not email_list_input:\n", " gr.Warning(\"Please provide at least an email address.\")\n", " return folders_df\n", "\n", " gcs_dataset_path_query = folders_df[\n", " folders_df[\"Experiment ID\"] == selected_row\n", " ][\"GCS Dataset Path\"]\n", " gcs_dataset_path = (\n", " gcs_dataset_path_query.iloc[0] if not gcs_dataset_path_query.empty else None\n", " )\n", "\n", " input_image_folder = f\"{BUCKET_NAME}/{selected_row}\"\n", " output_dir = input_image_folder.replace(\"dataset\", \"experiment\")\n", " data_calibration_job_name = get_job_name_with_datetime(\n", " \"cloudnerf_gradio_colmap\"\n", " )\n", " data_training_job_name = get_job_name_with_datetime(\"cloudnerf_gradio_training\")\n", " data_rendering_job_name = get_job_name_with_datetime(\n", " \"cloudnerf_gradio_rendering\"\n", " )\n", " video_resolution = f\"({render_resolution_width}, {render_resolution_height})\"\n", " unique_experiments.add(selected_row)\n", "\n", " machine_type = (\n", " \"n1-highmem-64\"\n", " if machine_dropdown == \"NVIDIA_TESLA_V100\"\n", " else \"a2-highgpu-8g\"\n", " )\n", " accelerator_type = machine_dropdown\n", " accelerator_count = 8\n", "\n", " calibration_args = [\n", " \"-use_gpu\",\n", " \"1\",\n", " \"-gcs_dataset_path\",\n", " gcs_dataset_path,\n", " \"-gcs_experiment_path\",\n", " output_dir,\n", " \"-camera\",\n", " camera_dropdown,\n", " \"-matching_strategy\",\n", " MATCHER_MAPPING[matcher_dropdown],\n", " \"-max_num_features\",\n", " str(max_num_features),\n", " \"-use_hierarchical_mapper\",\n", " str(int(mapper_dropdown)),\n", " \"-fps\",\n", " str(video_frame_fps),\n", " ]\n", "\n", " training_args = [\n", " \"-training_job_name\",\n", " data_training_job_name,\n", " \"-gcs_experiment_path\",\n", " output_dir,\n", " \"-factor\",\n", " str(training_factor),\n", " \"-max_steps\",\n", " str(training_max_steps),\n", " \"-gin_config_zipnerf\",\n", " zipnerf_gin_config,\n", " \"-gin_config_camp\",\n", " camp_gin_config,\n", " ]\n", "\n", " rendering_args = [\n", " \"-rendering_job_name\",\n", " data_rendering_job_name,\n", " \"-training_job_name\",\n", " data_training_job_name,\n", " \"-gcs_experiment_path\",\n", " output_dir,\n", " \"-render_resolution\",\n", " video_resolution,\n", " \"-render_video_fps\",\n", " str(render_fps),\n", " \"-factor\",\n", " str(render_factor),\n", " \"-gin_config_zipnerf\",\n", " zipnerf_gin_config,\n", " \"-gin_config_camp\",\n", " camp_gin_config,\n", " ]\n", "\n", " if 
gcs_keyframes_file:\n", " rendering_args.append(\"-gcs_keyframes_file\")\n", " rendering_args.append(gcs_keyframes_file)\n", " if render_camtype:\n", " rendering_args.append(\"-render_camtype\")\n", " rendering_args.append(render_camtype)\n", " if gcs_render_path_file and not gcs_keyframes_file:\n", " rendering_args.append(\"-gcs_render_path_file\")\n", " rendering_args.append(gcs_render_path_file)\n", "\n", " calibration_worker_pool_specs = get_worker_pool_specs(\n", " CALIBRATION_DOCKER_URI, calibration_args, machine_type, accelerator_type\n", " )\n", " training_worker_pool_specs = get_worker_pool_specs(\n", " TRAINING_DOCKER_URI, training_args, machine_type, accelerator_type\n", " )\n", " rendering_worker_pool_specs = get_worker_pool_specs(\n", " RENDERING_DOCKER_URI, rendering_args, machine_type, accelerator_type\n", " )\n", "\n", " scene_name = (\n", " folders_df[folders_df[\"Experiment ID\"] == selected_row][\"Scene Name\"].iloc[\n", " 0\n", " ]\n", " if not folders_df[folders_df[\"Experiment ID\"] == selected_row][\n", " \"Scene Name\"\n", " ].empty\n", " else \"NOT FOUND\"\n", " )\n", " image_count = (\n", " int(\n", " folders_df[folders_df[\"Experiment ID\"] == selected_row][\n", " \"Image Count\"\n", " ].iloc[0]\n", " )\n", " if not folders_df[folders_df[\"Experiment ID\"] == selected_row][\n", " \"Image Count\"\n", " ].empty\n", " else 0\n", " )\n", "\n", " email_list_processed = process_emails(email_list_input)\n", " nerf_pipeline_run = create_pipeline(\n", " data_calibration_job_name,\n", " data_training_job_name,\n", " data_rendering_job_name,\n", " calibration_worker_pool_specs,\n", " training_worker_pool_specs,\n", " rendering_worker_pool_specs,\n", " email_list_processed,\n", " )\n", "\n", " pipeline_job = run_pipeline(nerf_pipeline_run)\n", "\n", " while True:\n", " try:\n", " pipeline_job_id = pipeline_job.resource_name.split(\"/\")[-1]\n", " for i, table_id in enumerate(\n", " [colmap_table_id, training_table_id, rendering_table_id]\n", " ):\n", " thread = threading.Thread(\n", " target=update_pipeline_job_info,\n", " args=(\n", " str(pipeline_job_id),\n", " table_id,\n", " scene_name,\n", " selected_row,\n", " i + 1,\n", " ),\n", " )\n", " thread.start()\n", "\n", " colmap_row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[int(pipeline_job.state)],\n", " \"Pipeline_State\": pipeline_job.state,\n", " \"Pipeline_Resource_Name\": pipeline_job.resource_name,\n", " \"Experiment_ID\": selected_row,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": pipeline_job_id,\n", " \"Created_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Matcher_Type\": MATCHER_MAPPING[matcher_dropdown],\n", " \"Camera_Type\": camera_dropdown,\n", " \"Video_Frame_FPS\": video_frame_fps,\n", " \"Max_Num_Features\": max_num_features,\n", " \"Use_Hierarchical_Mapper\": mapper_dropdown,\n", " \"GCS_Dataset_Path\": gcs_dataset_path,\n", " \"GCS_Experiment_Path\": output_dir,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " training_row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[int(pipeline_job.state)],\n", " \"Pipeline_State\": pipeline_job.state,\n", " \"Pipeline_Resource_Name\": pipeline_job.resource_name,\n", " \"Experiment_ID\": selected_row,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": pipeline_job_id,\n", " \"Training_Job_ID\": pipeline_job_id,\n", " \"Training_Job_Name\": 
data_training_job_name,\n", " \"Start_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Training_Factor\": training_factor,\n", " \"Training_Max_Steps\": training_max_steps,\n", " \"ZipNeRF_Gin_Config\": zipnerf_gin_config,\n", " \"CamP_Gin_Config\": camp_gin_config,\n", " \"GCS_Experiment_Path\": output_dir,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", "\n", " rendering_row_data = {\n", " \"Job_Status\": JOB_STATE_MAPPING[int(pipeline_job.state)],\n", " \"Pipeline_State\": pipeline_job.state,\n", " \"Pipeline_Resource_Name\": pipeline_job.resource_name,\n", " \"Experiment_ID\": selected_row,\n", " \"Scene_Name\": scene_name,\n", " \"Image_Count\": image_count,\n", " \"Colmap_Job_ID\": pipeline_job_id,\n", " \"Training_Job_ID\": pipeline_job_id,\n", " \"Training_Job_Name\": data_training_job_name,\n", " \"Rendering_Job_ID\": pipeline_job_id,\n", " \"Rendering_Job_Name\": data_rendering_job_name,\n", " \"Start_Time\": datetime.now().strftime(\"%Y-%m-%dT%H:%M:%S\"),\n", " \"Render_Factor\": render_factor,\n", " \"Render_Resolution_Width\": render_resolution_width,\n", " \"Render_Resolution_Height\": render_resolution_height,\n", " \"Render_FPS\": render_fps,\n", " \"ZipNeRF_Gin_Config\": zipnerf_gin_config,\n", " \"CamP_Gin_Config\": camp_gin_config,\n", " \"GCS_Keyframes_File\": gcs_keyframes_file,\n", " \"GCS_Render_Path_File\": gcs_render_path_file,\n", " \"Render_Camtype\": render_camtype,\n", " \"Render_Path_Frames\": render_path_frames,\n", " \"GCS_Experiment_Path\": output_dir,\n", " \"Machine_Type\": machine_type,\n", " \"Accelerator_Type\": accelerator_type,\n", " \"Accelerator_Count\": accelerator_count,\n", " }\n", " insert_or_update_row(colmap_row_data, colmap_table_id)\n", " insert_or_update_row(training_row_data, training_table_id)\n", " insert_or_update_row(rendering_row_data, rendering_table_id)\n", " break\n", "\n", " except RuntimeError as e:\n", " print(e)\n", "\n", " folders_df = get_bq_folders_dataframe_colmap(colmap_table_id)\n", " return folders_df\n", "\n", " camp_zipnerf_pipeline_tip_text = \"\"\"\n", " 1. Upload images or videos in the Pose Estimation tab.\n", " 2. Set parameters for the pipeline.\n", " 3. Select dataset in the table.\n", " 4. Click the **Run Full Pipeline** button.\n", " 5. 
After the pipeline starts, check the status of its jobs at\n",
"    [Vertex Custom Training](https://console.cloud.google.com/vertex-ai/training/custom-jobs).\n",
"    \"\"\"\n",
"\n",
"    with gr.Blocks() as demo:\n",
"\n",
"        def on_row_select(folders_df, evt: gr.SelectData):\n",
"            row_index = evt.index[0]\n",
"            if 0 <= row_index < len(folders_df):\n",
"                selected_value = folders_df.iloc[row_index, 0]\n",
"                colmap_job_id = folders_df[\n",
"                    folders_df[\"Experiment ID\"] == selected_value\n",
"                ][\"Colmap Job ID\"].iloc[0]\n",
"                job_status = folders_df[folders_df[\"Experiment ID\"] == selected_value][\n",
"                    \"Job Status\"\n",
"                ].iloc[0]\n",
"                colmap_job_link = selected_value\n",
"                if job_status not in [\"NOT STARTED\"]:\n",
"                    if \"nerf-pipeline\" in colmap_job_id:\n",
"                        colmap_job_link = get_vertex_ai_pipeline_run_link(\n",
"                            colmap_job_id, PROJECT_NUMBER, REGION\n",
"                        )\n",
"                        colmap_job_link = f\"[{selected_value}]({colmap_job_link})\"\n",
"                    elif colmap_job_id == \"MANUAL\":\n",
"                        pass\n",
"                    else:\n",
"                        colmap_job_link = get_vertex_ai_training_job_link(\n",
"                            colmap_job_id, PROJECT_NUMBER, REGION\n",
"                        )\n",
"                        colmap_job_link = f\"[{selected_value}]({colmap_job_link})\"\n",
"                # Return exactly two values to match the two output components\n",
"                # wired up below (the selected-row markdown and the run button).\n",
"                return colmap_job_link, gr.update(visible=True)\n",
"            else:\n",
"                return \"\", gr.update(visible=False)\n",
"\n",
"        with gr.Accordion(\"How To Use\", open=False):\n",
"            gr.Markdown(camp_zipnerf_pipeline_tip_text)\n",
"\n",
"        with gr.Accordion(\"Datasets\", open=True):\n",
"            folders_dataframe = gr.Dataframe(\n",
"                value=get_bq_folders_dataframe_colmap(colmap_table_id),\n",
"                interactive=False,\n",
"            )\n",
"            selected_row = gr.Markdown(label=\"Selected Experiment\", value=\"\")\n",
"            run_pipeline_button = gr.Button(\"RUN FULL PIPELINE\", visible=False)\n",
"            folders_dataframe.select(\n",
"                on_row_select,\n",
"                inputs=[folders_dataframe],\n",
"                outputs=[selected_row, run_pipeline_button],\n",
"            )\n",
"\n",
"        def create_dropdown(label, options, default):\n",
"            return gr.Dropdown(options, label=label, interactive=True, value=default)\n",
"\n",
"        def create_textbox(label, default, visible=True):\n",
"            return gr.Textbox(\n",
"                label=label, value=default, lines=1, interactive=True, visible=visible\n",
"            )\n",
"\n",
"        def create_number(label, default):\n",
"            return gr.Number(label=label, value=default, interactive=True)\n",
"\n",
"        gr.Markdown(\"### SET PIPELINE PARAMETERS\")\n",
"        with gr.Accordion(\"Set COLMAP Parameters\", open=False):\n",
"            with gr.Row(equal_height=True):\n",
"                with gr.Column():\n",
"                    matcher_dropdown = create_dropdown(\n",
"                        \"Choose a Matching Algorithm\",\n",
"                        [\n",
"                            \"Exhaustive Matcher\",\n",
"                            \"Sequential Matcher\",\n",
"                            \"Spatial Matcher\",\n",
"                            \"Transitive Matcher\",\n",
"                            \"Vocab Tree Matcher\",\n",
"                        ],\n",
"                        \"Exhaustive Matcher\",\n",
"                    )\n",
"                    camera_dropdown = create_dropdown(\n",
"                        \"Type of Camera Used for Capture\",\n",
"                        [\"OPENCV\", \"OPENCV_FISHEYE\"],\n",
"                        \"OPENCV\",\n",
"                    )\n",
"                    with gr.Accordion(\"Advanced Settings\", open=False):\n",
"                        video_frame_fps = create_number(\"Video Frame Extraction FPS\", 4)\n",
"                        max_num_features = create_number(\n",
"                            \"Maximum Number of SIFT Features\", 8192\n",
"                        )\n",
"                        mapper_dropdown = create_dropdown(\n",
"                            \"Use Hierarchical Mapper\", [False, True], False\n",
"                        )\n",
"\n",
"        with gr.Accordion(\"Set Training Parameters\", open=False):\n",
"            with gr.Row(equal_height=True):\n",
"                with gr.Column():\n",
"                    training_factor = create_dropdown(\n",
"                        \"Downscaling Factor\", [0, 2, 4, 8], 4\n",
"                    )\n",
"                    training_max_steps = create_number(\n",
"                        \"Maximum Number of Training Steps\", 25000\n",
"                    )\n",
"                    with gr.Accordion(\"Advanced Settings\", open=False):\n",
"                        zipnerf_gin_config = create_textbox(\n",
"                            \"The ZipNeRF .gin Configuration File\",\n",
"                            \"configs/zipnerf/360_aglo128.gin\",\n",
"                        )\n",
"                        camp_gin_config = create_textbox(\n",
"                            \"The CamP .gin Configuration File\",\n",
"                            \"configs/camp/camera_optim.gin\",\n",
"                        )\n",
"\n",
"        with gr.Accordion(\"Set Rendering Parameters\", open=False):\n",
"            with gr.Row(equal_height=True):\n",
"                with gr.Column():\n",
"                    gcs_keyframes_file = create_textbox(\"GCS Keyframe File\", \"\")\n",
"                    render_factor = create_dropdown(\n",
"                        \"Downscaling Factor\", [0, 2, 4, 8], 4\n",
"                    )\n",
"                    render_resolution_width = create_number(\n",
"                        \"Video Resolution Width\", 1280\n",
"                    )\n",
"                    render_resolution_height = create_number(\n",
"                        \"Video Resolution Height\", 720\n",
"                    )\n",
"                    render_fps = create_number(\"Video FPS\", 30)\n",
"                    machine_dropdown = create_dropdown(\n",
"                        \"Select Machine Type\",\n",
"                        [\"NVIDIA_TESLA_V100\", \"NVIDIA_TESLA_A100\"],\n",
"                        \"NVIDIA_TESLA_V100\",\n",
"                    )\n",
"                    with gr.Accordion(\"Advanced Settings\", open=False):\n",
"                        gcs_render_path_file = create_textbox(\n",
"                            \"The GCS Render Path .npy File\", \"\"\n",
"                        )\n",
"                        render_camtype = create_textbox(\n",
"                            \"The Render Camera Type\", \"perspective\"\n",
"                        )\n",
"                        render_path_frames = create_number(\n",
"                            \"The Number of Frames along the Render Path\", 120\n",
"                        )\n",
"\n",
"        with gr.Row(equal_height=True):\n",
"            _ = gr.Textbox(label=\"\", interactive=False, visible=False)\n",
"            email_input = gr.Textbox(\n",
"                label=\"Set Notification Emails\",\n",
"                placeholder=\"example1@example.com, example2@example.com\",\n",
"            )\n",
"\n",
"        ws_table_id = create_textbox(\n",
"            \"Workspace Table ID\", colmap_table_id, visible=False\n",
"        )\n",
"\n",
"        run_pipeline_button.click(\n",
"            start_pipeline,\n",
"            inputs=[\n",
"                selected_row,\n",
"                matcher_dropdown,\n",
"                camera_dropdown,\n",
"                video_frame_fps,\n",
"                max_num_features,\n",
"                mapper_dropdown,\n",
"                training_factor,\n",
"                training_max_steps,\n",
"                zipnerf_gin_config,\n",
"                camp_gin_config,\n",
"                gcs_keyframes_file,\n",
"                render_factor,\n",
"                render_resolution_width,\n",
"                render_resolution_height,\n",
"                render_fps,\n",
"                gcs_render_path_file,\n",
"                render_camtype,\n",
"                render_path_frames,\n",
"                machine_dropdown,\n",
"                email_input,\n",
"            ],\n",
"            outputs=[folders_dataframe],\n",
"            show_progress=True,\n",
"            concurrency_limit=10,\n",
"        )\n",
"\n",
"        demo.load(\n",
"            get_bq_folders_dataframe_colmap,\n",
"            inputs=[ws_table_id],\n",
"            outputs=[folders_dataframe],\n",
"            concurrency_limit=10,\n",
"        )\n",
"    return demo, folders_dataframe" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "JcNhRe1-L2GT" }, "outputs": [], "source": [ "# @title Application Main\n",
"\n",
"\n",
"css = \"\"\"\n",
"    .gradio-container {\n",
"        width: 90% !important;\n",
"        color: #000 !important;\n",
"    }\n",
"\"\"\"\n",
"\n",
"with gr.Blocks(\n",
"    css=css, theme=gr.themes.Default(primary_hue=\"orange\", secondary_hue=\"blue\")\n",
") as demo:\n",
"    gr.Markdown(\"# Model Garden Playground for CamP ZipNeRF\")\n",
"\n",
"    ws_colmap_table_id = gr.Textbox(\n",
"        value=colmap_table_id,\n",
"        lines=1,\n",
"        interactive=False,\n",
"        visible=False,\n",
"    )\n",
"    ws_training_table_id = gr.Textbox(\n",
"        value=training_table_id,\n",
"        lines=1,\n",
"        interactive=False,\n",
"        visible=False,\n",
"    )\n",
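"    # Hidden textboxes pass the BigQuery table IDs to the tab-select refresh callbacks below.\n",
"    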
ws_rendering_table_id = gr.Textbox(\n", " value=rendering_table_id,\n", " lines=1,\n", " interactive=False,\n", " visible=False,\n", " )\n", "\n", " with gr.Tabs():\n", " with gr.TabItem(\"Pose Estimation\") as tab1:\n", " (\n", " pose_estimation_workshop,\n", " folders_dataframe_colmap,\n", " ) = create_pose_estimation_workshop()\n", " gr.on(\n", " [tab1.select],\n", " get_bq_folders_dataframe_colmap,\n", " inputs=[ws_colmap_table_id],\n", " outputs=[folders_dataframe_colmap],\n", " )\n", " with gr.TabItem(\"Training\") as tab2:\n", " training_workshop, folders_dataframe_training = create_training_workshop()\n", " gr.on(\n", " [tab2.select],\n", " get_bq_folders_dataframe_training,\n", " inputs=[ws_training_table_id],\n", " outputs=[folders_dataframe_training],\n", " )\n", " with gr.TabItem(\"Rendering\") as tab3:\n", " (\n", " rendering_workshop,\n", " folders_dataframe_rendering,\n", " ) = create_rendering_workshop()\n", " gr.on(\n", " [tab3.select],\n", " get_bq_folders_dataframe_rendering,\n", " inputs=[ws_rendering_table_id],\n", " outputs=[folders_dataframe_rendering],\n", " )\n", " with gr.TabItem(\"Pipeline\") as tab4:\n", " pipeline_workshop, folders_dataframe_pipeline = create_pipeline_workshop()\n", " gr.on(\n", " [tab4.select],\n", " get_bq_folders_dataframe_colmap,\n", " inputs=[ws_colmap_table_id],\n", " outputs=[folders_dataframe_pipeline],\n", " )\n", "\n", "\n", "show_debug_logs = True\n", "demo.queue(max_size=10)\n", "demo.launch(\n", " share=True, inline=False, debug=show_debug_logs, show_error=True, max_threads=10\n", ")" ] } ], "metadata": { "colab": { "name": "model_garden_camp_zipnerf_gradio.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }