{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "1gcBBbBCW_CV"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wKzYxAA1W_CV"
},
"source": [
"# Vertex AI Model Garden - Nvidia Cosmos 1.0\n",
"\n",
"<table><tbody><tr>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_nvidia_cosmos_deployment.ipynb\">\n",
" <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_nvidia_cosmos_deployment.ipynb\">\n",
" <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</tr></tbody></table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2WwEeH8BW_CV"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates deploying Nvidia Cosmos world foundation models (WFM) on Vertex AI for online prediction.\n",
" - [nvidia/Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World)\n",
" - [nvidia/Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World)\n",
" - [nvidia/Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World)\n",
" - [nvidia/Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World)\n",
"\n",
"### Objective\n",
"\n",
"- Upload the model to [Model Registry](https://cloud.google.com/vertex-ai/docs/model-registry/introduction).\n",
"- Deploy the model on [Endpoint](https://cloud.google.com/vertex-ai/docs/predictions/using-private-endpoints).\n",
"- Run online predictions for `text-to-world` and `video-to-world`.\n",
"\n",
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TAKAyLQvW_CV"
},
"source": [
"## Run the notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "sGzHHcL3W_CV"
},
"outputs": [],
"source": [
"# @title Setup Google Cloud project\n",
"\n",
"# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"# @markdown 2. **[Optional]** Set region. If not set, the region will be set automatically according to Colab Enterprise environment.\n",
"\n",
"REGION = \"\" # @param {type:\"string\"}\n",
"\n",
"# @markdown 3. If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have associated quota in selected regions. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus). You can request for quota following the instructions at [\"Request a higher quota\"](https://cloud.google.com/docs/quota/view-manage#requesting_higher_quota).\n",
"\n",
"# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n",
"# @markdown | ----------- | ----------- | ----------- |\n",
"# @markdown | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1, us-east4 |\n",
"# @markdown | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, europe-west4, us-west1, asia-southeast1 |\n",
"\n",
"import importlib\n",
"import os\n",
"\n",
"from google.cloud import aiplatform\n",
"from IPython.display import HTML\n",
"\n",
"# Get the default cloud project id.\n",
"PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n",
"\n",
"# Get the default region for launching jobs.\n",
"if not REGION:\n",
" REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n",
"\n",
"# Enable the Vertex AI API and Compute Engine API, if not already.\n",
"print(\"Enabling Vertex AI API and Compute Engine API.\")\n",
"! gcloud services enable aiplatform.googleapis.com compute.googleapis.com\n",
"\n",
"# Initialize Vertex AI API.\n",
"print(\"Initializing Vertex AI API.\")\n",
"aiplatform.init(project=PROJECT_ID, location=REGION)\n",
"\n",
"# Gets the default SERVICE_ACCOUNT.\n",
"shell_output = ! gcloud projects describe $PROJECT_ID\n",
"project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n",
"SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n",
"print(\"Using this default Service Account:\", SERVICE_ACCOUNT)\n",
"\n",
"! gcloud config set project $PROJECT_ID\n",
"\n",
"models, endpoints = {}, {}\n",
"\n",
"! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"common_util = importlib.import_module(\n",
" \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n",
")"
]
},
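{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qRgnChkW_CV"
},
"outputs": [],
"source": [
"# @title [Optional] Check the region against the recommended regions\n",
"\n",
"# @markdown This cell is an illustrative sketch based only on the table above; it is\n",
"# @markdown not an authoritative capacity check. The deployment cells below call\n",
"# @markdown `common_util.check_quota` for the actual quota validation.\n",
"\n",
"# Recommended regions per machine type, copied from the table above.\n",
"RECOMMENDED_REGIONS = {\n",
"    \"a2-ultragpu-1g\": {\"us-central1\", \"us-east4\", \"europe-west4\", \"asia-southeast1\"},\n",
"    \"a3-highgpu-2g\": {\"us-west1\", \"asia-southeast1\", \"europe-west4\"},\n",
"    \"a3-highgpu-4g\": {\"us-west1\", \"asia-southeast1\", \"europe-west4\"},\n",
"    \"a3-highgpu-8g\": {\"us-central1\", \"europe-west4\", \"us-west1\", \"asia-southeast1\"},\n",
"}\n",
"\n",
"for machine, regions in RECOMMENDED_REGIONS.items():\n",
"    status = \"recommended\" if REGION in regions else \"not in the recommended list\"\n",
"    print(f\"{machine}: region {REGION} is {status}.\")"
]
},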
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "q36QziORW_CV"
},
"outputs": [],
"source": [
"# @title Deploy the [Text2World] model to Vertex for online predictions\n",
"\n",
"# @markdown This section uploads the [Text2World] model to Model Registry and deploys it on the Endpoint with the specified accelerator.\n",
"\n",
"# @markdown The deployment process takes approximately 15-30 minutes to complete.\n",
"# @markdown A valid HF_TOKEN is required for model deployment.\n",
"# @markdown Follow the instructions at [Hugging Face Token Guide](https://huggingface.co/docs/hub/en/security-tokens) to obtain your HF_TOKEN.\n",
"# @markdown Additionally, ensure you have access to the model by following the instructions on its Hugging Face model card page.\n",
"\n",
"HF_TOKEN = \"\" # @param {type:\"string\", isTemplate: true}\n",
"if not HF_TOKEN:\n",
" print(\"Error: HF_TOKEN is required to deploy the model.\")\n",
"\n",
"# @markdown The inference timeout is set to 30 minutes, as the video generation process can take a long time.\n",
"INFERENCE_TIMEOUT_SECS = 1800\n",
"model_id = \"nvidia/Cosmos-1.0-Diffusion-7B-Text2World\" # @param [\"nvidia/Cosmos-1.0-Diffusion-7B-Text2World\", \"nvidia/Cosmos-1.0-Diffusion-14B-Text2World\"]\n",
"task = \"text-to-world\"\n",
"\n",
"accelerator_type = \"NVIDIA_H100_80GB\" # @param [\"NVIDIA_H100_80GB\", \"NVIDIA_A100_80GB\"]\n",
"\n",
"machine_type_map = {\n",
" \"NVIDIA_A100_80GB\": \"a2-ultragpu-1g\",\n",
" \"NVIDIA_H100_80GB\": \"a3-highgpu-2g\",\n",
"}\n",
"\n",
"machine_type = machine_type_map.get(accelerator_type)\n",
"accelerator_count = 1\n",
"\n",
"if accelerator_type == \"NVIDIA_H100_80GB\":\n",
" machine_type = \"a3-highgpu-2g\"\n",
" accelerator_count = 2\n",
"\n",
"\n",
"# The pre-built serving docker image. It contains serving scripts and models.\n",
"SERVE_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-cosmos:20250314\"\n",
"\n",
"\n",
"def deploy_model(model_id, task, machine_type, accelerator_type, accelerator_count):\n",
" \"\"\"Create a Vertex AI Endpoint and deploy the specified model to the endpoint.\"\"\"\n",
" common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" is_for_training=False,\n",
" )\n",
"\n",
" model_name = model_id\n",
"\n",
" endpoint = aiplatform.Endpoint.create(\n",
" display_name=f\"{model_name}-endpoint\",\n",
" dedicated_endpoint_enabled=True,\n",
" sync=True,\n",
" inference_timeout=INFERENCE_TIMEOUT_SECS,\n",
" )\n",
" serving_env = {\n",
" \"MODEL_ID\": model_id,\n",
" \"TASK\": task,\n",
" \"DEPLOY_SOURCE\": \"notebook\",\n",
" \"HUGGING_FACE_HUB_TOKEN\": HF_TOKEN,\n",
" \"OFFLOAD_NETWORK\": \"false\",\n",
" \"OFFLOAD_TOKENIZER\": \"false\",\n",
" \"OFFLOAD_TEXT_ENCODER_MODEL\": \"false\",\n",
" \"OFFLOAD_GUARDRAIL_MODELS\": \"true\",\n",
" \"OFFLOAD_PROMPT_UPSAMPLER\": \"true\",\n",
" }\n",
"\n",
" # Also offload the text encoder model for 14B models, to avoid CUDA OOM issue.\n",
" if model_id.lower().includes(\"14b\"):\n",
" serving_env[\"OFFLOAD_TEXT_ENCODER_MODEL\"] = \"true\"\n",
"\n",
" model = aiplatform.Model.upload(\n",
" display_name=model_name,\n",
" serving_container_image_uri=SERVE_DOCKER_URI,\n",
" serving_container_ports=[7080],\n",
" serving_container_predict_route=\"/predict\",\n",
" serving_container_health_route=\"/health\",\n",
" serving_container_environment_variables=serving_env,\n",
" model_garden_source_model_name=\"publishers/nvidia/models/cosmos\",\n",
" )\n",
"\n",
" model.deploy(\n",
" endpoint=endpoint,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" deploy_request_timeout=1800,\n",
" service_account=SERVICE_ACCOUNT,\n",
" system_labels={\"NOTEBOOK_NAME\": \"model_garden_nvidia_cosmos_deployment.ipynb\"},\n",
" )\n",
" return model, endpoint\n",
"\n",
"\n",
"models[\"model\"], endpoints[\"endpoint\"] = deploy_model(\n",
" model_id=model_id,\n",
" task=task,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
")\n",
"\n",
"print(\"endpoint_name:\", endpoints[\"endpoint\"].name)"
]
},
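{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "rEcnEpW_CV2"
},
"outputs": [],
"source": [
"# @title [Optional] Reconnect to an existing endpoint\n",
"\n",
"# @markdown If the Colab session restarts after deployment, you can reattach to the\n",
"# @markdown endpoint using the `endpoint_name` printed above instead of redeploying.\n",
"# @markdown This is a minimal sketch; `EXISTING_ENDPOINT_ID` is a placeholder for an\n",
"# @markdown endpoint ID you own, not a resource created by this cell.\n",
"\n",
"EXISTING_ENDPOINT_ID = \"\"  # @param {type:\"string\"}\n",
"\n",
"if EXISTING_ENDPOINT_ID:\n",
"    # aiplatform.init() in the setup cell already set the project and region.\n",
"    endpoints[\"endpoint\"] = aiplatform.Endpoint(endpoint_name=EXISTING_ENDPOINT_ID)\n",
"    print(\"Reconnected to endpoint:\", endpoints[\"endpoint\"].resource_name)"
]
},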
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "TKJsEJoeW_CV"
},
"outputs": [],
"source": [
"# @title [Text2World] Predict\n",
"\n",
"# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts. The inference takes:\n",
"\n",
"# @markdown - ~800s with 1 A100 80GB GPU.\n",
"# @markdown\n",
"# @markdown - ~420s with 2 H100 80GB GPU\n",
"\n",
"# @markdown Example:\n",
"# @markdown ```json\n",
"# @markdown {\n",
"# @markdown \"instances\":[\n",
"# @markdown {\n",
"# @markdown \"text\":\"A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves.\",\n",
"# @markdown }\n",
"# @markdown ],\n",
"# @markdown \"parameters\": {\n",
"# @markdown \"negative_prompt\": \"\",\n",
"# @markdown \"guidance\": 7.0,\n",
"# @markdown \"num_steps\": 30,\n",
"# @markdown \"height\": 704,\n",
"# @markdown \"width\": 1280,\n",
"# @markdown \"fps\": 24,\n",
"# @markdown \"num_video_frames\": 121,\n",
"# @markdown \"seed\": 42\n",
"# @markdown }\n",
"# @markdown }\n",
"# @markdown }\n",
"# @markdown ```\n",
"\n",
"# @markdown You can adjust the parameters below to use your own text prompt.\n",
"# @markdown The `negative_prompt` parameter is optional. If not specified, a default value will be used.\n",
"# @markdown You can find the default value here: [Inference Utils (Line 104)](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/inference/inference_utils.py#L104).\n",
"# @markdown\n",
"# @markdown For inference tasks exceeding 10 minutes, we recommend using CURL for predictions. Refer to the following sections for detailed instructions.\n",
"\n",
"text = \"A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves.\" # @param {type: \"string\"}\n",
"\n",
"instances = [{\"text\": text}]\n",
"parameters = {\n",
" \"negative_prompt\": \"\",\n",
" \"guidance\": 7.0,\n",
" \"num_steps\": 30,\n",
" \"height\": 704,\n",
" \"width\": 1280,\n",
" \"fps\": 24,\n",
" \"num_video_frames\": 121,\n",
" \"seed\": 42,\n",
"}\n",
"\n",
"\n",
"response = endpoints[\"endpoint\"].predict(\n",
" instances=instances, parameters=parameters, use_dedicated_endpoint=True\n",
")\n",
"\n",
"video_bytes = response.predictions[0][\"output\"]\n",
"\n",
"video_html = f\"\"\"\n",
"<video width=\"1280\" height=\"704\" controls>\n",
"<source src=\"data:video/mp4;base64,{video_bytes}\" type=\"video/mp4\">\n",
"Your browser does not support the video tag.\n",
"</video>\n",
"\"\"\" # Assumes MP4. Change type if needed (e.g., video/webm)\n",
"\n",
"display(HTML(video_html))"
]
},
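{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "sT2wSvW_CV3"
},
"outputs": [],
"source": [
"# @title [Optional] [Text2World] Save the generated video to a local file\n",
"\n",
"# @markdown A minimal sketch that decodes the base64 payload returned by the previous\n",
"# @markdown cell into a local MP4 file. It assumes `video_bytes` from the predict cell\n",
"# @markdown holds a base64-encoded MP4; the output path is an arbitrary choice.\n",
"\n",
"import base64\n",
"\n",
"output_path = \"text2world_output.mp4\"  # Arbitrary local path.\n",
"with open(output_path, \"wb\") as f:\n",
"    f.write(base64.b64decode(video_bytes))\n",
"print(f\"Saved video to {output_path}\")"
]
},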
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "3TurWPvt8wSf"
},
"outputs": [],
"source": [
"# @title Deploy the [Video2World] model to Vertex for online predictions\n",
"\n",
"# @markdown This section uploads the [Video2World] model to Model Registry and deploys it on the Endpoint with the specified accelerator.\n",
"\n",
"# @markdown The deployment process takes approximately 15-30 minutes to complete.\n",
"# @markdown A valid HF_TOKEN is required for model deployment.\n",
"# @markdown Follow the instructions at [Hugging Face Token Guide](https://huggingface.co/docs/hub/en/security-tokens) to obtain your HF_TOKEN.\n",
"# @markdown Additionally, ensure you have access to the model by following the instructions on its Hugging Face model card page.\n",
"\n",
"HF_TOKEN = \"\" # @param {type:\"string\", isTemplate: true}\n",
"if not HF_TOKEN:\n",
" print(\"Error: HF_TOKEN is required to deploy the model.\")\n",
"# @markdown The inference timeout is set to 30 minutes, as the video generation process can take a long time.\n",
"INFERENCE_TIMEOUT_SECS = 1800\n",
"\n",
"model_id = \"nvidia/Cosmos-1.0-Diffusion-7B-Video2World\" # @param [\"nvidia/Cosmos-1.0-Diffusion-7B-Video2World\", \"nvidia/Cosmos-1.0-Diffusion-14B-Video2World\"]\n",
"task = \"video-to-world\"\n",
"\n",
"accelerator_type = \"NVIDIA_H100_80GB\" # @param [\"NVIDIA_H100_80GB\", \"NVIDIA_A100_80GB\"]\n",
"\n",
"machine_type_map = {\n",
" \"NVIDIA_A100_80GB\": \"a2-ultragpu-1g\",\n",
" \"NVIDIA_H100_80GB\": \"a3-highgpu-2g\",\n",
"}\n",
"\n",
"machine_type = machine_type_map.get(accelerator_type)\n",
"accelerator_count = 1\n",
"\n",
"if accelerator_type == \"NVIDIA_H100_80GB\":\n",
" machine_type = \"a3-highgpu-2g\"\n",
" accelerator_count = 2\n",
"\n",
"\n",
"# The pre-built serving docker image. It contains serving scripts and models.\n",
"SERVE_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-cosmos:20250314\"\n",
"\n",
"\n",
"def deploy_model(model_id, task, machine_type, accelerator_type, accelerator_count):\n",
" \"\"\"Create a Vertex AI Endpoint and deploy the specified model to the endpoint.\"\"\"\n",
" common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" is_for_training=False,\n",
" )\n",
"\n",
" model_name = model_id\n",
"\n",
" endpoint = aiplatform.Endpoint.create(\n",
" display_name=f\"{model_name}-endpoint\",\n",
" dedicated_endpoint_enabled=True,\n",
" sync=True,\n",
" inference_timeout=INFERENCE_TIMEOUT_SECS,\n",
" )\n",
" serving_env = {\n",
" \"MODEL_ID\": model_id,\n",
" \"TASK\": task,\n",
" \"DEPLOY_SOURCE\": \"notebook\",\n",
" \"HUGGING_FACE_HUB_TOKEN\": HF_TOKEN,\n",
" \"OFFLOAD_NETWORK\": \"false\",\n",
" \"OFFLOAD_TOKENIZER\": \"false\",\n",
" \"OFFLOAD_TEXT_ENCODER_MODEL\": \"false\",\n",
" \"OFFLOAD_GUARDRAIL_MODELS\": \"true\",\n",
" \"OFFLOAD_PROMPT_UPSAMPLER\": \"true\",\n",
" }\n",
"\n",
" # Also offload the text encoder model for 14B models, to avoid CUDA OOM issue.\n",
" if model_id.lower().includes(\"14b\"):\n",
" serving_env[\"OFFLOAD_TEXT_ENCODER_MODEL\"] = \"true\"\n",
"\n",
" model = aiplatform.Model.upload(\n",
" display_name=model_name,\n",
" serving_container_image_uri=SERVE_DOCKER_URI,\n",
" serving_container_ports=[7080],\n",
" serving_container_predict_route=\"/predict\",\n",
" serving_container_health_route=\"/health\",\n",
" serving_container_environment_variables=serving_env,\n",
" model_garden_source_model_name=\"publishers/nvidia/models/cosmos\",\n",
" )\n",
"\n",
" model.deploy(\n",
" endpoint=endpoint,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" deploy_request_timeout=1800,\n",
" service_account=SERVICE_ACCOUNT,\n",
" system_labels={\"NOTEBOOK_NAME\": \"model_garden_nvidia_cosmos_deployment.ipynb\"},\n",
" )\n",
" return model, endpoint\n",
"\n",
"\n",
"models[\"model\"], endpoints[\"endpoint\"] = deploy_model(\n",
" model_id=model_id,\n",
" task=task,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
")\n",
"\n",
"print(\"endpoint_name:\", endpoints[\"endpoint\"].name)"
]
},
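{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "uG2csUpW_CV4"
},
"outputs": [],
"source": [
"# @title [Optional] Upload your own input image or video to Cloud Storage\n",
"\n",
"# @markdown The [Video2World] predict cell below reads its input through a `gcs_uri`.\n",
"# @markdown This sketch copies a local file into a bucket with `gsutil`;\n",
"# @markdown `LOCAL_INPUT_PATH` and `GCS_BUCKET` are placeholders for a file and a\n",
"# @markdown bucket you own, not resources created by this notebook.\n",
"\n",
"LOCAL_INPUT_PATH = \"\"  # @param {type:\"string\"}\n",
"GCS_BUCKET = \"\"  # @param {type:\"string\"}\n",
"\n",
"if LOCAL_INPUT_PATH and GCS_BUCKET:\n",
"    ! gsutil cp \"{LOCAL_INPUT_PATH}\" \"gs://{GCS_BUCKET}/\"\n",
"    print(f\"Uploaded to gs://{GCS_BUCKET}/{os.path.basename(LOCAL_INPUT_PATH)}\")"
]
},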
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ajOOy8Qv8wSf"
},
"outputs": [],
"source": [
"# @title [Video2World] Predict\n",
"\n",
"# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts. The inference takes:\n",
"\n",
"# @markdown - ~400s with 1 A100 GPU.\n",
"# @markdown\n",
"# @markdown - ~400s with 2 H100 GPU\n",
"\n",
"# @markdown Example:\n",
"# @markdown ```json\n",
"# @markdown {\n",
"# @markdown \"instances\": [\n",
"# @markdown {\n",
"# @markdown \"gcs_uri\": \"gs://vertex-model-garden-public-us/cosmos/video2world_input0.jpg\",\n",
"# @markdown \"num_input_frames\": 1\n",
"# @markdown }\n",
"# @markdown ],\n",
"# @markdown \"parameters\": {\n",
"# @markdown \"negative_prompt\": \"\",\n",
"# @markdown \"guidance\": 7.0,\n",
"# @markdown \"num_steps\": 25,\n",
"# @markdown \"height\": 704,\n",
"# @markdown \"width\": 1280,\n",
"# @markdown \"fps\": 24,\n",
"# @markdown \"num_video_frames\": 121,\n",
"# @markdown \"seed\": 42\n",
"# @markdown }\n",
"# @markdown }\n",
"# @markdown ```\n",
"\n",
"# @markdown You can adjust the parameters below to use your own video.\n",
"# @markdown The model also supports single-image input by setting `num_input_frames = 1`.\n",
"# @markdown Note that `num_input_frames` should match the actual number of frames in your video.\n",
"# @markdown The `negative_prompt` parameter is optional. If not specified, a default value will be used.\n",
"# @markdown You can find the default value here: [Inference Utils (Line 104)](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/inference/inference_utils.py#L104).\n",
"\n",
"# @markdown\n",
"# @markdown For inference tasks exceeding 10 minutes, we recommend using CURL for predictions. Refer to the following sections for detailed instructions.\n",
"\n",
"gcs_uri = \"gs://vertex-model-garden-public-us/cosmos/video2world_input0.jpg\" # @param {type: \"string\"}\n",
"num_input_frames = 1 # @param {type: \"integer\"}\n",
"negative_prompt = \"\" # @param {type: \"string\"}\n",
"\n",
"instances = [{\"gcs_uri\": gcs_uri, \"num_input_frames\": num_input_frames}]\n",
"parameters = {\n",
" \"negative_prompt\": negative_prompt,\n",
" \"guidance\": 7.0,\n",
" \"num_steps\": 25,\n",
" \"height\": 704,\n",
" \"width\": 1280,\n",
" \"fps\": 24,\n",
" \"num_video_frames\": 121,\n",
" \"seed\": 42,\n",
"}\n",
"\n",
"\n",
"response = endpoints[\"endpoint\"].predict(\n",
" instances=instances, parameters=parameters, use_dedicated_endpoint=True\n",
")\n",
"\n",
"video_bytes = response.predictions[0][\"output\"]\n",
"\n",
"video_html = f\"\"\"\n",
"<video width=\"1280\" height=\"704\" controls>\n",
"<source src=\"data:video/mp4;base64,{video_bytes}\" type=\"video/mp4\">\n",
"Your browser does not support the video tag.\n",
"</video>\n",
"\"\"\" # Assumes MP4. Change type if needed (e.g., video/webm)\n",
"\n",
"display(HTML(video_html))"
]
},
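{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "vFrCntW_CV5"
},
"outputs": [],
"source": [
"# @title [Optional] Count the frames in a local input video\n",
"\n",
"# @markdown `num_input_frames` should match the number of frames actually taken from\n",
"# @markdown your input. This is a minimal sketch using OpenCV, which is preinstalled\n",
"# @markdown in Colab (otherwise `pip install opencv-python`); `LOCAL_VIDEO_PATH` is a\n",
"# @markdown placeholder for your own file.\n",
"\n",
"import cv2\n",
"\n",
"LOCAL_VIDEO_PATH = \"\"  # @param {type:\"string\"}\n",
"\n",
"if LOCAL_VIDEO_PATH:\n",
"    capture = cv2.VideoCapture(LOCAL_VIDEO_PATH)\n",
"    frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))\n",
"    capture.release()\n",
"    print(f\"{LOCAL_VIDEO_PATH} contains {frame_count} frames.\")"
]
},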
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "pzZo5t_mqNDy"
},
"outputs": [],
"source": [
"# @title Predict with CURL for long-running prediction tasks\n",
"\n",
"# @markdown For inference tasks exceeding 10 minutes, we recommend using CURL for predictions.\n",
"\n",
"os.environ[\"ENDPOINT_ID\"] = endpoints[\"endpoint\"].name\n",
"os.environ[\"PROJECT_ID\"] = project_number\n",
"os.environ[\"REGION\"] = REGION"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "2FOfZRLbqNDy"
},
"outputs": [],
"source": [
"%%bash\n",
"\n",
"# Leverage CURL in shell for predictions, especially for long-running tasks (exceeding 10 minutes). \n",
"ENDPOINT_URL=\"https://${ENDPOINT_ID}.${REGION}-${PROJECT_ID}.prediction.vertexai.goog/v1/projects/${PROJECT_ID}/locations/${REGION}/endpoints/${ENDPOINT_ID}:predict\"\n",
"TEXT=\"A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves.\"\n",
"DATA='{\"instances\": [{\"text\":\"'${TEXT}'\"}], \"parameters\": {\"negative_prompt\":\"\", \"guidance\":7.0,\"num_steps\":35,\"height\":704,\"width\":1280,\"fps\":24,\"num_video_frames\":121,\"seed\":42}}'\n",
"\n",
"curl \\\n",
" -X POST \\\n",
" -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
" -H \"Content-Type: application/json\" \\\n",
" \"${ENDPOINT_URL}\" \\\n",
" -d \"${DATA}\" > /content/t2w_response.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "HAPiswf9qNDy"
},
"outputs": [],
"source": [
"import json\n",
"\n",
"with open(\"/content/t2w_response.json\", \"r\") as f:\n",
" response_data = json.load(f)\n",
"\n",
"video_bytes = response_data[\"predictions\"][0][\"output\"]\n",
"print(video_bytes)\n",
"\n",
"video_html = f\"\"\"\n",
"<video width=\"1280\" height=\"704\" controls>\n",
"<source src=\"data:video/mp4;base64,{video_bytes}\" type=\"video/mp4\">\n",
"Your browser does not support the video tag.\n",
"</video>\n",
"\"\"\" # Assumes MP4. Change type if needed (e.g., video/webm)\n",
"\n",
"display(HTML(video_html))"
]
},
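{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "wDlVidW_CV6"
},
"outputs": [],
"source": [
"# @title [Optional] Download the decoded video\n",
"\n",
"# @markdown A minimal, Colab-specific sketch: it decodes the base64 payload parsed in\n",
"# @markdown the previous cell into a local MP4 and triggers a browser download.\n",
"# @markdown `google.colab.files` is only available in Colab runtimes.\n",
"\n",
"import base64\n",
"\n",
"from google.colab import files\n",
"\n",
"t2w_output_path = \"t2w_output.mp4\"  # Arbitrary local path.\n",
"with open(t2w_output_path, \"wb\") as f:\n",
"    f.write(base64.b64decode(video_bytes))\n",
"files.download(t2w_output_path)"
]
},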
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "42leJGJFW_CV"
},
"outputs": [],
"source": [
"# @title Clean up resources\n",
"# @markdown Delete the experiment models and endpoints to recycle the resources\n",
"# @markdown and avoid unnecessary continuous charges that may incur.\n",
"\n",
"# Undeploy model and delete endpoint.\n",
"for endpoint in endpoints.values():\n",
" endpoint.delete(force=True)\n",
"\n",
"# Delete models.\n",
"for model in models.values():\n",
" model.delete()"
]
}
],
"metadata": {
"colab": {
"name": "model_garden_nvidia_cosmos_deployment.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}