{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "SgQ6t5bqZVlH", "metadata": { "cellView": "form", "id": "SgQ6t5bqZVlH" }, "outputs": [], "source": [ "# Copyright 2024 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "id": "99c1c3fc2ca5", "metadata": { "id": "99c1c3fc2ca5" }, "source": [ "# Vertex AI Model Garden - Using SpotVM and Reservations to Deploy a Vertex AI Llama-3.1 Endpoint\n", "\n", "<table><tbody><tr>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_reservations_spotvm.ipynb\">\n", " <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_reservations_spotvm.ipynb\">\n", " <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</tr></tbody></table>" ] }, { "cell_type": "markdown", "id": "f9-tJ6RfDLIs", "metadata": { "id": "f9-tJ6RfDLIs" }, "source": [ "### Overview\n", "\n", "This notebook provides a comprehensive, step-by-step guide to leveraging [Spot VMs](https://cloud.google.com/compute/docs/instances/spot) and [Reservations](https://cloud.google.com/vertex-ai/docs/predictions/use-reservations#get-predictions) for deploying a fully managed Vertex AI endpoint. The process involves configuring a reservation—a dedicated pool of compute resources—that can help ensure cost stability and resource availability for your inference workloads. You will learn how to create, view, and manage these reservations to control how your endpoints consume underlying resources. By following these instructions, you will gain a deep understanding of how the Vertex AI ecosystem can be tuned to your workload requirements, achieving an optimal balance of cost-effectiveness and reliability.\n", "\n", "This tutorial will cover how to:\n", "1. **Deploy an Endpoint Using a Spot VM**: Deploy endpoints automatically on preempted resources (if capacity is available).\n", "2. **Create a Single-Project Reservation:** Establish a dedicated pool of compute resources reserved solely for your current project.\n", "3. **Grant Permissions to Google Cloud Services:** Ensure that the necessary Google Cloud services can access and utilize these reservations securely and transparently.\n", "4. 
**Deploy a Vertex AI Endpoint Using Reservations:** Harness the full potential of reserved resources to deploy an endpoint that benefits from predictable performance and cost stability.\n", "\n", "Upon completion, you will not only know how to set up and use reservations for a Vertex AI endpoint, but you will also possess the insights needed to adapt these techniques to a variety of production scenarios. For an even broader understanding of reservations and to explore additional reservation configurations, refer to the [Compute Engine Reservations Overview](https://cloud.google.com/compute/docs/instances/reservations-overview).\n", "\n", "\n", "### Objective\n", "\n", "In this tutorial, we will utilize the `Meta-Llama-3.1-8B` model running on [vLLM](https://github.com/vllm-project/vllm) as a concrete example. This allows you to experiment with state-of-the-art language modeling within a well-defined environment. Throughout the process, we will delve into every aspect of setting up, managing, and cleaning up a complete end-to-end deployment pipeline using Vertex AI and reservations.\n", "\n", "By following along, you will learn how to:\n", "\n", "- **Set Up a Google Cloud Project:** Configure your environment and ensure that all prerequisites—such as APIs, billing, and IAM roles—are properly in place.\n", "- **Configure Deployment Utilities:** Prepare and manage essential tools and scripts that streamline endpoint creation, testing, and maintenance.\n", "- **Deploy an Endpoint Using a Spot VM:** Achieve cost savings by running inference workloads on preemptible resources while maintaining service integrity.\n", "- **Create and Manage Reservations:** Establish a dedicated pool of compute resources, ensuring that your endpoints can maintain consistent performance without competing for capacity.\n", "- **View and Verify Reservations:** Inspect your reservations to confirm that resources are correctly allocated and ready for consumption by your endpoints.\n", "- **Consume Reservations as Instances:** Utilize the reserved resources to run your endpoints, guaranteeing predictable performance and capacity.\n", "- **Deploy Endpoints with `ANY_RESERVATION` and `SPECIFIC_RESERVATION` Policies:** Gain granular control over how your endpoints source their compute resources, whether by tapping into any available reservation or by targeting a specific one for tighter control.\n", "- **Delete Reservations and Endpoints:** Cleanly remove resources to maintain a tidy and cost-efficient environment, ensuring that unused capacity does not incur ongoing costs.\n", "\n", "By the end of this tutorial, you will have developed a thorough, in-depth understanding of how to combine Spot VMs, reservations, and Vertex AI to create a flexible, efficient, and cost-conscious inference infrastructure.\n", "\n", "\n", "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "* Cloud Storage\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage." ] }, { "cell_type": "code", "execution_count": null, "id": "fH_dOPly4Jkg", "metadata": { "cellView": "form", "id": "fH_dOPly4Jkg" }, "outputs": [], "source": [ "# @title # Setup Google Cloud Project and Shared Reservation\n", "#\n", "# @markdown 1. 
**Enable Billing for Your Project:** Confirm that billing is active for your chosen Google Cloud project. Without an active billing account, resources such as GPUs and Spot VMs cannot be provisioned. If you haven’t done this yet, follow the instructions here: [Enable Billing for Your Project](https://cloud.google.com/billing/docs/how-to/modify-project).\n", "#\n", "# @markdown 2. **Set Deployment `PROJECT_ID` and `REGION`:** When setting up your environment, ensure that your Vertex AI endpoint and any associated reservations are in the same region and under projects that belong to the same organization. This alignment helps streamline IAM policies and resource sharing.\n", "#\n", "# @markdown 3. **Shared Reservation Requirement:** If you plan to use a reservation created in a separate project (e.g., `SHARED_PROJECT_ID`) within the same organization, you must grant appropriate permissions to the Vertex AI Principal Service Accounts (P4SAs) from both projects. This ensures your endpoint in `PROJECT_ID` can use a reservation from `SHARED_PROJECT_ID`.\n", "#\n", "# @markdown - The P4SA from the primary project hosting the endpoint (in `PROJECT_ID`) must have `roles/compute.viewer` in that project.\n", "# @markdown - The P4SA from the project where the reservation resides (`SHARED_PROJECT_ID`) must also be granted `roles/compute.viewer` in that shared project.\n", "# @markdown - This cross-project permission enables your endpoint’s underlying infrastructure to \"see\" and utilize the reservation capacity in the shared project.\n", "#\n", "# @markdown 4. **Recommended Regions for Specialized GPUs:**\n", "# @markdown If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have associated quota in the selected regions. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus).\n", "#\n", "# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n", "# @markdown > | ------------------- | ---------------------- | ------------------ |\n", "# @markdown > | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1 |\n", "# @markdown > | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n", "# @markdown > | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n", "# @markdown > | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, us-east5, europe-west4, us-west1, asia-southeast1 |\n", "\n", "PROJECT_ID = \"\" # @param {type:\"string\"}\n", "SHARED_PROJECT_ID = \"\" # @param {type:\"string\"}\n", "\n", "BUCKET_URI = \"gs://\" # @param {type:\"string\"}\n", "\n", "REGION = \"\" # @param {type:\"string\"}\n", "\n", "# Import the necessary packages\n", "\n", "! 
git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n", "\n", "import datetime\n", "import importlib\n", "import os\n", "import uuid\n", "from typing import Tuple\n", "\n", "from google.cloud import aiplatform\n", "\n", "common_util = importlib.import_module(\n", " \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n", ")\n", "\n", "models, endpoints = {}, {}\n", "\n", "# Enable the Vertex AI API and Compute Engine API, if not already.\n", "print(\"Enabling Vertex AI API and Compute Engine API.\")\n", "! gcloud services enable aiplatform.googleapis.com compute.googleapis.com\n", "\n", "# Cloud Storage bucket for storing the experiment artifacts.\n", "# A unique GCS bucket will be created for the purpose of this notebook. If you\n", "# prefer using your own GCS bucket, provide it in the form field above.\n", "now = datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", "BUCKET_NAME = \"/\".join(BUCKET_URI.split(\"/\")[:3])\n", "\n", "if BUCKET_URI is None or BUCKET_URI.strip() == \"\" or BUCKET_URI == \"gs://\":\n", " BUCKET_URI = f\"gs://{PROJECT_ID}-tmp-{now}-{str(uuid.uuid4())[:4]}\"\n", " BUCKET_NAME = \"/\".join(BUCKET_URI.split(\"/\")[:3])\n", " ! gsutil mb -l {REGION} {BUCKET_URI}\n", "else:\n", " assert BUCKET_URI.startswith(\"gs://\"), \"BUCKET_URI must start with `gs://`.\"\n", " shell_output = ! gsutil ls -Lb {BUCKET_NAME} | grep \"Location constraint:\" | sed \"s/Location constraint://\"\n", " bucket_region = shell_output[0].strip().lower()\n", " if bucket_region != REGION.lower():\n", " raise ValueError(\n", " \"Bucket region %s is different from notebook region %s\"\n", " % (bucket_region, REGION)\n", " )\n", "print(f\"Using this GCS Bucket: {BUCKET_URI}\")\n", "\n", "STAGING_BUCKET = os.path.join(BUCKET_URI, \"temporal\")\n", "\n", "# Initialize Vertex AI API.\n", "print(\"Initializing Vertex AI API.\")\n", "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=STAGING_BUCKET)\n", "\n", "# Get the default SERVICE_ACCOUNT.\n", "shell_output = ! gcloud projects describe $PROJECT_ID\n", "project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n", "SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n", "print(\"Using this default Service Account:\", SERVICE_ACCOUNT)\n", "\n", "# Get the P4SA email for the current project\n", "P4SA_SERVICE_ACCOUNT = (\n", " f\"service-{project_number}@gcp-sa-aiplatform.iam.gserviceaccount.com\"\n", ")\n", "print(\"Current P4SA Service Account:\", P4SA_SERVICE_ACCOUNT)\n", "\n", "# Get the P4SA email for the shared project\n", "shell_output = ! gcloud projects describe $SHARED_PROJECT_ID\n", "shared_project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n", "SHARED_P4SA_SERVICE_ACCOUNT = (\n", " f\"service-{shared_project_number}@gcp-sa-aiplatform.iam.gserviceaccount.com\"\n", ")\n", "print(\"Shared P4SA Service Account:\", SHARED_P4SA_SERVICE_ACCOUNT)\n", "\n", "# Grant the compute.viewer role to the current project's P4SA in both projects.\n", "command = f\"gcloud projects add-iam-policy-binding {PROJECT_ID} --member=serviceAccount:{P4SA_SERVICE_ACCOUNT} --role=roles/compute.viewer\"\n", "! {command}\n", "command = f\"gcloud projects add-iam-policy-binding {SHARED_PROJECT_ID} --member=serviceAccount:{P4SA_SERVICE_ACCOUNT} --role=roles/compute.viewer\"\n", "! 
{command}\n", "\n", "# grant compute.viewer role to the shared P4SA\n", "command = f\"gcloud projects add-iam-policy-binding {PROJECT_ID} --member=serviceAccount:{SHARED_P4SA_SERVICE_ACCOUNT} --role=roles/compute.viewer\"\n", "! {command}\n", "command = f\"gcloud projects add-iam-policy-binding {SHARED_PROJECT_ID} --member=serviceAccount:{SHARED_P4SA_SERVICE_ACCOUNT} --role=roles/compute.viewer\"\n", "! {command}\n", "\n", "! gcloud config set project $PROJECT_ID\n", "\n", "print(f\"Using Project ID: {PROJECT_ID}\")\n", "print(f\"Using Shared Project ID: {SHARED_PROJECT_ID}\")\n", "print(f\"Using Region: {REGION}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "ZVycGG8x49eh", "metadata": { "cellView": "form", "id": "ZVycGG8x49eh" }, "outputs": [], "source": [ "# @title **Llama-3.1 vLLM Endpoint Deployment Utility Functions**\n", "\n", "# @markdown This section introduces utility functions to facilitate deploying the `Llama-3.1-8B` model to a Vertex AI endpoint using the [vLLM](https://docs.vllm.ai/en/latest/models/supported_models.html) runtime environment. The tools provided here will help streamline the process of loading models, configuring serving parameters, and integrating seamlessly with Vertex AI predictions. By abstracting away much of the complexity, these utilities empower you to:\n", "\n", "# @markdown - **Provision Efficient Inference Runtimes:**\n", "# @markdown Take advantage of vLLM’s optimized serving environment, which is designed to handle large language model inference at scale. The library’s focus on latency reduction, memory efficiency, and throughput enables you to achieve superior performance with fewer system resources.\n", "\n", "# @markdown - **Customize Model Behavior for Your Use Case:**\n", "# @markdown Adjust parameters such as prompt handling, token generation strategies, and memory management policies directly through the utility functions, ensuring that your deployment meets the unique requirements of your application—be it real-time dialogue systems, multi-language support, summarization tasks, or any other LLM-powered workflow.\n", "\n", "# @markdown ### **About Meta’s Llama 3.1 Collections**\n", "\n", "# @markdown [Meta’s Llama 3.1 series](https://huggingface.co/meta-llama/Llama-3.1-8B) comprises a set of cutting-edge multilingual LLMs, pretrained and instruction-tuned to excel in a wide range of tasks. They support robust conversational capabilities, succinct summarization, and intelligent retrieval from diverse data sources. Whether you’re building interactive chatbots, processing large volumes of content, or generating domain-specific reports, these models serve as a powerful starting point.\n", "\n", "# @markdown ### **Hugging Face User Access Tokens**\n", "\n", "# @markdown To access the Llama 3.1 models and other resources hosted on Hugging Face, you will need a **Read Access Token**. This token ensures you have the necessary permissions to download models and related artifacts while maintaining proper security controls.\n", "\n", "# @markdown **Follow these steps to obtain and use your Hugging Face token:**\n", "\n", "# @markdown 1. 
**Generate a Read Access Token:**\n", "# @markdown - Visit your [Hugging Face account settings](https://huggingface.co/settings/tokens).\n", "# @markdown - Click on **Create new token**, assign it a **Read** role (no more permissions than necessary), and generate the token.\n", "# @markdown - Store this token in a safe and secure location, as it provides direct access to the resources you’ll need.\n", "\n", "# @markdown 2. **Use the Token for Authentication:**\n", "# @markdown - Within this notebook or your scripting environment, supply the token to authenticate yourself.\n", "# @markdown - This ensures that any model downloads or asset retrieval from private or restricted repositories occur smoothly and securely.\n", "\n", "# @markdown Maintaining minimal, read-only permissions helps prevent accidental exposures or misuse of your credentials. For more details on configuring Hugging Face tokens, refer to the platform’s official documentation and best practices.\n", "\n", "# @markdown **Provide Hugging Face TOKEN to Download Models:**\n", "\n", "HF_TOKEN = \"\" # @param {type:\"string\"}\n", "\n", "# The pre-built serving docker images.\n", "VLLM_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-vllm-serve:20241001_0916_RC00\"\n", "\n", "\n", "def deploy_model_vllm(\n", " model_name: str,\n", " model_id: str,\n", " service_account: str,\n", " base_model_id: str = None,\n", " machine_type: str = \"g2-standard-8\",\n", " accelerator_type: str = \"NVIDIA_L4\",\n", " accelerator_count: int = 1,\n", " gpu_memory_utilization: float = 0.9,\n", " max_model_len: int = 4096,\n", " dtype: str = \"auto\",\n", " enable_trust_remote_code: bool = False,\n", " enforce_eager: bool = False,\n", " enable_lora: bool = False,\n", " max_loras: int = 1,\n", " max_cpu_loras: int = 8,\n", " use_dedicated_endpoint: bool = False,\n", " max_num_seqs: int = 256,\n", " model_type: str = None,\n", " reservation_name: str = None,\n", " reservation_affinity_type: str = None,\n", " reservation_project: str = None,\n", " reservation_zone: str = None,\n", " is_spot: bool = False,\n", ") -> Tuple[aiplatform.Model, aiplatform.Endpoint]:\n", " \"\"\"Deploys trained models with vLLM into Vertex AI.\"\"\"\n", " endpoint = aiplatform.Endpoint.create(\n", " display_name=f\"{model_name}-endpoint\",\n", " dedicated_endpoint_enabled=use_dedicated_endpoint,\n", " )\n", "\n", " if not base_model_id:\n", " base_model_id = model_id\n", "\n", " # See https://docs.vllm.ai/en/latest/models/engine_args.html for a list of possible arguments with descriptions.\n", " vllm_args = [\n", " \"python\",\n", " \"-m\",\n", " \"vllm.entrypoints.api_server\",\n", " \"--host=0.0.0.0\",\n", " \"--port=8080\",\n", " f\"--model={model_id}\",\n", " f\"--tensor-parallel-size={accelerator_count}\",\n", " \"--swap-space=16\",\n", " f\"--gpu-memory-utilization={gpu_memory_utilization}\",\n", " f\"--max-model-len={max_model_len}\",\n", " f\"--dtype={dtype}\",\n", " f\"--max-loras={max_loras}\",\n", " f\"--max-cpu-loras={max_cpu_loras}\",\n", " f\"--max-num-seqs={max_num_seqs}\",\n", " \"--disable-log-stats\",\n", " ]\n", "\n", " if enable_trust_remote_code:\n", " vllm_args.append(\"--trust-remote-code\")\n", "\n", " if enforce_eager:\n", " vllm_args.append(\"--enforce-eager\")\n", "\n", " if enable_lora:\n", " vllm_args.append(\"--enable-lora\")\n", "\n", " if model_type:\n", " vllm_args.append(f\"--model-type={model_type}\")\n", "\n", " env_vars = {\n", " \"MODEL_ID\": base_model_id,\n", " \"DEPLOY_SOURCE\": \"notebook\",\n", " 
}\n", "\n", " # HF_TOKEN is not a compulsory field and may not be defined.\n", " try:\n", " if HF_TOKEN:\n", " env_vars[\"HF_TOKEN\"] = HF_TOKEN\n", " except NameError:\n", " pass\n", "\n", " model = aiplatform.Model.upload(\n", " display_name=model_name,\n", " serving_container_image_uri=VLLM_DOCKER_URI,\n", " serving_container_args=vllm_args,\n", " serving_container_ports=[8080],\n", " serving_container_predict_route=\"/generate\",\n", " serving_container_health_route=\"/ping\",\n", " serving_container_environment_variables=env_vars,\n", " serving_container_shared_memory_size_mb=(16 * 1024), # 16 GB\n", " serving_container_deployment_timeout=7200,\n", " )\n", " print(\n", " f\"Deploying {model_name} on {machine_type} with {accelerator_count} {accelerator_type} GPU(s).\"\n", " )\n", "\n", " deploy_args = {\n", " \"endpoint\": endpoint,\n", " \"machine_type\": machine_type,\n", " \"accelerator_type\": accelerator_type,\n", " \"accelerator_count\": accelerator_count,\n", " \"deploy_request_timeout\": 1800,\n", " \"service_account\": service_account,\n", " }\n", "\n", " if is_spot:\n", " deploy_args[\"min_replica_count\"] = 1\n", " deploy_args[\"max_replica_count\"] = 1\n", " deploy_args[\"spot\"] = True\n", " deploy_args[\"sync\"] = True\n", "\n", " if reservation_affinity_type:\n", " deploy_args[\"reservation_affinity_type\"] = reservation_affinity_type\n", "\n", " if reservation_name:\n", " deploy_args[\n", " \"reservation_affinity_key\"\n", " ] = \"compute.googleapis.com/reservation-name\"\n", " deploy_args[\"reservation_affinity_values\"] = [\n", " f\"projects/{reservation_project}/zones/{reservation_zone}/reservations/{reservation_name}\"\n", " ]\n", "\n", " model.deploy(**deploy_args)\n", "\n", " print(\"endpoint_name:\", endpoint.name)\n", "\n", " return model, endpoint" ] }, { "cell_type": "markdown", "id": "oCmuA7jsvdiO", "metadata": { "id": "oCmuA7jsvdiO" }, "source": [ "### Spot VM" ] }, { "cell_type": "code", "execution_count": null, "id": "Thr-3N33PbUQ", "metadata": { "cellView": "form", "id": "Thr-3N33PbUQ" }, "outputs": [], "source": [ "# @title Spot VM Vertex AI Endpoint Deployment\n", "\n", "# @markdown **What are Spot VMs?** [Spot VMs](https://cloud.google.com/compute/docs/instances/spot) are spare compute instances offered by Google Cloud at significantly discounted rates. Unlike standard on-demand VMs, Spot VMs provide lower prices—often as much as 60-91% off the regular cost for most machine types and GPUs—making them extremely cost-effective for certain workloads. However, the trade-off is that these VMs can be preempted (stopped) by Google Cloud at any time if the capacity is needed elsewhere. As a result, Spot VMs are best suited for workloads that are resilient to interruptions.\n", "\n", "# @markdown **Stockouts and Resource Availability:** Even with the correct quotas, you may encounter **stockouts**, which occur when the requested resources (such as a specific VM family, shape, or disk type) are temporarily unavailable. This situation can lead to delays or increased costs if you opt for alternative resource configurations. 
"# @markdown **Stockouts and Resource Availability:** Even with the correct quotas, you may encounter **stockouts**, which occur when the requested resources (such as a specific VM family, shape, or disk type) are temporarily unavailable. This situation can lead to delays or increased costs if you opt for alternative resource configurations. For more insights into handling capacity constraints and stockouts, refer to the [Capacity, Quota, and Stockouts resource guide](https://www.googlecloudcommunity.com/gc/Community-Blogs/Managing-Capacity-Quota-and-Stockouts-in-the-Cloud-Concepts-and/ba-p/464770#toc-hId-1635110264).\n", "\n", "# @markdown **Mitigating Stockouts with Spot VMs and Reservations:** If a particular VM type or resource is experiencing shortages, consider alternative strategies:\n", "\n", "# @markdown - **Use Spot VMs:** Spot VMs fill idle capacity at discounted prices. If a stockout prevents you from acquiring standard VMs, a Spot VM can serve as a cost-effective and readily available fallback. While preemptions can occur, if your model inference or training jobs can tolerate being paused or restarted, this approach can greatly reduce compute costs.\n", "\n", "# @markdown - **Use Reservations:** Another way to ensure predictable resource availability is to use [Reservations](https://cloud.google.com/vertex-ai/docs/predictions/use-reservations#get-predictions), which guarantee that certain resources remain allocated for your workloads. Although not as cost-effective as Spot VMs, reservations can alleviate the uncertainty caused by stockouts, ensuring that you always have enough capacity for your deployments.\n", "\n", "# @markdown **When to Choose Spot VMs:** Spot VMs are ideal for jobs and tasks that are:\n", "\n", "# @markdown - **Fault-Tolerant and Interruptible:** If your workloads can handle interruptions, such as batch processing jobs that can resume after a delay or distributed training jobs that can adjust to losing certain workers, a Spot VM’s lower cost can result in significant savings over time.\n", "\n", "# @markdown - **Not Strictly Latency-Sensitive:** For workflows where occasional preemptions do not severely impact business outcomes, Spot VMs are a strategic choice.\n", "\n",
"# @markdown **Cost and Billing Model:** One attractive aspect of Spot VMs is that you're billed only for the actual compute time used. You do not pay for:\n", "\n", "# @markdown - Time spent waiting in a job queue or time lost due to preemptions.\n", "\n", "# @markdown This means that if a Spot VM is interrupted, you’re not charged for downtime, making the overall cost model more favorable for price-sensitive workloads.\n", "\n", "# @markdown **Learn More:** To understand the pricing structure, including discounts and comparisons to standard on-demand VMs, see the [Spot VM Pricing Guide](https://cloud.google.com/compute/docs/instances/spot#pricing).\n", "\n", "# @markdown For strategies on handling preemptions within Vertex AI, including how to design your workflows and code to gracefully manage interruptions, consult the [Preemption Handling Documentation](https://cloud.google.com/vertex-ai/docs/predictions/use-spot-vms#preemption-handling).\n", "\n", "# @markdown By combining Spot VMs with robust workflow planning and reservations, you can strike an optimal balance between cost savings, reliability, and performance in your Vertex AI endpoint deployments.\n", "\n", "# @markdown **Set the model to deploy:**\n", "\n", "base_model_name = \"Meta-Llama-3.1-8B\" # @param [\"Meta-Llama-3.1-8B\"]\n", "hf_model_id = \"meta-llama/\" + base_model_name\n", "\n", "if \"8b\" in base_model_name.lower():\n", " accelerator_type = \"NVIDIA_L4\"\n", " machine_type = \"g2-standard-12\"\n", " accelerator_count = 1\n", " max_loras = 5\n", "else:\n", " raise ValueError(\n", " f\"Recommended GPU setting not found for: {base_model_name}.\"\n", " )\n", "\n", "common_util.check_quota(\n", " project_id=PROJECT_ID,\n", " region=REGION,\n", " accelerator_type=accelerator_type,\n", " accelerator_count=accelerator_count,\n", " is_for_training=False,\n", ")\n", "\n", "gpu_memory_utilization = 0.95\n", "max_model_len = 8192 # Maximum context length.\n", "\n", "models[\"vllm_gpu_spotvm\"], endpoints[\"vllm_gpu_spotvm\"] = deploy_model_vllm(\n", " model_name=common_util.get_job_name_with_datetime(prefix=\"llama3_1-serve-spotvm\"),\n", " model_id=hf_model_id,\n", " base_model_id=hf_model_id,\n", " service_account=SERVICE_ACCOUNT,\n", " machine_type=machine_type,\n", " accelerator_type=accelerator_type,\n", " accelerator_count=accelerator_count,\n", " gpu_memory_utilization=gpu_memory_utilization,\n", " max_model_len=max_model_len,\n", " max_loras=max_loras,\n", " enforce_eager=True,\n", " enable_lora=True,\n", " use_dedicated_endpoint=False,\n", " model_type=\"llama3.1\",\n", " is_spot=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "xcOBmRNwq9o-", "metadata": { "cellView": "form", "id": "xcOBmRNwq9o-" }, "outputs": [], "source": [ "# @title Raw predict with SpotVM Endpoint\n", "\n", "# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts. Sampling parameters supported by vLLM can be found [here](https://docs.vllm.ai/en/latest/dev/sampling_params.html).\n", "\n", "# @markdown Example:\n", "\n", "# @markdown ```\n", "# @markdown Human: What is a car?\n", "# @markdown Assistant: A car, or a motor car, is a road-connected human-transportation system used to move people or goods from one place to another. The term also encompasses a wide range of vehicles, including motorboats, trains, and aircrafts. Cars typically have four wheels, a cabin for passengers, and an engine or motor. 
They have been around since the early 19th century and are now one of the most popular forms of transportation, used for daily commuting, shopping, and other purposes.\n", "# @markdown ```\n", "# @markdown Additionally, you can moderate the generated text with Vertex AI. See [Moderate text documentation](https://cloud.google.com/natural-language/docs/moderating-text) for more details.\n", "\n", "# Loads an existing endpoint instance using the endpoint name:\n", "# - Using `endpoint_name = endpoint.name` allows us to get the\n", "# endpoint name of the endpoint `endpoint` created in the cell\n", "# above.\n", "# - Alternatively, you can set `endpoint_name = \"1234567890123456789\"` to load\n", "# an existing endpoint with the ID 1234567890123456789.\n", "# You may uncomment the code below to load an existing endpoint.\n", "\n", "# endpoint_name = \"\" # @param {type:\"string\"}\n", "# aip_endpoint_name = (\n", "# f\"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint_name}\"\n", "# )\n", "# endpoint = aiplatform.Endpoint(aip_endpoint_name)\n", "\n", "prompt = \"What is a car?\" # @param {type: \"string\"}\n", "max_tokens = 50 # @param {type:\"integer\"}\n", "temperature = 1.0 # @param {type:\"number\"}\n", "top_p = 1.0 # @param {type:\"number\"}\n", "top_k = 1 # @param {type:\"integer\"}\n", "raw_response = True # @param {type:\"boolean\"}\n", "\n", "# Overrides parameters for inferences.\n", "instances = [\n", " {\n", " \"prompt\": prompt,\n", " \"max_tokens\": max_tokens,\n", " \"temperature\": temperature,\n", " \"top_p\": top_p,\n", " \"top_k\": top_k,\n", " \"raw_response\": raw_response,\n", " },\n", "]\n", "response = endpoints[\"vllm_gpu_spotvm\"].predict(instances=instances)\n", "\n", "for prediction in response.predictions:\n", " print(prediction)\n", "\n", "# @markdown Click \"Show Code\" to see more details." ] }, { "cell_type": "markdown", "id": "xnE7CyZKvz3S", "metadata": { "id": "xnE7CyZKvz3S" }, "source": [ "### Reservations\n" ] }, { "cell_type": "code", "execution_count": null, "id": "lphVkydjb0n4", "metadata": { "cellView": "form", "id": "lphVkydjb0n4" }, "outputs": [], "source": [ "# @title Set Up Reservations for Vertex AI Predictions\n", "\n", "# @markdown ### Why Use Reservations?\n", "\n", "# @markdown In addition to using Spot VMs, another robust strategy to mitigate capacity issues and avoid stockouts is to leverage resource **reservations**. A reservation is a powerful Compute Engine feature that guarantees the availability of certain machine and accelerator resources within a specific zone, ensuring your Vertex AI endpoints can scale predictably. Unlike relying on transient or best-effort resources, reservations grant you a higher level of certainty that the infrastructure you need will be ready when you need it.\n", "\n", "# @markdown **Key Advantages of Reservations:**\n", "\n", "# @markdown 1. **Predictable Capacity:**\n", "# @markdown By reserving capacity ahead of time, you ensure that resources (such as specific GPU types) remain available. This is particularly valuable when deploying models at scale or handling peak workloads that demand consistent performance and availability.\n", "\n", "# @markdown 2. **Simplified Scaling and Migration:**\n", "# @markdown Reservations facilitate scaling up your deployments without capacity surprises. They also help with planned migrations or transitioning to new hardware configurations, minimizing downtime and resource contention.\n", "\n", "# @markdown 3. 
**Disaster Recovery Preparedness:**\n", "# @markdown In the event of failures or the need for rapid failover, having pre-reserved resources enables you to spin up new endpoints quickly in the designated zone. This capability enhances the resiliency and reliability of your Vertex AI services.\n", "\n", "# @markdown ### Current Support for GPU Reservations in Vertex AI\n", "\n", "# @markdown It’s crucial to note that, as of now, **only GPU reservations are supported in Vertex AI**. This limitation means that if your workloads rely on GPU-accelerated inference, reservations can ensure that the required GPUs are consistently available. If you’re running GPU-intensive machine learning tasks like large language model inference, image recognition, or video processing, GPU reservations can be a game changer in maintaining stable and predictable performance.\n", "\n", "# @markdown For more details on managing reservations, including best practices and advanced configurations, consult the [Compute Engine Reservations Overview](https://cloud.google.com/compute/docs/instances/reservations-overview).\n", "\n", "# @markdown ### Configuring Your Deployment to Use Reservations\n", "\n", "# @markdown You have the flexibility to configure Vertex AI endpoints to consume either a **specific reservation** or **any available reservation**, depending on your operational requirements and the degree of control you wish to maintain.\n", "\n", "# @markdown - **`ANY_RESERVATION`:**\n", "# @markdown If you choose this option, the endpoint will use any suitable reservation available in the specified project and region. This approach is simpler and may be suitable if you have multiple reservations and don’t need fine-grained resource management.\n", "\n", "# @markdown - **`SPECIFIC_RESERVATION`:**\n", "# @markdown By specifying the exact reservation name, you ensure that the endpoint always pulls from a predefined pool of resources. This method is ideal when you need strict control over the hardware configuration, such as ensuring a particular GPU type and count, or when you have distinct reservations allocated for different use cases or departments.\n", "\n", "# @markdown ### Important Considerations\n", "\n", "# @markdown **Project, Region, and Zone Alignment:**\n", "# @markdown As emphasized earlier, ensure that your Vertex AI endpoint and the reservation are in the **same region**, and that the reservation lives either in the same project or in a project that shares it with yours (as configured in the setup cell). Additionally, the reservation’s zone must fall within that region. This alignment prevents configuration issues and ensures reliable, low-latency communication between the endpoint and the reserved resources.\n", "\n", "# @markdown **Matching Resource Configurations:**\n", "# @markdown The `RES_MACHINE_TYPE` and `RES_ACCELERATOR_TYPE` you specify in your Vertex AI endpoint deployment command must exactly match the configuration defined in the reservation. If these configurations differ, the endpoint may not be able to utilize the reserved capacity, leading to stockouts or fallback to alternative resource pools.\n", "\n", "# @markdown ### Parameters for Reservation-Based Deployment\n", "\n", "# @markdown When configuring your deployment, use the parameters below and replace their placeholders with values specific to your environment:\n", "# @markdown - **`RES_MACHINE_TYPE`:** [The machine type](https://cloud.google.com/compute/docs/accelerator-optimized-machines) you plan to use for the endpoint (e.g., `g2-standard-12`). Ensure that this machine type aligns with what’s defined in your reservation.\n", "# @markdown - **`RES_ACCELERATOR_TYPE`:** The type of [GPU (or other accelerators)](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec) your model requires (e.g., `nvidia-l4`). Confirm this type matches the accelerator configuration in your reservation.\n", "# @markdown - **`RES_ACCELERATOR_COUNT`:** The number of accelerators per instance (e.g., `1`). Adjust this to match your model’s inference or training needs.\n", "# @markdown - **`RES_PROJECT_ID`:** Your Google Cloud project ID. Reservations must be created in and consumed from this project.\n", "# @markdown - **`RES_ZONE`:** The zone (e.g., `us-central1-a`) where your reservation is located. Remember that the Vertex AI endpoint and the reservation must share the same region, so the zone must belong to that region.\n", "# @markdown - **`RESERVATION_NAME`:** The name of your GPU reservation. Refer to this by name when using `SPECIFIC_RESERVATION` mode to guarantee the endpoint consumes the correct reserved resources.\n", "\n", "# @markdown By following these guidelines and properly setting up your reservation parameters, you ensure that your Vertex AI endpoints benefit from stable resource availability. This approach reduces the risk of unexpected resource shortages and helps maintain high service quality, even as demand fluctuates.\n",
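 "\n", "# @markdown The display-only sketch below shows how these parameters map onto the `model.deploy()` call that the `deploy_model_vllm` utility above assembles; `model` and `endpoint` are the `aiplatform` objects created by that utility, and the placeholder path segments stand for the parameters described above:\n", "\n", "# @markdown ```python\n", "# @markdown model.deploy(\n", "# @markdown     endpoint=endpoint,\n", "# @markdown     machine_type=\"g2-standard-12\",  # must match the reservation\n", "# @markdown     accelerator_type=\"NVIDIA_L4\",  # Vertex AI spelling of nvidia-l4\n", "# @markdown     accelerator_count=1,\n", "# @markdown     reservation_affinity_type=\"SPECIFIC_RESERVATION\",\n", "# @markdown     reservation_affinity_key=\"compute.googleapis.com/reservation-name\",\n", "# @markdown     reservation_affinity_values=[\n", "# @markdown         \"projects/RES_PROJECT_ID/zones/RES_ZONE/reservations/RESERVATION_NAME\"\n", "# @markdown     ],\n", "# @markdown )\n", "# @markdown ```\n", "# @markdown For `ANY_RESERVATION`, omit the key and values and pass only `reservation_affinity_type=\"ANY_RESERVATION\"`.\n",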
 "\n", "\n", "import time\n", "\n", "import google.auth\n", "from googleapiclient import discovery\n", "\n", "# Authenticate with Application Default Credentials and build the Compute Engine client.\n", "credentials, _ = google.auth.default()\n", "service = discovery.build(\"compute\", \"v1\", credentials=credentials)\n", "\n", "\n", "# Function to wait for a zonal operation to complete\n", "def wait_for_zonal_operation(service, project, zone, operation, delete=False):\n", " print(\"Waiting for operation to finish...\")\n", " while True:\n", " result = (\n", " service.zoneOperations()\n", " .get(project=project, zone=zone, operation=operation)\n", " .execute()\n", " )\n", "\n", " if result[\"status\"] == \"DONE\":\n", " if \"error\" in result:\n", " print(\"Error during operation:\", result[\"error\"])\n", " return result\n", " else:\n", " if not delete:\n", " print(\"Reservation created successfully.\")\n", " else:\n", " print(\"Reservation deleted successfully.\")\n", " return result\n", " time.sleep(1)\n", "\n", "\n", "def create_reservation(\n", " res_project_id,\n", " res_zone,\n", " res_name,\n", " res_machine_type,\n", " res_accelerator_type,\n", " res_accelerator_count,\n", " shared_project_id,\n", "):\n", " \"\"\"\n", " Create a reservation in Google Cloud Platform, shared with another project.\n", "\n", " Args:\n", " res_project_id (str): Project ID.\n", " res_zone (str): Zone where the reservation will be created.\n", " res_name (str): Name of the reservation.\n", " res_machine_type (str): Machine type for the reservation.\n", " res_accelerator_type (str): Accelerator type for the reservation.\n", " res_accelerator_count (int): Number of accelerators.\n", " shared_project_id (str): ID of the project to share the reservation with (required).\n", "\n", " Returns:\n", " dict: Final result of the operation.\n", " \"\"\"\n", " # Define reservation\n", " reservation_body = {\n", " \"name\": res_name,\n", " \"specificReservation\": {\n", " \"count\": 1,\n", " \"instanceProperties\": {\n", " \"machineType\": res_machine_type,\n", " \"guestAccelerators\": [\n", " {\n", " \"acceleratorType\": res_accelerator_type,\n", " \"acceleratorCount\": 
res_accelerator_count,\n", " }\n", " ],\n", " },\n", " },\n", " \"specificReservationRequired\": True,\n", " }\n", "\n", " if not shared_project_id:\n", " raise ValueError(\"shared_project_id must be provided.\")\n", "\n", " reservation_body[\"shareSettings\"] = {\n", " \"shareType\": \"SPECIFIC_PROJECTS\",\n", " \"projectMap\": {shared_project_id: {\"projectId\": shared_project_id}},\n", " }\n", "\n", " # Create reservation\n", " request = service.reservations().insert(\n", " project=res_project_id, zone=res_zone, body=reservation_body\n", " )\n", "\n", " response = request.execute()\n", "\n", " # Wait for the operation to complete\n", " operation_name = response[\"name\"]\n", " return wait_for_zonal_operation(service, res_project_id, res_zone, operation_name)\n", "\n", "\n", "def delete_reservation(project_id, zone, name):\n", " \"\"\"\n", " Delete a reservation for a specific project in Google Cloud Platform.\n", "\n", " Args:\n", " project_id (str): Project ID.\n", " zone (str): Zone where the reservation exists.\n", " name (str): Name of the reservation to delete.\n", "\n", " Returns:\n", " dict: Final result of the operation.\n", " \"\"\"\n", " # Authenticate and build service\n", " credentials, _ = google.auth.default()\n", " service = discovery.build(\"compute\", \"v1\", credentials=credentials)\n", "\n", " # Delete the reservation\n", " request = service.reservations().delete(\n", " project=project_id, zone=zone, reservation=name\n", " )\n", "\n", " response = request.execute()\n", "\n", " # Wait for the operation to complete\n", " operation_name = response[\"name\"]\n", " return wait_for_zonal_operation(service, project_id, zone, operation_name, True)" ] }, { "cell_type": "code", "execution_count": null, "id": "ApJwCojDFML_", "metadata": { "cellView": "form", "id": "ApJwCojDFML_" }, "outputs": [], "source": [ "# @title Create A New Shared Reservation for `ANY_RESERVATION` Deployment Use Case.\n", "\n", "# @markdown It's important to note that the **deployment machine specifications and accelerator type must match the reservation machine specifications**. This ensures optimal performance and resource allocation when deploying your model.\n", "\n", "# @markdown Provide the following arguments:\n", "\n", "rev_names = []\n", "\n", "reservation_zone = \"a\" # @param {type:\"string\"}\n", "RES_ZONE = f\"{REGION}-{reservation_zone}\"\n", "\n", "RESERVATION_NAME = \"shared-reservation-1\" # @param {type:\"string\"}\n", "RESERVATION_NAME = f\"{PROJECT_ID}-{RESERVATION_NAME}\"\n", "RES_MACHINE_TYPE = \"g2-standard-12\" # @param {type:\"string\"}\n", "RES_ACCELERATOR_TYPE = \"nvidia-l4\" # @param {type:\"string\"}\n", "RES_ACCELERATOR_COUNT = 1 # @param {type:\"integer\"}\n", "rev_names.append(RESERVATION_NAME)\n", "\n", "create_reservation(\n", " res_project_id=PROJECT_ID,\n", " res_zone=RES_ZONE,\n", " res_name=RESERVATION_NAME,\n", " res_machine_type=RES_MACHINE_TYPE,\n", " res_accelerator_type=RES_ACCELERATOR_TYPE,\n", " res_accelerator_count=RES_ACCELERATOR_COUNT,\n", " shared_project_id=SHARED_PROJECT_ID,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "HDxOtt6xGdlE", "metadata": { "cellView": "form", "id": "HDxOtt6xGdlE" }, "outputs": [], "source": [ "# @title Create A New Shared Reservation for `SPECIFIC_RESERVATION` Deployment Use Case.\n", "\n", "# @markdown It's important to note that the **deployment machine specifications and accelerator type must match the reservation machine specifications**. This ensures optimal performance and resource allocation when deploying your model.\n",
 "\n", "# @markdown Provide the following arguments (this cell reuses `RES_MACHINE_TYPE`, `RES_ACCELERATOR_TYPE`, and `RES_ACCELERATOR_COUNT` from the previous cell):\n", "\n", "# Keep any reservation names created earlier so the cleanup cell can delete them all.\n", "if \"rev_names\" not in globals():\n", " rev_names = []\n", "\n", "reservation_zone = \"a\" # @param {type:\"string\"}\n", "RES_ZONE = f\"{REGION}-{reservation_zone}\"\n", "\n", "RESERVATION_NAME = \"shared-reservation-2\" # @param {type:\"string\"}\n", "RESERVATION_NAME = f\"{PROJECT_ID}-{RESERVATION_NAME}\"\n", "rev_names.append(RESERVATION_NAME)\n", "\n", "create_reservation(\n", " res_project_id=PROJECT_ID,\n", " res_zone=RES_ZONE,\n", " res_name=RESERVATION_NAME,\n", " res_machine_type=RES_MACHINE_TYPE,\n", " res_accelerator_type=RES_ACCELERATOR_TYPE,\n", " res_accelerator_count=RES_ACCELERATOR_COUNT,\n", " shared_project_id=SHARED_PROJECT_ID,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "ugbPI18Vu8N3", "metadata": { "cellView": "form", "id": "ugbPI18Vu8N3" }, "outputs": [], "source": [ "# @title Retrieve Newly Created Reservation\n", "\n", "# @markdown Viewing reservations is useful to get an overview of all the reservations in your project, or to review the configuration details of a reservation. If you want to view a shared reservation, you can only view it from the owner project.\n", "\n", "# @markdown Note that `RES_PROJECT_ID` and `RES_ZONE` could be different from the `PROJECT_ID` and `REGION` used in this notebook.\n", "\n", "from google.cloud import compute_v1\n", "from google.cloud.compute_v1.services.reservations.pagers import ListPager\n", "\n", "\n", "def list_compute_reservation(project_id: str, zone: str = \"us-central1-a\") -> ListPager:\n", " \"\"\"\n", " Lists all compute reservations in a specified Google Cloud project and zone.\n", " Args:\n", " project_id (str): The ID of the Google Cloud project.\n", " zone (str): The zone of the reservations.\n", " Returns:\n", " ListPager: A pager object containing the list of reservations.\n", " \"\"\"\n", "\n", " client = compute_v1.ReservationsClient()\n", "\n", " reservations_list = client.list(\n", " project=project_id,\n", " zone=zone,\n", " )\n", "\n", " for reservation in reservations_list:\n", " print(\"Name: \", reservation.name)\n", " print(\n", " \"Machine type: \",\n", " reservation.specific_reservation.instance_properties.machine_type,\n", " )\n", "\n", " return reservations_list\n", "\n", "\n", "list_compute_reservation(project_id=PROJECT_ID, zone=RES_ZONE)" ] }, { "cell_type": "code", "execution_count": null, "id": "Z3ZB3LgcC_KR", "metadata": { "cellView": "form", "id": "Z3ZB3LgcC_KR" }, "outputs": [], "source": [ "# @title Deploy Llama-3.1 Endpoint with `SPECIFIC_RESERVATION`\n", "\n", "# @markdown Prior to deploying the endpoint, in the Google Cloud console, go to the [Reservations page](https://console.cloud.google.com/compute/reservations).\n", "# @markdown - Click on the newly created reservation.\n", "# @markdown - Enable **Share with other Google services** in the reservation basic information panel.\n", "# @markdown - Deploy Endpoint with the `SPECIFIC_RESERVATION` created in the previous cell.\n", "hf_model_id = \"meta-llama/Meta-Llama-3.1-8B\"\n", "\n", "MACHINE_TYPE = \"g2-standard-12\"\n", "ACCELERATOR_TYPE = \"NVIDIA_L4\"\n", "ACCELERATOR_COUNT = 1\n", "\n", "(\n", " models[\"vllm_gpu_specific_reserve\"],\n", " endpoints[\"vllm_gpu_specific_reserve\"],\n", ") = deploy_model_vllm(\n", " model_name=common_util.get_job_name_with_datetime(\n", " prefix=f\"llama3_1-serve-specific-{RESERVATION_NAME}\"\n", " ),\n", " model_id=hf_model_id,\n", " base_model_id=hf_model_id,\n",
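 " # The machine spec below must match the reservation exactly; the\n", " # reservation_* arguments pin this deployment to the shared\n", " # reservation created earlier.\n",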
" service_account=SERVICE_ACCOUNT,\n", " machine_type=MACHINE_TYPE,\n", " accelerator_type=ACCELERATOR_TYPE,\n", " accelerator_count=ACCELERATOR_COUNT,\n", " model_type=\"llama3.1\",\n", " reservation_name=RESERVATION_NAME,\n", " reservation_affinity_type=\"SPECIFIC_RESERVATION\",\n", " reservation_project=PROJECT_ID,\n", " reservation_zone=RES_ZONE,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "RIKpW1JcJfZe", "metadata": { "cellView": "form", "id": "RIKpW1JcJfZe" }, "outputs": [], "source": [ "# @title Test `SPECIFIC_RESERVATION` Endpoint with Raw Predict\n", "\n", "# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts. Sampling parameters supported by vLLM can be found [here](https://docs.vllm.ai/en/latest/dev/sampling_params.html).\n", "\n", "prompt = \"What is a car?\" # @param {type: \"string\"}\n", "max_tokens = 50 # @param {type:\"integer\"}\n", "temperature = 1.0 # @param {type:\"number\"}\n", "top_p = 1.0 # @param {type:\"number\"}\n", "top_k = 1 # @param {type:\"integer\"}\n", "raw_response = True # @param {type:\"boolean\"}\n", "\n", "# Overrides parameters for inferences.\n", "instances = [\n", " {\n", " \"prompt\": prompt,\n", " \"max_tokens\": max_tokens,\n", " \"temperature\": temperature,\n", " \"top_p\": top_p,\n", " \"top_k\": top_k,\n", " \"raw_response\": raw_response,\n", " },\n", "]\n", "response = endpoints[\"vllm_gpu_specific_reserve\"].predict(instances=instances)\n", "\n", "for prediction in response.predictions:\n", " print(prediction)\n", "\n", "# @markdown Click \"Show Code\" to see more details." ] }, { "cell_type": "code", "execution_count": null, "id": "2-KSzov1TbvD", "metadata": { "cellView": "form", "id": "2-KSzov1TbvD" }, "outputs": [], "source": [ "# @title Deploy Llama-3.1 Endpoint with `ANY_RESERVATION`\n", "# @markdown Prior to deploying the endpoint, in the Google Cloud console, go to the [Reservations page](https://console.cloud.google.com/compute/reservations).\n", "# @markdown - Click on the newly created reservation.\n", "# @markdown - Enable **Share with other Google services** in the reservation basic information panel.\n", "# @markdown - Deploy Endpoint with the `ANY_RESERVATION`.\n", "\n", "hf_model_id = \"meta-llama/Meta-Llama-3.1-8B\"\n", "\n", "models[\"vllm_gpu_any_reserve\"], endpoints[\"vllm_gpu_any_reserve\"] = deploy_model_vllm(\n", " model_name=common_util.get_job_name_with_datetime(\n", " prefix=f\"llama3_1-serve-any-{RESERVATION_NAME}\"\n", " ),\n", " model_id=hf_model_id,\n", " base_model_id=hf_model_id,\n", " service_account=SERVICE_ACCOUNT,\n", " machine_type=MACHINE_TYPE,\n", " accelerator_type=ACCELERATOR_TYPE,\n", " accelerator_count=ACCELERATOR_COUNT,\n", " model_type=\"llama3.1\",\n", " reservation_affinity_type=\"ANY_RESERVATION\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "Rxlc1s1Trslo", "metadata": { "cellView": "form", "id": "Rxlc1s1Trslo" }, "outputs": [], "source": [ "# @title Test `ANY_RESERVATION` Endpoint with Raw Predict\n", "\n", "# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts. 
Sampling parameters supported by vLLM can be found [here](https://docs.vllm.ai/en/latest/dev/sampling_params.html).\n", "\n", "prompt = \"What is a car?\" # @param {type: \"string\"}\n", "max_tokens = 50 # @param {type:\"integer\"}\n", "temperature = 1.0 # @param {type:\"number\"}\n", "top_p = 1.0 # @param {type:\"number\"}\n", "top_k = 1 # @param {type:\"integer\"}\n", "raw_response = True # @param {type:\"boolean\"}\n", "\n", "# Overrides parameters for inferences.\n", "instances = [\n", " {\n", " \"prompt\": prompt,\n", " \"max_tokens\": max_tokens,\n", " \"temperature\": temperature,\n", " \"top_p\": top_p,\n", " \"top_k\": top_k,\n", " \"raw_response\": raw_response,\n", " },\n", "]\n", "response = endpoints[\"vllm_gpu_any_reserve\"].predict(instances=instances)\n", "\n", "for prediction in response.predictions:\n", " print(prediction)\n", "\n", "# @markdown Click \"Show Code\" to see more details." ] }, { "cell_type": "markdown", "id": "bV5Yjkgav9BZ", "metadata": { "id": "bV5Yjkgav9BZ" }, "source": [ "### Delete the models, endpoints, and reservations\n" ] }, { "cell_type": "code", "execution_count": null, "id": "qsks36cOH9rb", "metadata": { "cellView": "form", "id": "qsks36cOH9rb" }, "outputs": [], "source": [ "# @markdown Delete the experiment models and endpoints to release the resources\n", "# @markdown and avoid unnecessary ongoing charges.\n", "\n", "# @markdown If you no longer need a reservation, delete it to stop incurring charges for its reserved resources. If you no longer need a shared reservation, you can only delete it from the owner project.\n", "\n", "# Undeploy models and delete endpoints.\n", "for endpoint in endpoints.values():\n", " endpoint.delete(force=True)\n", "\n", "# Delete models.\n", "for model in models.values():\n", " model.delete()\n", "\n", "delete_bucket = False # @param {type:\"boolean\"}\n", "if delete_bucket:\n", " ! gsutil -m rm -r $BUCKET_NAME\n", "\n", "for rev_name in rev_names:\n", " delete_reservation(project_id=PROJECT_ID, zone=RES_ZONE, name=rev_name)" ] } ], "metadata": { "colab": { "name": "model_garden_reservations_spotvm.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }