{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "7d9bbf86da5e"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "99c1c3fc2ca5"
},
"source": [
"# Vertex AI Model Garden - Code LLaMA Deployment\n",
"\n",
"<table><tbody><tr>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/instances\">\n",
" <img alt=\"Workbench logo\" src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" width=\"32px\"><br> Run in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_pytorch_codellama.ipynb\">\n",
" <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_codellama.ipynb\">\n",
" <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</tr></tbody></table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3de7470326a2"
},
"source": [
"## Overview\n",
"\n",
"This notebook demonstrates how to deploy pretrained Code LLaMA models using [vLLM](https://github.com/vllm-project/vllm). It also shows how to evaluate the Code LLaMA models using EleutherAI's Language Model Evaluation Harness (lm-evaluation-harness) with Vertex CustomJob.\n",
"\n",
"### Objective\n",
"\n",
"- Deploy pre-trained Code LLaMA models with [vLLM](https://github.com/vllm-project/vllm) with best serving throughput.\n",
"\n",
"### File a bug\n",
"\n",
"File a bug on [GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/issues/new) if you encounter any issue with the notebook.\n",
"\n",
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing), [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bIGGC0o3vJdV"
},
"source": [
"## Run the notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "855d6b96f291"
},
"outputs": [],
"source": [
"# @title Setup Google Cloud project\n",
"\n",
"# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"# @markdown 2. **[Optional]** Set region. If not set, the region will be set automatically according to Colab Enterprise environment.\n",
"\n",
"REGION = \"\" # @param {type:\"string\"}\n",
"\n",
"# @markdown 3. If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have associated quota in selected regions. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus). You can request for quota following the instructions at [\"Request a higher quota\"](https://cloud.google.com/docs/quota/view-manage#requesting_higher_quota).\n",
"\n",
"# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n",
"# @markdown | ----------- | ----------- | ----------- |\n",
"# @markdown | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1, us-east4 |\n",
"# @markdown | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n",
"# @markdown | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, europe-west4, us-west1, asia-southeast1 |\n",
"\n",
"# Import the necessary packages\n",
"\n",
"# Upgrade Vertex AI SDK.\n",
"! pip3 install --upgrade --quiet 'google-cloud-aiplatform>=1.84.0'\n",
"# ! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"import importlib\n",
"import os\n",
"from typing import Tuple\n",
"\n",
"from google.cloud import aiplatform\n",
"\n",
"if os.environ.get(\"VERTEX_PRODUCT\") != \"COLAB_ENTERPRISE\":\n",
" ! pip install --upgrade tensorflow\n",
"! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n",
"\n",
"common_util = importlib.import_module(\n",
" \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n",
")\n",
"\n",
"LABEL = \"vllm_gpu\"\n",
"models, endpoints = {}, {}\n",
"\n",
"\n",
"# Get the default cloud project id.\n",
"PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n",
"\n",
"# Get the default region for launching jobs.\n",
"if not REGION:\n",
" REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n",
"\n",
"# Initialize Vertex AI API.\n",
"print(\"Initializing Vertex AI API.\")\n",
"aiplatform.init(project=PROJECT_ID, location=REGION)\n",
"\n",
"! gcloud config set project $PROJECT_ID\n",
"import vertexai\n",
"\n",
"vertexai.init(\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "Jwn4PcTf4EMt"
},
"outputs": [],
"source": [
"# @title Access pretrained Code LLaMA models\n",
"\n",
"# @markdown The original models from Meta are converted into the HuggingFace format for serving in Vertex AI.\n",
"\n",
"# @markdown Accept the model agreement to access the models:\n",
"# @markdown 1. Open the [Code LLaMA model card](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/137).\n",
"# @markdown 2. Review and accept the agreement in the pop-up window on the model card page. If you have previously accepted the model agreement, there will not be a pop-up window on the model card page and this step is not needed.\n",
"# @markdown 3. A Cloud Storage bucket (starting with ‘gs://’) containing Code LLaMA pretrained and finetuned models will be shared under the “Documentation” section and its “Get started” subsection.\n",
"\n",
"# This path will be shared once click the agreement in Code LLaMA model card\n",
"# as described in the `Access pretrained Code LLaMA models` section.\n",
"VERTEX_AI_MODEL_GARDEN_CODE_LLAMA = \"\" # @param {type: \"string\"}\n",
"assert (\n",
" VERTEX_AI_MODEL_GARDEN_CODE_LLAMA\n",
"), \"Kindly click the agreement of Code LLaMA in Vertex AI Model Garden, and get the GCS path of Code LLaMA model artifacts.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "03d504bcd60b"
},
"outputs": [],
"source": [
"# @title Deploy pretrained Code LLaMA (vLLM)\n",
"# @markdown This section deploys prebuilt Code LLaMA models with [vLLM](https://github.com/vllm-project/vllm) on the Endpoint. Code LLaMA model weights are stored in bfloat16 precision. L4 or A100 GPUs are needed for vLLM serving at bfloat16 precision. V100 GPUs can be used with vLLM serving at float16 precision. Changing the precision from bfloat16 to float16 can result in a change in performance, and this change can be an increase and a decrease. However, the performance change should be small (within 5%).\n",
"\n",
"# @markdown L4s GPUs are used for demonstration. Note that V100 serving generally offers better throughput and latency performance than L4 serving, while L4 serving is generally more cost efficient than V100 serving. The serving efficiency of V100 and L4 GPUs is inferior to that of A100 GPUs, but V100 and L4 GPUs are nevertheless good serving solutions if you do not have A100 quota. The model deployment step will take 15 minutes to 1 hour to complete.\n",
"\n",
"# @markdown The vLLM project is an highly optimized LLM serving framework which can increase serving throughput a lot. The higher QPS you have, the more benefits you get using vLLM.\n",
"\n",
"# @markdown Set the model name.\n",
"model_name = \"CodeLlama-7b-Instruct-hf\" # @param [\"CodeLlama-7b-hf\", \"CodeLlama-7b-Python-hf\", \"CodeLlama-7b-Instruct-hf\", \"CodeLlama-13b-hf\", \"CodeLlama-13b-Python-hf\", \"CodeLlama-13b-Instruct-hf\", \"CodeLlama-34b-hf\", \"CodeLlama-34b-Python-hf\", \"CodeLlama-34b-Instruct-hf\", \"CodeLlama-70b-hf\", \"CodeLlama-70b-Python-hf\", \"CodeLlama-70b-Instruct-hf\"]\n",
"version_id = model_name.lower()\n",
"PUBLISHER_MODEL_NAME = f\"publishers/meta/models/codellama-7b-hf@{version_id}\"\n",
"\n",
"# The pre-built serving docker image.\n",
"VLLM_DOCKER_URI = \"us-docker.pkg.dev/vertex-ai/vertex-vision-model-garden-dockers/pytorch-vllm-serve:20240620_1616_RC00\"\n",
"\n",
"model_id = os.path.join(VERTEX_AI_MODEL_GARDEN_CODE_LLAMA, model_name)\n",
"\n",
"# @markdown Set use_dedicated_endpoint to False if you don't want to use [dedicated endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#create-dedicated-endpoint). Note that [dedicated endpoint does not support VPC Service Controls](https://cloud.google.com/vertex-ai/docs/predictions/choose-endpoint-type), uncheck the box if you are using VPC-SC.\n",
"use_dedicated_endpoint = True # @param {type:\"boolean\"}\n",
"\n",
"\n",
"def deploy_model_vllm(\n",
" model_name: str,\n",
" model_id: str,\n",
" publisher: str,\n",
" publisher_model_id: str,\n",
" base_model_id: str = None,\n",
" machine_type: str = \"g2-standard-8\",\n",
" accelerator_type: str = \"NVIDIA_L4\",\n",
" accelerator_count: int = 1,\n",
" gpu_memory_utilization: float = 0.9,\n",
" max_model_len: int = 4096,\n",
" dtype: str = \"auto\",\n",
" enable_trust_remote_code: bool = False,\n",
" enforce_eager: bool = False,\n",
" enable_lora: bool = False,\n",
" enable_chunked_prefill: bool = False,\n",
" enable_prefix_cache: bool = False,\n",
" host_prefix_kv_cache_utilization_target: float = 0.0,\n",
" max_loras: int = 1,\n",
" max_cpu_loras: int = 8,\n",
" use_dedicated_endpoint: bool = False,\n",
" max_num_seqs: int = 256,\n",
" model_type: str = None,\n",
" enable_llama_tool_parser: bool = False,\n",
") -> Tuple[aiplatform.Model, aiplatform.Endpoint]:\n",
" \"\"\"Deploys trained models with vLLM into Vertex AI.\"\"\"\n",
" endpoint = aiplatform.Endpoint.create(\n",
" display_name=f\"{model_name}-endpoint\",\n",
" dedicated_endpoint_enabled=use_dedicated_endpoint,\n",
" )\n",
"\n",
" if not base_model_id:\n",
" base_model_id = model_id\n",
"\n",
" # See https://docs.vllm.ai/en/latest/models/engine_args.html for a list of possible arguments with descriptions.\n",
" vllm_args = [\n",
" \"python\",\n",
" \"-m\",\n",
" \"vllm.entrypoints.api_server\",\n",
" \"--host=0.0.0.0\",\n",
" \"--port=8080\",\n",
" f\"--model={model_id}\",\n",
" f\"--tensor-parallel-size={accelerator_count}\",\n",
" \"--swap-space=16\",\n",
" f\"--gpu-memory-utilization={gpu_memory_utilization}\",\n",
" f\"--max-model-len={max_model_len}\",\n",
" f\"--dtype={dtype}\",\n",
" f\"--max-loras={max_loras}\",\n",
" f\"--max-cpu-loras={max_cpu_loras}\",\n",
" f\"--max-num-seqs={max_num_seqs}\",\n",
" \"--disable-log-stats\",\n",
" ]\n",
"\n",
" if enable_trust_remote_code:\n",
" vllm_args.append(\"--trust-remote-code\")\n",
"\n",
" if enforce_eager:\n",
" vllm_args.append(\"--enforce-eager\")\n",
"\n",
" if enable_lora:\n",
" vllm_args.append(\"--enable-lora\")\n",
"\n",
" if enable_chunked_prefill:\n",
" vllm_args.append(\"--enable-chunked-prefill\")\n",
"\n",
" if enable_prefix_cache:\n",
" vllm_args.append(\"--enable-prefix-caching\")\n",
"\n",
" if 0 < host_prefix_kv_cache_utilization_target < 1:\n",
" vllm_args.append(\n",
" f\"--host-prefix-kv-cache-utilization-target={host_prefix_kv_cache_utilization_target}\"\n",
" )\n",
"\n",
" if model_type:\n",
" vllm_args.append(f\"--model-type={model_type}\")\n",
"\n",
" if enable_llama_tool_parser:\n",
" vllm_args.append(\"--enable-auto-tool-choice\")\n",
" vllm_args.append(\"--tool-call-parser=vertex-llama-3\")\n",
"\n",
" env_vars = {\n",
" \"MODEL_ID\": base_model_id,\n",
" \"DEPLOY_SOURCE\": \"notebook\",\n",
" }\n",
"\n",
" # HF_TOKEN is not a compulsory field and may not be defined.\n",
" try:\n",
" if HF_TOKEN:\n",
" env_vars[\"HF_TOKEN\"] = HF_TOKEN\n",
" except NameError:\n",
" pass\n",
"\n",
" model = aiplatform.Model.upload(\n",
" display_name=model_name,\n",
" serving_container_image_uri=VLLM_DOCKER_URI,\n",
" serving_container_args=vllm_args,\n",
" serving_container_ports=[8080],\n",
" serving_container_predict_route=\"/generate\",\n",
" serving_container_health_route=\"/ping\",\n",
" serving_container_environment_variables=env_vars,\n",
" serving_container_shared_memory_size_mb=(16 * 1024), # 16 GB\n",
" serving_container_deployment_timeout=7200,\n",
" model_garden_source_model_name=(\n",
" f\"publishers/{publisher}/models/{publisher_model_id}\"\n",
" ),\n",
" )\n",
" print(\n",
" f\"Deploying {model_name} on {machine_type} with {accelerator_count} {accelerator_type} GPU(s).\"\n",
" )\n",
" model.deploy(\n",
" endpoint=endpoint,\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" deploy_request_timeout=1800,\n",
" system_labels={\n",
" \"NOTEBOOK_NAME\": \"model_garden_pytorch_codellama.ipynb\",\n",
" \"NOTEBOOK_ENVIRONMENT\": common_util.get_deploy_source(),\n",
" },\n",
" )\n",
" print(\"endpoint_name:\", endpoint.name)\n",
"\n",
" return model, endpoint\n",
"\n",
"\n",
"# @markdown Find Vertex AI prediction supported accelerators and regions at https://cloud.google.com/vertex-ai/docs/predictions/configure-compute.\n",
"accelerator_type = \"NVIDIA_L4\" # @param [\"NVIDIA_L4\", \"NVIDIA_TESLA_V100\", \"NVIDIA_TESLA_A100\"]\n",
"\n",
"# Note that a larger max_model_len will require more GPU memory.\n",
"max_model_len = 2048\n",
"\n",
"if \"7b\" in model_name:\n",
" # Sets A100 (40G) to deploy 7B models.\n",
" if accelerator_type == \"NVIDIA_TESLA_A100\":\n",
" machine_type = \"a2-highgpu-1g\"\n",
" accelerator_count = 1\n",
" vllm_precision = \"bfloat16\"\n",
" # Sets 1 V100 (16G) to deploy 7B models..\n",
" elif accelerator_type == \"NVIDIA_TESLA_V100\":\n",
" machine_type = \"n1-standard-8\"\n",
" accelerator_count = 1\n",
" vllm_precision = \"float16\"\n",
" # Sets 1 L4 (24G) to deploy 7B models.\n",
" elif accelerator_type == \"NVIDIA_L4\":\n",
" machine_type = \"g2-standard-12\"\n",
" accelerator_count = 1\n",
" vllm_precision = \"bfloat16\"\n",
" else:\n",
" raise ValueError(\n",
" f\"Recommended GPU setting not found for: {accelerator_type} and {model_name}.\"\n",
" )\n",
"elif \"13b\" in model_name:\n",
" # Sets A100 (40G) to deploy 13B models.\n",
" if accelerator_type == \"NVIDIA_TESLA_A100\":\n",
" machine_type = \"a2-highgpu-1g\"\n",
" accelerator_count = 1\n",
" vllm_precision = \"bfloat16\"\n",
" # Sets 2 V100 (16G) to deploy 13B models.\n",
" elif accelerator_type == \"NVIDIA_TESLA_V100\":\n",
" machine_type = \"n1-standard-16\"\n",
" accelerator_count = 2\n",
" vllm_precision = \"float16\"\n",
" # Sets 2 L4 (24G) to deploy 13B models.\n",
" elif accelerator_type == \"NVIDIA_L4\":\n",
" machine_type = \"g2-standard-24\"\n",
" accelerator_count = 2\n",
" vllm_precision = \"bfloat16\"\n",
" else:\n",
" raise ValueError(\n",
" f\"Recommended GPU setting not found for: {accelerator_type} and {model_name}.\"\n",
" )\n",
"elif \"34b\" in model_name:\n",
" # Sets 2 A100 (40G) to deploy 34B models.\n",
" if accelerator_type == \"NVIDIA_TESLA_A100\":\n",
" machine_type = \"a2-highgpu-2g\"\n",
" accelerator_count = 2\n",
" vllm_precision = \"bfloat16\"\n",
" # Sets 8 V100 (16G) to deploy 34B models.\n",
" elif accelerator_type == \"NVIDIA_TESLA_V100\":\n",
" machine_type = \"n1-standard-32\"\n",
" accelerator_count = 8\n",
" vllm_precision = \"float16\"\n",
" # Sets 4 L4 (24G) to deploy 34B models.\n",
" elif accelerator_type == \"NVIDIA_L4\":\n",
" machine_type = \"g2-standard-48\"\n",
" accelerator_count = 4\n",
" vllm_precision = \"bfloat16\"\n",
" else:\n",
" raise ValueError(\n",
" f\"Recommended GPU setting not found for: {accelerator_type} and {model_name}.\"\n",
" )\n",
"elif \"70b\" in model_name:\n",
" # Sets 4 A100 (40G) to deploy 70B models.\n",
" if accelerator_type == \"NVIDIA_TESLA_A100\":\n",
" machine_type = \"a2-highgpu-4g\"\n",
" accelerator_count = 4\n",
" vllm_precision = \"bfloat16\"\n",
" # Sets 8 L4 (24G) to deploy 70B models.\n",
" elif accelerator_type == \"NVIDIA_L4\":\n",
" machine_type = \"g2-standard-96\"\n",
" accelerator_count = 8\n",
" vllm_precision = \"bfloat16\"\n",
" else:\n",
" raise ValueError(\n",
" f\"Recommended GPU setting not found for: {accelerator_type} and {model_name}.\"\n",
" )\n",
"\n",
"# Check quota for the selected GPU type and region.\n",
"common_util.check_quota(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" is_for_training=False,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "UBirHaZJRHzI"
},
"outputs": [],
"source": [
"# @title [Option 1] Deploy with Model Garden SDK\n",
"\n",
"# @markdown Deploy with Gen AI model-centric SDK. This section uploads the prebuilt model to Model Registry and deploys it to a Vertex AI Endpoint. It takes 15 minutes to 1 hour to finish depending on the size of the model. See [use open models with Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-open-models) for documentation on other use cases.\n",
"from vertexai.preview import model_garden\n",
"\n",
"model = model_garden.OpenModel(PUBLISHER_MODEL_NAME)\n",
"endpoints[LABEL] = model.deploy(\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
" accept_eula=True, # Accept the End User License Agreement (EULA) on the model card before deploy. Otherwise, the deployment will be forbidden.\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "OW0ykbnLRHzI"
},
"outputs": [],
"source": [
"# @title [Option 2] Deploy with customized configs\n",
"\n",
"models[\"vllm_gpu\"], endpoints[\"vllm_gpu\"] = deploy_model_vllm(\n",
" model_name=common_util.get_job_name_with_datetime(prefix=\"code-llama-serve-vllm\"),\n",
" model_id=model_id,\n",
" publisher=\"meta\",\n",
" publisher_model_id=\"codellama-7b-hf\",\n",
" machine_type=machine_type,\n",
" accelerator_type=accelerator_type,\n",
" accelerator_count=accelerator_count,\n",
" max_model_len=max_model_len,\n",
" dtype=vllm_precision,\n",
" use_dedicated_endpoint=use_dedicated_endpoint,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "E4Y1QjghSE5r"
},
"outputs": [],
"source": [
"# @title Prediction with endpoint\n",
"# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts. Sampling parameters supported by vLLM can be found [here](https://docs.vllm.ai/en/latest/dev/sampling_params.html).\n",
"\n",
"# Loads an existing endpoint instance using the endpoint name:\n",
"# - Using `endpoint_name = endpoint.name` allows us to get the\n",
"# endpoint name of the endpoint `endpoint` created in the cell\n",
"# above.\n",
"# - Alternatively, you can set `endpoint_name = \"1234567890123456789\"` to load\n",
"# an existing endpoint with the ID 1234567890123456789.\n",
"# You may uncomment the code below to load an existing endpoint.\n",
"\n",
"# endpoint_name = \"\" # @param {type:\"string\"}\n",
"# aip_endpoint_name = (\n",
"# f\"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint_name}\"\n",
"# )\n",
"# endpoints[\"vllm_gpu\"] = aiplatform.Endpoint(aip_endpoint_name)\n",
"\n",
"prompt = \"Write a function to list n Fibonacci numbers in Python.\" # @param {type: \"string\"}\n",
"max_tokens = 500 # @param {type:\"integer\"}\n",
"temperature = 1.0 # @param {type:\"number\"}\n",
"top_p = 1.0 # @param {type:\"number\"}\n",
"top_k = 1 # @param {type:\"integer\"}\n",
"# @markdown Set `raw_response` to `True` to obtain the raw model output. Set `raw_response` to `False` to apply additional formatting in the structure of `\"Prompt:\\n{prompt.strip()}\\nOutput:\\n{output}\"`.\n",
"raw_response = True # @param {type:\"boolean\"}\n",
"\n",
"instances = [\n",
" {\n",
" \"prompt\": prompt,\n",
" \"max_tokens\": max_tokens,\n",
" \"temperature\": temperature,\n",
" \"top_p\": top_p,\n",
" \"top_k\": top_k,\n",
" \"raw_response\": raw_response,\n",
" },\n",
"]\n",
"response = endpoints[\"vllm_gpu\"].predict(\n",
" instances=instances, use_dedicated_endpoint=use_dedicated_endpoint\n",
")\n",
"\n",
"# \"<|file_separator|>\" is the end of the file token.\n",
"for prediction in response.predictions:\n",
" print(prediction.split(\"<|file_separator|>\")[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "911406c1561e"
},
"outputs": [],
"source": [
"# @title Clean up resources\n",
"# @markdown Delete the experiment models and endpoints to recycle the resources\n",
"# @markdown and avoid unnecessary continuous charges that may incur.\n",
"\n",
"# Undeploy model and delete endpoint.\n",
"for endpoint in endpoints.values():\n",
" endpoint.delete(force=True)\n",
"\n",
"# Delete models.\n",
"for model in models.values():\n",
" model.delete()"
]
}
],
"metadata": {
"colab": {
"name": "model_garden_pytorch_codellama.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}