notebooks/community/model_garden/model_garden_ollama_deployment.ipynb (449 lines of code) (raw):

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "20qcPG1PmFUM" }, "outputs": [], "source": [ "# Copyright 2025 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "QXYOa1odnikj" }, "source": [ "# Vertex AI Model Garden - Ollama Deployment\n", "\n", "<table><tbody><tr>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fcommunity%2Fmodel_garden%2Fmodel_garden_ollama_deployment.ipynb\">\n", " <img alt=\"Google Cloud Colab Enterprise logo\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" width=\"32px\"><br> Run in Colab Enterprise\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_ollama_deployment.ipynb\">\n", " <img alt=\"GitHub logo\" src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" width=\"32px\"><br> View on GitHub\n", " </a>\n", " </td>\n", "</tr></tbody></table>" ] }, { "cell_type": "markdown", "metadata": { "id": "cbDI9ag4oR4C" }, "source": [ "## Overview\n", "\n", "This notebook demonstrates how to deploy GPT-Generated Unified Format (GGUF) models with Vertex Model Garden released Ollama serving dockers, which are mainly based on [Ollama](https://github.com/ollama/ollama/tree/main).\n", "\n", "\n", "### Objective\n", "\n", "- Deploy the deepseek-r1 1.5b and 671b GGUF models with Ollama\n", "- Send prediction request to the deployed endpoint\n", "\n", "### File a bug\n", "\n", "File a bug on [GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/issues/new) if you encounter any issue with the notebook.\n", "\n", "### Costs\n", "\n", "This tutorial uses billable components of Google Cloud:\n", "\n", "* Vertex AI\n", "\n", "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage." ] }, { "cell_type": "markdown", "metadata": { "id": "hQJWRopioSKT" }, "source": [ "## Before you begin" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "J_jmxcIZoSxU" }, "outputs": [], "source": [ "# @title Setup Google Cloud project\n", "\n", "# Upgrade Vertex AI SDK.\n", "! git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git\n", "\n", "import importlib\n", "import os\n", "from typing import Tuple\n", "\n", "from google.cloud import aiplatform\n", "\n", "# @markdown 1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n", "\n", "# @markdown 2. **[Optional]** Set region. If not set, the region will be set automatically according to Colab Enterprise environment.\n", "\n", "REGION = \"\" # @param {type:\"string\"}\n", "\n", "# @markdown 3. If you want to run predictions with A100 80GB or H100 GPUs, we recommend using the regions listed below. **NOTE:** Make sure you have associated quota in selected regions. Click the links to see your current quota for each GPU type: [Nvidia A100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_a100_80gb_gpus), [Nvidia H100 80GB](https://console.cloud.google.com/iam-admin/quotas?metric=aiplatform.googleapis.com%2Fcustom_model_serving_nvidia_h100_gpus). You can request for quota following the instructions at [\"Request a higher quota\"](https://cloud.google.com/docs/quota/view-manage#requesting_higher_quota).\n", "\n", "# @markdown > | Machine Type | Accelerator Type | Recommended Regions |\n", "# @markdown | ----------- | ----------- | ----------- |\n", "# @markdown | a2-ultragpu-1g | 1 NVIDIA_A100_80GB | us-central1, us-east4, europe-west4, asia-southeast1, us-east4 |\n", "# @markdown | a3-highgpu-2g | 2 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n", "# @markdown | a3-highgpu-4g | 4 NVIDIA_H100_80GB | us-west1, asia-southeast1, europe-west4 |\n", "# @markdown | a3-highgpu-8g | 8 NVIDIA_H100_80GB | us-central1, europe-west4, us-west1, asia-southeast1 |\n", "\n", "common_util = importlib.import_module(\n", " \"vertex-ai-samples.community-content.vertex_model_garden.model_oss.notebook_util.common_util\"\n", ")\n", "\n", "models, endpoints = {}, {}\n", "\n", "\n", "# Get the default cloud project id.\n", "PROJECT_ID = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n", "\n", "# Get the default region for launching jobs.\n", "if not REGION:\n", " REGION = os.environ[\"GOOGLE_CLOUD_REGION\"]\n", "\n", "# Initialize Vertex AI API.\n", "print(\"Initializing Vertex AI API.\")\n", "aiplatform.init(project=PROJECT_ID, location=REGION)\n", "\n", "! gcloud config set project $PROJECT_ID\n", "\n", "# @markdown Click \"Show Code\" to see more details." ] }, { "cell_type": "markdown", "metadata": { "id": "WWNxEb-vlosS" }, "source": [ "## Deploy with Ollama from Hugging Face" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "USB7dvYqvNdu" }, "outputs": [], "source": [ "# @title Deploy\n", "\n", "# @markdown This section downloads the `deepseek-r1:1.5b` or `deepseek-r1:671b` model from Ollama and deploys it to a Vertex AI Endpoint.\n", "# @markdown It takes ~20 minutes to complete the deployment.\n", "\n", "MODEL_ID = \"deepseek-r1:1.5b\" # @param [\"deepseek-r1:1.5b\", \"deepseek-r1:671b\"]\n", "\n", "# The pre-built serving docker image for Ollama.\n", "OLLAMA_DOCKER_URI = \"us-docker.pkg.dev/deeplearning-platform-release/vertex-model-garden/ollama-serve.cu125.0-5.ubuntu2204.py310\"\n", "\n", "# @markdown Set use_dedicated_endpoint to False if you don't want to use [dedicated endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#create-dedicated-endpoint). Note that [dedicated endpoint does not support VPC Service Controls](https://cloud.google.com/vertex-ai/docs/predictions/choose-endpoint-type), uncheck the box if you are using VPC-SC.\n", "use_dedicated_endpoint = True # @param {type:\"boolean\"}\n", "\n", "if \"1.5b\" in MODEL_ID:\n", " machine_type = \"g2-standard-8\"\n", " accelerator_type = \"NVIDIA_L4\"\n", " accelerator_count = 1\n", "elif \"671b\" in MODEL_ID:\n", " accelerator_type = \"NVIDIA_H100_80GB\"\n", " machine_type = \"a3-highgpu-8g\"\n", " accelerator_count = 8\n", "else:\n", " raise ValueError(\n", " f\"Recommended GPU setting not found for: {accelerator_type} and {MODEL_ID}.\"\n", " )\n", "\n", "context_length = 131072 if \"1.5b\" in MODEL_ID else 16384\n", "\n", "common_util.check_quota(\n", " project_id=PROJECT_ID,\n", " region=REGION,\n", " accelerator_type=accelerator_type,\n", " accelerator_count=accelerator_count,\n", " is_for_training=False,\n", ")\n", "\n", "\n", "def deploy_model_ollama(\n", " model_name: str,\n", " model_id: str,\n", " publisher: str,\n", " publisher_model_id: str,\n", " context_length: int,\n", " machine_type: str = \"g2-standard-8\",\n", " accelerator_type: str = \"NVIDIA_L4\",\n", " accelerator_count: int = 1,\n", " use_dedicated_endpoint: bool = False,\n", ") -> Tuple[aiplatform.Model, aiplatform.Endpoint]:\n", " \"\"\"Deploys models with Ollama on GPU in Vertex AI.\"\"\"\n", " endpoint = aiplatform.Endpoint.create(\n", " display_name=f\"{model_name}-endpoint\",\n", " dedicated_endpoint_enabled=use_dedicated_endpoint,\n", " )\n", "\n", " env_vars = {\n", " \"MODEL_ID\": model_id,\n", " \"CONTEXT_LENGTH\": context_length,\n", " }\n", "\n", " model = aiplatform.Model.upload(\n", " display_name=model_name,\n", " serving_container_image_uri=OLLAMA_DOCKER_URI,\n", " serving_container_ports=[8080],\n", " serving_container_predict_route=\"/generate\",\n", " serving_container_health_route=\"/ping\",\n", " serving_container_environment_variables=env_vars,\n", " model_garden_source_model_name=(\n", " f\"publishers/{publisher}/models/{publisher_model_id}\"\n", " ),\n", " )\n", "\n", " model.deploy(\n", " endpoint=endpoint,\n", " machine_type=machine_type,\n", " accelerator_type=accelerator_type,\n", " accelerator_count=accelerator_count,\n", " deploy_request_timeout=3600,\n", " system_labels={\n", " \"NOTEBOOK_NAME\": \"model_garden_ollama_deployment.ipynb\",\n", " \"DEPLOY_SOURCE\": \"notebook\",\n", " },\n", " )\n", " print(\"endpoint_name:\", endpoint.name)\n", "\n", " return model, endpoint\n", "\n", "\n", "models[\"ollama\"], endpoints[\"ollama\"] = deploy_model_ollama(\n", " model_name=common_util.get_job_name_with_datetime(prefix=MODEL_ID),\n", " model_id=MODEL_ID,\n", " publisher=\"deepseek-ai\",\n", " publisher_model_id=\"deepseek-r1\",\n", " context_length=context_length,\n", " machine_type=machine_type,\n", " accelerator_type=accelerator_type,\n", " accelerator_count=accelerator_count,\n", " use_dedicated_endpoint=use_dedicated_endpoint,\n", ")\n", "\n", "# @markdown Click \"Show Code\" to see more details." ] }, { "cell_type": "markdown", "metadata": { "id": "cFB1B17ylx0a" }, "source": [ "## Predict" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "Aa4e1-6FvRAP" }, "outputs": [], "source": [ "# @title Raw Predict\n", "\n", "# @markdown Once deployment succeeds, you can send requests to the endpoint with text prompts.\n", "\n", "# @markdown Click \"Show Code\" to see more details.\n", "\n", "# Loads an existing endpoint instance using the endpoint name:\n", "# - Using `endpoint_name = endpoint.name` allows us to get the\n", "# endpoint name of the endpoint `endpoint` created in the cell\n", "# above.\n", "# - Alternatively, you can set `endpoint_name = \"1234567890123456789\"` to load\n", "# an existing endpoint with the ID 1234567890123456789.\n", "# You may uncomment the code below to load an existing endpoint.\n", "\n", "# endpoint_name = \"\" # @param {type:\"string\"}\n", "# aip_endpoint_name = (\n", "# f\"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{endpoint_name}\"\n", "# )\n", "# endpoint = aiplatform.Endpoint(aip_endpoint_name)\n", "\n", "prompt = \"Why is the sky blue?\" # @param {type: \"string\"}\n", "# @markdown If you encounter the issue like `ServiceUnavailable: 503 Took too long to respond when processing`, you can reduce the maximum number of output tokens, such as set `max_tokens` as 20.\n", "max_tokens = 128 # @param {type:\"integer\"}\n", "temperature = 1.0 # @param {type:\"number\"}\n", "top_p = 0.7 # @param {type:\"number\"}\n", "top_k = -1 # @param {type:\"integer\"}\n", "\n", "# Overrides max_tokens and top_k parameters during inferences.\n", "instances = [\n", " {\n", " \"prompt\": prompt,\n", " \"options\": {\n", " \"num_predict\": max_tokens,\n", " \"temperature\": temperature,\n", " \"top_p\": top_p,\n", " \"top_k\": top_k,\n", " },\n", " },\n", "]\n", "response = endpoints[\"ollama\"].predict(\n", " instances=instances, use_dedicated_endpoint=use_dedicated_endpoint\n", ")\n", "\n", "for prediction in response.predictions:\n", " print(prediction)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "KRWGdMl3WEO5" }, "outputs": [], "source": [ "# @title Predict: Streaming Chat Completions\n", "\n", "if use_dedicated_endpoint:\n", " DEDICATED_ENDPOINT_DNS = endpoints[\"ollama\"].gca_resource.dedicated_endpoint_dns\n", "ENDPOINT_RESOURCE_NAME = endpoints[\"ollama\"].resource_name\n", "\n", "# @title Chat Completions Inference\n", "\n", "# @markdown Once deployment succeeds, you can send requests to the endpoint using the OpenAI SDK.\n", "\n", "# @markdown First you will need to install the SDK and some auth-related dependencies.\n", "\n", "! pip install -qU openai google-auth requests\n", "\n", "# @markdown Next fill out some request parameters:\n", "\n", "user_message = \"How is your day going?\" # @param {type: \"string\"}\n", "# @markdown If you encounter the issue like `ServiceUnavailable: 503 Took too long to respond when processing`, you can reduce the maximum number of output tokens, such as set `max_tokens` as 20.\n", "max_tokens = 50 # @param {type: \"integer\"}\n", "temperature = 1.0 # @param {type: \"number\"}\n", "stream = False # @param {type: \"boolean\"}\n", "\n", "# @markdown Now we can send a request.\n", "\n", "import google.auth\n", "import openai\n", "\n", "creds, project = google.auth.default()\n", "auth_req = google.auth.transport.requests.Request()\n", "creds.refresh(auth_req)\n", "\n", "BASE_URL = (\n", " f\"https://{REGION}-aiplatform.googleapis.com/v1beta1/{ENDPOINT_RESOURCE_NAME}\"\n", ")\n", "try:\n", " if use_dedicated_endpoint:\n", " BASE_URL = f\"https://{DEDICATED_ENDPOINT_DNS}/v1beta1/{ENDPOINT_RESOURCE_NAME}\"\n", "except NameError:\n", " pass\n", "\n", "client = openai.OpenAI(base_url=BASE_URL, api_key=creds.token)\n", "\n", "model_response = client.chat.completions.create(\n", " model=\"\",\n", " messages=[{\"role\": \"user\", \"content\": user_message}],\n", " temperature=temperature,\n", " max_tokens=max_tokens,\n", " stream=stream,\n", ")\n", "\n", "if stream:\n", " usage = None\n", " contents = []\n", " for chunk in model_response:\n", " if chunk.usage is not None:\n", " usage = chunk.usage\n", " continue\n", " print(chunk.choices[0].delta.content, end=\"\")\n", " contents.append(chunk.choices[0].delta.content)\n", " print(f\"\\n\\n{usage}\")\n", "else:\n", " print(model_response)\n", "\n", "# @markdown Click \"Show Code\" to see more details." ] }, { "cell_type": "markdown", "metadata": { "id": "tAelDidov5AW" }, "source": [ "## Clean up resources" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "8SeZCFo5v7z-" }, "outputs": [], "source": [ "# @title Delete the models and endpoints\n", "# @markdown Delete the experiment models and endpoints to recycle the resources\n", "# @markdown and avoid unnecessary continuous charges that may incur.\n", "\n", "# Undeploy model and delete endpoint.\n", "for endpoint in endpoints.values():\n", " endpoint.delete(force=True)\n", "\n", "# Delete models.\n", "for model in models.values():\n", " model.delete()" ] } ], "metadata": { "colab": { "name": "model_garden_ollama_deployment.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }