{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "j7YkN1cHfdNx"
},
"outputs": [],
"source": [
"# Copyright 2025 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ukKqC-y7Meaj"
},
"source": [
"# Getting Started with Model Optimizer\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fmodel-optimizer%2Fintro_model_optimizer.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/model-optimizer/intro_model_optimizer.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.svgrepo.com/download/217753/github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
"</table>\n",
"\n",
"<div style=\"clear: both;\"></div>\n",
"\n",
"\n",
"\n",
"<b>Share to:</b>\n",
"\n",
"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/model-optimizer/intro_model_optimizer.ipynb\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6e2b37e51e86"
},
"source": [
"| Author |\n",
"| --- |\n",
"| [Nardos Alemu](https://github.com/nardosalemu) |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u26sA--3MqnL"
},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MFgnm8KDNJQq"
},
"source": [
"Model Optimizer intelligently routes user queries across models (Pro, Flash, etc.) for resource optimization. This notebook provides an introduction on how to use Model Optimizer with the Google Gen AI SDK.\n",
"\n",
"Learn more about [Model Optimizer](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/vertex-ai-model-optimizer)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KXWvcF8MNKoz"
},
"source": [
"### Objectives\n",
"\n",
"This tutorial shows how to:\n",
"\n",
"- Send prompt queries to Model Optimizer.\n",
"- Configure routing preferences to optimize quality or cost.\n",
"- Utilize Model Optimizer for single-turn and multi-turn conversations.\n",
"- Execute function calls through Model Optimizer."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YgFD99zhNTl4"
},
"source": [
"### Costs\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"- Vertex AI\n",
"\n",
"Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7oz3w7mNOY2y"
},
"source": [
"## Getting Started"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2qQaXxTIOx5N"
},
"source": [
"### Install Google Gen AI SDK\n",
"\n",
"Learn more about [Google Gen AI SDK](https://cloud.google.com/vertex-ai/generative-ai/docs/sdks/overview)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "x0FvEUXQTqq8"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-genai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2ecdac7df988"
},
"source": [
"### Authenticate your notebook environment (Colab only)\n",
"\n",
"If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "25b2cafd04a0"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ScgYz3iePGTi"
},
"source": [
"### Set Google Cloud project information\n",
"\n",
"To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
"\n",
"Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "njigDi64pMH6"
},
"outputs": [],
"source": [
"# Use the environment variable if the user doesn't provide Project ID.\n",
"import os\n",
"\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
"LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
"\n",
"from google import genai\n",
"\n",
"client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "10YwtoGTifpr"
},
"source": [
"### Import libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tSeSuXHNin_K"
},
"outputs": [],
"source": [
"from IPython.display import Markdown, display\n",
"from google.genai.types import (\n",
" FeatureSelectionPreference,\n",
" FunctionDeclaration,\n",
" GenerateContentConfig,\n",
" ModelSelectionConfig,\n",
" Part,\n",
" Tool,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F9t-K8prqP2E"
},
"source": [
"### Load Model & set System Instruction\n",
"\n",
"`model-optimizer-exp-04-09` is the model used for Model Optimizer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZXlCsPRdybGp"
},
"outputs": [],
"source": [
"MODEL_ID = \"model-optimizer-exp-04-09\"\n",
"system_instruction = \"\" # @param {'type': 'string'}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9669b4ec9703"
},
"source": [
"### Choose Routing Option\n",
"\n",
"Model Optimizer intelligently routes user queries based on user-defined cost and quality preferences. This dynamic routing capability enables users to optimize their interactions according to their specific needs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "653f0e04f4b2"
},
"outputs": [],
"source": [
"model_selection_config = ModelSelectionConfig(\n",
" feature_selection_preference=FeatureSelectionPreference.BALANCED # Options: PRIORITIZE_QUALITY, BALANCED, PRIORITIZE_COST\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LU7XjOvWw9AX"
},
"source": [
"### Send a request\n",
"\n",
"The following function is defined to send a request to the model. `generate_content` is the method used to generate response. The method supports streaming with `stream=True`. The result has the same type as the non-streaming case, but you can iterate over the response chunks as they become available."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c3775672ea59"
},
"outputs": [],
"source": [
"def generate_content(prompt, stream: bool = False) -> None:\n",
" if stream:\n",
" output_text = \"\"\n",
" markdown_display_area = display(Markdown(output_text), display_id=True)\n",
"\n",
" for chunk in client.models.generate_content_stream(\n",
" model=MODEL_ID,\n",
" contents=prompt,\n",
" config=GenerateContentConfig(\n",
" system_instruction=system_instruction,\n",
" model_selection_config=model_selection_config,\n",
" ),\n",
" ):\n",
" output_text += chunk.text\n",
" markdown_display_area.update(Markdown(output_text))\n",
" else:\n",
" response = client.models.generate_content(\n",
" model=MODEL_ID,\n",
" contents=prompt,\n",
" config=GenerateContentConfig(\n",
" system_instruction=system_instruction,\n",
" model_selection_config=model_selection_config,\n",
" ),\n",
" )\n",
" display(Markdown(response.text))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Alp6hJEhqu1-"
},
"source": [
"#### Single-turn query\n",
"\n",
"A single-turn query is the most basic type of interaction with the model, consisting of a single user prompt and a single model response. It's suitable for simple requests where context is not essential."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "q2ilitotdjF3"
},
"outputs": [],
"source": [
"prompt = \"Write a poem\" # @param {'type': 'string'}\n",
"generate_content(prompt, stream=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PRgmFGJUySkh"
},
"source": [
"#### Multi-turn queries\n",
"\n",
"Multi-turn queries involve ongoing conversations with the model, where the model retains context from previous turns to generate relevant responses. These allow for more complex interactions and the development of conversational flows.\n",
"\n",
"There are two ways to structure multi-turn queries:\n",
"\n",
"- **List of text prompts:** A simple list of strings, where each string represents a turn in the conversation. This format is suitable for basic multi-turn scenarios.\n",
"- **List of `Content` objects:** A more structured format using `Content` objects, where you can explicitly define the role (user or model) and parts of each turn. This format provides greater control and clarity, especially when dealing with complex conversations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ueGtpt_3yUnc"
},
"outputs": [],
"source": [
"prompt = [\n",
" \"What is x multiplied by 2?\",\n",
" \"x = 42\",\n",
"]\n",
"generate_content(prompt)"
]
},
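{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same conversation can also be expressed as a list of `Content` objects. The sketch below makes each role explicit; the model turn shown is hypothetical, included only to illustrate the structure."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.genai.types import Content\n",
"\n",
"# Each turn declares its role (\"user\" or \"model\") and a list of parts.\n",
"history = [\n",
"    Content(role=\"user\", parts=[Part.from_text(text=\"What is x multiplied by 2?\")]),\n",
"    # Illustrative model turn; an actual model reply may differ.\n",
"    Content(role=\"model\", parts=[Part.from_text(text=\"What is the value of x?\")]),\n",
"    Content(role=\"user\", parts=[Part.from_text(text=\"x = 42\")]),\n",
"]\n",
"generate_content(history)"
]
},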
{
"cell_type": "markdown",
"metadata": {
"id": "R3AhsxKprN-X"
},
"source": [
"#### Function-call queries\n",
"\n",
"Model Optimizer also supports calls to functions to perform actions and retrieve information. You define the function's signature (name and parameters), and the model can generate a call to that function with appropriate arguments when it determines that doing so would be helpful. In this example, we define a simple function `get_current_weather` and demonstrate how the model can call it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NZHQ5_ZXJ997"
},
"outputs": [],
"source": [
"def get_current_weather(location: str, unit: str | None = \"celsius\") -> dict:\n",
" \"\"\"Gets weather in the specified location.\n",
"\n",
" Args:\n",
" location: The location for which to get the weather.\n",
" unit: Temperature unit. Can be Celsius or Fahrenheit. Default: Celsius.\n",
"\n",
" Returns:\n",
" The weather information as a dict.\n",
" \"\"\"\n",
" return {\n",
" \"location\": location,\n",
" \"unit\": unit,\n",
" \"weather\": \"Super nice, but maybe a bit hot.\",\n",
" }\n",
"\n",
"\n",
"_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT = {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\n",
" \"type\": \"string\",\n",
" \"enum\": [\n",
" \"celsius\",\n",
" \"fahrenheit\",\n",
" ],\n",
" },\n",
" },\n",
" \"required\": [\"location\"],\n",
"}\n",
"\n",
"_REQUEST_FUNCTION_RESPONSE_SCHEMA_STRUCT = {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\n",
" \"type\": \"string\",\n",
" \"enum\": [\n",
" \"celsius\",\n",
" \"fahrenheit\",\n",
" ],\n",
" },\n",
" \"weather\": {\n",
" \"type\": \"string\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"get_current_weather_func = FunctionDeclaration(\n",
" name=\"get_current_weather\",\n",
" description=\"Get the current weather in a given location\",\n",
" parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,\n",
" response=_REQUEST_FUNCTION_RESPONSE_SCHEMA_STRUCT,\n",
")\n",
"\n",
"\n",
"prompt = \"What is the weather like in Boston?\"\n",
"\n",
"weather_tool = Tool(function_declarations=[get_current_weather_func])\n",
"\n",
"response = client.models.generate_content(\n",
" model=MODEL_ID,\n",
" contents=prompt,\n",
" config=GenerateContentConfig(\n",
" tools=[weather_tool],\n",
" model_selection_config=model_selection_config,\n",
" system_instruction=system_instruction,\n",
" ),\n",
")\n",
"\n",
"print(\"Called function: \", response.function_calls)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "234295d42fed"
},
"source": [
"To preview more details:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "592cff825422"
},
"outputs": [],
"source": [
"function_map = {\"get_current_weather\": get_current_weather}\n",
"\n",
"function_response_parts = []\n",
"for function_call in response.function_calls:\n",
" function = function_map[function_call.name]\n",
" function_result = function(**function_call.args)\n",
" function_response_parts.append(\n",
" Part.from_function_response(\n",
" name=function_call.name,\n",
" response=function_result,\n",
" )\n",
" )\n",
"\n",
"print(function_response_parts)"
]
}
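,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you can send the function responses back to the model to get a natural-language answer grounded in the tool output. This is a minimal sketch that replays the original prompt, the model's function-call turn, and the function responses, following the Gen AI SDK's turn structure."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.genai.types import Content\n",
"\n",
"follow_up = client.models.generate_content(\n",
"    model=MODEL_ID,\n",
"    contents=[\n",
"        Content(role=\"user\", parts=[Part.from_text(text=prompt)]),\n",
"        # The model turn containing the function call(s).\n",
"        response.candidates[0].content,\n",
"        # Our function results, returned to the model.\n",
"        Content(role=\"user\", parts=function_response_parts),\n",
"    ],\n",
"    config=GenerateContentConfig(\n",
"        tools=[weather_tool],\n",
"        model_selection_config=model_selection_config,\n",
"        system_instruction=system_instruction,\n",
"    ),\n",
")\n",
"\n",
"display(Markdown(follow_up.text))"
]
}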
],
"metadata": {
"colab": {
"name": "intro_model_optimizer.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}