{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c3_K0GGSTrhd"
},
"outputs": [],
"source": [
"# Copyright 2023 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7VQkf8sFTeDo"
},
"source": [
"# Getting Started with Text Embeddings + Vertex AI Vector Search\n",
"\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fembeddings%2Fintro-textemb-vectorsearch.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/embeddings/intro-textemb-vectorsearch.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://console.cloud.google.com/bigquery/import?url=https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\">\n",
" <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/bigquery/v1/32px.svg\" alt=\"BigQuery Studio logo\"><br> Open in BigQuery Studio\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\">\n",
" <img width=\"32px\" src=\"https://www.svgrepo.com/download/217753/github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
" <a href=\"https://www.cloudskillsboost.google/catalog_lab/31063\">\n",
" <img width=\"32px\" src=\"https://cdn.qwiklabs.com/assets/gcp_cloud-e3a77215f0b8bfa9b3f611c0d2208c7e8708ed31.svg\" alt=\"Google Cloud logo\"><br> Open in Cloud Skills Boost\n",
" </a>\n",
" </td>\n",
"</table>\n",
"\n",
"<div style=\"clear: both;\"></div>\n",
"\n",
"<b>Share to:</b>\n",
"\n",
"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/intro-textemb-vectorsearch.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a> "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4d742715e6de"
},
"source": [
"| | |\n",
"|-|-|\n",
"|Author(s) | [Smitha Venkat](https://github.com/smitha-google), [Kaz Sato](https://github.com/kazunori279)|"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "25a71983a324"
},
"source": [
"## Introduction\n",
"\n",
"**YouTube Video: What are text embeddings?**\n",
"\n",
"<a href=\"https://www.youtube.com/watch?v=vlcQV4j2kTo&list=PLIivdWyY5sqLvGdVLJZh2EMax97_T-OIB\" target=\"_blank\">\n",
" <img src=\"https://img.youtube.com/vi/vlcQV4j2kTo/maxresdefault.jpg\" alt=\"What are text embeddings?\" width=\"500\">\n",
"</a>\n",
"\n",
"In this tutorial, you learn how to use Google Cloud AI tools to quickly bring the power of Large Language Models to enterprise systems. \n",
"\n",
"This tutorial covers the following:\n",
"\n",
"* What are embeddings - what business challenges do they help solve?\n",
"* Understanding Text with Vertex AI Text Embeddings\n",
"* Find Embeddings fast with Vertex AI Vector Search\n",
"* Grounding LLM outputs with Vector Search\n",
"\n",
"This tutorial is based on [the blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-to-use-grounding-for-your-llms-with-text-embeddings), combined with sample code.\n",
"\n",
"### Prerequisites\n",
"\n",
"This tutorial is designed for developers who has basic knowledge and experience with Python programming and machine learning.\n",
"\n",
"If you are not reading this tutorial in Qwiklab, then you need to have a Google Cloud project that is linked to a billing account to run this. Please go through [this document](https://cloud.google.com/vertex-ai/docs/start/cloud-environment) to create a project and setup a billing account for it."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2pImjuenUIQz"
},
"source": [
"### How much will this cost?\n",
"\n",
"In case you are using your own Cloud project, not a temporary project on Qwiklab, you need to spend roughly a few US dollars to finish this tutorial.\n",
"\n",
"The pricing of the Cloud services we will use in this tutorial are available in the following pages:\n",
"\n",
"- [Vertex AI Embeddings for Text](https://cloud.google.com/vertex-ai/pricing#generative_ai_models)\n",
"- [Vertex AI Vector Search](https://cloud.google.com/vertex-ai/pricing#matchingengine)\n",
"- [BigQuery](https://cloud.google.com/bigquery/pricing)\n",
"- [Cloud Storage](https://cloud.google.com/storage/pricing)\n",
"- [Vertex AI Workbench](https://cloud.google.com/vertex-ai/pricing#notebooks) if you use one\n",
"\n",
"You can use the [Pricing Calculator](https://cloud.google.com/products/calculator) to generate a cost estimate based on your projected usage. The following is an example of rough cost estimation with the calculator, assuming you will go through this tutorial a couple of time.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/vs-quickstart/pricing.png\" width=\"50%\"/>\n",
"\n",
"### **Warning: delete your objects after the tutorial**\n",
"\n",
"In case you are using your own Cloud project, please make sure to delete all the Indexes, Index Endpoints and Cloud Storage buckets (and the Workbench instance if you use one) after finishing this tutorial. Otherwise the remaining assets would incur unexpected costs.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6Fu2OoUDTQ6w"
},
"source": [
"# Bringing Gen AI and LLMs to production services\n",
"\n",
"Many people are now starting to think about how to bring Gen AI and LLMs to production services, and facing with several challenges.\n",
"\n",
"- \"How to integrate LLMs or AI chatbots with existing IT systems, databases and business data?\"\n",
"- \"We have thousands of products. How can I let LLM memorize them all precisely?\"\n",
"- \"How to handle the hallucination issues in AI chatbots to build a reliable service?\"\n",
"\n",
"Here is a quick solution: **grounding** with **embeddings** and **vector search**.\n",
"\n",
"What is grounding? What are embedding and vector search? In this tutorial, we will learn these crucial concepts to build reliable Gen AI services for enterprise use. But before we dive deeper, let's try the demo below."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ORqZYLgTm9pJ"
},
"source": [
"<img src=\"https://storage.googleapis.com/gweb-cloudblog-publish/original_images/1._demo_animation.gif\" width=\"50%\"/>\n",
"\n",
"**Exercise: Try the Stack Overflow semantic search demo:**\n",
"\n",
"This demo is available as a [public live demo](https://ai-demos.dev/). Select \"STACKOVERFLOW\" and enter any coding question as a query, so it runs a text search on **8 million** questions posted on [Stack Overflow](https://stackoverflow.com/). Try the text semantic search with some queries like 'How to shuffle rows in SQL?' or arbitrary programming questions.\n",
"\n",
"In this tutorial, we are going to see how to build a similar search experience - what is involved in building solutions like this using Vertex AI Embeddings API and Vector Search."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H1MAIOkCw35V"
},
"source": [
"# What is Embeddings?\n",
"\n",
"With the rise of LLMs, why is it becoming important for IT engineers and ITDMs to understand how they work?\n",
"\n",
"In traditional IT systems, most data is organized as structured or tabular data, using simple keywords, labels, and categories in databases and search engines.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/1.png\" width=\"50%\"/>\n",
"\n",
"In contrast, AI-powered services arrange data into a simple data structure known as \"embeddings.\"\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/2.png\" width=\"50%\"/>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hJqjBmQsxz2Z"
},
"source": [
"Once trained with specific content like text, images, or any content, AI creates a space called \"embedding space\", which is essentially a map of the content's meaning.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/3.png\" width=\"50%\"/>\n",
"\n",
"AI can identify the location of each content on the map, that's what embedding is.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/4.png\" width=\"50%\"/>\n",
"\n",
"Let's take an example where a text discusses movies, music, and actors, with a distribution of 10%, 2%, and 30%, respectively. In this case, the AI can create an embedding with three values: 0.1, 0.02, and 0.3, in 3 dimensional space.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/5.png\" width=\"50%\"/>\n",
"\n",
"AI can put content with similar meanings closely together in the space."
]
},
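{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here is a toy sketch (hand-made 3-dimensional vectors, not the real Embeddings API) that places a few hypothetical texts in the movies/music/actors space from the example above and compares them with a dot product: texts with a similar topic mix score higher."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy illustration only: hand-made 3-dimensional \"embeddings\"\n",
"# (movies, music, actors). Real models use hundreds of dimensions.\n",
"import numpy as np\n",
"\n",
"text_a = np.array([0.1, 0.02, 0.3])  # the example text above\n",
"text_b = np.array([0.15, 0.01, 0.28])  # another text, also mostly about actors\n",
"text_c = np.array([0.02, 0.4, 0.05])  # a text mostly about music\n",
"\n",
"print(\"a vs b:\", np.dot(text_a, text_b))  # similar topic mix -> larger value\n",
"print(\"a vs c:\", np.dot(text_a, text_c))  # different topic mix -> smaller value"
]
},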
{
"cell_type": "markdown",
"metadata": {
"id": "A5z7vyTyzk_4"
},
"source": [
"This is how Google organizes data across various services like Google Search, YouTube, Play, and many others, to provide search results and recommendations with relevant content.\n",
"\n",
"Embeddings can also be used to represent different types of things in businesses, such as products, users, user activities, conversations, music & videos, signals from IoT sensors, and so on."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tpKVmyEe0ab9"
},
"source": [
"AI and Embeddings are now playing a crucial role in creating a new way of human-computer interaction.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/6.png\" width=\"50%\"/>\n",
"\n",
"AI organizes data into embeddings, which represent what the user is looking for, the meaning of contents, or many other things you have in your business. This creates a new level of user experience that is becoming the new standard.\n",
"\n",
"To learn more about embeddings, [Foundational courses: Embeddings on Google Machine Learning Crush Course](https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture) and [Meet AI's multitool: Vector embeddings by Dale Markowitz](https://cloud.google.com/blog/topics/developers-practitioners/meet-ais-multitool-vector-embeddings) are great materials.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ovQpiL2GUEXa"
},
"source": [
"# Vertex AI Embeddings for Text\n",
"\n",
"With the [Vertex AI Embeddings for Text](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings), you can easily create a text embedding with LLM. The product is also available on [Vertex AI Model Garden](https://cloud.google.com/model-garden)\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/7.png\" width=\"50%\"/>\n",
"\n",
"This API is designed to extract embeddings from texts. It can take text input up to 2048 input tokens, and outputs 768 dimensional text embeddings."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nwJHDPG7lU52"
},
"source": [
"## LLM text embedding business use cases\n",
"\n",
"With the embedding API, you can apply the innovation of embeddings, combined with the LLM capability, to various text processing tasks, such as:\n",
"\n",
"**LLM-enabled Semantic Search**: text embeddings can be used to represent both the meaning and intent of a user's query and documents in the embedding space. Documents that have similar meaning to the user's query intent will be found fast with vector search technology. The model is capable of generating text embeddings that capture the subtle nuances of each sentence and paragraphs in the document.\n",
"\n",
"**LLM-enabled Text Classification**: LLM text embeddings can be used for text classification with a deep understanding of different contexts without any training or fine-tuning (so-called zero-shot learning). This wasn't possible with the past language models without task-specific training.\n",
"\n",
"**LLM-enabled Recommendation**: The text embedding can be used for recommendation systems as a strong feature for training recommendation models such as Two-Tower model. The model learns the relationship between the query and candidate embeddings, resulting in next-gen user experience with semantic product recommendation.\n",
"\n",
"LLM-enabled Clustering, Anomaly Detection, Sentiment Analysis, and more, can be also handled with the LLM-level deep semantics understanding.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ga5A7koYlvlZ"
},
"source": [
"## Sorting 8 million texts at \"librarian-level\" precision\n",
"\n",
"Vertex AI Embeddings for Text has an embedding space with 768 dimensions. As explained earlier, the space represents a huge map of a wide variety of texts in the world, organized by their meanings. With each input text, the model can find a location (embedding) in the map.\n",
"\n",
"By visualizing the embedding space, you can actually observe how the model sorts the texts at the \"librarian-level\" precision.\n",
"\n",
"**Exercise: Try the Nomic AI Atlas**\n",
"\n",
"[Nomic AI](http://nomic.ai/) provides a platform called Atlas for storing, visualizing and interacting with embedding spaces with high scalability and in a smooth UI, and they worked with Google for visualizing the embedding space of the 8 million Stack Overflow questions. You can try exploring around the space, zooming in and out to each data point on your browser on this page, courtesy of Nomic AI.\n",
"\n",
"The embedding space represents a huge map of texts, organized by their meanings\n",
"With each input text, the model can find a location (embedding) in the map\n",
"Like a librarian reading through millions of texts, sorting them with millions of nano-categories\n",
"\n",
"Try exploring it [here](https://atlas.nomic.ai/map/edaff028-12b5-42a0-8e8b-6430c9b8222b/bcb42818-3581-4fb5-ac30-9883d01f98ec). Zoom into a few categories, point each dots, and see how the LLM is sorting similar questions close together in the space.\n",
"\n",
"<img src=\"https://storage.googleapis.com/gweb-cloudblog-publish/images/4._Nomic_AI_Atlas.max-2200x2200.png\" width=\"50%\"/>\n",
"\n",
"### The librarian-level semantic understanding\n",
"\n",
"Here are the examples of the librarian-level semantic understanding by Embeddings API with Stack Overflow questions.\n",
"\n",
"<img src=\"https://storage.googleapis.com/gweb-cloudblog-publish/images/5._semantic_understanding.max-2200x2200.png\" width=\"50%\"/>\n",
"\n",
"For example, the model thinks the question \"Does moving the request line to a header frame require an app change?\" is similar to the question \"Does an application developed on HTTP1x require modifications to run on HTTP2?\". That is because The model knows both questions talk about what's the change required to support the HTTP2 header frame.\n",
"\n",
"Note that this demo didn't require any training or fine-tuning with computer programming specific datasets. This is the innovative part of the zero-shot learning capability of the LLM. It can be applied to a wide variety of industries, including finance, healthcare, retail, manufacturing, construction, media, and more, for deep semantic search on the industry-focused business documents without spending time and cost for collecting industry specific datasets and training models."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-iOWOKnIvYxf"
},
"source": [
"# Text Embeddings in Action\n",
"\n",
"Lets try using Text Embeddings in action with actual sample code."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AtXnXhF8U-8R"
},
"source": [
"## Setup\n",
"\n",
"Before get started with the Vertex AI services, we need to setup the following.\n",
"\n",
"* Install Python SDK\n",
"* Environment variables\n",
"* Authentication (Colab only)\n",
"* Enable APIs\n",
"* Set IAM permissions"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UjnvWl6FLUlF"
},
"source": [
"### Install Python SDK\n",
"\n",
"Vertex AI, Cloud Storage and BigQuery APIs can be accessed with multiple ways including REST API and Python SDK. In this tutorial we will use the SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FZgLGALt_al7"
},
"outputs": [],
"source": [
"%pip install --upgrade google-genai google-cloud-aiplatform google-cloud-storage 'google-cloud-bigquery[pandas]'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nCoTvkOJoh76"
},
"source": [
"### Environment variables\n",
"\n",
"Sets environment variables. If asked, please replace the following `[your-project-id]` with your project ID and run it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fkmvFRrj3nQI"
},
"outputs": [],
"source": [
"# generate an unique id for this session\n",
"from datetime import datetime\n",
"import os\n",
"\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
"LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
"\n",
"UID = datetime.now().strftime(\"%m%d%H%M\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cd7dcf92f6d1"
},
"source": [
"### Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "378dc89ca80e"
},
"outputs": [],
"source": [
"import random\n",
"import time\n",
"\n",
"from google import genai\n",
"from google.cloud import aiplatform, bigquery\n",
"import numpy as np\n",
"import tqdm"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ph7mDSMRVTIZ"
},
"source": [
"### Authentication (Colab only)\n",
"\n",
"If you are running this notebook on Colab, you will need to run the following cell authentication. This step is not required if you are using Vertex AI Workbench as it is pre-authenticated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5jQkFtlimNXR"
},
"outputs": [],
"source": [
"import sys\n",
"\n",
"# if it's Colab runtime, authenticate the user with Google Cloud\n",
"if \"google.colab\" in sys.modules:\n",
" from google.colab import auth\n",
"\n",
" auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jUPbl4IFLmC2"
},
"source": [
"### Enable APIs\n",
"\n",
"Run the following to enable APIs for Compute Engine, Vertex AI, Cloud Storage and BigQuery with this Google Cloud project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qGf0qMMQNond"
},
"outputs": [],
"source": [
"! gcloud services enable compute.googleapis.com aiplatform.googleapis.com storage.googleapis.com bigquery.googleapis.com --project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8cF8rkN3Lnhq"
},
"source": [
"### Set IAM permissions\n",
"\n",
"Also, we need to add access permissions to the default service account for using those services.\n",
"\n",
"- Go to [the IAM page](https://console.cloud.google.com/iam-admin/) in the Console\n",
"- Look for the principal for default compute service account. It should look like: `<project-number>-compute@developer.gserviceaccount.com`\n",
"- Click the edit button at right and click `ADD ANOTHER ROLE` to add `Vertex AI User`, `BigQuery User` and `Storage Admin` to the account.\n",
"\n",
"This will look like this:\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/vs-quickstart/iam-setting.png\" width=\"50%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mahCxLXHMIls"
},
"source": [
"## Getting Started with Vertex AI Embeddings for Text\n",
"\n",
"Now it's ready to get started with embeddings!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rq07_-o0VoZD"
},
"source": [
"### Data Preparation\n",
"\n",
"We will be using [the Stack Overflow public dataset](https://console.cloud.google.com/marketplace/product/stack-exchange/stack-overflow) hosted on BigQuery table `bigquery-public-data.stackoverflow.posts_questions`. This is a very big dataset with 23 million rows that doesn't fit into the memory. We are going to limit it to 1000 rows for this tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "snrzPsEQDH4S"
},
"outputs": [],
"source": [
"# load the BQ Table into a Pandas DataFrame\n",
"QUESTIONS_SIZE = 100\n",
"\n",
"bq_client = bigquery.Client(project=PROJECT_ID)\n",
"QUERY_TEMPLATE = \"\"\"\n",
" SELECT distinct q.id, q.title\n",
" FROM (SELECT * FROM `bigquery-public-data.stackoverflow.posts_questions`\n",
" where Score > 0 ORDER BY View_Count desc) AS q\n",
" LIMIT {limit} ;\n",
" \"\"\"\n",
"query = QUERY_TEMPLATE.format(limit=QUESTIONS_SIZE)\n",
"query_job = bq_client.query(query)\n",
"rows = query_job.result()\n",
"df = rows.to_dataframe()\n",
"\n",
"# examine the data\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j6022U1FWzpb"
},
"source": [
"### Call the API to generate embeddings\n",
"\n",
"With the Stack Overflow dataset, we will use the `title` column (the question title) and generate embedding for it with Embeddings for Text API. The API is available in the Google Gen AI SDK.\n",
"\n",
"You may see some warning messages from the TensorFlow library but you can ignore them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pY8M4DqO8wGx"
},
"outputs": [],
"source": [
"client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FqdVsgZDb_hc"
},
"source": [
"In this tutorial we will use `text-embedding-005` model for getting text embeddings. Please take a look at [Supported models](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings#supported_models) on the doc to see the list of supported models.\n",
"\n",
"You can pass up to 5 texts at once in a call. But there is a caveat. By default, the text embeddings API has a \"request per minute\" quota set to 60 for new Cloud projects and 600 for projects with usage history (see [Quotas and limits](https://cloud.google.com/vertex-ai/generative-ai/docs/quotas) to check the latest quota value for `base_model:text-embedding-005`). So, rather than using the function directly, you may want to define a wrapper like below to limit under 10 calls per second, and pass 5 texts each time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "785ec9e4f544"
},
"outputs": [],
"source": [
"TEXT_EMBEDDING_MODEL_ID = \"text-embedding-005\" # @param {type: \"string\"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8HUb9u_P2VWW"
},
"outputs": [],
"source": [
"# get embeddings for a list of texts\n",
"BATCH_SIZE = 5\n",
"\n",
"\n",
"def get_embeddings_wrapper(texts: list[str]) -> list[list[float]]:\n",
" embeddings: list[list[float]] = []\n",
" for i in tqdm.tqdm(range(0, len(texts), BATCH_SIZE)):\n",
" time.sleep(1) # to avoid the quota error\n",
" response = client.models.embed_content(\n",
" model=TEXT_EMBEDDING_MODEL_ID, contents=texts[i : i + BATCH_SIZE]\n",
" )\n",
" embeddings = embeddings + [e.values for e in response.embeddings]\n",
" return embeddings"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aK4eTSPfcEuh"
},
"source": [
"The following code will get embedding for the question titles and add them as a new column `embedding` to the DataFrame. This will take a few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FcqPvu4PluN1"
},
"outputs": [],
"source": [
"# get embeddings for the question titles and add them as \"embedding\" column\n",
"df = df.assign(embedding=get_embeddings_wrapper(list(df.title)))\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nB53SiJjVN6e"
},
"source": [
"## Look at the embedding similarities\n",
"\n",
"Let's see how these embeddings are organized in the embedding space with their meanings by quickly calculating the similarities between them and sorting them.\n",
"\n",
"As embeddings are vectors, you can calculate similarity between two embeddings by using one of the popular metrics like the followings:\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/8.png\" width=\"50%\"/>\n",
"\n",
"Which metric should we use? Usually it depends on how each model is trained. In case of the Google model, we need to use inner product (dot product).\n",
"\n",
"In the following code, it picks up one question randomly and uses the `numpy` `np.dot` function to calculate the similarities between the question and other questions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lKs6jSu7NiM6"
},
"outputs": [],
"source": [
"# pick one of them as a key question\n",
"key = random.randint(0, len(df))\n",
"\n",
"# calc dot product between the key and other questions\n",
"embs = np.array(df.embedding.to_list())\n",
"similarities = np.dot(embs[key], embs.T)\n",
"\n",
"# print similarities for the first 5 questions\n",
"similarities[:5]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "srM04lJBQp4w"
},
"source": [
"Finally, sort the questions with the similarities and print the list."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lTUVvj9FQlab"
},
"outputs": [],
"source": [
"# print the question\n",
"print(f\"Key question: {df.title[key]}\\n\")\n",
"\n",
"# sort and print the questions by similarities\n",
"sorted_questions = sorted(\n",
" zip(df.title, similarities), key=lambda x: x[1], reverse=True\n",
")[:20]\n",
"for i, (question, similarity) in enumerate(sorted_questions):\n",
" print(f\"{similarity:.4f} {question}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S75SQzAg1wHV"
},
"source": [
"# Find embeddings fast with Vertex AI Vector Search\n",
"\n",
"As we have explained above, you can find similar embeddings by calculating the distance or similarity between the embeddings.\n",
"\n",
"But this isn't easy when you have millions or billions of embeddings. For example, if you have 1 million embeddings with 768 dimensions, you need to repeat the distance calculations for 1 million x 768 times. This would take some seconds - too slow."
]
},
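{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below illustrates the brute-force approach at a toy scale with random vectors (an illustrative assumption, not the tutorial dataset): every stored embedding is compared against the query, so the cost grows linearly with the number of stored embeddings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Brute-force (exhaustive) search sketch with random vectors\n",
"import time\n",
"\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"database = rng.normal(size=(100_000, 768)).astype(np.float32)  # toy \"index\"\n",
"query = rng.normal(size=(768,)).astype(np.float32)\n",
"\n",
"start = time.time()\n",
"scores = database @ query  # one dot product per stored embedding\n",
"top5 = np.argsort(-scores)[:5]  # rank every score to find the best matches\n",
"print(f\"top-5 ids: {top5}, elapsed: {time.time() - start:.3f} sec\")"
]
},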
{
"cell_type": "markdown",
"metadata": {
"id": "0sjhTy-a47YH"
},
"source": [
"So the researchers have been studying a technique called [Approximate Nearest Neighbor (ANN)](https://en.wikipedia.org/wiki/Nearest_neighbor_search) for faster search. ANN uses \"vector quantization\" for separating the space into multiple spaces with a tree structure. This is similar to the index in relational databases for improving the query performance, enabling very fast and scalable search with billions of embeddings.\n",
"\n",
"With the rise of LLMs, the ANN is getting popular quite rapidly, known as the Vector Search technology.\n",
"\n",
"<img src=\"https://storage.googleapis.com/gweb-cloudblog-publish/images/7._ANN.1143068821171228.max-2200x2200.png\" width=\"50%\"/>\n",
"\n",
"In 2020, Google Research published a new ANN algorithm called [ScaNN](https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html). It is considered one of the best ANN algorithms in the industry, also the most important foundation for search and recommendation in major Google services such as Google Search, YouTube and many others."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xVOL8BgM2isz"
},
"source": [
"## What is Vertex AI Vector Search?\n",
"\n",
"Google Cloud developers can take the full advantage of Google's vector search technology with [Vertex AI Vector Search](https://cloud.google.com/vertex-ai/docs/vector-search/overview) (previously called Matching Engine). With this fully managed service, developers can just add the embeddings to its index and issue a search query with a key embedding for the blazingly fast vector search. In the case of the Stack Overflow demo, Vector Search can find relevant questions from 8 million embeddings in tens of milliseconds.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/textemb-vs-notebook/9.png\" width=\"50%\"/>\n",
"\n",
"With Vector Search, you don't need to spend much time and money building your own vector search service from scratch or using open source tools if your goal is high scalability, availability and maintainability for production systems."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uBt8tjidSzyU"
},
"source": [
"## Get Started with Vector Search\n",
"\n",
"When you already have the embeddings, then getting started with Vector Search is pretty easy. In this section, we will follow the steps below.\n",
"\n",
"### Setting up Vector Search\n",
"- Save the embeddings in JSON files on Cloud Storage\n",
"- Build an Index\n",
"- Create an Index Endpoint\n",
"- Deploy the Index to the endpoint\n",
"\n",
"### Use Vector Search\n",
"\n",
"- Query with the endpoint\n",
"\n",
"### **Tip for Colab users**\n",
"\n",
"If you use Colab for this tutorial, you may lose your runtime while you are waiting for the Index building and deployment in the later sections as it takes tens of minutes. In that case, run the following sections again with the new instance to recover the runtime: [Install Python SDK, Environment variables and Authentication](https://colab.research.google.com/drive/1xJhLFEyPqW0qvKiERD6aYgeTHa6_U50N?resourcekey=0-2qUkxckCjt6W03AsqvZHhw#scrollTo=AtXnXhF8U-8R&line=9&uniqifier=1).\n",
"\n",
"Then, use the [Utilities](https://colab.research.google.com/drive/1xJhLFEyPqW0qvKiERD6aYgeTHa6_U50N?resourcekey=0-2qUkxckCjt6W03AsqvZHhw#scrollTo=BE1tELsH-u8N&line=1&uniqifier=1) to recover the Index and Index Endpoint and continue with the rest."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6pu1a3zjfQ0D"
},
"source": [
"### Save the embeddings in a JSON file\n",
"To load the embeddings to Vector Search, we need to save them in JSON files with JSONL format. See more information in the docs at [Input data format and structure](https://cloud.google.com/vertex-ai/docs/matching-engine/match-eng-setup/format-structure#data-file-formats).\n",
"\n",
"First, export the `id` and `embedding` columns from the DataFrame in JSONL format, and save it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GzZ30d4j_uLU"
},
"outputs": [],
"source": [
"# save id and embedding as a json file\n",
"jsonl_string = df[[\"id\", \"embedding\"]].to_json(orient=\"records\", lines=True)\n",
"with open(\"questions.json\", \"w\") as f:\n",
" f.write(jsonl_string)\n",
"\n",
"# show the first few lines of the json file\n",
"! head -n 3 questions.json"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-WTNJ3FAQl_W"
},
"source": [
"Then, create a new Cloud Storage bucket and copy the file to it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CzwDWJfzAk3n"
},
"outputs": [],
"source": [
"BUCKET_URI = f\"gs://{PROJECT_ID}-embvs-tutorial-{UID}\"\n",
"! gsutil mb -l $LOCATION -p {PROJECT_ID} {BUCKET_URI}\n",
"! gsutil cp questions.json {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xxdbjKw1XDxl"
},
"source": [
"### Create an Index\n",
"\n",
"Now it's ready to load the embeddings to Vector Search. Its API is available under the [`aiplatform`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform) package of the SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8unyr9KagAoI"
},
"outputs": [],
"source": [
"aiplatform.init(project=PROJECT_ID, location=LOCATION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xpMUXqWQ75uu"
},
"source": [
"Create a [`MatchingEngineIndex`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.MatchingEngineIndex) with its `create_tree_ah_index` function (Matching Engine is the previous name of Vector Search)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kKDw5VXMkXb3"
},
"outputs": [],
"source": [
"# create index\n",
"my_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(\n",
" display_name=f\"embvs-tutorial-index-{UID}\",\n",
" contents_delta_uri=BUCKET_URI,\n",
" dimensions=768,\n",
" approximate_neighbors_count=20,\n",
" distance_measure_type=\"DOT_PRODUCT_DISTANCE\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2rFam_w9U0dI"
},
"source": [
"By calling the `create_tree_ah_index` function, it starts building an Index. This will take under a few minutes if the dataset is small, otherwise about 50 minutes or more depending on the size of the dataset. You can check status of the index creation on [the Vector Search Console > INDEXES tab](https://console.cloud.google.com/vertex-ai/matching-engine/indexes).\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/vs-quickstart/creating-index.png\" width=\"50%\"/>\n",
"\n",
"#### The parameters for creating index\n",
"\n",
"- `contents_delta_uri`: The URI of Cloud Storage directory where you stored the embedding JSON files\n",
"- `dimensions`: Dimension size of each embedding. In this case, it is 768 as we are using the embeddings from the Text Embeddings API.\n",
"- `approximate_neighbors_count`: how many similar items we want to retrieve in typical cases\n",
"- `distance_measure_type`: what metrics to measure distance/similarity between embeddings. In this case it's `DOT_PRODUCT_DISTANCE`\n",
"\n",
"See [the document](https://cloud.google.com/vertex-ai/docs/vector-search/create-manage-index) for more details on creating Index and the parameters.\n",
"\n",
"#### Batch Update or Streaming Update?\n",
"There are two types of index: Index for *Batch Update* (used in this tutorial) and Index for *Streaming Updates*. The Batch Update index can be updated with a batch process whereas the Streaming Update index can be updated in real-time. The latter one is more suited for use cases where you want to add or update each embeddings in the index more often, and crucial to serve with the latest embeddings, such as e-commerce product search.\n"
]
},
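{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a Streaming Update index is created with the same `create_tree_ah_index` call by passing an index update method, roughly as in the sketch below (not run in this tutorial; check the parameter values in the SDK reference before using it):\n",
"\n",
"```python\n",
"# hedged sketch: create an index that accepts real-time (streaming) updates\n",
"my_streaming_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(\n",
"    display_name=f\"embvs-tutorial-streaming-index-{UID}\",\n",
"    contents_delta_uri=BUCKET_URI,\n",
"    dimensions=768,\n",
"    approximate_neighbors_count=20,\n",
"    distance_measure_type=\"DOT_PRODUCT_DISTANCE\",\n",
"    index_update_method=\"STREAM_UPDATE\",  # the default is batch update\n",
")\n",
"```"
]
},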
{
"cell_type": "markdown",
"metadata": {
"id": "VLOAMF50XMI8"
},
"source": [
"### Create Index Endpoint and deploy the Index\n",
"\n",
"To use the Index, you need to create an [Index Endpoint](https://cloud.google.com/vertex-ai/docs/vector-search/deploy-index-public). It works as a server instance accepting query requests for your Index."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "h6IzyufWCjU1"
},
"outputs": [],
"source": [
"# create IndexEndpoint\n",
"my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(\n",
" display_name=f\"embvs-tutorial-index-endpoint-{UID}\",\n",
" public_endpoint_enabled=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "11e3e1a3a9e9"
},
"source": [
"This tutorial utilizes a [Public Endpoint](https://cloud.google.com/vertex-ai/docs/vector-search/setup/setup#choose-endpoint) and does not support [Virtual Private Cloud (VPC)](https://cloud.google.com/vpc/docs/private-services-access). Unless you have a specific requirement for VPC, we recommend using a Public Endpoint. Despite the term \"public\" in its name, it does not imply open access to the public internet. Rather, it functions like other endpoints in Vertex AI services, which are secured by default through IAM. Without explicit IAM permissions, as we have previously established, no one can access the endpoint."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8n33iO1T5hFO"
},
"source": [
"With the Index Endpoint, deploy the Index by specifying an unique deployed index ID."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FcBHLifGwAWq"
},
"outputs": [],
"source": [
"DEPLOYED_INDEX_ID = f\"embvs_tutorial_deployed_{UID}\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1jUoGhY5TPFP"
},
"outputs": [],
"source": [
"# deploy the Index to the Index Endpoint\n",
"my_index_endpoint.deploy_index(index=my_index, deployed_index_id=DEPLOYED_INDEX_ID)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xu9ZmWcpXQ55"
},
"source": [
"If it is the first time to deploy an Index to an Index Endpoint, it will take around 25 minutes to automatically build and initiate the backend for it. After the first deployment, it will finish in seconds. To see the status of the index deployment, open [the Vector Search Console > INDEX ENDPOINTS tab](https://console.cloud.google.com/vertex-ai/matching-engine/index-endpoints) and click the Index Endpoint.\n",
"\n",
"<img src=\"https://storage.googleapis.com/github-repo/img/embeddings/vs-quickstart/deploying-index.png\" width=\"50%\">"
]
},
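{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check from the notebook, the cell below lists the indexes currently deployed to the endpoint (a small sketch based on the SDK's `deployed_indexes` property; the list stays empty until the deployment finishes)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# list the indexes deployed to this Index Endpoint\n",
"# (empty until the deployment above has finished)\n",
"print(my_index_endpoint.deployed_indexes)"
]
},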
{
"cell_type": "markdown",
"metadata": {
"id": "oTi4PjjbXV-O"
},
"source": [
"### Run Query\n",
"\n",
"Finally it's ready to use Vector Search. In the following code, it creates an embedding for a test question, and find similar question with the Vector Search."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FhNuRQqUWdfe"
},
"outputs": [],
"source": [
"test_embeddings = get_embeddings_wrapper([\"How to read JSON with Python?\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Q01DGMBPXAg-"
},
"outputs": [],
"source": [
"# Test query\n",
"response = my_index_endpoint.find_neighbors(\n",
" deployed_index_id=DEPLOYED_INDEX_ID,\n",
" queries=test_embeddings,\n",
" num_neighbors=20,\n",
")\n",
"\n",
"# show the result\n",
"import numpy as np\n",
"\n",
"for idx, neighbor in enumerate(response[0]):\n",
" id = np.int64(neighbor.id)\n",
" similar = df.query(\"id == @id\", engine=\"python\")\n",
" print(f\"{neighbor.distance:.4f} {similar.title.values[0]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tPDOL9caoYZ9"
},
"source": [
"The `find_neighbors` function only takes milliseconds to fetch the similar items even when you have billions of items on the Index, thanks to the ScaNN algorithm. Vector Search also supports [autoscaling](https://cloud.google.com/vertex-ai/docs/vector-search/deploy-index-public#autoscaling) which can automatically resize the number of nodes based on the demands of your workloads."
]
},
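{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, autoscaling is configured when you deploy the Index. The sketch below (not run in this tutorial; the replica values are illustrative assumptions) shows how the `deploy_index` call can set lower and upper bounds on the number of nodes:\n",
"\n",
"```python\n",
"# hedged sketch: deploy with autoscaling bounds\n",
"my_index_endpoint.deploy_index(\n",
"    index=my_index,\n",
"    deployed_index_id=DEPLOYED_INDEX_ID,\n",
"    min_replica_count=2,  # always keep at least 2 nodes\n",
"    max_replica_count=10,  # scale out to at most 10 nodes under load\n",
")\n",
"```"
]
},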
{
"cell_type": "markdown",
"metadata": {
"id": "DDt4D6FDyc66"
},
"source": [
"# IMPORTANT: Cleaning Up\n",
"\n",
"In case you are using your own Cloud project, not a temporary project on Qwiklab, please make sure to delete all the Indexes, Index Endpoints and Cloud Storage buckets after finishing this tutorial. Otherwise the remaining objects would **incur unexpected costs**.\n",
"\n",
"If you used Workbench, you may also need to delete the Notebooks from [the console](https://console.cloud.google.com/vertex-ai/workbench)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MEsKVzguyxNx"
},
"outputs": [],
"source": [
"# wait for a confirmation\n",
"input(\"Press Enter to delete Index Endpoint, Index and Cloud Storage bucket:\")\n",
"\n",
"# delete Index Endpoint\n",
"my_index_endpoint.undeploy_all()\n",
"my_index_endpoint.delete(force=True)\n",
"\n",
"# delete Index\n",
"my_index.delete()\n",
"\n",
"# delete Cloud Storage bucket\n",
"! gsutil rm -r {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b8k26QOF3Ys7"
},
"source": [
"# Summary\n",
"\n",
"## Grounding LLM outputs with Vertex AI Vector Search\n",
"\n",
"As we have seen, by combining the Embeddings API and Vector Search, you can use the embeddings to \"ground\" LLM outputs to real business data with low latency.\n",
"\n",
"For example, if an user asks a question, Embeddings API can convert it to an embedding, and issue a query on Vector Search to find similar embeddings in its index. Those embeddings represent the actual business data in the databases. As we are just retrieving the business data and not generating any artificial texts, there is no risk of having hallucinations in the result.\n",
"\n",
"<img src=\"https://storage.googleapis.com/gweb-cloudblog-publish/original_images/10._grounding.png\" width=\"50%\"/>\n",
"\n",
"### The difference between the questions and answers\n",
"\n",
"In this tutorial, we have used the Stack Overflow dataset. There is a reason why we had to use it; As the dataset has many pairs of **questions and answers**, so you can just find questions similar to your question to find answers to it.\n",
"\n",
"In many business use cases, the semantics (meaning) of questions and answers are different. Also, there could be cases where you would want to add variety of recommended or personalized items to the results, like product search on e-commerce sites.\n",
"\n",
"In these cases, the simple semantics search don't work well. It's more like a recommendation system problem where you may want to train a model (e.g. Two-Tower model) to learn the relationship between the question embedding space and answer embedding space. Also, many production systems adds re-ranking phase after the semantic search to achieve higher search quality. Please see [Scaling deep retrieval with TensorFlow Recommenders and Vertex AI Matching Engine](https://cloud.google.com/blog/products/ai-machine-learning/scaling-deep-retrieval-tensorflow-two-towers-architecture) to learn more.\n",
"\n",
"### Hybrid of semantic + keyword search\n",
"\n",
"Another typical challenge you will face in production system is to support keyword search combined with the semantic search. For example, for e-commerce product search, you may want to let users find product by entering its product name or model number. As LLM doesn't memorize those product names or model numbers, semantic search can't handle those \"usual\" search functionalities.\n",
"\n",
"[Vertex AI Search](https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-search-and-conversation-is-now-generally-available) is another product you may consider for those requirements. While Vector Search provides a simple semantic search capability only, Search provides a integrated search solution that combines semantic search, keyword search, re-ranking and filtering, available as an out-of-the-box tool.\n",
"\n",
"### What about Retrieval Augmented Generation (RAG)?\n",
"\n",
"In this tutorial, we have looked at the simple combination of LLM embeddings and vector search. From this starting point, you may also extend the design to [Retrieval Augmented Generation (RAG)](https://www.google.com/search?q=Retrieval+Augmented+Generation+(RAG)&oq=Retrieval+Augmented+Generation+(RAG)).\n",
"\n",
"RAG is a popular architecture pattern of implementing grounding with LLM with text chat UI. The idea is to have the LLM text chat UI as a frontend for the document retrieval with vector search and summarization of the result.\n",
"\n",
"<img src=\"https://storage.googleapis.com/gweb-cloudblog-publish/images/Figure-7-Ask_Your_Documents_Flow.max-529x434.png\" width=\"50%\"/>\n",
"\n",
"There are some pros and cons between the two solutions.\n",
"\n",
"| | Embeddings + vector search | RAG |\n",
"|---|---|---|\n",
"| Design | simple | complex |\n",
"| UI | Text search UI | Text chat UI |\n",
"| Summarization of result | No | Yes |\n",
"| Multi-turn (Context aware) | No | Yes |\n",
"| Latency | milliseconds | seconds |\n",
"| Cost | lower | higher |\n",
"| Hallucinations | No risk | Some risk |\n",
"\n",
"The Embedding + vector search pattern we have looked at with this tutorial provides simple, fast and low cost semantic search functionality with the LLM intelligence. RAG adds context-aware text chat experience and result summarization to it. While RAG provides the more \"Gen AI-ish\" experience, it also adds a risk of hallucination and higher cost and time for the text generation.\n",
"\n",
"To learn more about how to build a RAG solution, you may look at [Building Generative AI applications made easy with Vertex AI PaLM API and LangChain](https://cloud.google.com/blog/products/ai-machine-learning/generative-ai-applications-with-vertex-ai-palm-2-models-and-langchain).\n",
"\n",
"## Resources\n",
"\n",
"To learn more, please check out the following resources:\n",
"\n",
"### Documentations\n",
"\n",
"[Vertex AI Embeddings for Text API documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings)\n",
"\n",
"[Vector Search documentation](https://cloud.google.com/vertex-ai/docs/matching-engine/overview)\n",
"\n",
"### Vector Search blog posts\n",
"\n",
"[Vertex Matching Engine: Blazing fast and massively scalable nearest neighbor search](https://cloud.google.com/blog/products/ai-machine-learning/vertex-matching-engine-blazing-fast-and-massively-scalable-nearest-neighbor-search)\n",
"\n",
"[Find anything blazingly fast with Google's vector search technology](https://cloud.google.com/blog/topics/developers-practitioners/find-anything-blazingly-fast-googles-vector-search-technology)\n",
"\n",
"[Enabling real-time AI with Streaming Ingestion in Vertex AI](https://cloud.google.com/blog/products/ai-machine-learning/real-time-ai-with-google-cloud-vertex-ai)\n",
"\n",
"[Mercari leverages Google's vector search technology to create a new marketplace](https://cloud.google.com/blog/topics/developers-practitioners/mercari-leverages-googles-vector-search-technology-create-new-marketplace)\n",
"\n",
"[Recommending news articles using Vertex AI Matching Engine](https://cloud.google.com/blog/products/ai-machine-learning/recommending-articles-using-vertex-ai-matching-engine)\n",
"\n",
"[What is Multimodal Search: \"LLMs with vision\" change businesses](https://cloud.google.com/blog/products/ai-machine-learning/multimodal-generative-ai-search)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BE1tELsH-u8N"
},
"source": [
"# Utilities\n",
"\n",
"Sometimes it takes tens of minutes to create or deploy Indexes and you would lose connection with the Colab runtime. In that case, instead of creating or deploying new Index again, you can check [the Vector Search Console](https://console.cloud.google.com/vertex-ai/matching-engine/index-endpoints) and get the existing ones to continue."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wF_pkdpJ-yaq"
},
"source": [
"## Get an existing Index\n",
"\n",
"To get an Index object that already exists, replace the following `[your-index-id]` with the index ID and run the cell. You can check the ID on [the Vector Search Console > INDEXES tab](https://console.cloud.google.com/vertex-ai/matching-engine/indexes)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mEBkZZt_-0jG"
},
"outputs": [],
"source": [
"my_index_id = \"[your-index-id]\" # @param {type:\"string\"}\n",
"my_index = aiplatform.MatchingEngineIndex(my_index_id)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_vlgzkyw-3CI"
},
"source": [
"## Get an existing Index Endpoint\n",
"\n",
"To get an Index Endpoint object that already exists, replace the following `[your-index-endpoint-id]` with the Index Endpoint ID and run the cell. You can check the ID on [the Vector Search Console > INDEX ENDPOINTS tab](https://console.cloud.google.com/vertex-ai/matching-engine/index-endpoints)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "E0OFnirF-6Rk"
},
"outputs": [],
"source": [
"my_index_endpoint_id = \"[your-index-endpoint-id]\" # @param {type:\"string\"}\n",
"my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint(my_index_endpoint_id)"
]
}
],
"metadata": {
"colab": {
"name": "intro-textemb-vectorsearch.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}