notebooks/text_embeddings_models/text-embeddings-on-ipu.ipynb

{ "cells": [ { "cell_type": "markdown", "source": [ "# General-purpose Text Embeddings on the IPU" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "This notebook describes how to use supported embeddings models to generate SOTA text embeddings on the IPU. You can use the following:\n", "* [E5 model](https://arxiv.org/pdf/2212.03533.pdf) (Emb**E**ddings from bidir**E**ctional **E**ncoder r**E**presentations) to generate text embeddings on the IPU.\n", "* [Sentence Transformers MPNet Base V2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2), which is an embeddings model based on the MPNet base model.\n", "* [Sentence-T5](https://arxiv.org/abs/2108.08877), which runs on a pre-trained T5 model encoder." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Environment setup\n", "\n", "The best way to run this demo is on Paperspace Gradient's cloud IPUs because everything is already set up for you.\n", "\n", "[![Run on Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/cPfXNO)\n", "\n", "To run the demo using other IPU hardware, you need to have the Poplar SDK enabled and a PopTorch wheel installed. Refer to the [Getting Started guide for your system](https://docs.graphcore.ai/en/latest/getting-started.html) for details on how to do this. Also refer to the Jupyter Quick Start guide for how to set up Jupyter to be able to run this notebook on a remote IPU machine." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "First, install the requirements for running this notebook:" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "# Install Optimum Graphcore if it is not in your environment\n", "! pip install optimum-graphcore==0.7.1 sentence-transformers==2.2.2 graphcore-cloud-tools[logger]@git+https://github.com/graphcore/graphcore-cloud-tools\n", "\n", "%load_ext graphcore_cloud_tools.notebook_logging.gc_logger" ], "outputs": [], "metadata": { "scrolled": true } }, { "cell_type": "markdown", "source": [ "In order to improve usability and support for future users, Graphcore would like to collect information about the applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n", "\n", "- User progression through the notebook\n", "- Notebook details: number of cells, code being run and the output of the cells\n", "- Environment details\n", "\n", "You can disable logging at any time by running `%unload_ext graphcore_cloud_tools.notebook_logging.gc_logger` from any cell." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Import the required modules for the notebook:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 2, "source": [ "import os\n", "import torch\n", "import poptorch\n", "import numpy as np\n", "from tqdm.notebook import tqdm\n", "import logging" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "We need to instantiate some global parameters that will be used to run the models.\n", "\n", "The **micro batch size** (number of batches to process in parallel) is set to 2. This is smaller than usual because of the effect of the micro batch size on device memory. \n", "\n", "We use on-IPU loops (**device iterations**) which iterate over a number of batches sequentially (where the iteration takes place on-device in one dataloader call), to extend the batch size. 
This yields greater throughput because it is more efficient than loading smaller batches from the host a large number of times. \n", "\n", "Data parallelism is controlled by the **replication factor**, which specifies how many model replicas (each on its own set of IPUs) process micro batches in parallel. This value is set to `None` by default as it will be automatically determined by the `pod_type` of the machine being used. By default, the model itself requires 1 IPU to run, and if it is running on an IPU-POD4 (4-IPU) machine, the replication factor is set to 4. Similarly, if the model is running on an IPU-POD16, the replication factor is set to 16. This can be overridden with a different value if needed. Specifically, if `replication_factor=N`, the model will be replicated `N` times as long as `N * ipus_per_replica` (where `ipus_per_replica` is the number of IPUs a single instance of the model uses) does not exceed the total number of available IPUs.\n", "\n", "The total effective batch size for inference is calculated by:\n", "```\n", "effective_batch_size = replication_factor * device_iterations * micro_batch_size\n", "```\n", "\n", "For example, with the values set in the next cell on an IPU-POD4, `replication_factor = 4`, `device_iterations = 512 // 4 = 128` and `micro_batch_size = 2`, giving an effective batch size of `4 * 128 * 2 = 1024`.\n", "\n", "The model itself, through model pipelining, can also be run over 4 IPUs (by setting `ipus_per_replica` to 4), in which case the replication factor will be adjusted accordingly. The reason we might want to spread the model over more IPUs is to reduce the memory consumption of the model on each IPU, allowing higher batch sizes to be used. For example, with 4 IPUs, we compute far fewer layers per IPU, while with 1 IPU, all model layers are on a single IPU. This is particularly beneficial on an IPU-POD16 machine, as the 4-IPU pipelined version of the model can be run at a higher effective batch size (with a higher micro batch size) and achieve an even higher overall batched throughput." ], "metadata": {} }, { "cell_type": "code", "execution_count": 3, "source": [ "logger = logging.getLogger(\"\")\n", "\n", "n_ipu = int(os.getenv(\"NUM_AVAILABLE_IPU\", 4))\n", "ipus_per_replica = 1\n", "micro_batch_size = 2\n", "device_iterations = 512//n_ipu\n", "replication_factor = None\n", "\n", "random_seed = 42" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "To run embeddings models, we will set up a generic IPU embeddings class which loads the pre-trained model onto the IPU and runs the embedding pooling and normalisation stages in the forward pass along with the model. You may want to change the internal pooling (`pool(...)`) function in the class to support other pooling methods. The class currently supports mean pooling (`\"avg\"`) and CLS-token pooling (`\"cls\"`) of the encoder output states, selected by passing `pool_type` when calling the model. 
" ], "metadata": {} }, { "cell_type": "code", "execution_count": 4, "source": [ "import logging\n", "from typing import Optional, List\n", "\n", "from transformers import AutoModel\n", "from optimum.graphcore.modeling_utils import to_pipelined\n", "from optimum.graphcore import IPUConfig\n", "\n", "logger = logging.getLogger(\"e5\")\n", "\n", "class IPUEmbeddingsModel(torch.nn.Module):\n", " def __init__(self, model, ipu_config: IPUConfig, fp16=True):\n", " super().__init__()\n", " self.encoder = to_pipelined(model, ipu_config)\n", " self.encoder = self.encoder.parallelize()\n", " if fp16: self.encoder = self.encoder.half()\n", " \n", " def pool(self, last_hidden_states: torch.Tensor, attention_mask: torch.Tensor, pool_type: str) -> torch.Tensor:\n", " \n", " last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)\n", " \n", " if pool_type == \"avg\":\n", " emb = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]\n", " elif pool_type == \"cls\":\n", " emb = last_hidden[:, 0]\n", " else:\n", " raise ValueError(f\"pool_type {pool_type} not supported\")\n", "\n", " return emb\n", " \n", " def forward(self, pool_type: str ='avg', **kwargs) -> torch.Tensor:\n", " outputs = self.encoder(**kwargs)\n", " \n", " embeds = self.pool(outputs.last_hidden_state, kwargs[\"attention_mask\"], pool_type=pool_type)\n", " embeds = torch.nn.functional.normalize(embeds, p=2, dim=-1)\n", "\n", " return embeds" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Before setting up and running each of the models, let's create a simple `infer` function which handles loading the batches from a dataloader and generating the embeddings from any model:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 5, "source": [ "import time\n", "\n", "def infer(model, dataloader):\n", " encoded_embeds = []\n", " with torch.no_grad():\n", " for batch_dict in tqdm(dataloader, desc='encoding'):\n", " lat = time.time()\n", " outputs = model(**batch_dict)\n", " lat = time.time() - lat\n", " \n", " encoded_embeds.append(outputs)\n", " print(f\"batch len: {len(batch_dict['input_ids'])} | batch latency: {lat}s | per_sample: {lat/len(batch_dict['input_ids'])}s | throughput: {len(batch_dict['input_ids'])/lat} samples/s\")\n", " \n", " return torch.cat(encoded_embeds, axis=0)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Embeddings with E5-Large" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "First, use `AutoConfig` from Transformers to load the model config for the E5 large model. E5 uses a bidirectional encoder, essentially the encoder stage of a BERT model, to generate the trained embeddings. The config will define the architecture of the model, such as the number of encoder layers and size of the hidden dimension within the model. The sequence length for the model is set by default to the maximum defined sequence length in the model config (`max_position_embeddings`) and can be adjusted by changing the `e5_seq_len` parameter, with a maximum value of 512.\n", "\n", "We also need to tokenize the dataset. For this we define a custom transform function which applies the pre-trained tokenization for each model to the dataset. We will call this function when loading the function, to avoid loading multiple tokenized datasets at the same time.\n", "\n", "We define some IPU-specific configurations to get the most out of the model. 
The `get_ipu_config` function will set up the IPU config according to the model config, taking into consideration the defined number of IPUs for model parallelism, the number of IPUs available and batching configurations." ], "metadata": {} }, { "cell_type": "code", "execution_count": 6, "source": [ "from config import get_ipu_config\n", "from transformers import AutoConfig, AutoTokenizer, BatchEncoding" ], "outputs": [], "metadata": {} }, { "cell_type": "code", "execution_count": 7, "source": [ "e5_model_name = 'intfloat/e5-large'\n", "e5_tokenizer = AutoTokenizer.from_pretrained(e5_model_name)\n", "e5_model_config = AutoConfig.from_pretrained(e5_model_name)\n", "e5_model = AutoModel.from_pretrained(e5_model_name, config=e5_model_config)\n", "\n", "e5_seq_len = e5_model_config.max_position_embeddings\n", "\n", "def e5_transform_func(example) -> BatchEncoding:\n", " return e5_tokenizer(\n", " example['text'],\n", " max_length = e5_seq_len,\n", " padding=\"max_length\",\n", " truncation=True\n", " )\n", "\n", "e5_ipu_config = get_ipu_config(\n", " e5_model_config, n_ipu, ipus_per_replica, device_iterations, replication_factor, random_seed)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Embeddings with All-MPNet" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For MPNet, we do the same for the pre-trained model. The maximum value for the sequence length is 512." ], "metadata": {} }, { "cell_type": "code", "execution_count": 8, "source": [ "mpnet_model_name = 'sentence-transformers/all-mpnet-base-v2'\n", "mpnet_tokenizer = AutoTokenizer.from_pretrained(mpnet_model_name)\n", "mpnet_model_config = AutoConfig.from_pretrained(mpnet_model_name)\n", "mpnet_model = AutoModel.from_pretrained(mpnet_model_name, config=mpnet_model_config)\n", "\n", "mpnet_seq_len = mpnet_model_config.max_position_embeddings\n", "\n", "def mpnet_transform_func(example) -> BatchEncoding:\n", " return mpnet_tokenizer(\n", " example['text'],\n", " max_length=mpnet_seq_len,\n", " padding=\"max_length\",\n", " truncation=True\n", " )\n", " \n", "mpnet_ipu_config = get_ipu_config(\n", " mpnet_model_config, n_ipu, ipus_per_replica, device_iterations, replication_factor, random_seed)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Embeddings with Sentence-T5" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Note that for T5, we need to use `T5EncoderModel` instead of `AutoModel`. We must manually specify the encoder as T5 is an encoder-decoder model, and we don't want to load the decoder for embeddings generation. \n", "\n", "Transformers `AutoModel` supports a 1-to-1 mapping of architecture definitions to model types, and it will load the `T5Model` class by default. We can override this by directly importing and loading the pre-trained model using `T5EncoderModel`. For T5 the sequence length is determined by the `n_positions` parameter in the model config. The expected maximum sequence length for T5 is also 512." 
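, "\n", "\n", "After running the next cell, you can sanity-check that only the encoder has been loaded. This is an illustrative check, assuming the standard `T5EncoderModel` behaviour of exposing only `shared` and `encoder` sub-modules:\n", "\n", "```python\n", "# A T5EncoderModel carries no decoder stack, unlike the full T5Model\n", "print(hasattr(t5_model, \"decoder\"))  # expected: False\n", "print(f\"{sum(p.numel() for p in t5_model.parameters()) / 1e6:.0f}M parameters\")\n", "```"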
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "from transformers.models.t5.modeling_t5 import T5EncoderModel\n", "\n", "t5_model_name = 'sentence-transformers/sentence-t5-base'\n", "t5_tokenizer = AutoTokenizer.from_pretrained(t5_model_name)\n", "t5_model_config = AutoConfig.from_pretrained(t5_model_name)\n", "t5_model = T5EncoderModel.from_pretrained(t5_model_name, config=t5_model_config)\n", "\n", "t5_seq_len = t5_model_config.n_positions\n", "\n", "def t5_transform_func(example) -> BatchEncoding:\n", " return t5_tokenizer(\n", " example['text'],\n", " max_length=t5_seq_len,\n", " padding=\"max_length\",\n", " truncation=True\n", " )\n", "\n", "\n", "\n", "t5_ipu_config = get_ipu_config(\n", " t5_model_config, n_ipu, ipus_per_replica, device_iterations * 4, replication_factor, random_seed)\n" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Creating the embeddings model" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "We'll wrap this behaviour into a simple function so we can iteratively run all three models and initialise `poptorch.Dataloader` to create an IPU-ready batched dataloader. We pass an arbitrary call to the model using the first batch to ensure we have compiled the model executable (or loaded the already compiled executable).\n", "\n", "The function goes through the model and dataset setup for a given model and:\n", "1. Initialises the `IPUEmbeddingsModel` class with the loaded model and IPU config.\n", "2. Converts the IPU config into an IPU options object and passes this to a `poptorch.inferenceModel` wrapper to prepare the model for the IPU.\n", "3. Initialises [`poptorch.Dataloader`](https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/batching.html) to batch the data according to the IPU options and the defined micro batch size.\n", "4. Runs the model once with a batch to compile or load the compiled executable. " ], "metadata": {} }, { "cell_type": "code", "execution_count": 10, "source": [ "from transformers import default_data_collator as data_collator\n", "\n", "def create_model(model, ipu_config, dataset, micro_batch_size):\n", " model = IPUEmbeddingsModel(model, ipu_config)\n", "\n", " ipu_options = ipu_config.to_options(for_inference=True)\n", " model = poptorch.inferenceModel(model, ipu_options)\n", "\n", " dataloader = poptorch.DataLoader(\n", " ipu_options,\n", " dataset['train'],\n", " batch_size=micro_batch_size,\n", " shuffle=False,\n", " drop_last=True,\n", " num_workers=2,\n", " collate_fn=data_collator\n", " )\n", "\n", " model(**next(iter(dataloader)))\n", " return model, dataloader" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Let's load a dataset we'll use to try out the models. Using the Hugging Face `datasets` library we can load a pre-existing dataset from the Hugging Face Hub. In this case, let's use the `rotten_tomatoes` film review dataset. Later in the notebook, we will use this dataset to create a basic semantic search functionality." 
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "from datasets import Dataset, load_dataset\n", "dataset = load_dataset(\"rotten_tomatoes\")\n", "print(dataset)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Run the E5 model" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "The dataset first needs to be tokenized using the pre-trained tokenizer for each model, we can use the `map()` method to tokenize each of the inputs of the dataset using the model-specific transform function. Then we can convert the Hugging Face Arrow format dataset to a PyTorch-ready dataset with `set_format` which converts the tokenized inputs into tensors.\n", "\n", "To run the model, simply call the `infer` function we created earlier to generate embeddings for the full dataset. " ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "tokenized_dataset = dataset.map(e5_transform_func, batched=True)\n", "tokenized_dataset.set_format(type=\"torch\", columns=[\"input_ids\", \"attention_mask\"])\n", "print(e5_ipu_config)\n", "\n", "model, dataloader = create_model(e5_model, e5_ipu_config, tokenized_dataset, micro_batch_size)\n", "\n", "e5_data_embeddings = infer(model, dataloader)\n", "\n", "model.detachFromDevice()" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Run the All-MPNet model" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "tokenized_dataset = dataset.map(mpnet_transform_func, batched=True)\n", "tokenized_dataset.set_format(type=\"torch\", columns=[\"input_ids\", \"attention_mask\"])\n", "\n", "model, dataloader = create_model(mpnet_model, mpnet_ipu_config, tokenized_dataset, micro_batch_size)\n", "\n", "mpnet_data_embeddings = infer(model, dataloader)\n", "\n", "model.detachFromDevice()" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Run the Sentence-T5 model" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "tokenized_dataset = dataset.map(t5_transform_func, batched=True)\n", "tokenized_dataset.set_format(type=\"torch\", columns=[\"input_ids\", \"attention_mask\"])\n", "\n", "print(t5_ipu_config)\n", "print(micro_batch_size)\n", "\n", "model, dataloader = create_model(t5_model, t5_ipu_config, tokenized_dataset, micro_batch_size)\n", "\n", "t5_data_embeddings = infer(model, dataloader)\n", "\n", "model.detachFromDevice()" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "The embeddings for a single sequence represent low-dimensional numerical representations of the word-level and sentence-level context for each token. These pre-trained embeddings can be used in applications like embedding retrieval for recommender systems, or semantic searches for query-matching using cosine-similarity. Both of these use cases take advantage of the generated embeddings space, by performing a relative comparison of the user input sequence embeddings using some proximity metric.\n", "\n", "We'll use the open source `sentence_transformers` library which provides utilities for embeddings tasks to perform a semantic search on a user query to retrieve the sequences from the dataset that are most similar to the query. This is a helpful utility for making, for example, more responsive FAQs." 
], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Semantic search with generated embeddings" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Using the `rotten_tomatoes` dataset, lets create a simple similarity search engine using the `sentence_transformers` semantic search function, which uses cosine similarity to retrieve close-proximity sentences from a given set of embeddings to a given query. We have already generated embeddings for the dataset, so the next step is to do the same with a given query and perform the search.\n", "\n", "First, to process the query, we need to tokenize it and convert it to a single-batch input for the model. This has been wrapped into a simple function which tokenizes and prepares a dictionary of model inputs (`input_ids`, `attention_mask`, ...) to which we just need to pass a string." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "def prepare_query(query: str):\n", " t_query = mpnet_tokenizer(\n", " query,\n", " max_length=mpnet_model_config.max_position_embeddings,\n", " padding=\"max_length\",\n", " truncation=True\n", " )\n", "\n", " return {k: torch.as_tensor([t_query[k]]) for k in t_query}" ], "outputs": [], "metadata": { "scrolled": true } }, { "cell_type": "markdown", "source": [ "Next, to perform inference with a single input (so an effective batch size of 1) we re-instantiate the model by setting all device batching, replication and micro batch-size to 1 and re-compile the model. For this example, we use the All-MPNet model. The change in batch size necessitates a recompilation, since the input shape to the model has been changed. We will follow the steps to initiate the model outlined earlier in the notebook, with the only change being setting the `get_ipu_config` function to have all batching turned off." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "mpnet_infer_ipu_config = get_ipu_config(\n", " mpnet_model_config, n_ipu, ipus_per_replica=1, device_iterations=1, replication_factor=1, random_seed=random_seed)\n", "\n", "model = IPUEmbeddingsModel(mpnet_model, mpnet_infer_ipu_config)\n", "model = poptorch.inferenceModel(model, mpnet_infer_ipu_config.to_options(for_inference=True))\n", "\n", "o=model(**prepare_query(\"Running once to compile\"))" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Finally, we can use the model to embed a single query, and perform a semantic search across the full dataset embeddings to retrieve highly relevant reviews to the query." 
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "from sentence_transformers.util import semantic_search\n", "\n", "query = \"Strongly disliked this action movie\"\n", "\n", "query_embeddings = model(**prepare_query(query))\n", "hits = semantic_search(query_embeddings.float(), mpnet_data_embeddings.float(), top_k=10)\n", "\n", "print(f\"\\n SEARCH QUERY: {query}\")\n", "for n, res in enumerate(hits[0]):\n", " print(f\"\\n Result (rank {n+1}) | Score: {res['score']} | Text: {dataset['train']['text'][res['corpus_id']]} \")" ], "outputs": [], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "model.detachFromDevice()" ], "outputs": [], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }