notebooks/whisper-quantized-example.ipynb (raw):

{ "cells": [ { "cell_type": "markdown", "source": [ "# Speech Transcription on IPUs using Whisper - Quantized Inference\n", "\n", "This notebook demonstrates speech transcription on the IPU using the [Whisper implementation in the 🤗 Transformers library](https://huggingface.co/spaces/openai/whisper) alongside [Optimum Graphcore](https://github.com/huggingface/optimum-graphcore) using INT4 **group quantization**.\n", "\n", "Whisper is a versatile speech recognition model that can transcribe speech as well as perform multi-lingual translation and recognition tasks.\n", "It was trained on diverse datasets to give human-level speech recognition performance without the need for fine-tuning.\n", "\n", "This notebook demonstrates the use of group quantization in Whisper inference to compress the weights from FP16 to INT4. Group quantization is a common scheme and divides each weights matrix into groups of 16 elements and for each group store the maximum and minimum values as FP16. Then, it divides the range between the minimum and maximum values of each group into 16 intervals and finally codes individual elements as an INT4 based on the interval that they fall into. This gives a compression of about 3.5x. While the model is running in forward mode, the weights are decompressed on-the-fly back to FP16 for calculation. There is a small loss of accuracy when using these compressed values, but it is typically only about a 0.1% word error rate (WER).\n", "\n", "\n", "[🤗 Optimum Graphcore](https://github.com/huggingface/optimum-graphcore) is the interface between the [🤗 Transformers library](https://huggingface.co/docs/transformers/index) and [Graphcore IPUs](https://www.graphcore.ai/products/ipu).\n", "It provides a set of tools enabling model parallelization and loading on IPUs, training and fine-tuning on all the tasks already supported by 🤗 Transformers while being compatible with the 🤗 Hub and every model available on it out of the box.\n", "\n", "> **Hardware requirements:** All the Whisper models from `whisper-tiny` to `whisper-large-v2` can run in inference mode on smallest IPU-POD4 machine.\n", "\n", "[![Join our Slack Community](https://img.shields.io/badge/Slack-Join%20Graphcore's%20Community-blue?style=flat-square&logo=slack)](https://www.graphcore.ai/join-community)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Environment setup\n", "\n", "The best way to run this demo is on Paperspace Gradient's cloud IPUs because everything is already set up for you.\n", "\n", "[![Run on Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/Djq2SC)\n", "\n", "To run the demo using other IPU hardware, you need to have the Poplar SDK enabled and a PopTorch wheel installed. Refer to the [Getting Started guide for your system](https://docs.graphcore.ai/en/latest/getting-started.html) for details on how to do this. Also refer to the Jupyter Quick Start guide for how to set up Jupyter to be able to run this notebook on a remote IPU machine." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Dependencies" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Install the dependencies the notebook needs." 
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "# Install optimum from source \n", "!pip install optimum-graphcore==0.7.1 transformers librosa matplotlib graphcore-cloud-tools[logger]@git+https://github.com/graphcore/graphcore-cloud-tools\n", "%load_ext graphcore_cloud_tools.notebook_logging.gc_logger" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "In order to improve usability and support for future users, Graphcore would like to collect information about the applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n", "\n", "- User progression through the notebook\n", "- Notebook details: number of cells, code being run and the output of the cells\n", "- Environment details\n", "\n", "You can disable logging at any time by running `%unload_ext graphcore_cloud_tools.notebook_logging.gc_logger` from any cell." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "IPU Whisper with group quantization requires features from Poplar SDK version 3.3 or later. The following code checks whether these features can be enabled." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "import warnings\n", "from transformers.utils.versions import require_version\n", "\n", "try:\n", " require_version(\"poptorch>=3.3\")\n", " enable_sdk_features=True\n", " print(f\"SDK check passed.\")\n", "except Exception:\n", " enable_sdk_features=False\n", " warnings.warn(\"SDK versions earlier than 3.3 do not support the functionality in this notebook. We recommend that you relaunch the Paperspace Notebook with the PyTorch SDK 3.3 image. You can use https://hub.docker.com/r/graphcore/pytorch-early-access\")\n" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Running Whisper on the IPU\n", "\n", "We start by importing the required modules, some of which are needed to configure the IPU.\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "# Generic imports\n", "import os\n", "from datasets import load_dataset\n", "import matplotlib.pyplot as plt\n", "import librosa\n", "import IPython\n", "import random\n", "\n", "\n", "# IPU-specific imports\n", "from optimum.graphcore import IPUConfig\n", "from optimum.graphcore.modeling_utils import to_pipelined\n", "from optimum.graphcore.models.whisper import WhisperProcessorTorch\n", "\n", "# HF-related imports\n", "from transformers import WhisperForConditionalGeneration" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "This notebook demonstrates how to run all sizes of Whisper. 
All sizes will fit on an IPU-POD4:\n", "\n", "- `whisper-tiny`, `base` and `small` only require 1 IPU\n", "- `whisper-medium` requires 2 IPUs\n", "- `whisper-large` requires 4 IPUs" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "The Whisper model is available on Hugging Face in several sizes, from `whisper-tiny` with 39M parameters to `whisper-large` with 1550M parameters.\n", "\n", "The [Whisper architecture](https://openai.com/research/whisper) is an encoder-decoder Transformer, with the audio split into 30-second chunks.\n", "- For `whisper-tiny`, `small` and `base`, both the encoder and the decoder fit on 1 IPU.\n", "- For `whisper-medium`, one IPU is used for the encoder part and one for the decoder part.\n", "- For `whisper-large`, two IPUs are used for the encoder part and two for the decoder part.\n", "\n", "The `IPUConfig` object helps to configure the model to be pipelined across the IPUs.\n", "The number of transformer layers per IPU can be adjusted by using `layers_per_ipu`." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "num_available_ipus = int(os.getenv(\"NUM_AVAILABLE_IPU\", 4))\n", "cache_dir = os.getenv(\"POPLAR_EXECUTABLE_CACHE_DIR\", \"./exe_cache\")\n", "\n", "configs = {\n", "    \"tiny\": (\"openai/whisper-tiny.en\",\n", "             IPUConfig(executable_cache_dir=cache_dir,\n", "                       ipus_per_replica=1,\n", "                       explicit_ir_inference=True)),\n", "\n", "    \"base\": (\"openai/whisper-base.en\",\n", "             IPUConfig(executable_cache_dir=cache_dir,\n", "                       ipus_per_replica=1,\n", "                       explicit_ir_inference=True)),\n", "\n", "    \"small\": (\"openai/whisper-small.en\",\n", "              IPUConfig(executable_cache_dir=cache_dir,\n", "                        ipus_per_replica=1,\n", "                        explicit_ir_inference=True)),\n", "\n", "    \"medium\": (\"openai/whisper-medium.en\",\n", "               IPUConfig(executable_cache_dir=cache_dir,\n", "                         ipus_per_replica=2,\n", "                         explicit_ir_inference=True)),\n", "\n", "    \"large\": (\"openai/whisper-large-v2\",\n", "              IPUConfig(executable_cache_dir=cache_dir,\n", "                        ipus_per_replica=4,\n", "                        layers_per_ipu=[-1, -1, 14, 18],\n", "                        matmul_proportion=0.1,\n", "                        inference_projection_serialization_factor=5,\n", "                        explicit_ir_inference=True)),\n", "}\n", "\n", "\n", "def select_whisper_config(size: str, custom_checkpoint: str):\n", "    model_checkpoint, ipu_config = configs[size]\n", "    if custom_checkpoint is not None:\n", "        model_checkpoint = custom_checkpoint\n", "\n", "    print(f\"Using whisper-{size} config with the checkpoint '{model_checkpoint}'.\")\n", "    return model_checkpoint, ipu_config" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Select the Whisper size below. Try `\"tiny\"`, `\"base\"`, `\"small\"`, `\"medium\"` or `\"large\"`." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "model_checkpoint, ipu_config = select_whisper_config(\"tiny\", custom_checkpoint=None)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "You can also use a custom checkpoint from the Hugging Face Hub using the argument `custom_checkpoint` above. In this case, you have to make sure that `size` matches the checkpoint model size.\n", "\n", "Two features of Optimum Graphcore are demonstrated below:\n", "1. `use_cond_encoder` : This enables putting the Whisper encoder and decoder on a single IPU and switching between them using a compiled `cond` operation. This is only available if `ipus_per_replica == 1`.\n", "2. 
`use_group_quantized_linears` : This enables compressing all the weights of the transformer block's linear layers to INT4 using group quantization." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "# Instantiate processor and model\n", "processor = WhisperProcessorTorch.from_pretrained(model_checkpoint)\n", "model = WhisperForConditionalGeneration.from_pretrained(model_checkpoint)\n", "num_beams = 1\n", "\n", "# Adapt whisper to run on the IPU\n", "\n", "pipelined_model = to_pipelined(model, ipu_config)\n", "pipelined_model = pipelined_model.parallelize(\n", " for_generation=True, \n", " use_cache=True, \n", " batch_size=1,\n", " num_beams=num_beams,\n", " max_length=448,\n", " on_device_generation_steps=16,\n", " use_encoder_output_buffer=ipu_config.ipus_per_replica > 1,\n", " use_cond_encoder=ipu_config.ipus_per_replica == 1,\n", " use_group_quantized_linears=True # Enables quantization!\n", ").half()" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Now we can load the dataset and process an example audio file.\n", "If precompiled models are not available, then the first run of the model triggers two graph compilations.\n", "This means that our first test transcription could take a minute or two to run, but subsequent runs will be much faster." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "# load the dataset and read an example sound file\n", "ds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\n", "test_sample = ds[2]\n", "sample_rate = test_sample['audio']['sampling_rate']\n", "\n", "def transcribe(data, rate):\n", " input_features = processor(data, return_tensors=\"pt\", sampling_rate=rate).input_features.half()\n", "\n", " # This triggers a compilation, unless a precompiled model is available.\n", " sample_output = pipelined_model.generate(\n", " input_features,\n", " use_cache=True,\n", " num_beams=num_beams,\n", " max_length=448, \n", " min_length=3)\n", " transcription = processor.batch_decode(sample_output, skip_special_tokens=True)[0]\n", " return transcription\n", "\n", "test_transcription = transcribe(test_sample[\"audio\"][\"array\"], sample_rate)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "In the next cell, we compare the expected text from the dataset with the transcribed result from the model.\n", "There will typically be some small differences, but even `whisper-tiny` does a great job! It even adds punctuation.\n", "\n", "You can listen to the audio and compare the model result yourself using the controls below." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "print(f\"Expected: {test_sample['text']}\\n\")\n", "print(f\"Transcribed: {test_transcription}\")\n", "\n", "plt.figure(figsize=(14, 5))\n", "librosa.display.waveshow(test_sample[\"audio\"][\"array\"], sr=sample_rate)\n", "IPython.display.Audio(test_sample[\"audio\"][\"array\"], rate=sample_rate)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "The model only needs to be compiled once. Subsequent inferences will be much faster.\n", "In the cell below, we repeat the exercise but with a random example from the dataset.\n", "\n", "You might like to re-run this next cell multiple times to get different comparisons." 
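, "\n", "If you would like a number rather than an eyeball comparison, the word error rate (WER) mentioned in the introduction can be estimated with a small helper such as the following sketch. It is a plain-Python illustration written for this notebook (the `wer` and `normalize` helpers are not part of Optimum Graphcore); libraries such as `jiwer` or `evaluate` provide more robust implementations and normalization rules.\n", "\n", "```python\n", "import string\n", "\n", "def normalize(text):\n", "    # Lower-case and strip punctuation so formatting differences are not counted as errors.\n", "    return text.lower().translate(str.maketrans(\"\", \"\", string.punctuation)).split()\n", "\n", "def wer(reference, hypothesis):\n", "    # Word-level Levenshtein distance (substitutions + insertions + deletions)\n", "    # divided by the number of reference words.\n", "    ref, hyp = normalize(reference), normalize(hypothesis)\n", "    d = list(range(len(hyp) + 1))\n", "    for i, r in enumerate(ref, 1):\n", "        prev, d[0] = d[0], i\n", "        for j, h in enumerate(hyp, 1):\n", "            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))\n", "    return d[-1] / max(len(ref), 1)\n", "\n", "print(f\"WER: {wer(test_sample['text'], test_transcription):.3f}\")\n", "```"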
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "idx = random.randint(0, ds.num_rows - 1)\n", "data = ds[idx][\"audio\"][\"array\"]\n", "\n", "print(f\"Example #{idx}\\n\")\n", "print(f\"Expected: {ds[idx]['text']}\\n\")\n", "print(f\"Transcribed: {transcribe(data, sample_rate)}\")\n", "\n", "plt.figure(figsize=(14, 5))\n", "librosa.display.waveshow(data, sr=sample_rate)\n", "IPython.display.Audio(data, rate=sample_rate, autoplay=True)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Finally, we detach the process from the IPUs when we are done to make the IPUs available to other users." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "pipelined_model.detachFromDevice()" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Next Steps\n", "\n", "The `whisper-tiny` model used here is very fast for inference and so cheap to run, but its accuracy can be improved.\n", "The `whisper-base`, `whisper-small` and `whisper-medium` models have 74M, 244M and 769 M parameters respectively (compared to just 39M for `whisper-tiny`). You can try out `whisper-base`, `whisper-small` and `whisper-medium` by changing `select_whisper_config(\"small\")` (at the beginning of this notebook) to:\n", "- `select_whisper_config(\"base\")`\n", "- `select_whisper_config(\"small\")`\n", "- `select_whisper_config(\"medium\")` respectively.\n", "\n", "Larger models and multilingual models are also available.\n", "To access the multilingual models, remove the `.en` from the checkpoint name. Note however that the multilingual models are slightly less accurate for this English transcription task but they can be used for transcribing other languages or for translating to English. The largest model `whisper-large` has 1550M parameters and requires a 4-IPUs pipeline.\n", "You can try it by setting `select_whisper_config(\"large\")`\n", "\n", "You can also try using beam search by setting `num_beams>1` in the calls to `parallelize` and `generate` above. `whisper-small` will fit on 1 IPU with `num_beams=5`.\n", "\n", "For `whisper-medium` with `num_beams>1` the model will need 4 IPUs to fit. For `whisper-large` with `num_beams>1` you will need more than the 4 IPUS in an IPU-POD4. On Paperspace, you can use either an IPU-POD16 or a Bow Pod16 machine, each with 16 IPUs. Please contact Graphcore if you need assistance running these larger models.\n" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Conclusion\n", "\n", "In this notebook, we demonstrated using Whisper and group quantization for speech recognition and transcription on the IPU.\n", "We used the Optimum Graphcore package to interface between the IPU and the 🤗 Transformers library. This meant that only a few lines of code were needed to get this state-of-the-art automated speech recognition model running on IPUs." ], "metadata": {} } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }