notebooks/mt5_xnli.ipynb (415 lines of code) (raw):

{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "8feaa524", "metadata": {}, "source": [ "Copyright (c) 2023 Graphcore Ltd. All rights reserved.\n", "\n", "For all available notebooks, check [IPU-powered Jupyter Notebooks](https://www.graphcore.ai/ipu-jupyter-notebooks) to see how IPUs perform on other tasks." ] }, { "attachments": {}, "cell_type": "markdown", "id": "2f37d919-8e25-4149-9f94-6aeebce8d2cd", "metadata": {}, "source": [ "# Zero-Shot Text Classification on IPUs using MT5 - Inference\n", "\n", "This notebook shows you how to use the multilingual variant of T5, [MT5](https://huggingface.co/models?other=arxiv:2010.11934), for [zero-shot text classification](https://huggingface.co/tasks/zero-shot-classification) in languages other than English on the Graphcore IPU. We use the large configuration of MT5 finetuned on the [XNLI corpus](https://huggingface.co/datasets/xnli) to showcase this. \n", "\n", "Since MT5 has no sequence classification head, it is currently not compatible with the [zero-shot-classification Huggingface pipelines API](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.ZeroShotClassificationPipeline). We demonstrate explicitly how to perform text-generation in cases like this. The content displayed in this notebook is the same as that shown in the [Alan Turing Institute MT5 large XNLI finetuned zero-shot example](https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli) with minor changes for IPU exeuction. \n", "\n", "| Domain | Tasks | Model | Datasets | Workflow | Number of IPUs | Execution time |\n", "|---------|-------|-------|----------|----------|--------------|--------------|\n", "| Natural language processing | Zero-shot classification | mt5-large | - | Inference | 8 | 30min |\n", "\n", "[![Join our Slack Community](https://img.shields.io/badge/Slack-Join%20Graphcore's%20Community-blue?style=flat-square&logo=slack)](https://www.graphcore.ai/join-community)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "eda82837", "metadata": {}, "source": [ "## Environment setup\n", "\n", "The best way to run this demo is on Paperspace Gradient's cloud IPUs because everything is already set up for you.\n", "\n", "[![Run on Gradient](https://assets.paperspace.io/img/gradient-badge.svg)]()\n", "\n", "To run the demo using other IPU hardware, you need to have the Poplar SDK enabled. Refer to the [Getting Started guide](https://docs.graphcore.ai/en/latest/getting-started.html#getting-started) for your system for details on how to enable the Poplar SDK. Also refer to the [Jupyter Quick Start guide](https://docs.graphcore.ai/projects/jupyter-notebook-quick-start/en/latest/index.html) for how to set up Jupyter to be able to run this notebook on a remote IPU machine." ] }, { "attachments": {}, "cell_type": "markdown", "id": "eeb1dd08", "metadata": {}, "source": [ "## Dependencies and configuration\n", "\n", "In order to improve usability and support for future users, Graphcore would like to collect information about the\n", "applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n", "\n", "- User progression through the notebook\n", "- Notebook details: number of cells, code being run and the output of the cells\n", "- Environment details\n", "\n", "You can disable logging at any time by running `%unload_ext graphcore_cloud_tools.notebook_logging.gc_logger` from any cell." ] }, { "attachments": {}, "cell_type": "markdown", "id": "748e35fb", "metadata": {}, "source": [ "Install the dependencies for this notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "122509c7", "metadata": {}, "outputs": [], "source": [ "%pip install \"optimum-graphcore==0.7\"\n", "%pip install graphcore-cloud-tools[logger]@git+https://github.com/graphcore/graphcore-cloud-tools\n", "%load_ext graphcore_cloud_tools.notebook_logging.gc_logger" ] }, { "cell_type": "code", "execution_count": null, "id": "91c5dd6b", "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "num_available_ipus = int(os.getenv(\"NUM_AVAILABLE_IPU\", 0))\n", "if num_available_ipus < 8:\n", " raise EnvironmentError(\n", " f\"This notebook requires 8 IPUs but only {num_available_ipus} are available. \"\n", " \"Try this notebook on IPU-POD16 or Bow-POD16 on Paperspace.\"\n", " )\n", "executable_cache_dir = os.getenv(\"POPLAR_EXECUTABLE_CACHE_DIR\", \"/tmp/exe_cache/\") + \"/mt5_zero_shot_classification\"" ] }, { "attachments": {}, "cell_type": "markdown", "id": "78d4d238", "metadata": {}, "source": [ "## Configure MT5 for the IPU\n", "\n", "\n", "Ordinarily given a finetuned model checkpoint and pipeline task, inference on the IPU can be performed by making a few changes as shown in the example below:\n", "\n", "```diff\n", "-from transformers import pipeline\n", "+from optimum.graphcore import pipeline, IPUConfig\n", " \n", "-pipe = pipeline(task=\"text2text-generation\", model=\"t5-small\")\n", "+ipu_config = IPUConfig.from_pretrained(\"Graphcore/t5-small-ipu\", inference_layers_per_ipu=[3, 3,3 ,3])\n", "+pipe = pipeline(task=\"text2text-generation\", model=\"t5-small\", ipu_config=ipu_config)\n", " pipe(\"Translate English to Romanian: The quick brown fox jumped over the lazy dog\")\n", " [{'generated_text': 'vulporul brun a sărit rapid peste câinul leneş'}]\n", "}\n", "```\n", "\n", "However since MT5 is not a supported model for the zero-shot-classification pipeline we cannot proceed with inference in the same was as above. Instead, we proceed by:\n", "1. Instantiating an MT5 model \n", "2. Configuring the model to run on the IPU in inference mode\n", "3. Tokenizing input sequences for zero-shot classification\n", "4. Obtaining output logits from the inference model to compute classification probabilities\n", "\n", "For the first step we load the fine-tuned mt5 model on the XNLI corpus from the Huggingface Model Hub:\n" ] }, { "cell_type": "code", "execution_count": null, "id": "553c8da5", "metadata": {}, "outputs": [], "source": [ "from transformers import MT5ForConditionalGeneration\n", "model_checkpoint = \"alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli\"\n", "model = MT5ForConditionalGeneration.from_pretrained(model_checkpoint).eval()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "1c0fc93a", "metadata": {}, "source": [ "In order to configure the model to run on the IPU, we load the corresponding IPUConfig from the Huggingface hub that specifies how to place the constituent layers of `mt5-large` on the IPU:" ] }, { "cell_type": "code", "execution_count": null, "id": "ed59f789", "metadata": {}, "outputs": [], "source": [ "from optimum.graphcore import IPUConfig\n", "ipu_config = IPUConfig.from_pretrained(\"Graphcore/mt5-large-ipu\", executable_cache_dir=executable_cache_dir)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "49e9ffab", "metadata": {}, "source": [ "Now we can configure the model to run on the IPU by firstly obtaining the IPU pipelined variant of the model by using the `to_pipelined` function from Optimum Graphcore. Before parallelizing the model for the IPU, we also set the model to run in half precision:" ] }, { "cell_type": "code", "execution_count": null, "id": "8dd70471", "metadata": {}, "outputs": [], "source": [ "from optimum.graphcore.modeling_utils import to_pipelined\n", "ipu_model = to_pipelined(model, ipu_config=ipu_config).half().parallelize(for_generation=True)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6fb3c58d", "metadata": {}, "source": [ "## Preprocessing the data " ] }, { "attachments": {}, "cell_type": "markdown", "id": "e014263b-9a6e-4c94-8a0f-8b692fa67bc6", "metadata": {}, "source": [ "In order to use MT5 for the XNLI task, any input sequence needs to be of the form \"xnli: premise: {example premise} hypothesis: {example hypothesis}\". Below we create some example Spanish sequences for classification and define a function that manipulates each example to be in the required format. " ] }, { "cell_type": "code", "execution_count": null, "id": "c53ec97a", "metadata": {}, "outputs": [], "source": [ "sequence_to_classify = \"¿A quién vas a votar en 2020?\"\n", "candidate_labels = [\"Europa\", \"salud pública\", \"política\"]\n", "hypothesis_template = \"Este ejemplo es {}.\"\n", "\n", "# construct sequence of premise, hypothesis pairs\n", "pairs = [(sequence_to_classify, hypothesis_template.format(label)) for label in\n", " candidate_labels]\n", "\n", "print(pairs)\n", "\n", "def process_nli(premise: str, hypothesis: str):\n", " \"\"\" process to required xnli format with task prefix \"\"\"\n", " return \"\".join(['xnli: premise: ', premise, ' hypothesis: ', hypothesis])\n", "\n", "# format for mt5 xnli task\n", "seqs = [process_nli(premise=premise, hypothesis=hypothesis) for\n", " premise, hypothesis in pairs]\n", "\n", "seqs" ] }, { "attachments": {}, "cell_type": "markdown", "id": "1d5633b5", "metadata": {}, "source": [ "The input sequences can now be tokenized for input to the model:" ] }, { "cell_type": "code", "execution_count": null, "id": "40ec8aee", "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n", "inputs = tokenizer.batch_encode_plus(seqs, return_tensors=\"pt\", padding=True)\n", "inputs" ] }, { "attachments": {}, "cell_type": "markdown", "id": "72466db9", "metadata": {}, "source": [ "## Zero-Shot text classification\n", "\n", "Given the example inputs and model configured for the IPU, we can now perform text generation and obtain output logit scores that can be used for classification:" ] }, { "cell_type": "code", "execution_count": null, "id": "fc429b5b", "metadata": {}, "outputs": [], "source": [ "import torch\n", "\n", "out = ipu_model.generate(**inputs, output_scores=True, return_dict_in_generate=True,\n", " num_beams=1)\n", "\n", "# sanity check that our sequences are expected length (1 + start token + end token = 3)\n", "for i, seq in enumerate(out.sequences):\n", " assert len(seq) == 3, f\"generated sequence {i} not of expected length, 3. Actual length: {len(seq)}\"\n", "\n", "# get the scores for our only token of interest\n", "# we'll now treat these like the output logits of a `*ForSequenceClassification` model\n", "scores = out.scores[0].to(torch.float32)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4c4c3d19", "metadata": {}, "source": [ "The output scores have dimension number of sequences x vocabulary size. However for [Natural Language Inference](http://nlpprogress.com/english/natural_language_inference.html) we are interested in only the logits scores for the tokens `contradicts`, `neutral` and `entails`. Below we subset the obtained scores to include only the aforementioned tokens." ] }, { "cell_type": "code", "execution_count": null, "id": "97d6ff89", "metadata": {}, "outputs": [], "source": [ "ENTAILS_LABEL = \"▁0\"\n", "NEUTRAL_LABEL = \"▁1\"\n", "CONTRADICTS_LABEL = \"▁2\"\n", "\n", "label_inds = tokenizer.convert_tokens_to_ids(\n", " [ENTAILS_LABEL, NEUTRAL_LABEL, CONTRADICTS_LABEL])\n", "\n", "# scores has a size of the model's vocab.\n", "# However, for this task we have a fixed set of labels\n", "# sanity check that these labels are always the top 3 scoring\n", "for i, sequence_scores in enumerate(scores):\n", " top_scores = sequence_scores.argsort()[-3:]\n", " assert set(top_scores.tolist()) == set(\n", " label_inds\n", " ), f\"top scoring tokens are not expected for this task. Expected: {label_inds}. Got: {top_scores.tolist()}.\"\n", "\n", "# new indices of entailment and contradiction in scores\n", "entailment_ind = 0\n", "contradiction_ind = 2\n", "\n", "# cut down scores to our task labels\n", "scores = scores[:, label_inds]\n", "scores" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f8fc39dd", "metadata": {}, "source": [ "We can use the logits to show a binary classification view per input sequence of entailment vs contradiction. Alternatively a multinomial representaion can be shown for the single premise and its hypotheses:" ] }, { "cell_type": "code", "execution_count": null, "id": "c1a374dd", "metadata": {}, "outputs": [], "source": [ "# we can show, per item, the entailment vs contradiction probabilities\n", "entail_vs_contra_scores = scores[:, [entailment_ind, contradiction_ind]]\n", "entail_vs_contra_probabilities = torch.nn.functional.softmax(entail_vs_contra_scores, dim=1)\n", "for seq, binary_score in dict(zip(seqs, entail_vs_contra_probabilities.tolist())).items():\n", " print(seq, binary_score)\n", "\n", "# or we can show probabilities similar to `ZeroShotClassificationPipeline`\n", "# this gives a zero-shot classification style output across labels\n", "entail_scores = scores[:, entailment_ind]\n", "entail_probabilities = torch.nn.functional.softmax(entail_scores, dim=0)\n", "\n", "print(sequence_to_classify, dict(zip(candidate_labels, entail_probabilities.tolist())))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "828d5b18", "metadata": {}, "source": [ "The cell below detaches the MT5 model from the device, allowing you to use available IPUs for other workloads." ] }, { "cell_type": "code", "execution_count": null, "id": "2ee281dc", "metadata": {}, "outputs": [], "source": [ "ipu_model.detachFromDevice()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fb5feafd", "metadata": {}, "source": [ "## Conclusion and next steps\n", "\n", "We have demonstrated how to use MT5 to perform the task of zero-shot text classification on the IPU with a small number of modifications. In this notebook we have used a fine-tuned checkpoint available on the Huggingface hub, however, if you would like to see how to fine-tune MT5, take a look at our Machine Translation on IPUs using MT5 - Fine-tuning `mt5_translation.ipynb` notebook. For all available notebooks, check [IPU-powered Jupyter Notebooks](https://www.graphcore.ai/ipu-jupyter-notebooks) to see how IPUs perform on other tasks.\n", "\n", "\n", "Have a question? Please contact us on our [Graphcore community channel](https://www.graphcore.ai/join-community)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }