{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "rEJBSTyZIrIb"
},
"source": [
"# Text Classification Task on IPU using RoBERTa - Fine-tuning"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "kTCFado4IrIc"
},
"source": [
"In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models to a text classification task of the [GLUE Benchmark](https://gluebenchmark.com/).\n",
"\n",
"\n",
"\n",
"The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences. The classification tasks are:\n",
"\n",
"- [CoLA](https://nyu-mll.github.io/CoLA/) (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.(This dataset contains sentences labelled as being grammatically correct or not.)\n",
"- [MNLI](https://arxiv.org/abs/1704.05426) (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. (This dataset has two versions, one with the validation and test sets coming from the same distribution, another called `mismatched` where the validation and testing use out-of-domain data.)\n",
"- [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases of each other or not.\n",
"- [QNLI](https://rajpurkar.github.io/SQuAD-explorer/) (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. (This dataset is built from the SQuAD dataset.)\n",
"- [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Quora Question Pairs2) Determine if two questions are semantically equivalent or not.\n",
"- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.\n",
"- [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.\n",
"- [STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 1 to 5.\n",
"- [WNLI](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html) (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. (This dataset is built from the Winograd Schema Challenge dataset.)\n",
"\n",
"We will see how to easily load the dataset for each of these tasks and use the `IPUTrainer` API to fine-tune a model on it. Each task is named by its acronym. In addition, there is also a task for the mismatched version of MNLI called `mnli-mm`. This task has the same training set as `mnli` but has different validation and test sets. The full list of tasks are:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "YZbiBDuGIrId"
},
"outputs": [],
"source": [
"GLUE_TASKS = [\"cola\", \"mnli\", \"mnli-mm\", \"mrpc\", \"qnli\", \"qqp\", \"rte\", \"sst2\", \"stsb\", \"wnli\"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"| Domain | Tasks | Model | Datasets | Workflow | Number of IPUs | Execution time |\n",
"|---------|-------|-------|----------|----------|--------------|--------------|\n",
"| Natural language processing | Text classification | roberta-base | GLUE |Fine-tuning | 4 | 15min |"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[](https://www.graphcore.ai/join-community)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Environment setup\n",
"\n",
"The best way to run this demo is on Paperspace Gradient's cloud IPUs because everything is already set up for you.\n",
"\n",
"[](https://ipu.dev/3XDBUvQ)\n",
"\n",
"To run the demo using other IPU hardware, you need to have the Poplar SDK enabled. Refer to the [Getting Started guide](https://docs.graphcore.ai/en/latest/getting-started.html#getting-started) for your system for details on how to enable the Poplar SDK. Also refer to the [Jupyter Quick Start guide](https://docs.graphcore.ai/projects/jupyter-notebook-quick-start/en/latest/index.html) for how to set up Jupyter to be able to run this notebook on a remote IPU machine."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to improve usability and support for future users, Graphcore would like to collect information about the\n",
"applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n",
"\n",
"- User progression through the notebook\n",
"- Notebook details: number of cells, code being run and the output of the cells\n",
"- Environment details\n",
"\n",
"You can disable logging at any time by running `%unload_ext graphcore_cloud_tools.notebook_logging.gc_logger` from any cell."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dependencies and configuration"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the dependencies for this notebook."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "MOsHUjgdIrIW",
"outputId": "f84a093e-147f-470e-aad9-80fb51193c8e"
},
"outputs": [],
"source": [
"%pip install \"optimum-graphcore==0.7\"\n",
"%pip install scikit-learn\n",
"%pip install graphcore-cloud-tools[logger]@git+https://github.com/graphcore/graphcore-cloud-tools\n",
"%load_ext graphcore_cloud_tools.notebook_logging.gc_logger"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "4RRkXuteIrIh"
},
"source": [
"This notebook is built to run on any of the classification tasks listed in the introduction, with any model checkpoint from the [🤗 Model Hub](https://huggingface.co/models) as long as that model has a version with a classification head and is supported by Optimum Graphcore. The IPU config files of the supported models are available in Graphcore's [🤗 account](https://huggingface.co/Graphcore). You can also create your own IPU config file locally. \n",
"\n",
"In this notebook, we use both data parallelism and pipeline parallelism (see the [tutorial on efficient data loading](https://github.com/graphcore/examples/tree/master/tutorials/pytorch/efficient_data_loading) for more information). Therefore the global batch size, which is the actual number of samples used for the weight update, is determined from three factors:\n",
"- global batch size = micro batch size * gradient accumulation steps * replication factor\n",
"\n",
"The replication factor is determined by the type of IPU Pod. The Pod type will be used as a key to select the replication factor from a dictionary defined in the IPU config file. For example, the dictionary in the IPU config file [Graphcore/roberta-base-ipu](https://huggingface.co/Graphcore/roberta-base-ipu/blob/main/ipu_config.json) looks like this:\n",
"\n",
"- `\"replication_factor\": {\"pod4\": 1, \"pod8\": 2, \"pod16\": 4, \"pod32\": 8, \"pod64\": 16, \"default\": 1}`\n",
"\n",
"Depending on your model and the IPU Pod you are using, you might need to adjust these three batch-size-related arguments.\n",
"\n",
"Finally `max_seq_length` is the length we are going to pad the sentences to, so it should not be larger than the maximum length of the model. Set these six parameters, then the rest of the notebook should run smoothly:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"id": "zVvslsfMIrIh"
},
"outputs": [],
"source": [
"task = \"cola\"\n",
"model_checkpoint = \"roberta-base\"\n",
"ipu_config_name = \"Graphcore/roberta-base-ipu\"\n",
"micro_batch_size = 1\n",
"gradient_accumulation_steps = 16\n",
"max_seq_length = 512"
]
},
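{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the batch-size arithmetic described above, the following sketch computes the global batch size from the three factors. The replication factor of 1 is an assumption taken from the `pod4` entry of the `Graphcore/roberta-base-ipu` config shown earlier; on a larger Pod it would come from the corresponding entry of that dictionary.\n",
"\n",
"```python\n",
"# Assumed replication factor for a Pod4 (the \"pod4\" entry of the IPU config above).\n",
"replication_factor = 1\n",
"\n",
"# global batch size = micro batch size * gradient accumulation steps * replication factor\n",
"global_batch_size = micro_batch_size * gradient_accumulation_steps * replication_factor\n",
"print(f\"Global batch size: {global_batch_size}\")  # 1 * 16 * 1 = 16\n",
"```"
]
},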
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Values for machine size and cache directories can be configured through environment variables or directly in the notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"n_ipu = int(os.getenv(\"NUM_AVAILABLE_IPU\", 4))\n",
"executable_cache_dir = os.getenv(\"POPLAR_EXECUTABLE_CACHE_DIR\", \"/tmp/exe_cache/\") + \"/text_classification\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sharing your model with the community"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can share your model with the 🤗 community. You do this by completing the following steps:\n",
"\n",
"1. Store your authentication token from the 🤗 website. [Sign up to 🤗](https://huggingface.co/join) if you haven't already.\n",
"2. Execute the following cell and input your username and password:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import notebook_login\n",
"\n",
"notebook_login()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then you need to install Git-LFS to manage large files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!apt install git-lfs"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "whPRbBNbIrIl"
},
"source": [
"## Loading the dataset"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "W7QYTpxXIrIl"
},
"source": [
"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we will use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "IreSlFmlIrIm"
},
"outputs": [],
"source": [
"from datasets import load_dataset, load_metric"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CKx2zKs5IrIq"
},
"source": [
"Only `mnli-mm` requires a special check, otherwise we can directly pass our task name to these functions. `load_dataset` will cache the dataset to avoid downloading it again the next time you run this cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 270,
"referenced_widgets": [
"69caab03d6264fef9fc5649bffff5e20",
"3f74532faa86412293d90d3952f38c4a",
"50615aa59c7247c4804ca5cbc7945bd7",
"fe962391292a413ca55dc932c4279fa7",
"299f4b4c07654e53a25f8192bd1d7bbd",
"ad04ed1038154081bbb0c1444784dcc2",
"7c667ad22b5740d5a6319f1b1e3a8097",
"46c2b043c0f84806978784a45a4e203b",
"80e2943be35f46eeb24c8ab13faa6578",
"de5956b5008d4fdba807bae57509c393",
"931db1f7a42f4b46b7ff8c2e1262b994",
"6c1db72efff5476e842c1386fadbbdba",
"ccd2f37647c547abb4c719b75a26f2de",
"d30a66df5c0145e79693e09789d96b81",
"5fa26fc336274073abbd1d550542ee33",
"2b34de08115d49d285def9269a53f484",
"d426be871b424affb455aeb7db5e822e",
"160bf88485f44f5cb6eaeecba5e0901f",
"745c0d47d672477b9bb0dae77b926364",
"d22ab78269cd4ccfbcf70c707057c31b",
"d298eb19eeff453cba51c2804629d3f4",
"a7204ade36314c86907c562e0a2158b8",
"e35d42b2d352498ca3fc8530393786b2",
"75103f83538d44abada79b51a1cec09e",
"f6253931d90543e9b5fd0bb2d615f73a",
"051aa783ff9e47e28d1f9584043815f5",
"0984b2a14115454bbb009df71c1cf36f",
"8ab9dfce29854049912178941ef1b289",
"c9de740e007141958545e269372780a4",
"cbea68b25d6d4ba09b2ce0f27b1726d5",
"5781fc45cf8d486cb06ed68853b2c644",
"d2a92143a08a4951b55bab9bc0a6d0d3",
"a14c3e40e5254d61ba146f6ec88eae25",
"c4ffe6f624ce4e978a0d9b864544941a",
"1aca01c1d8c940dfadd3e7144bb35718",
"9fbbaae50e6743f2aa19342152398186",
"fea27ca6c9504fc896181bc1ff5730e5",
"940d00556cb849b3a689d56e274041c2",
"5cdf9ed939fb42d4bf77301c80b8afca",
"94b39ccfef0b4b08bf2fb61bb0a657c1",
"9a55087c85b74ea08b3e952ac1d73cbe",
"2361ab124daf47cc885ff61f2899b2af",
"1a65887eb37747ddb75dc4a40f7285f2",
"3c946e2260704e6c98593136bd32d921",
"50d325cdb9844f62a9ecc98e768cb5af",
"aa781f0cfe454e9da5b53b93e9baabd8",
"6bb68d3887ef43809eb23feb467f9723",
"7e29a8b952cf4f4ea42833c8bf55342f",
"dd5997d01d8947e4b1c211433969b89b",
"2ace4dc78e2f4f1492a181bcd63304e7",
"bbee008c2791443d8610371d1f16b62b",
"31b1c8a2e3334b72b45b083688c1a20c",
"7fb7c36adc624f7dbbcb4a831c1e4f63",
"0b7c8f1939074794b3d9221244b1344d",
"a71908883b064e1fbdddb547a8c41743",
"2f5223f26c8541fc87e91d2205c39995"
]
},
"id": "s_AY1ATSIrIq",
"outputId": "fd0578d1-8895-443d-b56f-5908de9f1b6b"
},
"outputs": [],
"source": [
"actual_task = \"mnli\" if task == \"mnli-mm\" else task\n",
"dataset = load_dataset(\"glue\", actual_task)\n",
"metric = load_metric('glue', actual_task)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RzfPtOMoIrIu"
},
"source": [
"The `dataset` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test sets (with more keys for the mismatched validation and test sets in the special case of `mnli`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GWiVUF0jIrIv",
"outputId": "35e3ea43-f397-4a54-c90c-f2cf8d36873e"
},
"outputs": [],
"source": [
"dataset"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "u3EtYfeHIrIz"
},
"source": [
"To access an actual element, you need to select a split first (`train` in the example below), then specify an index:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X6HrpprwIrIz",
"outputId": "d7670bc0-42e4-4c09-8a6a-5c018ded7d95"
},
"outputs": [],
"source": [
"dataset[\"train\"][0]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "WHUmphG3IrI3"
},
"source": [
"We want to get a sense of what the data looks like, so we define the `show_random_elements` function to display some examples picked randomly from the dataset."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "i3j8APAoIrI3"
},
"outputs": [],
"source": [
"import datasets\n",
"import random\n",
"import pandas as pd\n",
"from IPython.display import display, HTML\n",
"\n",
"def show_random_elements(dataset, num_examples=10):\n",
" assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
" picks = []\n",
" for _ in range(num_examples):\n",
" pick = random.randint(0, len(dataset)-1)\n",
" while pick in picks:\n",
" pick = random.randint(0, len(dataset)-1)\n",
" picks.append(pick)\n",
" \n",
" df = pd.DataFrame(dataset[picks])\n",
" for column, typ in dataset.features.items():\n",
" if isinstance(typ, datasets.ClassLabel):\n",
" df[column] = df[column].transform(lambda i: typ.names[i])\n",
" display(HTML(df.to_html()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "SZy5tRB_IrI7",
"outputId": "ba8f2124-e485-488f-8c0c-254f34f24f13"
},
"outputs": [],
"source": [
"show_random_elements(dataset[\"train\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "lnjDIuQ3IrI-"
},
"source": [
"The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5o4rUteaIrI_",
"outputId": "18038ef5-554c-45c5-e00a-133b02ec10f1"
},
"outputs": [],
"source": [
"metric"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "jAWdqcUBIrJC"
},
"source": [
"You can call its `compute` method with your predictions and labels directly and it will return a dictionary with the metric(s) value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6XN1Rq0aIrJC",
"outputId": "a4405435-a8a9-41ff-9f79-a13077b587c7"
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"fake_preds = np.random.randint(0, 2, size=(64,))\n",
"fake_labels = np.random.randint(0, 2, size=(64,))\n",
"metric.compute(predictions=fake_preds, references=fake_labels)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "YOCrQwPoIrJG"
},
"source": [
"Note that `load_metric` has loaded the proper metric associated with your task, which is:\n",
"\n",
"- for CoLA: [Matthews Correlation Coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)\n",
"- for MNLI (matched or mismatched): Accuracy\n",
"- for MRPC: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n",
"- for QNLI: Accuracy\n",
"- for QQP: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n",
"- for RTE: Accuracy\n",
"- for SST-2: Accuracy\n",
"- for STS-B: [Pearson Correlation Coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and [Spearman's_Rank_Correlation_Coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)\n",
"- for WNLI: Accuracy\n",
"\n",
"so the metric object only computes the one(s) needed for your task."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "n9qywopnIrJH"
},
"source": [
"## Preprocessing the data"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "YVx71GdAIrJH"
},
"source": [
"Before we can feed the text samples to our model, we need to preprocess them. This is done by using a 🤗 Transformers `Tokenizer` which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary) and put it in a format the model expects, as well as generate the other inputs that the model requires.\n",
"\n",
"To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n",
"\n",
"- we get a tokenizer that corresponds to the model architecture we want to use,\n",
"- we download the vocabulary used when pre-training this specific checkpoint.\n",
"\n",
"That vocabulary will be cached, so it won't have to be downloaded again the next time we run the cell."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"id": "eXNLu_-nIrJI"
},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
" \n",
"tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Vl6IidfdIrJK"
},
"source": [
"We pass `use_fast=True` to the call above to use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, but if you got an error with the previous call, remove that argument."
]
},
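{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a minimal way to fall back to the slow (Python) tokenizer if the fast one is not available for your checkpoint:\n",
"\n",
"```python\n",
"# Explicitly request the slow tokenizer implementation instead.\n",
"tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=False)\n",
"```"
]
},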
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "rowT4iCLIrJK"
},
"source": [
"You can call this tokenizer directly on one sentence or a pair of sentences:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a5hBlsrHIrJL",
"outputId": "acdaa98a-a8cd-4a20-89b8-cc26437bbe90"
},
"outputs": [],
"source": [
"tokenizer(\"Hello, this one sentence!\", \"And this sentence goes with it.\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "qo_0B1M2IrJM"
},
"source": [
"Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here, but they are required by the model we will instantiate later. You can learn more about this in this [preprocessing tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n",
"\n",
"To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence between task and column names:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"id": "fyGdtK9oIrJM"
},
"outputs": [],
"source": [
"task_to_keys = {\n",
" \"cola\": (\"sentence\", None),\n",
" \"mnli\": (\"premise\", \"hypothesis\"),\n",
" \"mnli-mm\": (\"premise\", \"hypothesis\"),\n",
" \"mrpc\": (\"sentence1\", \"sentence2\"),\n",
" \"qnli\": (\"question\", \"sentence\"),\n",
" \"qqp\": (\"question1\", \"question2\"),\n",
" \"rte\": (\"sentence1\", \"sentence2\"),\n",
" \"sst2\": (\"sentence\", None),\n",
" \"stsb\": (\"sentence1\", \"sentence2\"),\n",
" \"wnli\": (\"sentence1\", \"sentence2\"),\n",
"}"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "xbqtC4MrIrJO"
},
"source": [
"We double check that this mapping does work on our current dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "19GG646uIrJO",
"outputId": "0cb4a520-817e-4f92-8de8-bb45df367657"
},
"outputs": [],
"source": [
"sentence1_key, sentence2_key = task_to_keys[task]\n",
"if sentence2_key is None:\n",
" print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\n",
"else:\n",
" print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\n",
" print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "2C0hcmp9IrJQ"
},
"source": [
"We write the function that will preprocess our samples. We feed them to the tokenizer with the three arguments. `padding=\"max_length\"` will ensure that an input shorter than the maximum length will be padded to the maximum length. `truncation=True` will ensure that an input longer than the maximum length will be truncated to the maximum length. `max_length=max_seq_length` sets the maximum length of a sequence.\n",
"\n",
"Note that it is necessary to pad all the sentences to the same length since currently Graphcore's PyTorch implementation only runs in static mode."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"id": "vc0BSBLIIrJQ"
},
"outputs": [],
"source": [
"def preprocess_function(examples):\n",
" if sentence2_key is None:\n",
" return tokenizer(examples[sentence1_key], padding=\"max_length\", truncation=True, max_length=max_seq_length)\n",
" return tokenizer(examples[sentence1_key], examples[sentence2_key], padding=\"max_length\", truncation=True, max_length=max_seq_length)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "0lm8ozrJIrJR"
},
"source": [
"This function works with one or several samples. In the case of several samples, the tokenizer will return a list of lists for each key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-b70jh26IrJS",
"outputId": "acd3a42d-985b-44ee-9daa-af5d944ce1d9"
},
"outputs": [],
"source": [
"preprocess_function(dataset['train'][:5])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "zS-6iXTkIrJT"
},
"source": [
"To apply this function on all the sentences (or pairs of sentences) in our dataset, we use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed with a single command."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DDtsaJeVIrJT",
"outputId": "aa4734bf-4ef5-4437-9948-2c16363da719"
},
"outputs": [],
"source": [
"encoded_dataset = dataset.map(preprocess_function, batched=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "voWiw8C7IrJV"
},
"source": [
"Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library normally detects when the function you pass to `map` has changed (and thus requires that the cached data not be used). For instance, the 🤗 Datasets library will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n",
"\n",
"Note that we passed `batched=True` to encode the text samples together into batches. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the text samples in a batch concurrently."
]
},
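{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, if you wanted to bypass the cache and rerun the preprocessing, the call would look like this:\n",
"\n",
"```python\n",
"# Ignore any cached result and apply the preprocessing function again.\n",
"encoded_dataset = dataset.map(preprocess_function, batched=True, load_from_cache_file=False)\n",
"```"
]
},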
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "545PP3o8IrJV"
},
"source": [
"## Fine-tuning the model"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "FBiW8UpKIrJW"
},
"source": [
"Now that our data is ready, we can download the pre-trained model and fine-tune it. Since all our tasks are about sentence classification, we use the `AutoModelForSequenceClassification` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which is always 2, except for STS-B which is a regression problem and MNLI where we have 3 labels):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TlqNaB8jIrJW",
"outputId": "84916cf3-6e6c-47f3-d081-032ec30a4132"
},
"outputs": [],
"source": [
"from transformers import AutoModelForSequenceClassification, default_data_collator\n",
"from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments\n",
"\n",
"num_labels = 3 if task.startswith(\"mnli\") else 1 if task==\"stsb\" else 2\n",
"model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CczA5lJlIrJX"
},
"source": [
"The warning tells us we are throwing away some weights and randomly initializing others. This is normal in this case, because we are removing the head used to pre-train the model on a masked language modelling objective and replacing it with a new head for which we don't have pre-trained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "_N8urzhyIrJY"
},
"source": [
"To instantiate the `IPUTrainer` class, we will need to define:\n",
"* `IPUConfig`, which is a class that specifies attributes and configuration parameters to compile and put the model on the device.\n",
"* `IPUTrainingArguments`, which is a class that contains all the attributes to customize the training.\n",
"\n",
"We initialize `IPUConfig` with one config name or path, which we set earlier:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"ipu_config = IPUConfig.from_pretrained(\n",
" ipu_config_name, executable_cache_dir = executable_cache_dir\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`IPUTrainingArguments` requires one folder name, which will be used to save the checkpoints of the model. All other arguments are optional:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"id": "Bliy8zgjIrJY"
},
"outputs": [],
"source": [
"metric_name = \"pearson\" if task == \"stsb\" else \"matthews_correlation\" if task == \"cola\" else \"accuracy\"\n",
"model_name = model_checkpoint.split(\"/\")[-1]\n",
"\n",
"args = IPUTrainingArguments(\n",
" \"/tmp/\"+f\"{model_name}-finetuned-{task}\",\n",
" evaluation_strategy = \"epoch\",\n",
" save_strategy = \"epoch\",\n",
" learning_rate=2e-5,\n",
" per_device_train_batch_size=micro_batch_size,\n",
" per_device_eval_batch_size=micro_batch_size,\n",
" num_train_epochs=5,\n",
" weight_decay=0.01,\n",
" load_best_model_at_end=True,\n",
" metric_for_best_model=metric_name,\n",
" dataloader_drop_last=True,\n",
" logging_steps=10,\n",
" n_ipu=n_ipu,\n",
" gradient_accumulation_steps=gradient_accumulation_steps,\n",
" push_to_hub=False,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "km3pGVdTIrJc"
},
"source": [
"Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the three batch-size-related arguments (`micro_batch_size`, `gradient_accumulation_steps`, `max_seq_length`) defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay. Since the best model might not be the one at the end of training, we ask `IPUTrainer` to load the best model it saved (according to `metric_name`) at the end of training.\n",
"\n",
"The last argument (`push_to_hub`) is to indicate that the model should be pushed to the [🤗 Models Hub](https://huggingface.co/models) regularly during training. Remove it if you didn't follow the setup steps at the top of the notebook to share your model with the community. If you want to save your model locally with a name that is different to the name of the repository it will be pushed to, or if you want to push your model under an organization and not your name space, use the `hub_model_id` argument to set the repo name. Note that this needs to be the full name, including your namespace: for instance `\"sgugger/bert-finetuned-mrpc\"` or `\"huggingface/bert-finetuned-mrpc\"`."
]
},
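{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration, the Hub-related arguments would look roughly like this (the repository name below is a placeholder, not an existing repository):\n",
"\n",
"```python\n",
"# args = IPUTrainingArguments(\n",
"#     \"/tmp/\" + f\"{model_name}-finetuned-{task}\",\n",
"#     push_to_hub=True,\n",
"#     hub_model_id=\"your-username/roberta-base-finetuned-cola\",  # placeholder repository name\n",
"#     ...  # the other arguments as in the cell above\n",
"# )\n",
"```"
]
},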
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "7sZOdRlRIrJd"
},
"source": [
"The last thing to define for `IPUTrainer` is how to compute the metrics from the predictions. We need to define a function for this, which will use `metric` which we loaded earlier. The only preprocessing we have to do is to take the argmax of our predicted logits (or squeeze the last axis in the case of STS-B):"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"id": "UmvbnJ9JIrJd"
},
"outputs": [],
"source": [
"def compute_metrics(eval_pred):\n",
" predictions, labels = eval_pred\n",
" if task != \"stsb\":\n",
" predictions = np.argmax(predictions, axis=1)\n",
" else:\n",
" predictions = predictions[:, 0]\n",
" return metric.compute(predictions=predictions, references=labels)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "rXuFTAzDIrJe"
},
"source": [
"Then we need to pass all of this along with our datasets to `IPUTrainer`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "imY1oC3SIrJf"
},
"outputs": [],
"source": [
"validation_key = \"validation_mismatched\" if task == \"mnli-mm\" else \"validation_matched\" if task == \"mnli\" else \"validation\"\n",
"trainer = IPUTrainer(\n",
" model,\n",
" ipu_config,\n",
" args,\n",
" train_dataset=encoded_dataset[\"train\"],\n",
" eval_dataset=encoded_dataset[validation_key],\n",
" compute_metrics=compute_metrics\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "ibWGmvxbIrJg"
},
"source": [
"You can customize this part by defining and passing your own `data_collator` which will receive the samples like the dictionaries seen above and will need to return a dictionary of tensors."
]
},
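{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration (the function name is ours, and the default collator already covers this case), a minimal collator could look like this:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"def simple_collator(features):\n",
"    # Stack the tokenized fields and the label into tensors, renaming\n",
"    # \"label\" to \"labels\", which is the key the model expects.\n",
"    batch = {}\n",
"    for key in (\"input_ids\", \"attention_mask\", \"label\"):\n",
"        if key in features[0]:\n",
"            values = [f[key] for f in features]\n",
"            batch[\"labels\" if key == \"label\" else key] = torch.tensor(values)\n",
"    return batch\n",
"\n",
"# trainer = IPUTrainer(..., data_collator=simple_collator, ...)\n",
"```"
]
},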
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CdzABDVcIrJg"
},
"source": [
"We can now fine-tune our model by calling the `train` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uNx5pyRlIrJh",
"outputId": "077e661e-d36c-469b-89b8-7ff7f73541ec"
},
"outputs": [],
"source": [
"trainer.train()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CKASz-2vIrJi"
},
"source": [
"We can check with the `evaluate` method that `IPUTrainer` did reload the best model properly (if it was not the last one):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UOUcBkX8IrJi",
"outputId": "de5b9dd6-9dc0-4702-cb43-55e9829fde25"
},
"outputs": [],
"source": [
"trainer.evaluate()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "ffP-VQOyIrJk"
},
"source": [
"To see how your model fared you can compare it to the [GLUE Benchmark leaderboard](https://gluebenchmark.com/leaderboard)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now upload the result of the training to the 🤗 Models Hub by running the following:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# trainer.push_to_hub()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also share this model and other users can load it with the identifier `\"your-username/the-name-you-picked\"` so for instance:\n",
"\n",
"```python\n",
"from transformers import AutoModelForSequenceClassification\n",
"\n",
"model = AutoModelForSequenceClassification.from_pretrained(\"sgugger/my-awesome-model\")\n",
"```"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Try out the other [IPU-powered Jupyter Notebooks](https://www.graphcore.ai/ipu-jupyter-notebooks) to see how how IPUs perform on other tasks."
]
}
],
"metadata": {
"colab": {
"name": "Text Classification on GLUE",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.16"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 1
}