{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "rEJBSTyZIrIb"
},
"source": [
"# Token Classification Task on IPU using BERT - Fine-tuning"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a token classification task. Token classification is the task of predicting a label for each token.\n",
"\n",
"\n",
"\n",
"The most common token classification tasks are:\n",
"\n",
"- NER (named-entity recognition): Classify the entities in the text (for example person, organization, location).\n",
"- POS (part-of-speech tagging): Grammatically classify the tokens (for example noun, verb, adjective).\n",
"- Chunk (chunking): Grammatically classify the tokens and group them into \"chunks\" that go together.\n",
"\n",
"We will see how to easily load a dataset for these kinds of tasks and use the `IPUTrainer` API to fine-tune a model on it."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"| Domain | Tasks | Model | Datasets | Workflow | Number of IPUs | Execution time |\n",
"|---------|-------|-------|----------|----------|--------------|--------------|\n",
"| Natural language processing | Token classification | Multiple | conll2003 | Fine-tuning | 4 | 8min |"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[](https://www.graphcore.ai/join-community)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Environment setup\n",
"\n",
"The best way to run this demo is on Paperspace Gradient's cloud IPUs because everything is already set up for you.\n",
"\n",
"[](https://ipu.dev/3YCsqT1)\n",
"\n",
"To run the demo using other IPU hardware, you need to have the Poplar SDK enabled. Refer to the [Getting Started guide](https://docs.graphcore.ai/en/latest/getting-started.html#getting-started) for your system for details on how to enable the Poplar SDK. Also refer to the [Jupyter Quick Start guide](https://docs.graphcore.ai/projects/jupyter-notebook-quick-start/en/latest/index.html) for how to set up Jupyter to be able to run this notebook on a remote IPU machine."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dependencies and configuration\n",
"\n",
"In order to improve usability and support for future users, Graphcore would like to collect information about the\n",
"applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n",
"\n",
"- User progression through the notebook\n",
"- Notebook details: number of cells, code being run and the output of the cells\n",
"- Environment details\n",
"\n",
"You can disable logging at any time by running `%unload_ext graphcore_cloud_tools.notebook_logging.gc_logger` from any cell."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the dependencies for this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install \"optimum-graphcore==0.7\" seqeval\n",
"%pip install graphcore-cloud-tools[logger]@git+https://github.com/graphcore/graphcore-cloud-tools\n",
"%load_ext graphcore_cloud_tools.notebook_logging.gc_logger"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is built to run on any token classification task, with any model checkpoint from the [🤗 Model Hub](https://huggingface.co/models) as long as that model has a version with a token classification head and a fast tokenizer. You can look at the [supported frameworks](https://huggingface.co/docs/transformers/index#supported-frameworks) to confirm that the model has the necessary features. The model must also be supported by Optimum Graphcore. \n",
"\n",
"The notebook might need some small adjustments to the names of the columns used you choose to use a different dataset than the one used here. \n",
"\n",
"`max_seq_length` is the length we are going to pad the sentences to, so it should not be larger than the maximum length of the model. The IPU config files of the supported models are available in Graphcore's [🤗 account](https://huggingface.co/Graphcore). You can also create your own IPU config file locally. \n",
"\n",
"In this notebook, we are using both data parallelism and pipeline parallelism. Refer to the [tutorial on efficient data loading](https://github.com/graphcore/examples/tree/master/tutorials/pytorch/efficient_data_loading) for more information. Therefore the global batch size, which is the actual number of samples used for the weight update, is calculated from three factors:\n",
"- global batch size = micro_batch_size * gradient accumulation steps * replication factor\n",
"\n",
"The replication factor is determined by the type of IPU Pod, which will be used as a key to select the replication factor from a dictionary defined in the IPU config file. For example, the dictionary in the IPU config file [Graphcore/bert-base-ipu](https://huggingface.co/Graphcore/bert-base-ipu/blob/main/ipu_config.json) looks like this:\n",
"- \"replication_factor\": {\"pod4\": 1, \"pod8\": 2, \"pod16\": 4, \"pod32\": 8, \"pod64\": 16, \"default\": 1}\n",
"\n",
"Depending on your model and the IPU Pod you are using, you might need to adjust these three batch-size-related arguments.\n",
"\n",
"Set these seven parameters, and the rest of the notebook should run smoothly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"task = \"ner\" # Should be one of \"ner\", \"pos\" or \"chunk\"\n",
"model_checkpoint = \"bert-base-uncased\"\n",
"batch_size = 16\n",
"\n",
"max_seq_length = 128\n",
"ipu_config_name = \"Graphcore/bert-base-ipu\"\n",
"micro_batch_size = 1\n",
"gradient_accumulation_steps = 16"
]
},
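{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the formula above, the cell below computes the global batch size from the parameters we just set. The replication factor used here is the example `\"pod4\"` value from the dictionary shown above; on real hardware it is selected from the IPU config file according to the Pod size."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: the replication factor below is the example \"pod4\" entry\n",
"# from the dictionary shown above. On real hardware it comes from the IPU\n",
"# config file and the Pod size you are running on.\n",
"replication_factor = {\"pod4\": 1, \"pod8\": 2, \"pod16\": 4, \"pod32\": 8, \"pod64\": 16}[\"pod4\"]\n",
"global_batch_size = micro_batch_size * gradient_accumulation_steps * replication_factor\n",
"print(f\"Global batch size: {global_batch_size}\")"
]
},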
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Values for machine size and cache directories can be configured through environment variables or directly in the notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"n_ipu = int(os.getenv(\"NUM_AVAILABLE_IPU\", 4))\n",
"executable_cache_dir = os.getenv(\"POPLAR_EXECUTABLE_CACHE_DIR\", \"/tmp/exe_cache/\") + \"/token_classification\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sharing your model with the community"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can share your model with the 🤗 community. You do this by completing the following steps:\n",
"\n",
"1. Store your authentication token from the 🤗 website. [Sign up to 🤗](https://huggingface.co/join) if you haven't already.\n",
"2. Execute the following cell and input your username and password."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import notebook_login\n",
"\n",
"notebook_login()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then you need to install Git-LFS to manage large files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!apt install git-lfs"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "whPRbBNbIrIl"
},
"source": [
"## Loading the dataset"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "W7QYTpxXIrIl"
},
"source": [
"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we will use for evaluation (to compare our model to the benchmark). This can be easily done with the `load_dataset` and `load_metric` functions. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IreSlFmlIrIm"
},
"outputs": [],
"source": [
"from datasets import load_dataset, load_metric"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CKx2zKs5IrIq"
},
"source": [
"For this notebook, we'll use the [CONLL 2003 dataset](https://www.aclweb.org/anthology/W03-0419.pdf). The notebook should work with any token classification dataset provided by the 🤗 Datasets library. If you're using your own dataset defined from a JSON or a CSV file, then you may have to make some changes to the names of the columns used. Refer to the 🤗 Datasets documentation on [loading datasets from local files](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) for more information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 270,
"referenced_widgets": [
"69caab03d6264fef9fc5649bffff5e20",
"3f74532faa86412293d90d3952f38c4a",
"50615aa59c7247c4804ca5cbc7945bd7",
"fe962391292a413ca55dc932c4279fa7",
"299f4b4c07654e53a25f8192bd1d7bbd",
"ad04ed1038154081bbb0c1444784dcc2",
"7c667ad22b5740d5a6319f1b1e3a8097",
"46c2b043c0f84806978784a45a4e203b",
"80e2943be35f46eeb24c8ab13faa6578",
"de5956b5008d4fdba807bae57509c393",
"931db1f7a42f4b46b7ff8c2e1262b994",
"6c1db72efff5476e842c1386fadbbdba",
"ccd2f37647c547abb4c719b75a26f2de",
"d30a66df5c0145e79693e09789d96b81",
"5fa26fc336274073abbd1d550542ee33",
"2b34de08115d49d285def9269a53f484",
"d426be871b424affb455aeb7db5e822e",
"160bf88485f44f5cb6eaeecba5e0901f",
"745c0d47d672477b9bb0dae77b926364",
"d22ab78269cd4ccfbcf70c707057c31b",
"d298eb19eeff453cba51c2804629d3f4",
"a7204ade36314c86907c562e0a2158b8",
"e35d42b2d352498ca3fc8530393786b2",
"75103f83538d44abada79b51a1cec09e",
"f6253931d90543e9b5fd0bb2d615f73a",
"051aa783ff9e47e28d1f9584043815f5",
"0984b2a14115454bbb009df71c1cf36f",
"8ab9dfce29854049912178941ef1b289",
"c9de740e007141958545e269372780a4",
"cbea68b25d6d4ba09b2ce0f27b1726d5",
"5781fc45cf8d486cb06ed68853b2c644",
"d2a92143a08a4951b55bab9bc0a6d0d3",
"a14c3e40e5254d61ba146f6ec88eae25",
"c4ffe6f624ce4e978a0d9b864544941a",
"1aca01c1d8c940dfadd3e7144bb35718",
"9fbbaae50e6743f2aa19342152398186",
"fea27ca6c9504fc896181bc1ff5730e5",
"940d00556cb849b3a689d56e274041c2",
"5cdf9ed939fb42d4bf77301c80b8afca",
"94b39ccfef0b4b08bf2fb61bb0a657c1",
"9a55087c85b74ea08b3e952ac1d73cbe",
"2361ab124daf47cc885ff61f2899b2af",
"1a65887eb37747ddb75dc4a40f7285f2",
"3c946e2260704e6c98593136bd32d921",
"50d325cdb9844f62a9ecc98e768cb5af",
"aa781f0cfe454e9da5b53b93e9baabd8",
"6bb68d3887ef43809eb23feb467f9723",
"7e29a8b952cf4f4ea42833c8bf55342f",
"dd5997d01d8947e4b1c211433969b89b",
"2ace4dc78e2f4f1492a181bcd63304e7",
"bbee008c2791443d8610371d1f16b62b",
"31b1c8a2e3334b72b45b083688c1a20c",
"7fb7c36adc624f7dbbcb4a831c1e4f63",
"0b7c8f1939074794b3d9221244b1344d",
"a71908883b064e1fbdddb547a8c41743",
"2f5223f26c8541fc87e91d2205c39995"
]
},
"id": "s_AY1ATSIrIq",
"outputId": "fd0578d1-8895-443d-b56f-5908de9f1b6b"
},
"outputs": [],
"source": [
"datasets = load_dataset(\"conll2003\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RzfPtOMoIrIu"
},
"source": [
"The `datasets` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test sets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GWiVUF0jIrIv",
"outputId": "35e3ea43-f397-4a54-c90c-f2cf8d36873e"
},
"outputs": [],
"source": [
"datasets"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see the training, validation and test sets all have a column for the tokens (the input text split into words) and one column of labels for each kind of task we listed in the introduction."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "u3EtYfeHIrIz"
},
"source": [
"To access an actual element, you need to select a split (\"train\" in the example), then specify an index:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X6HrpprwIrIz",
"outputId": "d7670bc0-42e4-4c09-8a6a-5c018ded7d95"
},
"outputs": [],
"source": [
"datasets[\"train\"][0]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The labels are already coded as integer IDs to be easily usable by our model, but the correspondence with the actual categories is stored in the `features` of the dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datasets[\"train\"].features[f\"ner_tags\"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"So for the NER tags, 0 corresponds to 'O', 1 to 'B-PER' and so on. In addition to 'O' (which means no special entity), there are four labels for NER. Each label is prefixed with 'B-' (for beginning) or 'I-' (for intermediate) that indicate if the token is the first one for the current group with the label or not:\n",
"- 'PER' for person\n",
"- 'ORG' for organization\n",
"- 'LOC' for location\n",
"- 'MISC' for miscellaneous"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the labels are lists of `ClassLabel`, the actual names of the labels are nested in the `feature` attribute of the `datasets` object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"label_list = datasets[\"train\"].features[f\"{task}_tags\"].feature.names\n",
"label_list"
]
},
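{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of this mapping, we can pair the tokens of the first training example with their decoded string labels:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"first_example = datasets[\"train\"][0]\n",
"list(zip(first_example[\"tokens\"], [label_list[i] for i in first_example[f\"{task}_tags\"]]))"
]
},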
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "WHUmphG3IrI3"
},
"source": [
"We want to get a sense of what the data looks like, so we define the `show_random_elements` function to display some examples picked randomly from the dataset (automatically decoding the labels in passing)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "i3j8APAoIrI3"
},
"outputs": [],
"source": [
"from datasets import ClassLabel, Sequence\n",
"import random\n",
"import pandas as pd\n",
"from IPython.display import display, HTML\n",
"\n",
"def show_random_elements(dataset, num_examples=10):\n",
" assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
" picks = []\n",
" for _ in range(num_examples):\n",
" pick = random.randint(0, len(dataset)-1)\n",
" while pick in picks:\n",
" pick = random.randint(0, len(dataset)-1)\n",
" picks.append(pick)\n",
" \n",
" df = pd.DataFrame(dataset[picks])\n",
" for column, typ in dataset.features.items():\n",
" if isinstance(typ, ClassLabel):\n",
" df[column] = df[column].transform(lambda i: typ.names[i])\n",
" elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):\n",
" df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])\n",
" display(HTML(df.to_html()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "SZy5tRB_IrI7",
"outputId": "ba8f2124-e485-488f-8c0c-254f34f24f13",
"scrolled": true
},
"outputs": [],
"source": [
"show_random_elements(datasets[\"train\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "n9qywopnIrJH"
},
"source": [
"## Preprocessing the data"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "YVx71GdAIrJH"
},
"source": [
"Before we can feed these text samples to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary) and put it in a format the model expects, as well as generate the other inputs that the model requires.\n",
"\n",
"To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n",
"\n",
"- We get a tokenizer that corresponds to the model architecture we want to use,\n",
"- We download the vocabulary used when pre-training this specific checkpoint.\n",
"\n",
"This vocabulary will be cached, so it won't be downloaded again the next time we run the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eXNLu_-nIrJI"
},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
" \n",
"tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Vl6IidfdIrJK"
},
"source": [
"The following assertion ensures that our tokenizer is a fast tokenizer (backed by Rust) from the 🤗 Tokenizers library. Fast tokenizers are available for almost all models, and we will need some of the special features they have for our preprocessing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import transformers\n",
"assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can check which type of models have a fast tokenizer available and which don't in the [Supported frameworks table](https://huggingface.co/docs/transformers/index#supported-frameworks)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "rowT4iCLIrJK"
},
"source": [
"You can directly call this tokenizer on one sentence:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a5hBlsrHIrJL",
"outputId": "acdaa98a-a8cd-4a20-89b8-cc26437bbe90"
},
"outputs": [],
"source": [
"tokenizer(\"Hello, this is one sentence!\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. The details don't matter much for what we're doing here, but they are required by the model we will instantiate later. You can learn more about them in the [tutorial on preprocessing](https://huggingface.co/transformers/preprocessing.html).\n",
"\n",
"If, as is the case here, your inputs have already been split into words, you should pass the list of words to your tokenizer with the argument `is_split_into_words=True`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tokenizer([\"Hello\", \",\", \"this\", \"is\", \"one\", \"sentence\", \"split\", \"into\", \"words\", \".\"], is_split_into_words=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that transformers are often pre-trained with subword tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"example = datasets[\"train\"][4]\n",
"print(example[\"tokens\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tokenized_input = tokenizer(example[\"tokens\"], is_split_into_words=True)\n",
"tokens = tokenizer.convert_ids_to_tokens(tokenized_input[\"input_ids\"])\n",
"print(tokens)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here the words \"Zwingmann\" and \"sheepmeat\" have been split into three subtokens.\n",
"\n",
"This means that we need to do some processing on our labels as the input IDs returned by the tokenizer are longer than the lists of labels our dataset contains. This is necessary because some special tokens might be added (we can a `[CLS]` and a `[SEP]` above) and also because of any possible splits of words into multiple tokens:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(example[f\"{task}_tags\"]), len(tokenized_input[\"input_ids\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Thankfully, the tokenizer returns outputs that have a `word_ids` method which can help us."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(tokenized_input.word_ids())"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, it returns a list with the same number of elements as our processed input IDs, mapping special tokens to `None` and all other tokens to their respective word. This means we can align the labels with the processed input IDs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"word_ids = tokenized_input.word_ids()\n",
"aligned_labels = [-100 if i is None else example[f\"{task}_tags\"][i] for i in word_ids]\n",
"print(len(aligned_labels), len(tokenized_input[\"input_ids\"]))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"There are two strategies to use, selected with the `label_all_tokens` parameter. Firstly, we can set the labels of all special tokens to -100 (the index that is ignored by PyTorch) and the labels of all other tokens to the label of the word they come from. Then we can set the label only on the first token obtained from a given word, and give a label of -100 to the other subtokens from the same word. We use the first strategy here. If you want to use the second strategy, set `label_all_tokens` to False."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"label_all_tokens = True"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "2C0hcmp9IrJQ"
},
"source": [
"We're now ready to write the function that will preprocess our samples. We feed them to the `tokenizer` with the argument `padding=\"max_length\"` (to ensure that an input shorter than the maximum length will be padded to the maximum length), `truncation=True` (to truncate text samples that are longer than the maximum length), `max_length=max_seq_length` (to set the maximum length of a sequence) and `is_split_into_words=True` (as seen above). Note that it is necessary that all the sentences have the same length since currently Graphcore's PyTorch implementation only runs in static mode. Then we align the labels with the token IDs using the strategy we picked:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vc0BSBLIIrJQ"
},
"outputs": [],
"source": [
"def tokenize_and_align_labels(examples):\n",
" tokenized_inputs = tokenizer(\n",
" examples[\"tokens\"],\n",
" padding=\"max_length\",\n",
" truncation=True,\n",
" max_length=max_seq_length,\n",
" is_split_into_words=True\n",
" )\n",
"\n",
" labels = []\n",
" for i, label in enumerate(examples[f\"{task}_tags\"]):\n",
" word_ids = tokenized_inputs.word_ids(batch_index=i)\n",
" previous_word_idx = None\n",
" label_ids = []\n",
" for word_idx in word_ids:\n",
" # Special tokens have a word ID that is None. We set the label to -100 so they are automatically\n",
" # ignored in the loss function.\n",
" if word_idx is None:\n",
" label_ids.append(-100)\n",
" # We set the label for the first token of each word.\n",
" elif word_idx != previous_word_idx:\n",
" label_ids.append(label[word_idx])\n",
" # For the other tokens in a word, we set the label to either the current label or -100, depending on\n",
" # the label_all_tokens flag.\n",
" else:\n",
" label_ids.append(label[word_idx] if label_all_tokens else -100)\n",
" previous_word_idx = word_idx\n",
"\n",
" labels.append(label_ids)\n",
"\n",
" tokenized_inputs[\"labels\"] = labels\n",
" return tokenized_inputs"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "0lm8ozrJIrJR"
},
"source": [
"This function works with one or several samples. In the case of several samples, the tokenizer will return a list of lists for each key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-b70jh26IrJS",
"outputId": "acd3a42d-985b-44ee-9daa-af5d944ce1d9"
},
"outputs": [],
"source": [
"tokenize_and_align_labels(datasets['train'][:5])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "zS-6iXTkIrJT"
},
"source": [
"To apply this function on all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of the `dataset` object we created earlier. This will apply the function to all elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed with a single command."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DDtsaJeVIrJT",
"outputId": "aa4734bf-4ef5-4437-9948-2c16363da719"
},
"outputs": [],
"source": [
"tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "voWiw8C7IrJV"
},
"source": [
"Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is able to detect when the function you pass to `map` has changed (and thus to not use the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files. You can also pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n",
"\n",
"Note that we passed `batched=True` to encode the text samples together into batches. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the text samples in a batch concurrently."
]
},
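{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, to force the preprocessing to run again instead of reusing the cache, the call to `map` would look like this (not run here, since reusing the cache is what we want):\n",
"\n",
"```python\n",
"tokenized_datasets = datasets.map(\n",
"    tokenize_and_align_labels, batched=True, load_from_cache_file=False\n",
")\n",
"```"
]
},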
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "545PP3o8IrJV"
},
"source": [
"## Fine-tuning the model"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "FBiW8UpKIrJW"
},
"source": [
"Now that our data is ready, we can download the pre-trained model and fine-tune it. Since we are focussing on token classification tasks, we use the `AutoModelForTokenClassification` class. As with the tokenizer, the `from_pretrained` method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which we can get from the features, as seen before):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TlqNaB8jIrJW",
"outputId": "84916cf3-6e6c-47f3-d081-032ec30a4132"
},
"outputs": [],
"source": [
"from transformers import AutoModelForTokenClassification\n",
"from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments\n",
"\n",
"model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_list))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CczA5lJlIrJX"
},
"source": [
"The warning tells us that we are throwing away some weights and randomly initializing others. This is normal in this case, because we are removing the head used to pre-train the model on a masked language modelling objective and replacing it with a new head for which we don't have pre-trained weights, so the library warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "_N8urzhyIrJY"
},
"source": [
"To instantiate an `IPUTrainer` class, we will need to define: \n",
"* `IPUConfig`, which is a class that specifies attributes and configuration parameters to compile and put the model on the device.\n",
"* `IPUTrainingArguments`, which is a class that contains all the attributes to customize the training.\n",
"* A data collator.\n",
"* How to compute the metrics from the predictions.\n",
"\n",
"We initialize `IPUConfig` with a config name or path, which we set earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ipu_config = IPUConfig.from_pretrained(\n",
" ipu_config_name, executable_cache_dir=executable_cache_dir\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`IPUTrainingArguments` requires a folder name, which will be used to save the checkpoints of the model. All other arguments are optional:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Bliy8zgjIrJY"
},
"outputs": [],
"source": [
"model_name = model_checkpoint.split(\"/\")[-1]\n",
"args = IPUTrainingArguments(\n",
" f\"{model_name}-finetuned-{task}\",\n",
" evaluation_strategy = \"epoch\",\n",
" learning_rate=2e-5,\n",
" per_device_train_batch_size=micro_batch_size,\n",
" per_device_eval_batch_size=micro_batch_size,\n",
" gradient_accumulation_steps=gradient_accumulation_steps,\n",
" n_ipu=n_ipu,\n",
" num_train_epochs=3,\n",
" weight_decay=0.01,\n",
" dataloader_drop_last=True,\n",
" logging_steps=10,\n",
" push_to_hub=False,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "km3pGVdTIrJc"
},
"source": [
"Here we have set the evaluation to be done at the end of each epoch, tweak the learning rate, use the three batch-size-related arguments defined earlier in the notebook and customize the number of epochs for training, as well as the weight decay.\n",
"\n",
"`push_to_hub` in `IPUTrainingArguments` is necessary if we want to push the model to the [🤗 Model hub](https://huggingface.co/models) regularly during training. You can remove them if you didn't follow the installation steps at the beginning of this notebook. If you want to save your model locally to a name that is different to the name of the repository it will be pushed to, or if you want to push your model under an organization and not your name space, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `\"sgugger/bert-finetuned-ner\"` or `\"huggingface/bert-finetuned-ner\"`)."
]
},
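{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, pushing to the Hub under a specific repository name might look like this (a sketch with a hypothetical `hub_model_id`; not run here, since we set `push_to_hub=False`):\n",
"\n",
"```python\n",
"args = IPUTrainingArguments(\n",
"    f\"{model_name}-finetuned-{task}\",\n",
"    push_to_hub=True,\n",
"    hub_model_id=\"your-username/bert-finetuned-ner\",  # hypothetical namespace/repo\n",
")\n",
"```"
]
},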
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we will need a data collator that does not do any additional preprocessing. This is because we have already done the padding during the earlier preprocessing stage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import DefaultDataCollator\n",
"\n",
"data_collator = DefaultDataCollator()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to compute the metrics from the predictions, we will load the [`seqeval`](https://github.com/chakki-works/seqeval) metric (which is commonly used to evaluate results on the CONLL dataset) via the Datasets library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = load_metric(\"seqeval\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This metric takes a list of labels for the predictions and references:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"labels = [label_list[i] for i in example[f\"{task}_tags\"]]\n",
"metric.compute(predictions=[labels], references=[labels])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "7sZOdRlRIrJd"
},
"source": [
"So we will need to do a bit of post-processing on our predictions:\n",
"- Select the predicted index (with the maximum logit) for each token.\n",
"- Convert it to its string label.\n",
"- Ignore data where we have set a label of -100.\n",
"\n",
"The following function does all this post-processing on the result of `IPUTrainer.evaluate` (which is a named tuple containing predictions and labels) before applying the metric:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UmvbnJ9JIrJd"
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def compute_metrics(p):\n",
" predictions, labels = p\n",
" predictions = np.argmax(predictions, axis=2)\n",
"\n",
" # Remove ignored index (special tokens)\n",
" true_predictions = [\n",
" [label_list[p] for (p, l) in zip(prediction, label) if l != -100]\n",
" for prediction, label in zip(predictions, labels)\n",
" ]\n",
" true_labels = [\n",
" [label_list[l] for (p, l) in zip(prediction, label) if l != -100]\n",
" for prediction, label in zip(predictions, labels)\n",
" ]\n",
"\n",
" results = metric.compute(predictions=true_predictions, references=true_labels)\n",
" return {\n",
" \"precision\": results[\"overall_precision\"],\n",
" \"recall\": results[\"overall_recall\"],\n",
" \"f1\": results[\"overall_f1\"],\n",
" \"accuracy\": results[\"overall_accuracy\"],\n",
" }"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "rXuFTAzDIrJe"
},
"source": [
"Note that we drop the precision/recall/f1 computed for each category and only focus on the overall precision/recall/f1/accuracy.\n",
"\n",
"Now we are ready to instantiate our `IPUTrainer` class. We pass all of this along with our datasets to `IPUTrainer`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "imY1oC3SIrJf"
},
"outputs": [],
"source": [
"trainer = IPUTrainer(\n",
" model,\n",
" ipu_config,\n",
" args,\n",
" train_dataset=tokenized_datasets[\"train\"],\n",
" eval_dataset=tokenized_datasets[\"validation\"],\n",
" data_collator=data_collator,\n",
" compute_metrics=compute_metrics\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CdzABDVcIrJg"
},
"source": [
"We can now fine-tune our model by calling the `train` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trainer.train()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CKASz-2vIrJi"
},
"source": [
"The `evaluate` method allows you to run the evaluation again on the evaluation dataset or on another dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UOUcBkX8IrJi",
"outputId": "de5b9dd6-9dc0-4702-cb43-55e9829fde25"
},
"outputs": [],
"source": [
"trainer.evaluate()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To get the precision/recall/f1 computed for each category now that we have finished training, we can apply the same function as before on the result of the `predict` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predictions, labels, _ = trainer.predict(tokenized_datasets[\"validation\"])\n",
"predictions = np.argmax(predictions, axis=2)\n",
"\n",
"# Remove ignored index (special tokens)\n",
"true_predictions = [\n",
" [label_list[p] for (p, l) in zip(prediction, label) if l != -100]\n",
" for prediction, label in zip(predictions, labels)\n",
"]\n",
"true_labels = [\n",
" [label_list[l] for (p, l) in zip(prediction, label) if l != -100]\n",
" for prediction, label in zip(predictions, labels)\n",
"]\n",
"\n",
"results = metric.compute(predictions=true_predictions, references=true_labels)\n",
"results"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now upload the result of the training to the 🤗 Hub:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# trainer.push_to_hub()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also share this model and other users can load it with the identifier \"your-username/the-name-you-picked\" so for instance:\n",
"\n",
"```python\n",
"from transformers import AutoModelForTokenClassification\n",
"\n",
"model = AutoModelForTokenClassification.from_pretrained(\"sgugger/my-awesome-model\")\n",
"```"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Try out the other [IPU-powered Jupyter Notebooks](https://www.graphcore.ai/ipu-jupyter-notebooks) to see how how IPUs perform on other tasks."
]
}
],
"metadata": {
"colab": {
"name": "Token Classification",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3.8.10 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 1
}