{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Transformers installation\n",
"! pip install transformers datasets evaluate accelerate\n",
"# To install from source instead of the last release, comment the command above and uncomment the following one.\n",
"# ! pip install git+https://github.com/huggingface/transformers.git"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Token classification"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"hide_input": true
},
"outputs": [
{
"data": {
"text/html": [
"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/wVHdVlPScxA?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#@title\n",
"from IPython.display import HTML\n",
"\n",
"HTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/wVHdVlPScxA?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.\n",
"\n",
"This guide will show you how to:\n",
"\n",
"1. Finetune [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities.\n",
"2. Use your finetuned model for inference.\n",
"\n",
"<Tip>\n",
"\n",
"To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/token-classification).\n",
"\n",
"</Tip>\n",
"\n",
"Before you begin, make sure you have all the necessary libraries installed:\n",
"\n",
"```bash\n",
"pip install transformers datasets evaluate seqeval\n",
"```\n",
"\n",
"We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import notebook_login\n",
"\n",
"notebook_login()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load WNUT 17 dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Start by loading the WNUT 17 dataset from the 🤗 Datasets library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datasets import load_dataset\n",
"\n",
"wnut = load_dataset(\"wnut_17\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then take a look at an example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': '0',\n",
" 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],\n",
" 'tokens': ['@paulwalk', 'It', \"'s\", 'the', 'view', 'from', 'where', 'I', \"'m\", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']\n",
"}"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"wnut[\"train\"][0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\n",
" \"O\",\n",
" \"B-corporation\",\n",
" \"I-corporation\",\n",
" \"B-creative-work\",\n",
" \"I-creative-work\",\n",
" \"B-group\",\n",
" \"I-group\",\n",
" \"B-location\",\n",
" \"I-location\",\n",
" \"B-person\",\n",
" \"I-person\",\n",
" \"B-product\",\n",
" \"I-product\",\n",
"]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"label_list = wnut[\"train\"].features[f\"ner_tags\"].feature.names\n",
"label_list"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The letter that prefixes each `ner_tag` indicates the token position of the entity:\n",
"\n",
"- `B-` indicates the beginning of an entity.\n",
"- `I-` indicates a token is contained inside the same entity (for example, the `State` token is a part of an entity like\n",
" `Empire State Building`).\n",
"- `0` indicates the token doesn't correspond to any entity."
]
},
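{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the tagging scheme in action, you can pair each word in the first training example with its label name (a quick sketch that only reuses the `wnut` and `label_list` objects loaded above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Pair each word in the first training example with its label name.\n",
"example = wnut[\"train\"][0]\n",
"for token, tag in zip(example[\"tokens\"], example[\"ner_tags\"]):\n",
"    print(f\"{token:<20} {label_list[tag]}\")"
]
},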
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Preprocess"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"hide_input": true
},
"outputs": [
{
"data": {
"text/html": [
"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/iY2AZYdZAr0?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#@title\n",
"from IPython.display import HTML\n",
"\n",
"HTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/iY2AZYdZAr0?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"distilbert/distilbert-base-uncased\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['[CLS]', '@', 'paul', '##walk', 'it', \"'\", 's', 'the', 'view', 'from', 'where', 'i', \"'\", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example = wnut[\"train\"][0]\n",
"tokenized_input = tokenizer(example[\"tokens\"], is_split_into_words=True)\n",
"tokens = tokenizer.convert_ids_to_tokens(tokenized_input[\"input_ids\"])\n",
"tokens"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, this adds some special tokens `[CLS]` and `[SEP]` and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by:\n",
"\n",
"1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method.\n",
"2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function (see [CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)).\n",
"3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word.\n",
"\n",
"Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def tokenize_and_align_labels(examples):\n",
" tokenized_inputs = tokenizer(examples[\"tokens\"], truncation=True, is_split_into_words=True)\n",
"\n",
" labels = []\n",
" for i, label in enumerate(examples[f\"ner_tags\"]):\n",
" word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word.\n",
" previous_word_idx = None\n",
" label_ids = []\n",
" for word_idx in word_ids: # Set the special tokens to -100.\n",
" if word_idx is None:\n",
" label_ids.append(-100)\n",
" elif word_idx != previous_word_idx: # Only label the first token of a given word.\n",
" label_ids.append(label[word_idx])\n",
" else:\n",
" label_ids.append(-100)\n",
" previous_word_idx = word_idx\n",
" labels.append(label_ids)\n",
"\n",
" tokenized_inputs[\"labels\"] = labels\n",
" return tokenized_inputs"
]
},
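{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before mapping the function over the whole dataset, you can sanity-check the alignment on a single example. The sketch below is only an illustration: it shows that special tokens and trailing subwords receive the label `-100`, while the first subword of each word keeps the original label:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Run the alignment function on the first training example (batched format).\n",
"aligned = tokenize_and_align_labels(wnut[\"train\"][:1])\n",
"for token, label in zip(tokenizer.convert_ids_to_tokens(aligned[\"input_ids\"][0]), aligned[\"labels\"][0]):\n",
"    print(f\"{token:<15} {label}\")"
]
},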
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To apply the preprocessing function over the entire dataset, use 🤗 Datasets [map](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now create a batch of examples using [DataCollatorWithPadding](https://huggingface.co/docs/transformers/main/en/main_classes/data_collator#transformers.DataCollatorWithPadding). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import DataCollatorForTokenClassification\n",
"\n",
"data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import DataCollatorForTokenClassification\n",
"\n",
"data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors=\"tf\")"
]
},
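{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the dynamic padding in action, you can collate two tokenized examples and check that both rows are padded to the longer sequence in the batch, with the extra label positions filled with `-100`. This is a small sketch that builds a separate NumPy collator so it works the same whichever framework you picked above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import DataCollatorForTokenClassification\n",
"\n",
"# Keep only the model inputs from two tokenized examples.\n",
"keys = (\"input_ids\", \"attention_mask\", \"labels\")\n",
"features = [{k: tokenized_wnut[\"train\"][i][k] for k in keys} for i in range(2)]\n",
"\n",
"demo_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors=\"np\")\n",
"batch = demo_collator(features)\n",
"print(batch[\"input_ids\"].shape)  # both rows are padded to the longer sequence\n",
"print(batch[\"labels\"])           # the shorter row is padded on the right with -100"
]
},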
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Including a metric during training is often helpful for evaluating your model's performance. You can quickly load a evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric). Seqeval actually produces several scores: precision, recall, F1, and accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import evaluate\n",
"\n",
"seqeval = evaluate.load(\"seqeval\")"
]
},
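{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get a feel for what seqeval returns, you can call it on a small hand-made example (a toy sketch, not part of the WNUT evaluation itself). It reports per-entity scores as well as the overall precision, recall, F1, and accuracy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy example: the location entity is predicted correctly, the person entity is missed.\n",
"toy_predictions = [[\"O\", \"B-location\", \"I-location\", \"O\", \"O\"]]\n",
"toy_references = [[\"O\", \"B-location\", \"I-location\", \"O\", \"B-person\"]]\n",
"seqeval.compute(predictions=toy_predictions, references=toy_references)"
]
},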
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get the NER labels first, and then create a function that passes your true predictions and true labels to [compute](https://huggingface.co/docs/evaluate/main/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the scores:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"labels = [label_list[i] for i in example[f\"ner_tags\"]]\n",
"\n",
"\n",
"def compute_metrics(p):\n",
" predictions, labels = p\n",
" predictions = np.argmax(predictions, axis=2)\n",
"\n",
" true_predictions = [\n",
" [label_list[p] for (p, l) in zip(prediction, label) if l != -100]\n",
" for prediction, label in zip(predictions, labels)\n",
" ]\n",
" true_labels = [\n",
" [label_list[l] for (p, l) in zip(prediction, label) if l != -100]\n",
" for prediction, label in zip(predictions, labels)\n",
" ]\n",
"\n",
" results = seqeval.compute(predictions=true_predictions, references=true_labels)\n",
" return {\n",
" \"precision\": results[\"overall_precision\"],\n",
" \"recall\": results[\"overall_recall\"],\n",
" \"f1\": results[\"overall_f1\"],\n",
" \"accuracy\": results[\"overall_accuracy\"],\n",
" }"
]
},
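{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick smoke test (with made-up numbers, purely for illustration), `compute_metrics` expects a tuple of raw logits with shape `(batch_size, sequence_length, num_labels)` and label ids that use `-100` for ignored positions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fake logits for one sequence of 4 tokens over the 13 WNUT labels.\n",
"rng = np.random.default_rng(0)\n",
"fake_logits = rng.random((1, 4, len(label_list)))\n",
"fake_labels = np.array([[-100, 7, 8, -100]])  # [CLS], B-location, I-location, [SEP]\n",
"compute_metrics((fake_logits, fake_labels))"
]
},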
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Your `compute_metrics` function is ready to go now, and you'll return to it when you setup your training."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"id2label = {\n",
" 0: \"O\",\n",
" 1: \"B-corporation\",\n",
" 2: \"I-corporation\",\n",
" 3: \"B-creative-work\",\n",
" 4: \"I-creative-work\",\n",
" 5: \"B-group\",\n",
" 6: \"I-group\",\n",
" 7: \"B-location\",\n",
" 8: \"I-location\",\n",
" 9: \"B-person\",\n",
" 10: \"I-person\",\n",
" 11: \"B-product\",\n",
" 12: \"I-product\",\n",
"}\n",
"label2id = {\n",
" \"O\": 0,\n",
" \"B-corporation\": 1,\n",
" \"I-corporation\": 2,\n",
" \"B-creative-work\": 3,\n",
" \"I-creative-work\": 4,\n",
" \"B-group\": 5,\n",
" \"I-group\": 6,\n",
" \"B-location\": 7,\n",
" \"I-location\": 8,\n",
" \"B-person\": 9,\n",
" \"I-person\": 10,\n",
" \"B-product\": 11,\n",
" \"I-product\": 12,\n",
"}"
]
},
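{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since these mappings mirror `label_list`, you could also build them programmatically instead of writing them out by hand (an equivalent sketch):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Equivalent construction from the dataset's label names.\n",
"id2label = {i: label for i, label in enumerate(label_list)}\n",
"label2id = {label: i for i, label in enumerate(label_list)}"
]
},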
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<Tip>\n",
"\n",
"If you aren't familiar with finetuning a model with the [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](https://huggingface.co/docs/transformers/main/en/tasks/../training#train-with-pytorch-trainer)!\n",
"\n",
"</Tip>\n",
"\n",
"You're ready to start training your model now! Load DistilBERT with [AutoModelForTokenClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForTokenClassification) along with the number of expected labels, and the label mappings:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer\n",
"\n",
"model = AutoModelForTokenClassification.from_pretrained(\n",
" \"distilbert/distilbert-base-uncased\", num_labels=13, id2label=id2label, label2id=label2id\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, only three steps remain:\n",
"\n",
"1. Define your training hyperparameters in [TrainingArguments](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) will evaluate the seqeval scores and save the training checkpoint.\n",
"2. Pass the training arguments to [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.\n",
"3. Call [train()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.train) to finetune your model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_args = TrainingArguments(\n",
" output_dir=\"my_awesome_wnut_model\",\n",
" learning_rate=2e-5,\n",
" per_device_train_batch_size=16,\n",
" per_device_eval_batch_size=16,\n",
" num_train_epochs=2,\n",
" weight_decay=0.01,\n",
" eval_strategy=\"epoch\",\n",
" save_strategy=\"epoch\",\n",
" load_best_model_at_end=True,\n",
" push_to_hub=True,\n",
")\n",
"\n",
"trainer = Trainer(\n",
" model=model,\n",
" args=training_args,\n",
" train_dataset=tokenized_wnut[\"train\"],\n",
" eval_dataset=tokenized_wnut[\"test\"],\n",
" processing_class=tokenizer,\n",
" data_collator=data_collator,\n",
" compute_metrics=compute_metrics,\n",
")\n",
"\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once training is completed, share your model to the Hub with the [push_to_hub()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trainer.push_to_hub()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<Tip>\n",
"\n",
"If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](https://huggingface.co/docs/transformers/main/en/tasks/../training#train-a-tensorflow-model-with-keras)!\n",
"\n",
"</Tip>\n",
"To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import create_optimizer\n",
"\n",
"batch_size = 16\n",
"num_train_epochs = 3\n",
"num_train_steps = (len(tokenized_wnut[\"train\"]) // batch_size) * num_train_epochs\n",
"optimizer, lr_schedule = create_optimizer(\n",
" init_lr=2e-5,\n",
" num_train_steps=num_train_steps,\n",
" weight_decay_rate=0.01,\n",
" num_warmup_steps=0,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then you can load DistilBERT with [TFAutoModelForTokenClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModelForTokenClassification) along with the number of expected labels, and the label mappings:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import TFAutoModelForTokenClassification\n",
"\n",
"model = TFAutoModelForTokenClassification.from_pretrained(\n",
" \"distilbert/distilbert-base-uncased\", num_labels=13, id2label=id2label, label2id=label2id\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf_train_set = model.prepare_tf_dataset(\n",
" tokenized_wnut[\"train\"],\n",
" shuffle=True,\n",
" batch_size=16,\n",
" collate_fn=data_collator,\n",
")\n",
"\n",
"tf_validation_set = model.prepare_tf_dataset(\n",
" tokenized_wnut[\"validation\"],\n",
" shuffle=False,\n",
" batch_size=16,\n",
" collate_fn=data_collator,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"model.compile(optimizer=optimizer) # No loss argument!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The last two things to setup before you start training is to compute the seqeval scores from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](https://huggingface.co/docs/transformers/main/en/tasks/../main_classes/keras_callbacks).\n",
"\n",
"Pass your `compute_metrics` function to [KerasMetricCallback](https://huggingface.co/docs/transformers/main/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers.keras_callbacks import KerasMetricCallback\n",
"\n",
"metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Specify where to push your model and tokenizer in the [PushToHubCallback](https://huggingface.co/docs/transformers/main/en/main_classes/keras_callbacks#transformers.PushToHubCallback):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers.keras_callbacks import PushToHubCallback\n",
"\n",
"push_to_hub_callback = PushToHubCallback(\n",
" output_dir=\"my_awesome_wnut_model\",\n",
" tokenizer=tokenizer,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then bundle your callbacks together:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"callbacks = [metric_callback, push_to_hub_callback]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!\n",
"\n",
"<Tip>\n",
"\n",
"For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding\n",
"[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)\n",
"or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).\n",
"\n",
"</Tip>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inference"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, now that you've finetuned a model, you can use it for inference!\n",
"\n",
"Grab some text you'd like to run inference on:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text = \"The Golden State Warriors are an American professional basketball team based in San Francisco.\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for NER with your model, and pass your text to it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'entity': 'B-location',\n",
" 'score': 0.42658573,\n",
" 'index': 2,\n",
" 'word': 'golden',\n",
" 'start': 4,\n",
" 'end': 10},\n",
" {'entity': 'I-location',\n",
" 'score': 0.35856336,\n",
" 'index': 3,\n",
" 'word': 'state',\n",
" 'start': 11,\n",
" 'end': 16},\n",
" {'entity': 'B-group',\n",
" 'score': 0.3064001,\n",
" 'index': 4,\n",
" 'word': 'warriors',\n",
" 'start': 17,\n",
" 'end': 25},\n",
" {'entity': 'B-location',\n",
" 'score': 0.65523505,\n",
" 'index': 13,\n",
" 'word': 'san',\n",
" 'start': 80,\n",
" 'end': 83},\n",
" {'entity': 'B-location',\n",
" 'score': 0.4668663,\n",
" 'index': 14,\n",
" 'word': 'francisco',\n",
" 'start': 84,\n",
" 'end': 93}]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from transformers import pipeline\n",
"\n",
"classifier = pipeline(\"ner\", model=\"stevhliu/my_awesome_wnut_model\")\n",
"classifier(text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also manually replicate the results of the `pipeline` if you'd like:\n",
"\n",
"Tokenize the text and return PyTorch tensors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"stevhliu/my_awesome_wnut_model\")\n",
"inputs = tokenizer(text, return_tensors=\"pt\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pass your inputs to the model and return the `logits`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoModelForTokenClassification\n",
"\n",
"model = AutoModelForTokenClassification.from_pretrained(\"stevhliu/my_awesome_wnut_model\")\n",
"with torch.no_grad():\n",
" logits = model(**inputs).logits"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['O',\n",
" 'O',\n",
" 'B-location',\n",
" 'I-location',\n",
" 'B-group',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'B-location',\n",
" 'B-location',\n",
" 'O',\n",
" 'O']"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"predictions = torch.argmax(logits, dim=2)\n",
"predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]\n",
"predicted_token_class"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tokenize the text and return TensorFlow tensors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"stevhliu/my_awesome_wnut_model\")\n",
"inputs = tokenizer(text, return_tensors=\"tf\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pass your inputs to the model and return the `logits`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import TFAutoModelForTokenClassification\n",
"\n",
"model = TFAutoModelForTokenClassification.from_pretrained(\"stevhliu/my_awesome_wnut_model\")\n",
"logits = model(**inputs).logits"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['O',\n",
" 'O',\n",
" 'B-location',\n",
" 'I-location',\n",
" 'B-group',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'O',\n",
" 'B-location',\n",
" 'B-location',\n",
" 'O',\n",
" 'O']"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"predicted_token_class_ids = tf.math.argmax(logits, axis=-1)\n",
"predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]\n",
"predicted_token_class"
]
}
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 4
}