{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Transformers installation\n",
"! pip install transformers datasets\n",
"# To install from source instead of the last release, comment the command above and uncomment the following one.\n",
"# ! pip install git+https://github.com/huggingface/transformers.git"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Quick tour"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get up and running with 🤗 Transformers! Start using the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for rapid inference, and quickly load a pretrained model and tokenizer with an [AutoClass](https://huggingface.co/docs/transformers/main/en/./model_doc/auto) to solve your text, vision or audio task.\n",
"\n",
"<Tip>\n",
"\n",
"All code examples presented in the documentation have a toggle on the top left for PyTorch and TensorFlow. If\n",
"not, the code is expected to work for both backends without any change.\n",
"\n",
"</Tip>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) is the easiest way to use a pretrained model for a given task."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"hide_input": true
},
"outputs": [
{
"data": {
"text/html": [
"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/tiZFewofSLM?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#@title\n",
"from IPython.display import HTML\n",
"\n",
"HTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/tiZFewofSLM?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) supports many common tasks out-of-the-box:\n",
"\n",
"**Text**:\n",
"* Sentiment analysis: classify the polarity of a given text.\n",
"* Text generation (in English): generate text from a given input.\n",
"* Name entity recognition (NER): label each word with the entity it represents (person, date, location, etc.).\n",
"* Question answering: extract the answer from the context, given some context and a question.\n",
"* Fill-mask: fill in the blank given a text with masked words.\n",
"* Summarization: generate a summary of a long sequence of text or document.\n",
"* Translation: translate text into another language.\n",
"* Feature extraction: create a tensor representation of the text.\n",
"\n",
"**Image**:\n",
"* Image classification: classify an image.\n",
"* Image segmentation: classify every pixel in an image.\n",
"* Object detection: detect objects within an image.\n",
"\n",
"**Audio**:\n",
"* Audio classification: assign a label to a given segment of audio.\n",
"* Automatic speech recognition (ASR): transcribe audio data into text.\n",
"\n",
"<Tip>\n",
"\n",
"For more details about the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) and associated tasks, refer to the documentation [here](https://huggingface.co/docs/transformers/main/en/./main_classes/pipelines).\n",
"\n",
"</Tip>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pipeline usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the following example, you will use the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for sentiment analysis.\n",
"\n",
"Install the following dependencies if you haven't already:\n",
"\n",
"```bash\n",
"pip install torch\n",
"```\n",
"```bash\n",
"pip install tensorflow\n",
"```\n",
"\n",
"Import [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) and specify the task you want to complete:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import pipeline\n",
"\n",
"classifier = pipeline(\"sentiment-analysis\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pipeline downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. Now you can use the `classifier` on your target text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'label': 'POSITIVE', 'score': 0.9998}]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"classifier(\"We are very happy to show you the 🤗 Transformers library.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For more than one sentence, pass a list of sentences to the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) which returns a list of dictionaries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"label: POSITIVE, with score: 0.9998\n",
"label: NEGATIVE, with score: 0.5309"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"results = classifier([\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"])\n",
"for result in results:\n",
" print(f\"label: {result['label']}, with score: {round(result['score'], 4)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) can also iterate over an entire dataset. Start by installing the [🤗 Datasets](https://huggingface.co/docs/datasets/) library:\n",
"\n",
"```bash\n",
"pip install datasets \n",
"```\n",
"\n",
"Create a [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) with the task you want to solve for and the model you want to use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from transformers import pipeline\n",
"\n",
"speech_recognizer = pipeline(\"automatic-speech-recognition\", model=\"facebook/wav2vec2-base-960h\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, load a dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) for more details) you'd like to iterate over. For example, let's load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datasets import load_dataset, Audio\n",
"\n",
"dataset = load_dataset(\"PolyAI/minds14\", name=\"en-US\", split=\"train\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We need to make sure that the sampling rate of the dataset matches the sampling \n",
"rate `facebook/wav2vec2-base-960h` was trained on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset = dataset.cast_column(\"audio\", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))"
]
},
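{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can quickly verify that the rates now match (a small sanity check using the `speech_recognizer` and `dataset` objects from above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Both should now report the same sampling rate (16000 Hz for this model).\n",
"print(speech_recognizer.feature_extractor.sampling_rate)\n",
"print(dataset.features[\"audio\"].sampling_rate)"
]
},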
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Audio files are automatically loaded and resampled when calling the `\"audio\"` column.\n",
"Let's extract the raw waveform arrays of the first 4 samples and pass it as a list to the pipeline:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', \"FONDERING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE\", \"I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS\", 'HOW DO I TURN A JOIN A COUNT']"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = speech_recognizer(dataset[:4][\"audio\"])\n",
"print([d[\"text\"] for d in result])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a larger dataset where the inputs are big (like in speech or vision), you will want to pass along a generator instead of a list that loads all the inputs in memory. See the [pipeline documentation](https://huggingface.co/docs/transformers/main/en/./main_classes/pipelines) for more information."
]
},
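{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the generator approach (reusing the `speech_recognizer` and `dataset` objects from above), you could yield one waveform at a time so the full dataset is never held in memory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def audio_arrays():\n",
"    # Yield one raw waveform at a time instead of materializing a full list.\n",
"    for sample in dataset:\n",
"        yield sample[\"audio\"][\"array\"]\n",
"\n",
"# The pipeline consumes the generator lazily and yields one prediction per input.\n",
"for prediction in speech_recognizer(audio_arrays()):\n",
"    print(prediction[\"text\"])"
]
},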
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use another model and tokenizer in the pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) can accommodate any model from the [Model Hub](https://huggingface.co/models), making it easy to adapt the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Model Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned for sentiment analysis. Great, let's use this model!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification) and [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer) to load the pretrained model and it's associated tokenizer (more on an `AutoClass` below):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer, AutoModelForSequenceClassification\n",
"\n",
"model = AutoModelForSequenceClassification.from_pretrained(model_name)\n",
"tokenizer = AutoTokenizer.from_pretrained(model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the [TFAutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification) and [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer) to load the pretrained model and it's associated tokenizer (more on an `TFAutoClass` below):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer, TFAutoModelForSequenceClassification\n",
"\n",
"model = TFAutoModelForSequenceClassification.from_pretrained(model_name)\n",
"tokenizer = AutoTokenizer.from_pretrained(model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then you can specify the model and tokenizer in the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline), and apply the `classifier` on your target text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'label': '5 stars', 'score': 0.7273}]"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"classifier = pipeline(\"sentiment-analysis\", model=model, tokenizer=tokenizer)\n",
"classifier(\"Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you can't find a model for your use-case, you will need to fine-tune a pretrained model on your data. Take a look at our [fine-tuning tutorial](https://huggingface.co/docs/transformers/main/en/./training) to learn how. Finally, after you've fine-tuned your pretrained model, please consider sharing it (see tutorial [here](https://huggingface.co/docs/transformers/main/en/./model_sharing)) with the community on the Model Hub to democratize NLP for everyone! 🤗"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AutoClass"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"hide_input": true
},
"outputs": [
{
"data": {
"text/html": [
"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/AhChOFRegn4?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#@title\n",
"from IPython.display import HTML\n",
"\n",
"HTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/AhChOFRegn4?rel=0&controls=0&showinfo=0\" frameborder=\"0\" allowfullscreen></iframe>')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, the [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification) and [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer) classes work together to power the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline). An [AutoClass](https://huggingface.co/docs/transformers/main/en/./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from it's name or path. You only need to select the appropriate `AutoClass` for your task and it's associated tokenizer with [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer). \n",
"\n",
"Let's return to our example and see how you can use the `AutoClass` to replicate the results of the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AutoTokenizer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A tokenizer is responsible for preprocessing text into a format that is understandable to the model. First, the tokenizer will split the text into words called *tokens*. There are multiple rules that govern the tokenization process, including how to split a word and at what level (learn more about tokenization [here](https://huggingface.co/docs/transformers/main/en/./tokenizer_summary)). The most important thing to remember though is you need to instantiate the tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with.\n",
"\n",
"Load a tokenizer with [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, the tokenizer converts the tokens into numbers in order to construct a tensor as input to the model. This is known as the model's *vocabulary*.\n",
"\n",
"Pass your text to the tokenizer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],\n",
" 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
" 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encoding = tokenizer(\"We are very happy to show you the 🤗 Transformers library.\")\n",
"print(encoding)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tokenizer will return a dictionary containing:\n",
"\n",
"* [input_ids](https://huggingface.co/docs/transformers/main/en/./glossary#input-ids): numerical representions of your tokens.\n",
"* [atttention_mask](https://huggingface.co/docs/transformers/main/en/.glossary#attention-mask): indicates which tokens should be attended to.\n",
"\n",
"Just like the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline), the tokenizer will accept a list of inputs. In addition, the tokenizer can also pad and truncate the text to return a batch with uniform length:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pt_batch = tokenizer(\n",
" [\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"],\n",
" padding=True,\n",
" truncation=True,\n",
" max_length=512,\n",
" return_tensors=\"pt\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf_batch = tokenizer(\n",
" [\"We are very happy to show you the 🤗 Transformers library.\", \"We hope you don't hate it.\"],\n",
" padding=True,\n",
" truncation=True,\n",
" max_length=512,\n",
" return_tensors=\"tf\",\n",
")"
]
},
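{
"cell_type": "markdown",
"metadata": {},
"source": [
"To sanity-check what the model will actually see, you can decode the ids back to text; the special tokens the tokenizer added (such as `[CLS]` and `[SEP]` for BERT models) become visible. A small illustrative sketch using the `encoding` from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Decode the token ids back to a string, including the added special tokens.\n",
"print(tokenizer.decode(encoding[\"input_ids\"]))"
]
},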
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Read the [preprocessing](https://huggingface.co/docs/transformers/main/en/./preprocessing) tutorial for more details about tokenization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AutoModel"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) like you would load an [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer). The only difference is selecting the correct [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) for the task. Since you are doing text - or sequence - classification, load [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoModelForSequenceClassification\n",
"\n",
"model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n",
"pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<Tip>\n",
"\n",
"See the [task summary](https://huggingface.co/docs/transformers/main/en/./task_summary) for which [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) class to use for which task.\n",
"\n",
"</Tip>\n",
"\n",
"Now you can pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pt_outputs = pt_model(**pt_batch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],\n",
" [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from torch import nn\n",
"\n",
"pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)\n",
"print(pt_predictions)"
]
},
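{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want human-readable labels instead of raw probabilities, one option (a sketch using the objects defined above) is to take the highest-scoring index for each input and map it through the model's `id2label` config:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Map each input's highest-probability class index to its label name.\n",
"predicted_ids = torch.argmax(pt_predictions, dim=-1)\n",
"print([pt_model.config.id2label[i.item()] for i in predicted_ids])"
]
},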
{
"cell_type": "markdown",
"metadata": {},
"source": [
"🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [TFAutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModel) like you would load an [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer). The only difference is selecting the correct [TFAutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModel) for the task. Since you are doing text - or sequence - classification, load [TFAutoModelForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import TFAutoModelForSequenceClassification\n",
"\n",
"model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n",
"tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<Tip>\n",
"\n",
"See the [task summary](https://huggingface.co/docs/transformers/main/en/./task_summary) for which [AutoModel](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModel) class to use for which task.\n",
"\n",
"</Tip>\n",
"\n",
"Now you can pass your preprocessed batch of inputs directly to the model by passing the dictionary keys directly to the tensors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf_outputs = tf_model(tf_batch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)\n",
"tf_predictions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<Tip>\n",
"\n",
"All 🤗 Transformers models (PyTorch or TensorFlow) outputs the tensors *before* the final activation\n",
"function (like softmax) because the final activation function is often fused with the loss.\n",
"\n",
"</Tip>\n",
"\n",
"Models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so you can use them in your usual training loop. However, to make things easier, 🤗 Transformers provides a [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) class for PyTorch that adds functionality for distributed training, mixed precision, and more. For TensorFlow, you can use the `fit` method from [Keras](https://keras.io/). Refer to the [training tutorial](https://huggingface.co/docs/transformers/main/en/./training) for more details.\n",
"\n",
"<Tip>\n",
"\n",
"🤗 Transformers model outputs are special dataclasses so their attributes are autocompleted in an IDE.\n",
"The model outputs also behave like a tuple or a dictionary (e.g., you can index with an integer, a slice or a string) in which case the attributes that are `None` are ignored.\n",
"\n",
"</Tip>"
]
},
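{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the same logits can be reached by attribute, by string key, or by integer index. A quick illustration with the PyTorch `pt_outputs` from above (since no labels were passed, `loss` is `None` and is skipped when indexing):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# All three accessors point at the same tensor.\n",
"print(pt_outputs.logits is pt_outputs[\"logits\"])\n",
"print(pt_outputs.logits is pt_outputs[0])"
]
},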
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save a model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once your model is fine-tuned, you can save it with its tokenizer using [PreTrainedModel.save_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.save_pretrained):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pt_save_directory = \"./pt_save_pretrained\"\n",
"tokenizer.save_pretrained(pt_save_directory)\n",
"pt_model.save_pretrained(pt_save_directory)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are ready to use the model again, reload it with [PreTrainedModel.from_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pt_model = AutoModelForSequenceClassification.from_pretrained(\"./pt_save_pretrained\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once your model is fine-tuned, you can save it with its tokenizer using [TFPreTrainedModel.save_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.save_pretrained):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf_save_directory = \"./tf_save_pretrained\"\n",
"tokenizer.save_pretrained(tf_save_directory)\n",
"tf_model.save_pretrained(tf_save_directory)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are ready to use the model again, reload it with [TFPreTrainedModel.from_pretrained()](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf_model = TFAutoModelForSequenceClassification.from_pretrained(\"./tf_save_pretrained\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter can convert the model from one framework to the other:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoModel\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)\n",
"pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import TFAutoModel\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)\n",
"tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)"
]
}
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 4
}