notebooks/13_pytorch_xla.ipynb

{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "HuggingFace_on_PyTorch_XLA_TPUs.ipynb", "provenance": [], "collapsed_sections": [], "machine_shape": "hm" }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "accelerator": "TPU" }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "ACIOiyfujkQA" }, "source": [ "# How to Train PyTorch Hugging Face Transformers on Cloud TPUs\n", "\n", "Over the past several months the Hugging Face and Google [`pytorch/xla`](https://github.com/pytorch/xla) teams have been collaborating bringing first class support for training Hugging Face transformers on Cloud TPUs, with significant speedups.\n", "\n", "In this Colab we walk you through Masked Language Modeling (MLM) finetuning [RoBERTa](https://arxiv.org/abs/1907.11692) on the [WikiText-2 dataset](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) using free TPUs provided by Colab.\n", "\n", "Last Updated: February 8th, 2021" ] }, { "cell_type": "markdown", "metadata": { "id": "aNgqu5uxlOMr" }, "source": [ "### Install and clone depedencies" ] }, { "cell_type": "code", "metadata": { "id": "kl-oT5N8VwFq" }, "source": [ "!pip install transformers==4.2.2 \\\n", " torch==1.7.0 \\\n", " cloud-tpu-client==0.10 \\\n", " datasets==1.2.1 \\\n", " https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.7-cp37-cp37m-linux_x86_64.whl\n", "!git clone -b v4.2.2 https://github.com/huggingface/transformers" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "BROMMMaMpYtl" }, "source": [ "### Train the model\n", "\n", "All Cloud TPU training functionality has been built into [`trainer.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) and so we'll use the [`run_mlm.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) script under `examples/language-modeling` to finetune our RoBERTa model on the WikiText-2 dataset." ] }, { "cell_type": "markdown", "metadata": { "id": "a9FLwlbCsWpI" }, "source": [ "Note that in the following command we use [`xla_spawn.py`](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py) to spawn 8 processes to train on the 8 cores a single v2-8/v3-8 Cloud TPU system has (Cloud TPU Pods can scale all the way up to 2048 cores). All `xla_spawn.py` does, is call [`xmp.spawn`](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/xla_multiprocessing.py#L350), which sets up some environment metadata that's needed and calls `torch.multiprocessing.start_processes`.\n", "\n", "The below command ends up spawning 8 processes and each of those drives one TPU core. We've set the `per_device_train_batch_size=4` and `per_device_eval_batch_size=4`, which means that the global bactch size will be `32` (`4 examples/device * 8 devices/Colab TPU = 32 examples / Colab TPU`). You can also append the `--tpu_metrics_debug` flag for additional debug metrics (ex. how long it took to compile, execute one step, etc).\n", "\n", "The following cell should take around 10~15 minutes to run." 
{ "cell_type": "code", "metadata": { "id": "GmmBgJmplL29" }, "source": [ "!python transformers/examples/xla_spawn.py \\\n", " --num_cores 8 \\\n", " transformers/examples/language-modeling/run_mlm.py \\\n", " --model_name_or_path roberta-base \\\n", " --dataset_name wikitext \\\n", " --dataset_config_name wikitext-2-raw-v1 \\\n", " --max_seq_length 512 \\\n", " --pad_to_max_length \\\n", " --logging_dir tensorboard \\\n", " --num_train_epochs 3 \\\n", " --do_train \\\n", " --do_eval \\\n", " --output_dir output \\\n", " --per_device_train_batch_size 4 \\\n", " --per_device_eval_batch_size 4 \\\n", " --logging_steps=50 \\\n", " --save_steps=5000" ], "execution_count": null, "outputs": [] },
{ "cell_type": "markdown", "metadata": { "id": "WOJ714gC1_cj" }, "source": [ "### Visualize TensorBoard Metrics" ] },
{ "cell_type": "code", "metadata": { "id": "uKhXw5F9tdkB" }, "source": [ "%load_ext tensorboard\n", "%tensorboard --logdir tensorboard" ], "execution_count": null, "outputs": [] },
{ "cell_type": "markdown", "metadata": { "id": "9rBzGif5_Z0A" }, "source": [ "## 🎉🎉🎉 **Done Training!** 🎉🎉🎉\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "1EuTTqoM_2qk" }, "source": [ "## Run inference on the finetuned model" ] },
{ "cell_type": "code", "metadata": { "id": "ORdAiIepENHv" }, "source": [ "import torch_xla.core.xla_model as xm\n", "from transformers import FillMaskPipeline\n", "from transformers import AutoModelForMaskedLM, AutoTokenizer\n", "\n", "# Load the finetuned checkpoint from the output directory and move it onto the TPU\n", "tpu_device = xm.xla_device()\n", "model = AutoModelForMaskedLM.from_pretrained('output').to(tpu_device)\n", "tokenizer = AutoTokenizer.from_pretrained('output')\n", "\n", "# Build a fill-mask pipeline and point it at the TPU device\n", "fill_mask = FillMaskPipeline(model, tokenizer)\n", "fill_mask.device = tpu_device" ], "execution_count": null, "outputs": [] },
{ "cell_type": "code", "metadata": { "id": "IDCeqmbjWUNE" }, "source": [ "fill_mask('TPUs are much faster than <mask>!')" ], "execution_count": null, "outputs": [] },
{ "cell_type": "markdown", "metadata": { "id": "yJ1K2QvCC0z8" }, "source": [ "And just like that, you've used Cloud TPUs to both finetune your model and run predictions! 🎉" ] } ] }