notebooks/text-classification_multilabel.ipynb

{ "cells": [ { "cell_type": "markdown", "id": "552ec552", "metadata": {}, "source": [ "# SetFit for Multilabel Text Classification" ] }, { "cell_type": "markdown", "id": "3af7f258-5aaf-47c2-b81e-2f10fc349812", "metadata": {}, "source": [ "In this notebook, we'll learn how to do few-shot text classification on a multilabel dataset with SetFit." ] }, { "cell_type": "markdown", "id": "c5604f73-f395-42cb-8082-9974a87ef9e9", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "markdown", "id": "26f09e23-2e1f-41f6-bb40-a30d447a0541", "metadata": {}, "source": [ "If you're running this Notebook on Colab or some other cloud platform, you will need to install the `setfit` library. Uncomment the following cell and run it:" ] }, { "cell_type": "code", "execution_count": 1, "id": "277ed9b9-459c-465e-b750-13759978e616", "metadata": {}, "outputs": [], "source": [ "# %pip install setfit" ] }, { "cell_type": "markdown", "id": "64e4d3b4-93cd-4774-8055-35a00b11f483", "metadata": {}, "source": [ "To be able to share your model with the community, there are a few more steps to follow.\n", "\n", "First, you have to store your authentication token from the Hugging Face Hub (sign up [here](https://huggingface.co/join) if you haven't already!). To do so, execute the following cell and input an [access token](https://huggingface.co/docs/hub/security-tokens) associated with your account:" ] }, { "cell_type": "code", "execution_count": null, "id": "526a3b86-db3c-4c27-bb6c-eb39d73326f2", "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import notebook_login\n", "\n", "notebook_login()" ] }, { "cell_type": "markdown", "id": "9309b7d5-1736-46be-b721-eb4ee4ab9e67", "metadata": {}, "source": [ "Then you need to install Git-LFS, which you can do by uncommenting and running following command:" ] }, { "cell_type": "code", "execution_count": 3, "id": "5efb134e-fc40-42f2-b2a5-47112b6f2305", "metadata": {}, "outputs": [], "source": [ "# !apt install git-lfs" ] }, { "cell_type": "markdown", "id": "549981d3", "metadata": {}, "source": [ "Finally, you may need to configue Git on your system by providing details about who you are:" ] }, { "cell_type": "code", "execution_count": null, "id": "18b950f1", "metadata": {}, "outputs": [], "source": [ "# !git config --global user.email \"you@example.com\"\n", "# !git config --global user.name \"Your Name\"" ] }, { "cell_type": "markdown", "id": "2b2a8fcd-46fe-43e4-835e-57b84964358a", "metadata": {}, "source": [ "This notebook is designed to work with any multiclass [text classification dataset](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads) and pretrained [Sentence Transformer](https://huggingface.co/models?library=sentence-transformers&sort=downloads) on the Hub. Change the values below to try a different dataset / model!" 
] }, { "cell_type": "code", "execution_count": 1, "id": "41542e15-d211-45e9-b428-e1532c525f5b", "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "588721bffc924a088b3639c815d89d3f", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading builder script: 0%| | 0.00/1.85k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f53df096eff944b7bf703723da3267d9", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading metadata: 0%| | 0.00/940 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Downloading and preparing dataset ethos/multilabel (download: 61.36 KiB, generated: 77.26 KiB, post-processed: Unknown size, total: 138.62 KiB) to /home/lewis/.cache/huggingface/datasets/ethos/multilabel/1.0.0/898d3d005459ee3ff80dbeec2f169c6b7ea13de31a08458193e27dec3dd9ae38...\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5cedd327e9a04b41b2feea4d7ae2529f", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "22e64f1068ff4fd28677a5fdb797d959", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data: 0%| | 0.00/25.3k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "dc901197a0ec4fd4a88b5c5a402047b3", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Extracting data files: 0%| | 0/1 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Generating train split: 0%| | 0/433 [00:00<?, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Dataset ethos downloaded and prepared to /home/lewis/.cache/huggingface/datasets/ethos/multilabel/1.0.0/898d3d005459ee3ff80dbeec2f169c6b7ea13de31a08458193e27dec3dd9ae38. Subsequent calls will reuse this data.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "0b37b4f22ca74e8c9a7661c27b8e9afa", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/1 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from datasets import load_dataset\n", "\n", "model_id = \"sentence-transformers/paraphrase-mpnet-base-v2\"\n", "dataset = load_dataset(\"ethos\", \"multilabel\")" ] }, { "cell_type": "markdown", "id": "3e756be8-3b60-4c86-aa1b-7ef78289b8e2", "metadata": { "tags": [] }, "source": [ "## Loading and sampling the dataset" ] }, { "cell_type": "markdown", "id": "b68b26a3-1c56-4373-8060-97f747168ede", "metadata": {}, "source": [ "Most datasets on the Hub have many more labeled examples than those one encounters in few-shot settings. To simulate the effect of training on a limited number of examples, let's subsample the training set to have at least 8 labeled examples per feature.\n", "\n", "Note that if your dataset has differently formatted labels, you may need to adapt this section." 
] }, { "cell_type": "code", "execution_count": 2, "id": "0eb9bc83-bf99-422c-b123-4fcaed168bcb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['violence',\n", " 'directed_vs_generalized',\n", " 'gender',\n", " 'race',\n", " 'national_origin',\n", " 'disability',\n", " 'religion',\n", " 'sexual_orientation']" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "\n", "features = dataset[\"train\"].column_names\n", "features.remove(\"text\")\n", "features" ] }, { "cell_type": "code", "execution_count": 3, "id": "c099cb8b-2ad6-4782-a3e6-574630956d94", "metadata": {}, "outputs": [], "source": [ "num_samples = 8\n", "samples = np.concatenate(\n", " [np.random.choice(np.where(dataset[\"train\"][f])[0], num_samples) for f in features]\n", ")" ] }, { "cell_type": "markdown", "id": "a46662ff-aa4b-49c1-9c86-c8450e6d547b", "metadata": {}, "source": [ "We encode the emotions in a single `'label'` feature. " ] }, { "cell_type": "code", "execution_count": 4, "id": "ff9fd491-8b80-4959-adcd-630bd17f951d", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Parameter 'function'=<function encode_labels at 0x7ff5f6350700> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "4d30b18893d3419f954a4e4e6ee5c884", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/433 [00:00<?, ?ex/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "def encode_labels(record):\n", " return {\"labels\": [record[feature] for feature in features]}\n", "\n", "\n", "dataset = dataset.map(encode_labels)" ] }, { "cell_type": "markdown", "id": "d7e154e9-a255-4d1f-b8ff-53c46f91a142", "metadata": {}, "source": [ "Next, we use the samples we selected as our training set, and the others as our test set (since the ethos dataset does not have a test split on the hub).\n", "\n", "Here we have 64 total examples to train with since the `ethos` dataset has 8 classes." ] }, { "cell_type": "code", "execution_count": 5, "id": "2bc7f32d-7320-4721-9276-ec37a39d2630", "metadata": {}, "outputs": [], "source": [ "train_dataset = dataset[\"train\"].select(samples)\n", "eval_dataset = dataset[\"train\"].select(\n", " np.setdiff1d(np.arange(len(dataset[\"train\"])), samples)\n", ")" ] }, { "cell_type": "markdown", "id": "059bd547-0e39-43ab-adf0-5f5509217020", "metadata": {}, "source": [ "Okay, now we have the dataset, let's load and train a model!" ] }, { "cell_type": "markdown", "id": "37e7c839-1f06-4d35-aa34-6e13659db814", "metadata": {}, "source": [ "## Fine-tuning the model" ] }, { "cell_type": "markdown", "id": "78e8c41a", "metadata": {}, "source": [ "To train a SetFit model, the first thing to do is download a pretrained checkpoint from the Hub. 
We can do so by using the `from_pretrained()` method associated with the `SetFitModel` class.\n", "\n", "**Note that the `multi_target_strategy` parameter here signals to both the model and the trainer to expect a multi-labelled dataset.**" ] }, { "cell_type": "code", "execution_count": 6, "id": "33661c9d-46d3-42eb-9b15-8a2bc49d7f6c", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference.\n" ] } ], "source": [ "from setfit import SetFitModel\n", "\n", "model = SetFitModel.from_pretrained(model_id, multi_target_strategy=\"one-vs-rest\")" ] }, { "cell_type": "markdown", "id": "84e7521e-95ca-431a-8d7f-2f18e1de16ce", "metadata": {}, "source": [ "Here, we've downloaded a pretrained Sentence Transformer from the Hub and added a logistic classification head to create the SetFit model. As indicated in the message, we need to train this model on some labeled examples. We can do so by using the `SetFitTrainer` class as follows:" ] }, { "cell_type": "code", "execution_count": 8, "id": "e44b7069-27b0-49ea-bc27-c44f94a98e2f", "metadata": {}, "outputs": [], "source": [ "from sentence_transformers.losses import CosineSimilarityLoss\n", "from setfit import SetFitTrainer\n", "\n", "trainer = SetFitTrainer(\n", " model=model,\n", " train_dataset=train_dataset,\n", " eval_dataset=eval_dataset,\n", " loss_class=CosineSimilarityLoss,\n", " num_iterations=20,\n", " column_mapping={\"text\": \"text\", \"labels\": \"label\"},\n", ")" ] }, { "cell_type": "markdown", "id": "cbd3e642", "metadata": {}, "source": [ "The main arguments to notice in the trainer are the following:\n", "\n", "* `loss_class`: The loss function to use for contrastive learning with the Sentence Transformer body\n", "* `num_iterations`: The number of text pairs to generate for contrastive learning\n", "* `column_mapping`: The `SetFitTrainer` expects the inputs to be found in a `text` and `label` column. This mapping automatically formats the training and evaluation datasets for us." ] }, { "cell_type": "markdown", "id": "b6e3c5ae-c287-4936-b1ac-5eca10c7f39c", "metadata": {}, "source": [ "Now that we've created a trainer, we can train it!" ] }, { "cell_type": "code", "execution_count": null, "id": "6b5a468b-2796-47c3-8907-c0147ee58dd5", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Applying column mapping to training dataset\n", "***** Running training *****\n", " Num examples = 3640\n", " Num epochs = 1\n", " Total optimization steps = 228\n", " Total train batch size = 16\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "4e0bdb3794e346f9a6e7e1f117e562d4", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Epoch: 0%| | 0/1 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "7c710261a8b5400abc6614d3c6265392", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Iteration: 0%| | 0/228 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "trainer.train()" ] }, { "cell_type": "markdown", "id": "e799f994", "metadata": {}, "source": [ "The final step is to compute the model's performance using the `evaluate()` method. 
The default metric measures 'subset accuracy', which measures the fraction of samples where we predict all 8 labels correctly." ] }, { "cell_type": "code", "execution_count": 10, "id": "453c11d0-a1e4-49c2-859a-cc70e033b4a6", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Applying column mapping to evaluation dataset\n", "***** Running evaluation *****\n" ] }, { "data": { "text/plain": [ "{'accuracy': 0.477088948787062}" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metrics = trainer.evaluate()\n", "metrics" ] }, { "cell_type": "markdown", "id": "f021b168-02d2-4cc8-942d-65d3b821e253", "metadata": {}, "source": [ "And once the model is trained, you can push it to the Hub:" ] }, { "cell_type": "code", "execution_count": null, "id": "c420c4b9-1552-45a5-888c-cdbb78f8e4fc", "metadata": { "scrolled": true }, "outputs": [], "source": [ "trainer.push_to_hub(f\"setfit-ethos-multilabel-example\")" ] }, { "cell_type": "markdown", "id": "02173d18-4874-4148-8789-90ac695717bc", "metadata": {}, "source": [ "You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `your-username/the-name-you-picked` so for instance:" ] }, { "cell_type": "code", "execution_count": 12, "id": "2b5e5623-0452-4d36-8eb0-cfb36c8664da", "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3ef4ee556c3349079496121409707a00", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/668 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c92a56d77527401ab97f600a65598f94", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/1.43k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "959481bb15a6491baee5daa6d53707ac", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/190 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "1c3633c298644eaea025cd97d54c0cfb", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/3.70k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "050de7e75d0048dca462b21851fa1102", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/668 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "765ae5d6f9774575b9bc04bbdec8f252", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/122 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "df34ece5072641aabd981f54db8b3c4a", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/52.8k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "cb343329e952476e82c955f55ecb794f", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/438M [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { 
"application/vnd.jupyter.widget-view+json": { "model_id": "cd446e806fdc48cca79dc90a7822e9e6", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/53.0 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a14ab7f486da4549bb77c6c200e2d421", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/280 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5cfe3ce3810341b4a0b3229a5c4f83e3", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/712k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "368346a7bbfa45d783f7259b208f26fa", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/1.51k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "715d1ce4dc5b4927b6a4ee611259ef6d", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/232k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "98442fbd9765417d95cbd7f0bc555743", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/229 [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "7a19f8a77f64409c9a7fe00478477d7c", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading: 0%| | 0.00/52.8k [00:00<?, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from setfit import SetFitModel\n", "\n", "model = SetFitModel.from_pretrained(\"lewtun/setfit-ethos-multilabel-example\")" ] }, { "cell_type": "markdown", "id": "d9f0e4e5-6645-4306-a9ab-06d0db9e23d0", "metadata": {}, "source": [ "Run inference. As is usual in toxicity models, it tends to think any mention of topics such as race or gender are negative." 
] }, { "cell_type": "code", "execution_count": 13, "id": "6e51180c-5a62-4dfa-a543-334982db2d69", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 0, 0, 0, 0, 0, 1, 0],\n", " [0, 1, 0, 0, 0, 0, 0, 0]])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "preds = model(\n", " [\n", " \"Jewish people often don't eat pork.\",\n", " \"Is this lipstick suitable for people with dark skin?\"\n", " ]\n", ")\n", "preds" ] }, { "cell_type": "code", "execution_count": 14, "id": "ebc14708-c4f6-4137-b972-ac8c257bcb9d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['religion'], ['directed_vs_generalized']]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Show predicted labels, requires you to have stored the 'features' somewhere\n", "[[f for f, p in zip(features, ps) if p] for ps in preds]" ] }, { "cell_type": "code", "execution_count": null, "id": "e24010af-dd8e-427c-b11d-e6f1d557d0ca", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "vscode": { "interpreter": { "hash": "1a53731e204626af339a5238c341a3f8c4bfd7cb5ccdda48ca3fe8366eef4175" } } }, "nbformat": 4, "nbformat_minor": 5 }