notebooks/en/enterprise_cookbook_argilla.ipynb

{ "cells": [ { "cell_type": "markdown", "id": "c9a872bb-d364-4939-865e-6f01b16ca1f4", "metadata": {}, "source": [ "# Data Annotation with Argilla Spaces\n", "_Authored by: [Moritz Laurer](https://huggingface.co/MoritzLaurer)_" ] }, { "cell_type": "markdown", "id": "7bb4f796", "metadata": {}, "source": [ "This notebook illustrates the workflow for systematically evaluating LLM outputs and creating LLM training data. You can start by using this notebook to evaluate the zero-shot performance of your favorite LLM on your task without any fine-tuning. If you want to improve performance, you can then easily reuse this workflow to create training data.\n", "\n", "**Example use case: code generation.** In this tutorial, we demonstrate how to create high-quality test and train data for code generation tasks. The same workflow can, however, be adapted to any other task relevant to your specific use case.\n", "\n", "**In this notebook, we:**\n", "1. Download data for the example task.\n", "2. Prompt two LLMs to respond to these tasks. This results in \"synthetic data\" to speed up manual data creation. \n", "3. Create an Argilla annotation interface on HF Spaces to compare and evaluate the outputs from the two LLMs.\n", "4. Upload the example data and the zero-shot LLM responses into the Argilla annotation interface.\n", "5. Download the annotated data.\n", "\n", "You can adapt this notebook to your needs, e.g., using a different LLM and API provider for step (2) or adapting the annotation task in step (3)." ] }, { "cell_type": "markdown", "id": "a482a2f5-9f0d-4117-a606-6d6bf80c4c14", "metadata": {}, "source": [ "## Install required packages and connect to HF Hub" ] }, { "cell_type": "code", "execution_count": null, "id": "972076ae-2ad4-4afa-b9be-e3146ffbfe69", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install argilla~=2.0.0\n", "!pip install transformers~=4.40.0\n", "!pip install datasets~=2.19.0\n", "!pip install huggingface_hub~=0.23.2" ] }, { "cell_type": "code", "execution_count": null, "id": "dbc6293c-4f10-4cd3-b009-664929a3cbb9", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Login to the HF Hub. We recommend using this login method \n", "# to avoid the need to explicitly store your HF token in variables \n", "import huggingface_hub\n", "!git config --global credential.helper store\n", "huggingface_hub.login(add_to_git_credential=True)" ] }, { "cell_type": "markdown", "id": "b0443963-9704-49f9-97e6-48b0b8d7b7cc", "metadata": {}, "source": [ "## Download example task data" ] }, { "cell_type": "markdown", "id": "d7c2f97c-4a30-40ed-8057-6df595774ad9", "metadata": {}, "source": [ "First, we download an example dataset containing LLMs' code generation tasks. We want to evaluate how well two different LLMs perform on these code-generation tasks. We use instructions from the [bigcode/self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k) dataset that was used to train the [StarCoder2-Instruct](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1) model." 
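, "\n", "\n", "The next code cell loads only the first three instructions to keep this walkthrough fast. If you want to work with more data later, you could, for example, draw a larger random sample instead (a rough sketch; the dataset name and column are the same ones used below):\n", "\n", "```python\n", "from datasets import load_dataset\n", "\n", "# Optional sketch: draw a random sample of 100 instructions instead of the first 3\n", "dataset_codetask_full = load_dataset(\"bigcode/self-oss-instruct-sc2-exec-filter-50k\", split=\"train\")\n", "dataset_codetask_sample = dataset_codetask_full.shuffle(seed=42).select(range(100))\n", "instructions_lst_sample = dataset_codetask_sample[\"instruction\"]\n", "```\n"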
] }, { "cell_type": "code", "execution_count": 3, "id": "81644ec1-0bcc-44b5-b0c4-036b02bb54d7", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset structure:\n", " Dataset({\n", " features: ['fingerprint', 'sha1', 'seed', 'response', 'concepts', 'prompt', 'instruction', 'id'],\n", " num_rows: 3\n", "}) \n", "\n", "Example instructions:\n", " ['Write a Python function named `get_value` that takes a matrix (represented by a list of lists) and a tuple of indices, and returns the value at that index in the matrix. The function should handle index out of range errors by returning None.', 'Write a Python function `check_collision` that takes a list of `rectangles` as input and checks if there are any collisions between any two rectangles. A rectangle is represented as a tuple (x, y, w, h) where (x, y) is the top-left corner of the rectangle, `w` is the width, and `h` is the height.\\n\\nThe function should return True if any pair of rectangles collide, and False otherwise. Use an iterative approach and check for collisions based on the bounding box collision detection algorithm. If a collision is found, return True immediately without checking for more collisions.']\n" ] } ], "source": [ "from datasets import load_dataset\n", "\n", "# Small sample for faster testing\n", "dataset_codetask = load_dataset(\"bigcode/self-oss-instruct-sc2-exec-filter-50k\", split=\"train[:3]\")\n", "print(\"Dataset structure:\\n\", dataset_codetask, \"\\n\")\n", "\n", "# We are only interested in the instructions/prompts provided in the dataset\n", "instructions_lst = dataset_codetask[\"instruction\"]\n", "print(\"Example instructions:\\n\", instructions_lst[:2])" ] }, { "cell_type": "markdown", "id": "d95b4013-b506-47d0-b85f-226c88f1ed0a", "metadata": {}, "source": [ "## Prompt two LLMs on the example task" ] }, { "cell_type": "markdown", "id": "1f9f69d8-7bf7-4d23-8509-ddf595704fd3", "metadata": {}, "source": [ "#### Formatting the instructions with a chat_template\n", "Before sending the instructions to an LLM API, we need to format the instructions with the correct `chat_template` for each of the models we want to evaluate. This essentially entails wrapping some special tokens around the instructions. See the [docs](https://huggingface.co/docs/transformers/main/en/chat_templating) on chat templates for details." ] }, { "cell_type": "code", "execution_count": 4, "id": "bb1c5904-0530-42ee-9499-87ece671bed2", "metadata": { "tags": [] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n", "/home/user/miniconda/lib/python3.9/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n", " warnings.warn(\n", "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "First prompt formatted for mistralai/Mixtral-8x7B-Instruct-v0.1:\n", "\n", " <s>[INST] Write a Python function named `get_value` that takes a matrix (represented by a list of lists) and a tuple of indices, and returns the value at that index in the matrix. 
The function should handle index out of range errors by returning None. [/INST] \n", "\n", "\n", "First prompt formatted for meta-llama/Meta-Llama-3-70B-Instruct:\n", "\n", " <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n", "\n", "Write a Python function named `get_value` that takes a matrix (represented by a list of lists) and a tuple of indices, and returns the value at that index in the matrix. The function should handle index out of range errors by returning None.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n", "\n", " \n", "\n", "\n" ] } ], "source": [ "# Apply correct chat formatting to instructions from the dataset \n", "from transformers import AutoTokenizer\n", "\n", "models_to_compare = [\"mistralai/Mixtral-8x7B-Instruct-v0.1\", \"meta-llama/Meta-Llama-3-70B-Instruct\"]\n", "\n", "def format_prompt(prompt, tokenizer):\n", " messages = [{\"role\": \"user\", \"content\": prompt}]\n", " messages_tokenized = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, return_tensors=\"pt\")\n", " return messages_tokenized\n", "\n", "\n", "prompts_formatted_dic = {}\n", "for model in models_to_compare:\n", " tokenizer = AutoTokenizer.from_pretrained(model)\n", "\n", " prompt_formatted = []\n", " for instruction in instructions_lst: \n", " prompt_formatted.append(format_prompt(instruction, tokenizer))\n", " \n", " prompts_formatted_dic.update({model: prompt_formatted})\n", "\n", "\n", "print(f\"\\nFirst prompt formatted for {models_to_compare[0]}:\\n\\n\", prompts_formatted_dic[models_to_compare[0]][0], \"\\n\\n\")\n", "print(f\"First prompt formatted for {models_to_compare[1]}:\\n\\n\", prompts_formatted_dic[models_to_compare[1]][0], \"\\n\\n\")\n" ] }, { "cell_type": "markdown", "id": "e161a9ae-680c-4daa-99fa-ca9d75d07bdc", "metadata": {}, "source": [ "#### Sending the instructions to the HF Inference API\n", "Now, we can send the instructions to the APIs for both LLMs to get outputs we can evaluate. We first define some parameters for generating the responses correctly. Hugging Face's LLM APIs are powered by [Text Generation Inference (TGI)](https://huggingface.co/docs/text-generation-inference/index) containers. See the TGI OpenAPI specifications [here](https://huggingface.github.io/text-generation-inference/#/Text%20Generation%20Inference/generate) and the explanations of different parameters in the Transformers Generation Parameters [docs](https://huggingface.co/docs/transformers/v4.30.0/main_classes/text_generation#transformers.GenerationConfig). " ] }, { "cell_type": "code", "execution_count": 5, "id": "a9dc6397-fc06-4b94-9bef-c7138d86f0e6", "metadata": { "tags": [] }, "outputs": [], "source": [ "generation_params = dict(\n", " # we use low temperature and top_p to reduce creativity and increase likelihood of highly probable tokens\n", " temperature=0.2,\n", " top_p=0.60,\n", " top_k=None,\n", " repetition_penalty=1.0,\n", " do_sample=True,\n", " max_new_tokens=512*2,\n", " return_full_text=False,\n", " seed=42,\n", " #details=True,\n", " #stop=[\"<|END_OF_TURN_TOKEN|>\"],\n", " #grammar={\"type\": \"json\"}\n", " max_time=None, \n", " stream=False,\n", " use_cache=False,\n", " wait_for_model=False,\n", ")" ] }, { "cell_type": "markdown", "id": "b2fee8a4-91e9-4cf4-8d5a-414ad0b17daa", "metadata": {}, "source": [ "Now, we can make a standard API request to the Serverless Inference API ([docs](https://huggingface.co/docs/api-inference/index)). Note that the Serverless Inference API is mostly for testing and is rate-limited. 
For testing without rate limits, you can create your own API via the HF Dedicated Endpoints ([docs](https://huggingface.co/docs/inference-endpoints/index)). See also our corresponding tutorials in the [Open Source AI Cookbook](https://huggingface.co/learn/cookbook/index)." ] }, { "cell_type": "markdown", "id": "1b072231", "metadata": {}, "source": [ "> [!TIP]\n", "> The code below will be updated once the Inference API recipe is finished." ] }, { "cell_type": "code", "execution_count": 6, "id": "40e03f80-16d4-41a6-9df8-4a22d7197936", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "655a7bc50f41468fb55ab507769dcd2c", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "6368ada8f257474e979b966773dbbd99", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "---First generation of mistralai/Mixtral-8x7B-Instruct-v0.1:\n", "Here's a Python function that meets your requirements:\n", "\n", "```python\n", "def get_value(matrix, indices):\n", " try:\n", " return matrix[indices[0]][indices[1]]\n", " except IndexError:\n", " return None\n", "```\n", "\n", "This function takes a matrix (represented by a list of lists) and a tuple of indices as input. It first tries to access the value at the given indices in the matrix. If the indices are out of range, it catches the `IndexError` exception and returns `None`.\n", "\n", "\n", "---First generation of meta-llama/Meta-Llama-3-70B-Instruct:\n", "Here is a Python function that does what you described:\n", "```\n", "def get_value(matrix, indices):\n", " try:\n", " row, col = indices\n", " return matrix[row][col]\n", " except IndexError:\n", " return None\n", "```\n", "Here's an explanation of how the function works:\n", "\n", "1. The function takes two arguments: `matrix` (a list of lists) and `indices` (a tuple of two integers, representing the row and column indices).\n", "2. The function tries to access the value at the specified indices using `matrix[row][col]`.\n", "3. If the indices are out of range (i.e., `row` or `col` is greater than the length of the corresponding dimension of the matrix), an `IndexError` exception is raised.\n", "4. The `except` block catches the `IndexError` exception and returns `None` instead of raising an error.\n", "\n", "Here's an example usage of the function:\n", "```\n", "matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n", "\n", "print(get_value(matrix, (0, 0))) # prints 1\n", "print(get_value(matrix, (1, 1))) # prints 5\n", "print(get_value(matrix, (3, 0))) # prints None (out of range)\n", "print(get_value(matrix, (0, 3))) # prints None (out of range)\n", "```\n", "I hope this helps! 
Let me know if you have any questions.\n" ] } ], "source": [ "import requests\n", "from tqdm.auto import tqdm\n", "\n", "# Hint: use asynchronous API calls (and dedicated endpoints) to increase speed\n", "def query(payload=None, api_url=None):\n", " response = requests.post(api_url, headers=headers, json=payload)\n", " return response.json()\n", "\n", "headers = {\"Authorization\": f\"Bearer {huggingface_hub.get_token()}\"}\n", "\n", "output_dic = {}\n", "for model in models_to_compare:\n", " # Create API URLs for each model\n", " # When using dedicated endpoints, you can reuse the same code and simply replace this URL\n", " api_url = \"https://api-inference.huggingface.co/models/\" + model\n", " \n", " # Send each model's own formatted prompts to the API\n", " output_lst = []\n", " for prompt in tqdm(prompts_formatted_dic[model]):\n", " output = query(\n", " payload={\n", " \"inputs\": prompt,\n", " \"parameters\": {**generation_params}\n", " },\n", " api_url=api_url \n", " )\n", " output_lst.append(output[0][\"generated_text\"])\n", " \n", " output_dic.update({model: output_lst})\n", "\n", "print(f\"---First generation of {models_to_compare[0]}:\\n{output_dic[models_to_compare[0]][0]}\\n\\n\")\n", "print(f\"---First generation of {models_to_compare[1]}:\\n{output_dic[models_to_compare[1]][0]}\")" ] }, { "cell_type": "markdown", "id": "9c6dfdfe-d1b5-48c2-9941-d108bdad4fa9", "metadata": {}, "source": [ "#### Store the LLM outputs in a dataset\n", "We can now store the LLM outputs in a dataset together with the original instructions." ] }, { "cell_type": "code", "execution_count": 7, "id": "0c3f94d4-a3d2-49e5-acf1-13e892d848dc", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "Dataset({\n", " features: ['instructions', 'response_model_1', 'response_model_2'],\n", " num_rows: 3\n", "})" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# create a HF dataset with the instructions and model outputs\n", "from datasets import Dataset\n", "\n", "dataset = Dataset.from_dict({\n", " \"instructions\": instructions_lst,\n", " \"response_model_1\": output_dic[models_to_compare[0]],\n", " \"response_model_2\": output_dic[models_to_compare[1]]\n", "})\n", "\n", "dataset" ] }, { "cell_type": "markdown", "id": "e1a8353e-a925-4c73-9c4c-c24d80d5048e", "metadata": {}, "source": [ "## Create and configure your Argilla dataset\n", "\n", "We use [Argilla](https://argilla.io/), a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects.\n", "\n", "We run Argilla via a HF Space, which you can set up with just a few clicks without any local setup. You can create the HF Argilla Space by following [these instructions](https://docs.argilla.io/latest/getting_started/quickstart/). For further configuration on HF Argilla Spaces, see also the detailed [documentation](https://docs.argilla.io/latest/getting_started/how-to-configure-argilla-on-huggingface/). 
If you want, you can also run Argilla locally via Argilla's docker containers (see [Argilla docs](https://docs.argilla.io/latest/getting_started/how-to-deploy-argilla-with-docker/)).\n", "\n", "![Argilla login screen](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/argilla-login-screen.png)" ] }, { "cell_type": "markdown", "id": "7ad026ef-909c-4756-a301-e9883c492407", "metadata": {}, "source": [ "#### Programmatically interact with Argilla\n", "\n", "Before we can tailor the dataset to our specific task and upload the data that will be shown in the UI, we first need to set up a few things.\n", "\n", "**Connecting this notebook to Argilla:** We can now connect this notebook to Argilla to programmatically configure your dataset and upload/download data. " ] }, { "cell_type": "code", "execution_count": 8, "id": "8e765940-9518-49ce-ac23-d45cada12ff2", "metadata": { "tags": [] }, "outputs": [], "source": [ "# After starting the Argilla Space (or local docker container) you can connect to the Space with the code below.\n", "import argilla as rg\n", "\n", "client = rg.Argilla(\n", " api_url=\"https://username-spacename.hf.space\", # Locally: \"http://localhost:6900\"\n", " api_key=\"your-apikey\", # You'll find it in the UI \"My Settings > API key\"\n", " # To use a private HF Argilla Space, also pass your HF token\n", " headers={\"Authorization\": f\"Bearer {huggingface_hub.get_token()}\"},\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "525c8790-bc0a-4089-b254-e064cc90f201", "metadata": { "tags": [] }, "outputs": [], "source": [ "user = client.me\n", "user" ] }, { "cell_type": "markdown", "id": "5b58564d-0711-4a9c-a7e0-5bab080f5ebe", "metadata": {}, "source": [ "#### Write good annotator guidelines \n", "Writing good guidelines for your human annotators is just as important (and difficult) as writing good training code. Good instructions should fulfill the following criteria: \n", "- **Simple and clear**: The guidelines should be simple and clear to understand for people who do not know anything about your task yet. Always ask at least one colleague to reread the guidelines to make sure that there are no ambiguities. \n", "- **Reproducible and explicit**: All information for doing the annotation task should be contained in the guidelines. A common mistake is to create informal interpretations of the guidelines during conversations with selected annotators. Future annotators will not have this information and might do the task differently than intended if it is not made explicit in the guidelines.\n", "- **Short and comprehensive**: The guidelines should be as short as possible, while containing all necessary information. Annotators tend not to read long guidelines properly, so try to keep them as short as possible, while remaining comprehensive.\n", "\n", "Note that creating annotator guidelines is an iterative process. It is good practice to do a few dozen annotations yourself and refine the guidelines based on your learnings from the data before assigning the task to others. Versioning the guidelines can also help as the task evolves over time. See further tips in this [blog post](https://argilla.io/blog/annotation-guidelines-practices/)." ] }, { "cell_type": "code", "execution_count": 5, "id": "40e10eb0-f04e-4b52-be80-604f7f18615d", "metadata": { "tags": [] }, "outputs": [], "source": [ "annotator_guidelines = \"\"\"\\\n", "Your task is to evaluate the responses of two LLMs to code generation tasks. 
\n", "\n", "First, you need to score each response on a scale from 0 to 7. You add points to your final score based on the following criteria:\n", "- Add up to +2 points, if the code is properly commented, with inline comments and doc strings for functions.\n", "- Add up to +2 points, if the code contains a good example for testing. \n", "- Add up to +3 points, if the code runs and works correctly. Copy the code into an IDE and test it with at least two different inputs. Attribute one point if the code is overall correct, but has some issues. Attribute three points if the code is fully correct and robust against different scenarios. \n", "Your resulting final score can be any value between 0 and 7. \n", "\n", "If both responses have a final score of <= 4, select one response and correct it manually in the text field. \n", "The corrected response must fulfill all criteria from above. \n", "\"\"\"\n", "\n", "rating_tooltip = \"\"\"\\\n", "- Add up to +2 points, if the code is properly commented, with inline comments and doc strings for functions.\n", "- Add up to +2 points, if the code contains a good example for testing. \n", "- Add up to +3 points, if the code runs and works correctly. Copy the code into an IDE and test it with at least two different inputs. Attribute one point if the code works mostly correctly, but has some issues. Attribute three points if the code is fully correct and robust against different scenarios. \n", "\"\"\"" ] }, { "cell_type": "markdown", "id": "cc2fd8b1-5025-432f-aaad-81741c97b862", "metadata": {}, "source": [ "**Cumulative ratings vs. Likert scales:** Note that the guidelines above ask the annotators to do cumulative ratings by adding points for explicit criteria. An alternative approach is \"Likert scales\", where annotators are asked to rate responses on a single scale, e.g. from 1 (very bad) to 3 (mediocre) to 5 (very good). We generally recommend cumulative ratings, because they force you and the annotators to make quality criteria explicit, while just rating a response as \"4\" (good) is ambiguous and will be interpreted differently by different annotators. " ] }, { "cell_type": "markdown", "id": "e7e99b32-e481-46cf-88b6-942c5e05fb2d", "metadata": {}, "source": [ "#### Tailor your Argilla dataset to your specific task\n", "\n", "We can now create our own `code-llm` task with the fields, questions, and metadata required for annotation. 
For more information on configuring the Argilla dataset, see the [Argilla docs](https://docs.argilla.io/latest/how_to_guides/dataset/#create-a-dataset).\n" ] }, { "cell_type": "code", "execution_count": null, "id": "063ae101", "metadata": {}, "outputs": [], "source": [ "dataset_argilla_name = \"code-llm\"\n", "workspace_name = \"argilla\"\n", "reuse_existing_dataset = False # for easier iterative testing\n", "\n", "# Configure your dataset settings\n", "settings = rg.Settings(\n", " # The overall annotation guidelines, which human annotators can refer back to inside the interface\n", " guidelines=annotator_guidelines,\n", " fields=[\n", " rg.TextField(\n", " name=\"instruction\", title=\"Instruction:\", use_markdown=True, required=True\n", " ),\n", " rg.TextField(\n", " name=\"generation_1\",\n", " title=\"Response model 1:\",\n", " use_markdown=True,\n", " required=True,\n", " ),\n", " rg.TextField(\n", " name=\"generation_2\",\n", " title=\"Response model 2:\",\n", " use_markdown=True,\n", " required=True,\n", " ),\n", " ],\n", " # These are the questions we ask annotators about the fields in the dataset\n", " questions=[\n", " rg.RatingQuestion(\n", " name=\"score_response_1\",\n", " title=\"Your score for the response of model 1:\",\n", " description=\"0=very bad, 7=very good\",\n", " values=[0, 1, 2, 3, 4, 5, 6, 7],\n", " required=True,\n", " ),\n", " rg.RatingQuestion(\n", " name=\"score_response_2\",\n", " title=\"Your score for the response of model 2:\",\n", " description=\"0=very bad, 7=very good\",\n", " values=[0, 1, 2, 3, 4, 5, 6, 7],\n", " required=True,\n", " ),\n", " rg.LabelQuestion(\n", " name=\"which_response_corrected\",\n", " title=\"If both responses score 4 or below, select a response to correct:\",\n", " description=\"Select the response you will correct in the text field below.\",\n", " labels=[\"Response 1\", \"Response 2\", \"Combination of both\", \"Neither\"],\n", " required=False,\n", " ),\n", " rg.TextQuestion(\n", " name=\"correction\",\n", " title=\"Paste the selected response below and correct it manually:\",\n", " description=\"Your corrected response must fulfill all criteria from the annotation guidelines.\",\n", " use_markdown=True,\n", " required=False,\n", " ),\n", " rg.TextQuestion(\n", " name=\"comments\",\n", " title=\"Annotator Comments\",\n", " description=\"Add any additional comments here. 
E.g.: edge cases, issues with the interface etc.\",\n", " use_markdown=True,\n", " required=False,\n", " ),\n", " ],\n", " metadata=[\n", " rg.TermsMetadataProperty(\n", " name=\"source-dataset\",\n", " title=\"Original dataset source\",\n", " ),\n", " ],\n", " allow_extra_metadata=False,\n", ")\n", "\n", "if reuse_existing_dataset:\n", " dataset_argilla = client.datasets(dataset_argilla_name, workspace=workspace_name)\n", "else:\n", " dataset_argilla = rg.Dataset(\n", " name=dataset_argilla_name,\n", " settings=settings,\n", " workspace=workspace_name,\n", " )\n", " if client.datasets(dataset_argilla_name, workspace=workspace_name) is not None:\n", " client.datasets(dataset_argilla_name, workspace=workspace_name).delete()\n", " dataset_argilla = dataset_argilla.create()\n", "\n", "dataset_argilla" ] }, { "cell_type": "markdown", "id": "cadbd691-fe8d-422e-aaa0-2b3dd1b981b6", "metadata": {}, "source": [ "After running the code above, you will see the new custom `code-llm` dataset in Argilla (and any other dataset you might have created before).\n", "\n" ] }, { "cell_type": "markdown", "id": "0efbf95d-5ab5-4369-a53e-5a5be353ef83", "metadata": {}, "source": [ "#### Load the data to Argilla\n", "\n", "At this point, the dataset is still empty. Let's load some data with the code below." ] }, { "cell_type": "code", "execution_count": null, "id": "0b722748-dce4-48cf-99b1-e7217ea09dc0", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Iterate over the samples in the dataset\n", "records = [\n", " rg.Record(\n", " fields={\n", " \"instruction\": example[\"instructions\"],\n", " \"generation_1\": example[\"response_model_1\"],\n", " \"generation_2\": example[\"response_model_2\"],\n", " },\n", " metadata={\n", " \"source-dataset\": \"bigcode/self-oss-instruct-sc2-exec-filter-50k\",\n", " },\n", " # Optional: add suggestions from an LLM-as-a-judge system\n", " # They will be indicated with a sparkle icon and shown as pre-filled responses\n", " # It will speed up manual annotation\n", " # suggestions=[\n", " # rg.Suggestion(\n", " # question_name=\"score_response_1\",\n", " # value=example[\"llm_judge_rating\"],\n", " # agent=\"llama-3-70b-instruct\",\n", " # ),\n", " # ],\n", " )\n", " for example in dataset\n", "]\n", "\n", "try:\n", " dataset_argilla.records.log(records)\n", "except Exception as e:\n", " print(\"Exception:\", e)" ] }, { "cell_type": "markdown", "id": "e6488c2f-d30c-46ad-af7f-15cfc8b2baee", "metadata": {}, "source": [ "**The Argilla UI for annotation** will look similar to this:\n", "\n", "![Argilla UI](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/argilla-code-llm.png)" ] }, { "cell_type": "markdown", "id": "c998cd39-9a5e-4554-b577-fac62bd3bfe6", "metadata": {}, "source": [ "## Annotate" ] }, { "cell_type": "markdown", "id": "1501558a-0c96-4b01-9a25-b0c6d6903d68", "metadata": {}, "source": [ "That's it, we've created our Argilla dataset and we can now start annotating in the UI! By default, the records will be completed when they have 1 annotation. Check these guides to learn how to [automatically distribute the annotation task](https://docs.argilla.io/latest/how_to_guides/distribution/) and [annotate in Argilla](https://docs.argilla.io/latest/how_to_guides/annotate/).\n", "\n", "\n", "**Important**: If you use Argilla in a HF Space, you need to activate persistent storage so that your data is safely stored and not automatically deleted after a while. 
For production settings, make sure that persistent storage is activated **before** making any annotations to avoid data loss. " ] }, { "cell_type": "markdown", "id": "a34e3e51-f68f-4980-89e6-d7fb6435109f", "metadata": {}, "source": [ "## Download annotated data\n", "After annotating, you can pull the data from Argilla and simply store and process them locally in any tabular format (see [docs here](https://docs.argilla.io/latest/how_to_guides/import_export/)). You can also download the filtered version of the dataset ([docs](https://docs.argilla.io/latest/how_to_guides/query/))." ] }, { "cell_type": "code", "execution_count": null, "id": "d6d0c227", "metadata": {}, "outputs": [], "source": [ "annotated_dataset = client.datasets(dataset_argilla_name, workspace=workspace_name)\n", "\n", "hf_dataset = annotated_dataset.records.to_datasets()\n", "\n", "# This HF dataset can then be formatted, stored and processed into any tabular data format\n", "hf_dataset.to_pandas()" ] }, { "cell_type": "code", "execution_count": null, "id": "740c62c5-d7d5-41f4-b957-8b7f1c49d4a5", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Store the dataset locally\n", "hf_dataset.to_csv(\"argilla-dataset-local.csv\") # Save as CSV\n", "#hf_dataset.to_json(\"argilla-dataset-local.json\") # Save as JSON\n", "#hf_dataset.save_to_disk(\"argilla-dataset-local\") # Save as a `datasets.Dataset` in the local filesystem\n", "#hf_dataset.to_parquet() # Save as Parquet" ] }, { "cell_type": "markdown", "id": "87f15fc3-2faf-43d0-92b9-3f91248420b6", "metadata": {}, "source": [ "## Next Steps\n", "\n", "That's it! You've created synthetic LLM data with the HF Inference API, created a dataset in Argilla, uploaded the LLM data into Argilla, evaluated/corrected the data, and after annotation you have downloaded the data in a simple tabular format for downstream use. \n", "\n", "We have specifically designed the pipeline and the interface for **two main use cases**: \n", "1. Evaluation: You can now simply use the numeric scores in the `score_response_1` and `score_response_2` columns to calculate which model was better overall. You can also inspect responses with very low or high ratings for a detailed error analysis. As you test or train different models, you can reuse this pipeline and track improvements of different models over time. \n", "2. Training: After annotating enough data, you can create a train-test split from the data and fine-tune your own model. You can either use highly rated response texts for supervised fine-tuning with the [TRL SFTTrainer](https://huggingface.co/docs/trl/en/sft_trainer), or you can directly use the ratings for preference-tuning techniques like DPO with the [TRL DPOTrainer](https://huggingface.co/docs/trl/en/dpo_trainer) (see the sketch at the end of this section). See the [TRL docs](https://huggingface.co/docs/trl/en/index) for the pros and cons of different LLM fine-tuning techniques. \n", "\n", "**Adapt and improve:** Many things can be improved to tailor this pipeline to your specific use cases. For example, you can prompt an LLM to evaluate the outputs of the two LLMs with instructions very similar to the guidelines for human annotators (\"LLM-as-a-judge\" approach). This can help further speed up your evaluation pipeline. See our [LLM-as-a-judge recipe](https://huggingface.co/learn/cookbook/llm_judge) for an example implementation of LLM-as-a-judge and our overall [Open-Source AI Cookbook](https://huggingface.co/learn/cookbook/index) for many other ideas. 
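\n", "\n", "**A possible starting point for the training use case (2):** The sketch below shows one way to turn the annotated scores into a preference dataset in the prompt/chosen/rejected format that the TRL `DPOTrainer` expects. The column names are assumptions based on the fields and questions configured above; inspect `hf_dataset.column_names` and adapt them to your actual Argilla export.\n", "\n", "```python\n", "# Sketch: convert annotated scores into a DPO-style preference dataset.\n", "# Column names are assumptions based on the fields/questions configured above;\n", "# check hf_dataset.column_names and adapt them to your export.\n", "from datasets import Dataset\n", "\n", "def to_preference_row(example):\n", "    score_1 = example[\"score_response_1\"]\n", "    score_2 = example[\"score_response_2\"]\n", "    if score_1 is None or score_2 is None or score_1 == score_2:\n", "        return None  # skip unannotated records and ties\n", "    chosen = example[\"generation_1\"] if score_1 > score_2 else example[\"generation_2\"]\n", "    rejected = example[\"generation_2\"] if score_1 > score_2 else example[\"generation_1\"]\n", "    return {\"prompt\": example[\"instruction\"], \"chosen\": chosen, \"rejected\": rejected}\n", "\n", "preference_rows = [row for row in (to_preference_row(ex) for ex in hf_dataset) if row is not None]\n", "dpo_dataset = Dataset.from_list(preference_rows)\n", "# dpo_dataset can now be split into train/test sets and passed to the TRL DPOTrainer.\n", "```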
\n", "\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }