notebooks/text-generation/llama2-13b-chatbot.ipynb (461 lines of code) (raw):
{
"cells": [
{
"cell_type": "markdown",
"id": "bae23f09",
"metadata": {},
"source": [
"# Create your own chatbot with llama-2-13B on AWS Inferentia\n",
"\n",
"This guide will detail how to export, deploy and run a **LLama-2 13B** chat model on AWS inferentia.\n",
"\n",
"You will learn how to:\n",
"- set up your AWS instance,\n",
"- export the Llama-2 model to the Neuron format,\n",
"- push the exported model to the Hugging Face Hub,\n",
"- deploy the model and use it in a chat application.\n",
"\n",
"Note: This tutorial was created on a inf2.48xlarge AWS EC2 Instance.\n",
"\n",
"## Prerequisite: Setup AWS environment\n",
"\n",
"*you can skip that section if you are already running this notebook on your instance.*\n",
"\n",
"In this example, we will use the *inf2.48xlarge* instance with 12 Neuron devices, corresponding to 24 Neuron Cores and the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2).\n",
"\n",
"This guide doesn’t cover how to create the instance in detail. You can refer to the [offical documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html). At step 4. you will select the\n",
"[Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) and at step 5. you will select an *inf2* instance type.\n",
"\n",
"Once the instance is up and running, you can ssh into it. But instead of developing inside a terminal you need to launch a Jupyter server to run this notebook.\n",
"\n",
"For this, you need first to add a port for forwarding in the ssh command, which will tunnel our localhost traffic to the AWS instance.\n",
"\n",
"From a local terminal, type the following commands:\n",
"\n",
"```shell\n",
"HOSTNAME=\"\" # IP address, e.g. ec2-3-80-....\n",
"KEY_PATH=\"\" # local path to key, e.g. ssh/trn.pem\n",
"\n",
"ssh -L 8080:localhost:8080 -i ${KEY_NAME}.pem ubuntu@$HOSTNAME\n",
"```\n",
"\n",
"On the instance, you can now start the jupyter server.\n",
"\n",
"```\n",
"python -m notebook --allow-root --port=8080\n",
"```\n",
"\n",
"You should see a familiar jupyter output with a URL.\n",
"\n",
"You can click on it, and a jupyter environment will open in your local browser.\n",
"\n",
"You can then browse to this notebook (`notebooks/text-generation/llama2-13-chatbot`) to continue with the guide.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "44142062",
"metadata": {},
"outputs": [],
"source": [
"# Special widgets are required for a nicer display\n",
"!{sys.executable} -m pip install ipywidgets"
]
},
{
"cell_type": "markdown",
"id": "bc76e858",
"metadata": {},
"source": [
"## 1. Export the Llama 2 model to Neuron\n",
"\n",
"For this guide, we will use the non-gated [NousResearch/Llama-2-13b-chat-hf](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf) model, which is functionally equivalent to the original [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf).\n",
"\n",
"This model is part of the **Llama 2** family of models, and has been tuned to recognize chat interactions\n",
"between a *user* and an *assistant* (more on that later).\n",
"\n",
"As explained in the [optimum-neuron documentation](https://huggingface.co/docs/optimum-neuron/guides/export_model#why-compile-to-neuron-model)\n",
", models need to be compiled and exported to a serialized format before running them on Neuron devices.\n",
"\n",
"Fortunately, 🤗 **optimum-neuron** offers a [very simple API](https://huggingface.co/docs/optimum-neuron/guides/models#configuring-the-export-of-a-generative-model)\n",
"to export standard 🤗 [transformers models](https://huggingface.co/docs/transformers/index) to the Neuron format.\n",
"\n",
"When exporting the model, we will specify two sets of parameters:\n",
"\n",
"- using *compiler_args*, we specify on how many cores we want the model to be deployed (each neuron device has two cores), and with which precision (here *float16*),\n",
"- using *input_shapes*, we set the static input and output dimensions of the model. All model compilers require static shapes, and neuron makes no exception. Note that the\n",
"*sequence_length* not only constrains the length of the input context, but also the length of the Key/Value cache, and thus, the output length.\n",
"\n",
"Depending on your choice of parameters and inferentia host, this may take from a few minutes to more than an hour.\n",
"\n",
"For your convenience, we host a pre-compiled version of that model on the Hugging Face hub, so you can skip the export and start using the model immediately in section 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "612e39ad",
"metadata": {},
"outputs": [],
"source": [
"from optimum.neuron import NeuronModelForCausalLM\n",
"\n",
"\n",
"compiler_args = {\"num_cores\": 24, \"auto_cast_type\": 'fp16'}\n",
"input_shapes = {\"batch_size\": 1, \"sequence_length\": 2048}\n",
"model = NeuronModelForCausalLM.from_pretrained(\n",
" \"NousResearch/Llama-2-13b-chat-hf\",\n",
" export=True,\n",
" **compiler_args,\n",
" **input_shapes)"
]
},
{
"cell_type": "markdown",
"id": "25440470",
"metadata": {},
"source": [
"This probably took a while.\n",
"\n",
"Fortunately, you will need to do this only once because you can save your model and reload it later."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "63ddcd3a",
"metadata": {},
"outputs": [],
"source": [
"model.save_pretrained(\"llama-2-13b-chat-neuron\")"
]
},
{
"cell_type": "markdown",
"id": "e221d9ad",
"metadata": {},
"source": [
"Even better, you can push it to the [Hugging Face hub](https://huggingface.co/models).\n",
"\n",
"For that, you need to be logged in to a [HuggingFace account](https://huggingface.co/join).\n",
"\n",
"If you are not connected already on your instance, you will now be prompted for an access token."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "762a9e7d",
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import notebook_login\n",
"\n",
"\n",
"notebook_login(new_session=False)"
]
},
{
"cell_type": "markdown",
"id": "856c4cc7",
"metadata": {},
"source": [
"By default, the model will be uploaded to your account (organization equal to your user name).\n",
"\n",
"Feel free to edit the cell below if you want to upload the model to a specific [Hugging Face organization](https://huggingface.co/docs/hub/organizations)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f79155c8",
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import whoami\n",
"\n",
"\n",
"org = whoami()['name']\n",
"\n",
"repo_id = f\"{org}/llama-2-13b-chat-neuron\"\n",
"\n",
"model.push_to_hub(\"llama-2-13b-chat-neuron\", repository_id=repo_id)"
]
},
{
"cell_type": "markdown",
"id": "96303dbe",
"metadata": {},
"source": [
"### A few more words about export parameters.\n",
"\n",
"The minimum memory required to load a model can be computed with:\n",
"\n",
"```\n",
" memory = bytes per parameter * number of parameters\n",
"```\n",
"\n",
"The **Llama 2 13B** model uses *float16* weights (stored on 2 bytes) and has 13 billion parameters, which means it requires at least 2 * 13B or ~26GB of memory to store its weights.\n",
"\n",
"Each NeuronCore has 16GB of memory which means that a 26GB model cannot fit on a single NeuronCore.\n",
"\n",
"In reality, the total space required is much greater than just the number of parameters due to caching attention layer projections (KV caching).\n",
"This caching mechanism grows memory allocations linearly with sequence length and batch size.\n",
"\n",
"Here we set the *batch_size* to 1, meaning that we can only process one input prompt in parallel. We set the *sequence_length* to 2048, which corresponds to half the model maximum capacity (4096).\n",
"\n",
"The formula to evaluate the size of the KV cache is more involved as it also depends on parameters related to the model architecture, such as the width of the embeddings and the number of decoder blocks.\n",
"\n",
"Bottom-line is, to get very large language models to fit, tensor parallelism is used to split weights, data, and compute across multiple NeuronCores, keeping in mind that the memory on each core cannot exceed 16GB.\n",
"\n",
"Note that increasing the number of cores beyond the minimum requirement almost always results in a faster model.\n",
"Increasing the tensor parallelism degree improves memory bandwidth which improves model performance.\n",
"\n",
"To optimize performance it's recommended to use all cores available on the instance.\n",
"\n",
"In this guide we use all the 24 cores of the *inf2.48xlarge*, but this should be changed to 12 if you are\n",
"using a *inf2.24xlarge* instance."
]
},
{
"cell_type": "markdown",
"id": "10d21867",
"metadata": {},
"source": [
"## 2. Generate text using Llama 2 on AWS Inferentia2\n",
"\n",
"Once your model has been exported, you can generate text using the transformers library, as it has been described in [detail in this post](https://huggingface.co/blog/how-to-generate).\n",
"\n",
"If as suggested you skipped the first section, don't worry: we will use a precompiled model already present on the hub instead."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ac1a7c31",
"metadata": {},
"outputs": [],
"source": [
"from optimum.neuron import NeuronModelForCausalLM\n",
"\n",
"\n",
"try:\n",
" model\n",
"except NameError:\n",
" # Edit this to use another base model\n",
" model = NeuronModelForCausalLM.from_pretrained('aws-neuron/Llama-2-13b-chat-hf-neuron-latency')"
]
},
{
"cell_type": "markdown",
"id": "5a034c58",
"metadata": {},
"source": [
"We will need a *Llama 2* tokenizer to convert the prompt strings to text tokens."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "832d93bc",
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"NousResearch/Llama-2-13b-chat-hf\")"
]
},
{
"cell_type": "markdown",
"id": "76a048db",
"metadata": {},
"source": [
"The following generation strategies are supported:\n",
"\n",
"- greedy search,\n",
"- multinomial sampling with top-k and top-p (with temperature).\n",
"\n",
"Most logits pre-processing/filters (such as repetition penalty) are supported."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7947684c",
"metadata": {},
"outputs": [],
"source": [
"inputs = tokenizer(\"What is deep-learning ?\", return_tensors=\"pt\")\n",
"outputs = model.generate(**inputs,\n",
" max_new_tokens=128,\n",
" do_sample=True,\n",
" temperature=0.9,\n",
" top_k=50,\n",
" top_p=0.9)\n",
"tokenizer.batch_decode(outputs, skip_special_tokens=True)"
]
},
{
"cell_type": "markdown",
"id": "1df9e9bd",
"metadata": {},
"source": [
"## 3. Create a chat application using llama on AWS Inferentia2\n",
"\n",
"We specifically selected a **Llama 2** chat variant to illustrate the excellent behaviour of the exported model when the length of the encoding context grows.\n",
"\n",
"The model expects the prompts to be formatted following a specific template corresponding to the interactions between a *user* role and an *assistant* role.\n",
"\n",
"Each chat model has its own convention for encoding such contents, and we will not go into too much details in this guide, because we will directly use the [Hugging Face chat templates](https://huggingface.co/blog/chat-templates) corresponding to our model.\n",
"\n",
"The utility function below converts a list of exchanges between the user and the model into a well-formatted chat prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "db16c699",
"metadata": {},
"outputs": [],
"source": [
"def format_chat_prompt(message, history, max_tokens):\n",
" \"\"\" Convert a history of messages to a chat prompt\n",
" Args:\n",
" message(str): the new user message.\n",
" history (List[str]): the list of user messages and assistant responses.\n",
" max_tokens (int): the maximum number of input tokens accepted by the model.\n",
" Returns:\n",
" a `str` prompt.\n",
" \"\"\"\n",
" chat = []\n",
" # Convert all messages in history to chat interactions\n",
" for interaction in history:\n",
" chat.append({\"role\": \"user\", \"content\" : interaction[0]})\n",
" chat.append({\"role\": \"assistant\", \"content\" : interaction[1]})\n",
" # Add the new message\n",
" chat.append({\"role\": \"user\", \"content\" : message})\n",
" # Generate the prompt, verifying that we don't go beyond the maximum number of tokens\n",
" for i in range(0, len(chat), 2):\n",
" # Generate candidate prompt with the last n-i entries\n",
" prompt = tokenizer.apply_chat_template(chat[i:], tokenize=False)\n",
" # Tokenize to check if we're over the limit\n",
" tokens = tokenizer(prompt)\n",
" if len(tokens.input_ids) <= max_tokens:\n",
" # We're good, stop here\n",
" return prompt\n",
" # We shall never reach this line\n",
" raise SystemError"
]
},
{
"cell_type": "markdown",
"id": "92cac294",
"metadata": {},
"source": [
"We are now equipped to build a simplistic chat application.\n",
"\n",
"We simply store the interactions between the user and the assistant in a list that we use to generate\n",
"the input prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d0bf4952",
"metadata": {},
"outputs": [],
"source": [
"history = []\n",
"max_tokens = 1024\n",
"\n",
"def chat(message, history, max_tokens):\n",
" prompt = format_chat_prompt(message, history, max_tokens)\n",
" # Uncomment the line below to see what the formatted prompt looks like\n",
" #print(prompt)\n",
" inputs = tokenizer(prompt, return_tensors=\"pt\")\n",
" outputs = model.generate(**inputs,\n",
" max_length=2048,\n",
" do_sample=True,\n",
" temperature=0.9,\n",
" top_k=50,\n",
" repetition_penalty=1.2)\n",
" # Do not include the input tokens\n",
" outputs = outputs[0, inputs.input_ids.size(-1):]\n",
" response = tokenizer.decode(outputs, skip_special_tokens=True)\n",
" history.append([message, response])\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f70e487",
"metadata": {},
"outputs": [],
"source": [
"print(chat(\"My favorite color is blue. My favorite fruit is strawberry.\", history, max_tokens))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9d344a6",
"metadata": {},
"outputs": [],
"source": [
"print(chat(\"Name a fruit that is on my favorite colour.\", history, max_tokens))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33330967",
"metadata": {},
"outputs": [],
"source": [
"print(chat(\"What is the colour of my favorite fruit ?\", history, max_tokens))"
]
},
{
"cell_type": "markdown",
"id": "38df6da1",
"metadata": {},
"source": [
"**Warning**: While very powerful, Large language models can sometimes *hallucinate*. We call *hallucinations* generated content that is irrelevant or made-up but presented by the model as if it was accurate. This is a flaw of LLMs and is not a side effect of using them on Trainium / Inferentia."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0f8b4dc6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}