quick_start/legacy/07_prompt_engineering.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"import os\n",
"from dotenv import load_dotenv\n",
"load_dotenv()\n",
"\n",
"openai.api_type = \"azure\"\n",
"openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
"openai.api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"openai.api_base = os.getenv(\"OPENAI_API_BASE\")\n",
"model=os.getenv('CHAT_COMPLETION_NAME')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# A Few Shot Learning"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"👏,🌟,🤩,👍\n"
]
}
],
"source": [
"# Zero-shot classification\n",
"system_prompt =\"\"\"Predict up to 5 emojis as a response to a text chat message. The output\n",
"should only include emojis.\n",
"\n",
"input: The new visual design is blowing my mind 🤯\n",
"output: ➕,💘, ❤🔥\n",
"\n",
"input: Well that looks great regardless\n",
"output: ❤️,🪄\n",
"\n",
"input: Unfortunately this won't work\n",
"output: 💔,😔\n",
"\n",
"input: sounds good, I'll look into that\n",
"output: 🙏,👍\n",
"\n",
"input: 10hr cut of jeff goldblum laughing URL\n",
"output: 😂,💀,⚰️\n",
"\"\"\"\n",
"user_prompt = \"The new user interface is amazing!\"\n",
"response = openai.ChatCompletion.create(\n",
"engine=model,\n",
"messages = [{\"role\":\"system\", \"content\":system_prompt},\n",
" {\"role\":\"user\",\"content\": user_prompt,}])\n",
"print(response['choices'][0]['message']['content'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Prompt Engineering Best Practices\n",
"\n",
"## Write clear instructions\n",
"\n",
"Examples:\n",
"\n",
"-----------------------\n",
"Prompt:\n",
"\n",
"Write code to calculate the Fibonacci sequence.\n",
"\n",
"Better:\n",
"\n",
"Write a TypeScript function to efficiently calculate the Fibonacci sequence. Comment the code liberally to explain what each piece does and why it's written that way.\n",
"\n",
"----------------------\n",
"\n",
"Prompt:\n",
"\n",
"Summarize the meeting notes.\n",
"\n",
"Better:\n",
"\n",
"Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any.\n"
]
},
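{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal sketch of the meeting-notes example above: it sends both the vague prompt and the more specific one through the same chat deployment so the difference in output structure can be compared. The `meeting_notes` string is placeholder text invented for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare a vague instruction with a more specific one.\n",
"# Sketch only: meeting_notes is placeholder text invented for illustration.\n",
"meeting_notes = \"\"\"Alice: we are shipping the beta next Friday.\n",
"Bob: the login bug is still open.\n",
"Carol: marketing needs final screenshots by Wednesday.\"\"\"\n",
"\n",
"vague_prompt = f\"Summarize the meeting notes.\\n\\n{meeting_notes}\"\n",
"specific_prompt = (\n",
"    \"Summarize the meeting notes in a single paragraph. \"\n",
"    \"Then write a markdown list of the speakers and each of their key points. \"\n",
"    \"Finally, list the next steps or action items suggested by the speakers, if any.\\n\\n\"\n",
"    f\"{meeting_notes}\"\n",
")\n",
"\n",
"for label, prompt in [(\"Vague\", vague_prompt), (\"Specific\", specific_prompt)]:\n",
"    response = openai.ChatCompletion.create(\n",
"        engine=model,\n",
"        messages=[{\"role\": \"user\", \"content\": prompt}],\n",
"    )\n",
"    print(f\"--- {label} prompt ---\")\n",
"    print(response[\"choices\"][0][\"message\"][\"content\"], \"\\n\")"
]
},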
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Role Playing\n",
"\n",
"Examples:\n",
"\n",
"-----------------------\n",
"\n",
"System Message: When I ask for help to write something, you will reply with a document that contains at least one joke or playful comment in every paragraph.\n",
"\n",
"----------------------\n"
]
},
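{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the role-playing system message above, using the same chat deployment; the user request is an arbitrary example made up for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Role playing via the system message.\n",
"# Sketch only: the user request is an arbitrary example.\n",
"system_prompt = (\n",
"    \"When I ask for help to write something, you will reply with a document \"\n",
"    \"that contains at least one joke or playful comment in every paragraph.\"\n",
")\n",
"user_prompt = \"Help me write a short announcement about our new office coffee machine.\"\n",
"\n",
"response = openai.ChatCompletion.create(\n",
"    engine=model,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt},\n",
"    ],\n",
")\n",
"print(response[\"choices\"][0][\"message\"][\"content\"])"
]
},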
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Segment input text\n",
"\n",
"Examples:\n",
"\n",
"------------------------\n",
"\n",
"user message: Summarize the text delimited by triple quotes with a haiku.\n",
"\n",
"\"\"\"insert text here\"\"\"\n",
"\n",
"------------------------\n",
"\n",
"system message: You will be provided with a pair of articles (delimited with XML tags) about the same topic. First summarize the arguments of each article. Then indicate which of them makes a better argument and explain why.\n",
"\n",
"user message: \n",
"\n",
"\\<article> insert first article here \\</article>\n",
"\n",
"\\<article> insert second article here \\</article>\n",
"\n",
"------------------------"
]
},
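{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the article-comparison example above: the two articles are delimited with XML tags inside the user message. The article strings are placeholders, as in the prompt template."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Segment input text with XML tags.\n",
"# Sketch only: the article strings are placeholders.\n",
"system_prompt = (\n",
"    \"You will be provided with a pair of articles (delimited with XML tags) about the same topic. \"\n",
"    \"First summarize the arguments of each article. \"\n",
"    \"Then indicate which of them makes a better argument and explain why.\"\n",
")\n",
"article_1 = \"insert first article here\"\n",
"article_2 = \"insert second article here\"\n",
"user_prompt = f\"<article>{article_1}</article>\\n\\n<article>{article_2}</article>\"\n",
"\n",
"response = openai.ChatCompletion.create(\n",
"    engine=model,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt},\n",
"    ],\n",
")\n",
"print(response[\"choices\"][0][\"message\"][\"content\"])"
]
},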
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explain steps and processes to complete a task\n",
"\n",
"Examples:\n",
"\n",
"--------------------------------\n",
"\n",
"System Message:\n",
"Use the following step-by-step instructions to respond to user inputs.\n",
"\n",
"Step 1 - The user will provide you with text in triple quotes. Summarize this text in one sentence with a prefix that says \"Summary: \".\n",
"\n",
"Step 2 - Translate the summary from Step 1 into Spanish, with a prefix that says \"Translation: \".\n",
"\n",
"---------------------------------"
]
},
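{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the step-by-step system message above; the text in triple quotes is a short placeholder passage made up for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Step-by-step instructions in the system message.\n",
"# Sketch only: the quoted passage is made up for illustration.\n",
"system_prompt = (\n",
"    \"Use the following step-by-step instructions to respond to user inputs.\\n\\n\"\n",
"    \"Step 1 - The user will provide you with text in triple quotes. \"\n",
"    'Summarize this text in one sentence with a prefix that says \"Summary: \".\\n\\n'\n",
"    \"Step 2 - Translate the summary from Step 1 into Spanish, \"\n",
"    'with a prefix that says \"Translation: \".'\n",
")\n",
"user_prompt = '\"\"\"Prompt engineering is the practice of writing clear, structured instructions so that a language model produces the output you want.\"\"\"'\n",
"\n",
"response = openai.ChatCompletion.create(\n",
"    engine=model,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt},\n",
"    ],\n",
")\n",
"print(response[\"choices\"][0][\"message\"][\"content\"])"
]
},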
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use few-shot learning\n",
"\n"
]
},
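{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal few-shot sketch for a second task, sentiment classification, using the same chat deployment; the labelled examples are made up for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Few-shot sentiment classification.\n",
"# Sketch only: the labelled examples are made up for illustration.\n",
"system_prompt = \"\"\"Classify the sentiment of the message as positive, negative, or neutral.\n",
"\n",
"input: The onboarding flow was painless.\n",
"output: positive\n",
"\n",
"input: The app crashes every time I open it.\n",
"output: negative\n",
"\n",
"input: The release notes are attached.\n",
"output: neutral\n",
"\"\"\"\n",
"user_prompt = \"Support got back to me within minutes, great experience!\"\n",
"\n",
"response = openai.ChatCompletion.create(\n",
"    engine=model,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt},\n",
"    ],\n",
")\n",
"print(response[\"choices\"][0][\"message\"][\"content\"])"
]
},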
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "azureml_py38",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "6d65a8c07f5b6469e0fc613f182488c0dccce05038bbda39e5ac9075c0454d11"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}