gemini/prompts/intro_prompt_design.ipynb (890 lines of code) (raw):

{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "ur8xi4C7S06n" }, "outputs": [], "source": [ "# Copyright 2024 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "JAPoU8Sm5E6e" }, "source": [ "# Prompt Design - Best Practices\n", "\n", "<table align=\"left\">\n", " <td style=\"text-align: center\">\n", " <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fprompts%2Fintro_prompt_design.ipynb\">\n", " <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n", " </a>\n", " </td> \n", " <td style=\"text-align: center\">\n", " <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/prompts/intro_prompt_design.ipynb\">\n", " <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\">\n", " <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n", " </a>\n", " </td>\n", " <td style=\"text-align: center\">\n", " <a href=\"https://goo.gle/4fWHlze\">\n", " <img width=\"32px\" src=\"https://cdn.qwiklabs.com/assets/gcp_cloud-e3a77215f0b8bfa9b3f611c0d2208c7e8708ed31.svg\" alt=\"Google Cloud logo\"><br> Open in Cloud Skills Boost\n", " </a>\n", " </td>\n", "</table>\n", "\n", "<div style=\"clear: both;\"></div>\n", "\n", "<b>Share to:</b>\n", "\n", "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n", "</a>\n", "\n", "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n", "</a>\n", "\n", "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n", "</a>\n", "\n", "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n", "</a>\n", "\n", "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/intro_prompt_design.ipynb\" target=\"_blank\">\n", " <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n", "</a> " ] }, { "cell_type": "markdown", "metadata": { "id": "84f0f73a0f76" }, "source": [ "| | |\n", "|-|-|\n", "|Author(s) | [Polong Lin](https://github.com/polong-lin), [Karl Weinmeister](https://github.com/kweinmeister) |" ] }, { "cell_type": "markdown", "metadata": { "id": "tvgnzT1CKxrO" }, "source": [ "## Overview\n", "\n", "This notebook covers the essentials of prompt engineering, including some best practices.\n", "\n", "Learn more about prompt design in the [official documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/text/text-overview).\n", "\n", "In this notebook, you learn best practices around prompt engineering -- how to design prompts to improve the quality of your responses.\n", "\n", "This notebook covers the following best practices for prompt engineering:\n", "\n", "- Be concise\n", "- Be specific and well-defined\n", "- Ask one task at a time\n", "- Turn generative tasks into classification tasks\n", "- Improve response quality by including examples" ] }, { "cell_type": "markdown", "metadata": { "id": "61RBz8LLbxCR" }, "source": [ "## Getting Started" ] }, { "cell_type": "markdown", "metadata": { "id": "No17Cw5hgx12" }, "source": [ "### Install Google Gen AI SDK\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tFy3H3aPgx12" }, "outputs": [], "source": [ "%pip install --upgrade --quiet google-genai" ] }, { "cell_type": "markdown", "metadata": { "id": "R5Xep4W9lq-Z" }, "source": [ "### Restart runtime\n", "\n", "To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which will restart the current kernel." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XRvKdaPDTznN" }, "outputs": [], "source": [ "import IPython\n", "\n", "app = IPython.Application.instance()\n", "app.kernel.do_shutdown(True)" ] }, { "cell_type": "markdown", "metadata": { "id": "SbmM4z7FOBpM" }, "source": [ "<div class=\"alert alert-block alert-warning\">\n", "<b>⚠️ The kernel is going to restart. Please wait until it is finished before continuing to the next step. ⚠️</b>\n", "</div>\n" ] }, { "cell_type": "markdown", "metadata": { "id": "dmWOrTJ3gx13" }, "source": [ "### Authenticate your notebook environment (Colab only)\n", "\n", "Authenticate your environment on Google Colab.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NyKGtVQjgx13" }, "outputs": [], "source": [ "import sys\n", "\n", "if \"google.colab\" in sys.modules:\n", " from google.colab import auth\n", "\n", " auth.authenticate_user()" ] }, { "cell_type": "markdown", "metadata": { "id": "06489bd14f16" }, "source": [ "### Import libraries\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "154137022fb6" }, "outputs": [], "source": [ "from IPython.display import Markdown, display\n", "from google import genai\n", "from google.genai.types import GenerateContentConfig" ] }, { "cell_type": "markdown", "metadata": { "id": "DF4l8DTdWgPY" }, "source": [ "### Set Google Cloud project information and create client\n", "\n", "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n", "\n", "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Nqwi-5ufWp_B" }, "outputs": [], "source": [ "import os\n", "\n", "PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n", "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n", " PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n", "\n", "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QqRWdPGmW3NJ" }, "outputs": [], "source": [ "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)" ] }, { "cell_type": "markdown", "metadata": { "id": "OnFPpCRtXRl4" }, "source": [ "### Load model\n", "\n", "Learn more about all [Gemini models on Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IQYu_9SvXQah" }, "outputs": [], "source": [ "MODEL_ID = \"gemini-2.0-flash-001\" # @param {type: \"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "cVOtUNJ5X0PY" }, "source": [ "## Prompt engineering best practices" ] }, { "cell_type": "markdown", "metadata": { "id": "uv_e0fEPX60q" }, "source": [ "Prompt engineering is all about how to design your prompts so that the response is what you were indeed hoping to see.\n", "\n", "The idea of using \"unfancy\" prompts is to minimize the noise in your prompt to reduce the possibility of the LLM misinterpreting the intent of the prompt. Below are a few guidelines on how to engineer \"unfancy\" prompts.\n", "\n", "In this section, you'll cover the following best practices when engineering prompts:\n", "\n", "* Be concise\n", "* Be specific, and well-defined\n", "* Ask one task at a time\n", "* Improve response quality by including examples\n", "* Turn generative tasks to classification tasks to improve safety" ] }, { "cell_type": "markdown", "metadata": { "id": "0pY4XX0OX9_Y" }, "source": [ "### Be concise" ] }, { "cell_type": "markdown", "metadata": { "id": "xlRpxyxGYA1K" }, "source": [ "🛑 Not recommended. The prompt below is unnecessarily verbose." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YKV4G-CfXdbi" }, "outputs": [], "source": [ "prompt = \"What do you think could be a good name for a flower shop that specializes in selling bouquets of dried flowers more than fresh flowers?\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "YrJexRHJYnmC" }, "source": [ "✅ Recommended. The prompt below is to the point and concise." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VHetn9lCYrXB" }, "outputs": [], "source": [ "prompt = \"Suggest a name for a flower shop that sells bouquets of dried flowers\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "eXTAvdOHY0OC" }, "source": [ "### Be specific, and well-defined" ] }, { "cell_type": "markdown", "metadata": { "id": "FTH4GEIgY1dp" }, "source": [ "Suppose that you want to brainstorm creative ways to describe Earth." ] }, { "cell_type": "markdown", "metadata": { "id": "o5BmXBiGY4KC" }, "source": [ "🛑 The prompt below might be a bit too generic (which is certainly OK if you'd like to ask a generic question!)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eHBaMvv7Y6mR" }, "outputs": [], "source": [ "prompt = \"Tell me about Earth\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "4iyvEbteZnFL" }, "source": [ "✅ Recommended. The prompt below is specific and well-defined." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JQ80z8urZnne" }, "outputs": [], "source": [ "prompt = \"Generate a list of ways that makes Earth unique compared to other planets\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "R5kmfZYHZsJ7" }, "source": [ "### Ask one task at a time" ] }, { "cell_type": "markdown", "metadata": { "id": "rsAezxeYZuUN" }, "source": [ "🛑 Not recommended. The prompt below has two parts to the question that could be asked separately." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ElywPXpuZtWf" }, "outputs": [], "source": [ "prompt = \"What's the best method of boiling water and why is the sky blue?\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "ejzahazBZ8vk" }, "source": [ "✅ Recommended. The prompts below asks one task a time." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "C5ckp2F0Z_Ba" }, "outputs": [], "source": [ "prompt = \"What's the best method of boiling water?\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "KwUzhud4aA89" }, "outputs": [], "source": [ "prompt = \"Why is the sky blue?\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "PJIL2RTQaGcT" }, "source": [ "### Watch out for hallucinations" ] }, { "cell_type": "markdown", "metadata": { "id": "8Y8kYxrSaHE9" }, "source": [ "Although LLMs have been trained on a large amount of data, they can generate text containing statements not grounded in truth or reality; these responses from the LLM are often referred to as \"hallucinations\" due to their limited memorization capabilities. Note that simply prompting the LLM to provide a citation isn't a fix to this problem, as there are instances of LLMs providing false or inaccurate citations. Dealing with hallucinations is a fundamental challenge of LLMs and an ongoing research area, so it is important to be cognizant that LLMs may seem to give you confident, correct-sounding statements that are in fact incorrect.\n", "\n", "Note that if you intend to use LLMs for the creative use cases, hallucinating could actually be quite useful." ] }, { "cell_type": "markdown", "metadata": { "id": "8NY5nAGeaJYS" }, "source": [ "Try the prompt like the one below repeatedly. We set the temperature to `1.0` so that it takes more risks in its choices. It's possible that it may provide an inaccurate, but confident answer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QALPjEILaM62" }, "outputs": [], "source": [ "generation_config = GenerateContentConfig(temperature=1.0)\n", "\n", "prompt = \"What day is it today?\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "BRkwzbgRbhKt" }, "source": [ "Since LLMs do not have access to real-time information without further integrations, you may have noticed it hallucinates what day it is today in some of the outputs." ] }, { "cell_type": "markdown", "metadata": { "id": "3c811e310d02" }, "source": [ "### Using system instructions to guardrail the model from irrelevant responses\n", "\n", "How can we attempt to reduce the chances of irrelevant responses and hallucinations?\n", "\n", "One way is to provide the LLM with [system instructions](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/send-chat-prompts-gemini#system-instructions).\n", "\n", "Let's see how system instructions works and how you can use them to reduce hallucinations or irrelevant questions for a travel chatbot.\n", "\n", "Suppose we ask a simple question about one of Italy's most famous tourist spots." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rB6zJU76biFK" }, "outputs": [], "source": [ "generation_config = GenerateContentConfig(temperature=1.0)\n", "\n", "chat = client.chats.create(\n", " model=MODEL_ID,\n", " config=GenerateContentConfig(\n", " system_instruction=[\n", " \"Hello! You are an AI chatbot for a travel web site.\",\n", " \"Your mission is to provide helpful queries for travelers.\",\n", " \"Remember that before you answer a question, you must check to see if it complies with your mission.\",\n", " \"If not, you can say, Sorry I can't answer that question.\",\n", " ]\n", " ),\n", ")\n", "\n", "prompt = \"What is the best place for sightseeing in Milan, Italy?\"\n", "\n", "response = chat.send_message(prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "WZa-Qcf9cF4A" }, "source": [ "Now let us pretend to be a user asks the chatbot a question that is unrelated to travel." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "AZKBIDr2cGnu" }, "outputs": [], "source": [ "prompt = \"What is the best place for sightseeing in Milan, Italy?\"\n", "\n", "response = chat.send_message(prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "JiUYIhwpctCy" }, "source": [ "You can see that this way, a guardrail in the prompt prevented the chatbot from veering off course." ] }, { "cell_type": "markdown", "metadata": { "id": "ZuuDhA37cvmP" }, "source": [ "### Turn generative tasks into classification tasks to reduce output variability" ] }, { "cell_type": "markdown", "metadata": { "id": "kUCUrsUzczmb" }, "source": [ "#### Generative tasks lead to higher output variability" ] }, { "cell_type": "markdown", "metadata": { "id": "a1xASHAkc46n" }, "source": [ "The prompt below results in an open-ended response, useful for brainstorming, but response is highly variable." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nPfXQWIacwRf" }, "outputs": [], "source": [ "prompt = \"I'm a high school student. Recommend me a programming activity to improve my skills.\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "iAmm9wPYc_1o" }, "source": [ "#### Classification tasks reduces output variability" ] }, { "cell_type": "markdown", "metadata": { "id": "VvRpK_0GdCpf" }, "source": [ "The prompt below results in a choice and may be useful if you want the output to be easier to control." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kYDKh0r2dAqo" }, "outputs": [], "source": [ "prompt = \"\"\"I'm a high school student. Which of these activities do you suggest and why:\n", "a) learn Python\n", "b) learn JavaScript\n", "c) learn Fortran\n", "\"\"\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "iTd60b1GdIsx" }, "source": [ "### Improve response quality by including examples" ] }, { "cell_type": "markdown", "metadata": { "id": "yJi44NejdJYE" }, "source": [ "Another way to improve response quality is to add examples in your prompt. The LLM learns in-context from the examples on how to respond. Typically, one to five examples (shots) are enough to improve the quality of responses. Including too many examples can cause the model to over-fit the data and reduce the quality of responses.\n", "\n", "Similar to classical model training, the quality and distribution of the examples is very important. Pick examples that are representative of the scenarios that you need the model to learn, and keep the distribution of the examples (e.g. number of examples per class in the case of classification) aligned with your actual distribution." ] }, { "cell_type": "markdown", "metadata": { "id": "sMbLginWdOKs" }, "source": [ "#### Zero-shot prompt" ] }, { "cell_type": "markdown", "metadata": { "id": "Crh2Loi2dQ0v" }, "source": [ "Below is an example of zero-shot prompting, where you don't provide any examples to the LLM within the prompt itself." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-7myRc-SdTQ4" }, "outputs": [], "source": [ "prompt = \"\"\"Decide whether a Tweet's sentiment is positive, neutral, or negative.\n", "\n", "Tweet: I loved the new YouTube video you made!\n", "Sentiment:\n", "\"\"\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "ucRtPn9SdL64" }, "source": [ "#### One-shot prompt" ] }, { "cell_type": "markdown", "metadata": { "id": "rs0gQH2vdYBi" }, "source": [ "Below is an example of one-shot prompting, where you provide one example to the LLM within the prompt to give some guidance on what type of response you want." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "iEq-KxGYdaT5" }, "outputs": [], "source": [ "prompt = \"\"\"Decide whether a Tweet's sentiment is positive, neutral, or negative.\n", "\n", "Tweet: I loved the new YouTube video you made!\n", "Sentiment: positive\n", "\n", "Tweet: That was awful. Super boring 😠\n", "Sentiment:\n", "\"\"\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "JnKLjJzmdfL_" }, "source": [ "#### Few-shot prompt" ] }, { "cell_type": "markdown", "metadata": { "id": "6Zv-9F5OdgI_" }, "source": [ "Below is an example of few-shot prompting, where you provide a few examples to the LLM within the prompt to give some guidance on what type of response you want." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "u37P9tG4dk9S" }, "outputs": [], "source": [ "prompt = \"\"\"Decide whether a Tweet's sentiment is positive, neutral, or negative.\n", "\n", "Tweet: I loved the new YouTube video you made!\n", "Sentiment: positive\n", "\n", "Tweet: That was awful. Super boring 😠\n", "Sentiment: negative\n", "\n", "Tweet: Something surprised me about this video - it was actually original. It was not the same old recycled stuff that I always see. Watch it - you will not regret it.\n", "Sentiment:\n", "\"\"\"\n", "\n", "response = client.models.generate_content(model=MODEL_ID, contents=prompt)\n", "display(Markdown(response.text))" ] }, { "cell_type": "markdown", "metadata": { "id": "wDMD3xb2dvX6" }, "source": [ "#### Choosing between zero-shot, one-shot, few-shot prompting methods" ] }, { "cell_type": "markdown", "metadata": { "id": "s92W0YpNdxJp" }, "source": [ "Which prompt technique to use will solely depends on your goal. The zero-shot prompts are more open-ended and can give you creative answers, while one-shot and few-shot prompts teach the model how to behave so you can get more predictable answers that are consistent with the examples provided." ] } ], "metadata": { "colab": { "name": "intro_prompt_design.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }