Prompting Guide

Prompting Techniques

Prompt engineering is the practice of designing and refining prompts to get better results from large language models (LLMs) across different tasks.

Here's a summary of prompting techniques:

  • Zero-shot Prompting: Providing a prompt without any examples.
    • Example: Translate "Hello" to French.
  • Few-shot Prompting: Providing a few examples to guide the model.
    • Example:
      English: Happy
      French: Heureux
      English: Sad
      French: Triste
      English: Angry
      French:
  • Chain-of-Thought Prompting: Encouraging the model to explain its reasoning step-by-step.
    • Example: Solve this problem by explaining each step: What is 2 + 2 * 2?
  • Meta Prompting: Using prompts to design prompts or guide the model's behavior.
    • Example: You are an expert prompt engineer. Your task is to rewrite the following prompt to be more effective: "Tell me about the capital of France."
  • Self-Consistency: Generating multiple outputs and selecting the most consistent one.
    • Example: Ask a question multiple times and choose the most common answer.
  • Generated Knowledge Prompting: Having the model first generate relevant facts, then use them to answer the question.
    • Example: Generate three facts about penguins. Then, using those facts, answer: Can penguins fly?
  • Prompt Chaining: Combining multiple prompts to solve a complex task.
    • Example: First, ask the model to summarize a document. Then, ask it to translate the summary into another language.
  • Tree of Thoughts: Exploring multiple reasoning paths and backtracking when necessary.
    • Example: (Complex, requires a more detailed scenario) Imagine you are in a maze. Describe your possible moves at each intersection and the reasoning behind them.
  • Retrieval Augmented Generation (RAG): Retrieving relevant information from a knowledge base and incorporating it into the prompt.
    • Example: Using the following article, answer the question: [Article content]. Question: What is the main topic of the article?
  • Automatic Reasoning and Tool-use (ART): Allowing the model to use external tools to answer questions.
    • Example: Use a calculator to solve: 1234 * 5678.
  • Automatic Prompt Engineer (APE): Automatically generating prompts to optimize performance.
    • (This is a technique to create prompts, not a prompt itself. Requires a system to evaluate prompt performance.)
  • Active-Prompt: Selecting the questions the model is most uncertain about for human annotation with chain-of-thought exemplars, then using them as few-shot examples.
    • (This is a technique for building better few-shot prompts, not a prompt itself. Requires an uncertainty estimate over model outputs.)
  • Directional Stimulus Prompting (DSP): Steering the model's output by including hints or keywords (a "directional stimulus") in the prompt.
    • Example: Summarize the article: [Article content]. Hint (keywords): coral reefs; ocean warming; bleaching.
  • Program-Aided Language Models (PAL): Using code to help the model reason and generate responses.
    • Example: Write a Python function to calculate the square root of a number, then use it to find the square root of 144.
  • ReAct: Interleaving reasoning traces with actions (e.g., tool calls), using each observation to inform the next step.
    • Example:
      Thought: I need the current population of France.
      Action: Search[population of France]
      Observation: About 68 million.
      Thought: I now have enough information to answer.
  • Reflexion: Allowing the model to reflect on its past actions and improve its future performance.
    • (This is a technique to improve model performance over time, not a prompt itself. Requires a system to track past actions and outcomes.)
  • Multimodal CoT: Chain-of-thought prompting that incorporates multiple modalities (e.g., text and images).
    • Example: (Requires image input) Describe the image and explain why this is happening. (with an image of a traffic accident)
  • Graph Prompting: Representing information as a graph and using graph traversal to answer questions.
    • Example: (Requires a knowledge graph) Given a knowledge graph of historical figures, who were contemporaries of Leonardo da Vinci?
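Several of the techniques above amount to simple orchestration logic wrapped around a model call. As one illustration, here is a minimal Python sketch of Self-Consistency; the `ask_model` function is a hypothetical stand-in for a real LLM call (stubbed here with canned answers), and a real implementation would sample with temperature > 0 so that repeated calls can disagree:

```python
from collections import Counter

def ask_model(prompt: str, seed: int) -> str:
    # Stub standing in for a real LLM call. A real implementation would
    # sample the model with temperature > 0 so answers vary between calls.
    sampled_answers = ["6", "6", "8", "6", "6"]
    return sampled_answers[seed % len(sampled_answers)]

def self_consistency(prompt: str, n_samples: int = 5) -> str:
    """Sample the model several times and return the most common answer."""
    answers = [ask_model(prompt, seed=i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

answer = self_consistency("What is 2 + 2 * 2? Answer with a number only.")
print(answer)  # -> 6 (the majority of the sampled answers)
```

The same skeleton adapts to Prompt Chaining (feed one call's output into the next prompt) or ART-style tool use (replace `ask_model` with a dispatcher that can also call a calculator or search tool).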
