Techniques

Zero-Shot
The ability of LLMs to perform tasks without any examples in the prompt. You simply describe what you want in natural language. Modern LLMs like GPT-4 and Claude excel at zero-shot tasks due to extensive pre-training on diverse data.
Few-Shot Learning
A prompting technique where you provide 2-8 examples of the desired input-output format before the actual task. The model learns the pattern from the examples and applies it to new inputs. More examples typically improve performance but consume more tokens.
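As a minimal sketch of the format described above (the helper name and the sentiment-classification task are hypothetical), a few-shot prompt can be assembled like this:

```python
def build_few_shot_prompt(examples, query,
                          task="Classify the sentiment as positive or negative."):
    """Format (input, output) example pairs followed by the new input."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The new input ends with a bare "Output:" for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
prompt = build_few_shot_prompt(examples, "The food was wonderful.")
```

The resulting string is sent as a single prompt; the model infers the pattern from the two labeled pairs and completes the final `Output:` line.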
Chain-of-Thought
A prompting technique where you ask the LLM to show its reasoning step by step before giving the final answer. Research shows CoT can substantially improve accuracy on complex reasoning tasks. Two variants exist: Zero-Shot CoT ("Let's think step by step") and Few-Shot CoT (providing examples that include the reasoning).
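The Zero-Shot CoT variant amounts to appending the trigger phrase to the question; a minimal sketch (the function name and question are illustrative):

```python
def zero_shot_cot(question):
    """Wrap a question with the canonical Zero-Shot CoT trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = zero_shot_cot(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
)
```

The model then produces intermediate reasoning before its final answer, rather than answering directly.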
Self-Consistency
An enhancement to Chain-of-Thought where the model generates multiple reasoning paths for the same problem and selects the most common answer via majority voting. This reduces the impact of individual reasoning errors and improves reliability on complex tasks.
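The voting step can be sketched as follows, with canned strings standing in for real sampled CoT completions (`sample_fn` and the answers are stand-ins, not a real API):

```python
from collections import Counter

def self_consistency(sample_fn, question, n=5):
    """Sample n reasoning paths and return the majority-vote final answer."""
    answers = [sample_fn(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for five stochastic CoT completions, reduced to final answers.
canned = iter(["17", "17", "19", "17", "18"])
answer = self_consistency(lambda q: next(canned), "What is 8 + 9?", n=5)
# "17" wins the vote even though two of the five paths erred
```

In practice `sample_fn` would call the model with a nonzero temperature and parse the final answer out of each completion.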
Tree of Thoughts
A framework that extends Chain-of-Thought by exploring multiple reasoning paths organized as a tree. At each step, the model generates several possible thoughts, evaluates them, and selects the most promising branches to continue. Enables backtracking and strategic planning.
Meta-Prompting
A technique where an LLM is used to generate, refine, or optimize prompts for itself or another model. The model acts as a prompt engineer, often producing better instructions than a person would write by hand. This can be iterative, with the model improving prompts based on output quality.
Reflexion
A technique where the model reflects on its own output, identifies mistakes or areas for improvement, and generates a corrected response. This self-reflection loop can be repeated multiple times, progressively improving the quality of the answer.
Least-to-Most
A prompting strategy that breaks down a complex problem into a series of simpler subproblems. Each subproblem is solved in order, with solutions to earlier subproblems feeding into later ones. Particularly effective for tasks requiring compositional generalization.
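A sketch of the solve-in-order loop, with `eval` standing in for an LLM answering each arithmetic subproblem (all names and the `PREV` placeholder convention are hypothetical):

```python
def least_to_most(subproblems, solve_fn):
    """Solve subproblems in order, passing earlier solutions as context."""
    context = []
    for sub in subproblems:
        solution = solve_fn(sub, context)
        context.append((sub, solution))
    # The final subproblem's answer is the overall answer.
    return context[-1][1]

def mock_solve(sub, context):
    # Stand-in for an LLM call: substitute the previous answer, then solve.
    expr = sub.replace("PREV", context[-1][1]) if context else sub
    return str(eval(expr))

final = least_to_most(["3 + 2", "PREV * 4"], mock_solve)
# first subproblem yields "5", which feeds into the second: 5 * 4 = 20
```

A real implementation would first prompt the model to produce the decomposition itself, then loop over it as shown.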
Program of Thought
A technique where the model generates executable code (e.g., Python) to solve a reasoning problem instead of performing text-based chain-of-thought. The code is then executed to get the precise answer. Especially effective for mathematical and logical reasoning tasks.
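A minimal sketch of the execution step, assuming the model's output is a Python snippet that assigns its result to a variable named `answer` (unsandboxed here for brevity; a real system would isolate execution):

```python
def run_program_of_thought(generated_code, result_var="answer"):
    """Execute model-generated Python and read back the named result."""
    namespace = {}
    exec(generated_code, namespace)  # WARNING: sandbox this in production
    return namespace[result_var]

# What a model might emit for "A $25 shirt is 20% off; what do you pay?"
generated = "price = 25\ndiscount = 0.20\nanswer = price * (1 - discount)"
result = run_program_of_thought(generated)
# the interpreter, not the model, does the arithmetic: 25 * 0.8 = 20.0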
Chain of Verification
A technique where the model first generates a draft response, then creates verification questions about its claims, answers those questions independently, and revises the original response based on the verification. Reduces hallucinations and factual errors.
RAG
Retrieval-Augmented Generation — a technique that combines information retrieval with text generation. Instead of relying solely on the model's training data, RAG retrieves relevant documents from a knowledge base and includes them in the prompt. This reduces hallucinations and allows LLMs to access up-to-date information.
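A toy sketch of the retrieve-then-prompt flow, using word overlap in place of a real embedding-based retriever (all names and documents are hypothetical):

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, documents):
    """Stuff the top-k retrieved documents into the prompt as context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Paris is the capital of France.",
    "Python was created by Guido van Rossum.",
]
prompt = build_rag_prompt("How tall is the Eiffel Tower?", docs)
```

Production systems replace the overlap heuristic with vector similarity search over embedded chunks, but the prompt-assembly step looks much the same.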
Prompt Chaining
A technique where a complex task is broken into a series of simpler LLM calls, with the output of one call feeding into the next. Each step can use a different prompt, model, or even include validation logic. Enables building reliable pipelines for complex workflows.
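A minimal sketch of a two-step chain (summarize, then translate), with a canned `mock_llm` standing in for real model calls:

```python
def chain(steps, llm, initial_input):
    """Run prompt templates in sequence; each sees the previous output."""
    text = initial_input
    for template in steps:
        text = llm(template.format(input=text))
    return text

def mock_llm(prompt):
    # Stand-in for a real completion call; returns canned outputs per step.
    if prompt.startswith("Summarize"):
        return "LLMs can chain prompts."
    if prompt.startswith("Translate"):
        return "Los LLM pueden encadenar prompts."
    return prompt

steps = [
    "Summarize in one sentence: {input}",
    "Translate to Spanish: {input}",
]
result = chain(steps, mock_llm, "Long article text about prompt chaining ...")
```

Between steps a real pipeline can also validate the intermediate output (e.g., check length or format) and retry before continuing.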
Structured Output
Techniques for constraining LLM output to a specific format (JSON, XML, YAML, etc.). Can be achieved through prompt instructions, few-shot examples, or API features like JSON mode. Essential for integrating LLM outputs into software systems that expect structured data.
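Prompt instructions alone don't guarantee valid output, so integrating code often parses defensively; a minimal sketch that extracts the first JSON object from a chatty model reply (the reply text is illustrative):

```python
import json
import re

def extract_json(model_text):
    """Pull the first JSON object out of a model reply and parse it."""
    match = re.search(r"\{.*\}", model_text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Models often wrap the requested JSON in conversational filler.
reply = 'Sure! Here is the result:\n{"sentiment": "positive", "confidence": 0.92}'
data = extract_json(reply)
```

API-level JSON modes or schema-constrained decoding remove the need for this scraping when available; the regex fallback covers models that lack them.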
APE
Automatic Prompt Engineering — a technique where an LLM automatically generates, evaluates, and selects optimal prompts for a given task. The model proposes multiple prompt candidates, tests them against examples, and selects the best-performing one. Reduces the need for manual prompt tuning.
Prompt Engineering
The practice of designing and optimizing text prompts to elicit desired behaviors and outputs from LLMs. Encompasses techniques like role assignment, few-shot examples, Chain-of-Thought, output formatting, and iterative refinement. A core skill for working effectively with AI models.