Prompt Elements
Anatomy of a prompt
The Problem: You ask the AI "Write something good" and get a generic response. But when you ask precisely, you get exactly what you need. What's the difference?
The Solution: Think Like a Recipe
A good prompt is like a recipe. If you just say "make something tasty", the chef won't know what you want. But a recipe has clear structure: ingredients, portions, steps, and the expected result.
Not All Parts Are Required
You don't need every element in every prompt. Simple questions need simple prompts. But for complex tasks, adding structure helps a lot:
- Quick question: "What's the capital of France?" — just the task
- Code review: role + context + task + examples
- Writing article: role + context + task + format + constraints
- Data analysis: context + task + format + examples
The Role element is often set via a system prompt, which defines the model's behavior before the user's message.
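In chat-style APIs this typically looks like a list of messages, with the system message first (a minimal sketch of the common messages-list format; no specific provider is assumed):

```python
# Chat-style message list: the system message sets the role
# before the user's request arrives.
messages = [
    {"role": "system",
     "content": "You are an experienced Python developer."},
    {"role": "user",
     "content": "Write a function to validate email addresses."},
]

# The system message comes first and shapes every later reply.
system_prompt = messages[0]["content"]
print(system_prompt)
```

Because the system message is fixed before the conversation starts, the same role applies to every user message that follows.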
Think of it like a recipe with key ingredients:
1. Role: who should the AI be? "You are an experienced Python developer"
2. Context: what's the background? "I'm building a REST API for an e-commerce site"
3. Task: what exactly should be done? "Write a function to validate email addresses"
4. Format: how should the output look? "Return as a Python function with docstring and type hints"
5. Constraints: what limitations are there? "No external libraries, Python 3.9+"
6. Examples: show what you want. "For input 'test@email.com' return True"
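Putting the six ingredients together, a complete prompt can be assembled like this (a plain-text sketch; the variable names are illustrative, and each part is optional):

```python
# Assemble a prompt from the six ingredients. Include only the
# parts the task actually needs.
role = "You are an experienced Python developer."
context = "I'm building a REST API for an e-commerce site."
task = "Write a function to validate email addresses."
output_format = "Return a Python function with a docstring and type hints."
constraints = "No external libraries, Python 3.9+."
examples = "Example: for input 'test@email.com' return True."

# One element per line keeps the structure easy to scan.
prompt = "\n".join([role, context, task, output_format, constraints, examples])
print(prompt)
```

Dropping an element is as simple as leaving it out of the list, which makes it easy to experiment with different combinations.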
Pro Tip: The order matters! Put the most important info at the beginning and end of your prompt — models pay more attention to those parts. This is called the "primacy and recency effect" in cognitive science.
Where Is This Used?
Fun Fact: The key ingredients of a prompt are like building blocks — Role, Context, Task, Format, Constraints, and Examples. Mix and match them based on your needs!
Try It Yourself!
Click on prompt elements to toggle them on or off. Notice how adding each element makes the output more specific!
- Instruction: what the model should do; a clear task or command.
- Context: additional information that helps the model understand the task better.
- Input data: the data that the model should process.
- Output indicator: a hint about the expected response format.
You are a professional translator. Preserve the style and tone of the original.
Translate the following text to Spanish.
Text: "Today is a beautiful day for a walk in the park."
Translation:
Context → Instruction → Input → Output Indicator. This order helps the model understand the task better.
Simple queries may contain only an instruction. Add other elements as the task becomes more complex.
Clear separation of prompt elements helps the model understand what is expected. The instruction says what to do, the context explains how to do it, the input data gives the model something to work with, and the output indicator guides the response format.
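One simple way to keep the elements separated is to join them with blank lines, as in this sketch of the translator prompt shown earlier:

```python
# Build the translator prompt with blank lines between elements so the
# model can tell context, instruction, input, and output indicator apart.
context = ("You are a professional translator. "
           "Preserve the style and tone of the original.")
instruction = "Translate the following text to Spanish."
input_data = 'Text: "Today is a beautiful day for a walk in the park."'
output_indicator = "Translation:"

prompt = "\n\n".join([context, instruction, input_data, output_indicator])
print(prompt)
```

Ending the prompt with "Translation:" cues the model to begin its answer with the translation itself rather than commentary.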
Write a function to calculate the factorial of a number
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)
def factorial(n: int) -> int:
    """Calculate factorial of n.

    Args:
        n: Non-negative integer

    Returns:
        Factorial of n

    Raises:
        ValueError: If n is negative
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    if n == 0 or n == 1:
        return 1
    return n * factorial(n - 1)
A low temperature (e.g. 0.1) for code tends to produce more consistent, well-documented, and defensive results.
This lesson is part of a structured LLM course.