Chain of Verification
Fact-checking each claim
The Problem: AI can confidently give wrong answers (hallucinations). How can we make AI double-check its own work before delivering the final answer?
The Solution: Double-Check Like an Accountant
Chain of Verification (CoVe) makes AI generate verification questions about its answer, answer them independently, and fix any inconsistencies. It's like an accountant checking their calculations twice before submitting. This is especially useful for reducing hallucinations and pairs well with Chain-of-Thought reasoning.
Think of it like an accountant double-checking work:
1. Initial answer: "The total is $5,420"
2. Generate checks: "Does row 3 add up? Is the tax correct?"
3. Verify each: Check each part independently
4. Fix if needed: "Row 3 was wrong, corrected total: $5,320"
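The four steps above can be sketched as a simple pipeline. Note that `llm` is a placeholder for whatever model call you use, and the prompt wording is illustrative, not a fixed API:

```python
def chain_of_verification(question: str, llm) -> str:
    """Sketch of the CoVe loop; `llm` is any prompt -> text callable."""
    # Step 1: draft an initial (baseline) answer.
    baseline = llm(f"Question: {question}\nAnswer concisely.")

    # Step 2: plan verification questions about the draft's claims.
    plan = llm(
        f"Answer: {baseline}\n"
        "List one verification question per factual claim, one per line."
    )
    checks = [q.strip() for q in plan.splitlines() if q.strip()]

    # Step 3: answer each check INDEPENDENTLY. The baseline answer is
    # deliberately left out of these prompts, so the model cannot just
    # confirm its original mistakes.
    verifications = [(q, llm(f"Question: {q}\nAnswer concisely.")) for q in checks]

    # Step 4: revise the draft against the verification results.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return llm(
        f"Original question: {question}\n"
        f"Draft answer: {baseline}\n"
        f"Verification results:\n{evidence}\n"
        "Rewrite the draft, correcting any claim the verifications contradict."
    )
```

The key design choice is in step 3: each verification question is sent in a fresh prompt without the draft answer attached.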
Where Is This Used?
- Fact-Checking: Verifying claims in generated content
- Data Extraction: Confirming extracted information is accurate
- Summarization: Checking summaries don't introduce errors
- Question Answering: Validating answers against source material
Fun Fact: Chain of Verification has been reported to cut hallucinations by up to 50% on fact-based tasks! The key is that the verification questions are answered independently, so the AI doesn't just confirm its original mistakes.
Try It Yourself!
Use the interactive example below to see how verification questions help catch and correct errors in AI responses.
🔍 Chain of Verification — after generating an answer, the model verifies each factual claim and corrects errors. Especially useful for questions requiring accurate data.
Tell me about the Eiffel Tower.
The Eiffel Tower is an iron lattice tower in Paris, built in 1889. It is 324 meters tall. It's the tallest structure in France and the world's most visited tourist attraction.
Tell me about the Eiffel Tower. After answering:
1. Extract each factual claim
2. Verify each claim for accuracy
3. Correct errors in the final answer
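This single-prompt variant is easy to generate programmatically. A minimal helper (the wording mirrors the lesson's prompt; adapt it for your own use case):

```python
def build_cove_prompt(question: str) -> str:
    """Append Chain of Verification instructions to any question."""
    return (
        f"{question}\n"
        "After answering:\n"
        "1. Extract each factual claim\n"
        "2. Verify each claim for accuracy\n"
        "3. Correct errors in the final answer"
    )

# Example: build_cove_prompt("Tell me about the Eiffel Tower.")
```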
LLMs often "hallucinate", confidently stating incorrect facts. Chain of Verification forces the model to critically verify each claim separately, substantially reducing factual errors.
Name 3 facts about the Eiffel Tower.
- The Eiffel Tower was built in 1889
- Its height is 324 meters
- It was the tallest building in the world until 1930
FACTS:
- The Eiffel Tower was built in 1889
- Its height is 324 meters
- It was the tallest building in the world until 1930
🔍 VERIFICATION:
Fact 1: Built in 1889 ✅ Confidence: HIGH. Check: built for the 1889 World's Fair (centennial of the French Revolution). Status: CONFIRMED
Fact 2: 324 meters ⚠️ Confidence: MEDIUM. Check: the original height was 300 m; with the current antenna it is about 330 m.
📝 CORRECTION: Height: 330 meters with antenna, 300 m without.
Fact 3: Tallest until 1930 ⚠️ Confidence: MEDIUM. Check: the Chrysler Building (319 m) surpassed it in 1930; the Empire State Building (381 m) followed in 1931.
📝 CLARIFICATION: It was the world's tallest structure until the Chrysler Building was completed in 1930.
✅ VERIFIED ANSWER:
- Built in 1889 for World's Fair ✓
- Height: 330m with antenna (300m without) — corrected
- Tallest structure in the world until 1930 ✓
Chain of Verification makes the model fact-check its own claims and state its confidence honestly; here it caught the inaccuracy in the height.