Factuality & Hallucinations
Understand why LLMs hallucinate and strategies to improve factual accuracy
The Problem: AI can make up facts that sound completely believable (hallucinations). How can you verify AI claims and ensure accuracy?
The Solution: Be a Fact-Checker
Factuality means verifying AI outputs against reliable sources and detecting when the model is making things up. It's like being a fact-checker at a news organization: don't publish until you verify. Hallucinations are the core problem, and retrieval-augmented generation (RAG) with grounding is the best defense.
Think of it like a newsroom fact-checker:
- 1. Receive claim: AI says: "According to Smith v. Jones (2019), the regulation requires..."
- 2. Verify citations: Does "Smith v. Jones (2019)" exist? Check DOI, author names, court records. LLMs fabricate plausible-sounding references
- 3. Cross-reference facts: Compare claims against 2+ independent sources. Self-consistency: ask the same question 3 ways — do answers agree?
- 4. Classify confidence: Tag each fact: [VERIFIED] (found in source), [PLAUSIBLE] (consistent but unchecked), [UNVERIFIED] (no source found)
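The tagging step above can be sketched in a few lines of Python. The source texts, claim strings, and exact-substring matching are illustrative assumptions; a real pipeline would use retrieval and semantic matching rather than literal string comparison.

```python
# Minimal sketch of step 4 (confidence tagging): check each claim
# against a small corpus of trusted source text. Exact substring
# matching is a stand-in for real retrieval + semantic matching.

TRUSTED_SOURCES = [
    "The Eiffel Tower was completed in 1889 and designed by Gustave Eiffel.",
    "The Eiffel Tower is 330 meters tall.",
]

def tag_claim(claim: str, sources: list[str]) -> str:
    """Return [VERIFIED] if the claim appears in a source, else [UNVERIFIED]."""
    normalized = claim.lower().rstrip(".")
    for source in sources:
        if normalized in source.lower():
            return f"[VERIFIED] {claim}"
    return f"[UNVERIFIED] {claim}"

print(tag_claim("The Eiffel Tower is 330 meters tall", TRUSTED_SOURCES))
print(tag_claim("Einstein lectured from the top in 1905", TRUSTED_SOURCES))
```

A [PLAUSIBLE] tier would sit between these two, for claims consistent with the sources but not directly attested.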
Where Is This Used?
- Source Attribution: Ask AI to cite its sources
- RAG: Ground responses in retrieved documents
- Self-Consistency: Ask the same question multiple ways
- External Validation: Cross-check with search engines or databases
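The self-consistency idea can be sketched without any model API: collect answers to several phrasings of the same question and measure how strongly they agree. The answer strings below are hypothetical stand-ins for model outputs.

```python
from collections import Counter

def self_consistency(answers: list[str]) -> tuple[str, float]:
    """Majority-vote over answers to the same question asked different ways.

    Returns the most common answer and its agreement ratio; low agreement
    is a warning sign of hallucination."""
    counts = Counter(a.strip().lower() for a in answers)
    best, n = counts.most_common(1)[0]
    return best, n / len(answers)

# Hypothetical answers to three phrasings of
# "When was the Eiffel Tower completed?"
answer, agreement = self_consistency(["1889", "1889", "1887"])
print(answer, agreement)  # agreement below 1.0 -> flag for manual checking
```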
Fun Fact: Even the best LLMs hallucinate about 3-5% of the time on factual questions. The rate increases significantly for obscure topics, recent events, or highly specific technical details. Always verify important facts!
Try It Yourself!
See how to detect and handle AI hallucinations in practice.
LLM hallucinations are confidently generated content that does not correspond to reality. The model can invent facts, quotes, statistics, and even scientific studies that do not exist, while sounding absolutely convincing.
LLMs predict the most likely text continuation; they don't "know" facts. They have no real-time internet access, their knowledge stops at the training data cutoff, and they're optimized for fluency, not accuracy.
Signs of hallucinations: overly specific details, non-existent references, contradictions on repeated queries, information about events after the model's training date. Always verify critical information from independent sources.
Effective approaches: RAG (retrieval-augmented generation) for source grounding, Chain of Thought (CoT) for step-by-step verification, self-verification (asking the model to check its own claims), and requiring sources and confidence levels.
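A minimal RAG-style prompt builder illustrates the grounding idea. The prompt wording and the numeric citation format are assumptions for this sketch, not a standard.

```python
def build_grounded_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Build a RAG-style prompt: answer only from the provided context,
    cite source numbers, and admit when the context is insufficient."""
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(retrieved_docs))
    return (
        "Answer using ONLY the sources below. Cite source numbers like [1].\n"
        'If the sources do not contain the answer, reply "I don\'t know."\n\n'
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt(
    "When was the Eiffel Tower completed?",
    ["The Eiffel Tower was completed in 1889.", "It is 330 meters tall."],
))
```

The key constraint is the explicit permission to say "I don't know": without it, the model will tend to fill gaps with plausible-sounding text.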
LLM Hallucinations & Fact-Checking
Explore types of hallucinations and prevention strategies
Types of Hallucinations
"Who invented the telephone?"
"Cite a study about AI safety"
"If A > B and B > C, what about A and C?"
"What happened in tech in 2024?"
Spot the Hallucination
An LLM generated the following response about the Eiffel Tower. Click each statement to mark it as Real or Hallucinated, then check your answers.
The Eiffel Tower is 330 meters tall.
It was designed by Gustave Eiffel and completed in 1889.
Albert Einstein visited the tower in 1905 and gave a famous physics lecture from the top.
The tower receives about 7 million visitors per year.
In 2019, a second tower was built next to it called 'Tour Lumiere'.
Hallucination Mitigation Strategies
- RAG: Ground responses in retrieved documents
- Chain of Thought: Step-by-step reasoning reduces errors
- Self-Verification: Ask the model to verify its own claims
- Source Attribution: Require sources for factual claims
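Self-verification can be wired as a simple two-pass loop through any model client. Here `ask` is a placeholder callable (prompt string in, reply string out), and the audit prompt wording is one possible formulation, not a canonical one.

```python
def self_verify(ask, question: str) -> str:
    """Two-pass self-verification: generate an answer, then ask the model
    to audit that answer for factual errors and unsupported claims.

    `ask` is any callable mapping a prompt string to a reply string
    (a stand-in for a real LLM client)."""
    draft = ask(question)
    audit = ask(
        "Review the answer below for factual errors and unsupported claims. "
        "List anything that should be verified against an external source.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    return f"{draft}\n\n--- Verification notes ---\n{audit}"

# Demo with a stub in place of a real model client:
print(self_verify(lambda prompt: "stub reply", "Who invented the telephone?"))
```

Self-verification catches some errors but not all: a model can confidently "confirm" its own hallucination, so external checks still matter.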
Best Practices
- Always verify facts from external sources
- Use RAG for working with current data
- Ask the model to indicate confidence levels
- Break complex questions into simpler ones
When generating content:
- Ask for sources
- Use RAG for facts
- Break down complex questions
When verifying answers:
- Double-check citations
- Ask clarifying questions
- Use external sources
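Double-checking citations can start mechanically: extract DOI-like identifiers from the text and resolve them externally (for example via doi.org). The regex below is a rough heuristic, and a syntactically valid DOI can still be fabricated, so extraction is only the first step.

```python
import re

# Rough DOI shape: "10.", a 4-9 digit registrant code, "/", then a suffix.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")

def extract_dois(text: str) -> list[str]:
    """Pull DOI-like strings out of model output for external verification."""
    return DOI_PATTERN.findall(text)

print(extract_dois("The study (doi: 10.1000/xyz123) claims a 40% improvement."))
```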
Try it yourself
Interactive demo of this technique
Hallucination detection — confident but incorrect model response
The book "Quantum Algorithms for Everyday Life" was written by Professor Michael Vershinin from MIT and published in 2021 by O'Reilly. It became a bestseller in the popular science category.
I cannot confirm the existence of a book with the exact title "Quantum Algorithms for Everyday Life." You might be thinking of a different book on quantum computing. I recommend checking on Google Scholar, Amazon, or a library catalog.
Models default to providing an answer even when there is none. Explicit instructions to "acknowledge uncertainty" and "do not fabricate" are critical for factual accuracy.
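Those instructions can live in a reusable system prompt. The exact wording below is one possible formulation, assumed for illustration, not a canonical prompt.

```python
# A system prompt encoding "acknowledge uncertainty, do not fabricate".
# The wording is illustrative; tune it for your model and domain.
ANTI_HALLUCINATION_SYSTEM_PROMPT = (
    "You are a careful assistant. Follow these rules:\n"
    "1. If you are not confident a fact is true, say so explicitly.\n"
    "2. Never invent titles, authors, dates, statistics, or citations.\n"
    "3. Prefer 'I cannot confirm this' over a plausible-sounding guess.\n"
    "4. Label each factual claim with confidence: high / medium / low."
)

print(ANTI_HALLUCINATION_SYSTEM_PROMPT)
```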
This lesson is part of a structured LLM course.