ExplainLLM

LLM Fundamentals

Understand the core concepts behind Large Language Models

1. Tokenization: How AI reads text
Learn how text is broken down into tokens that AI can understand

2. Embeddings: Meaning as numbers
Explore how words become vectors in a high-dimensional space

3. Attention Mechanism: What to focus on
Understand how models decide which parts of the input matter most

4. Transformers: The complete picture
See how all components work together in the transformer architecture

5. Inference & KV-Cache: How LLMs generate text (login required)
Understand the prefill vs. generation phases, the KV-cache, and why the first token is slow

6. Decoding Strategies: Token selection methods (login required)
Learn about greedy decoding, beam search, temperature, top-k, and top-p sampling

7. LLM Settings: Temperature, Top-p & more (login required)
Learn how to control LLM behavior with generation parameters

8. Prompt Elements: Anatomy of a prompt (login required)
Understand the key components that make up effective prompts

9. Prompt Basics: Fundamentals of prompting
Learn the core principles of writing effective prompts for LLMs

10. Best Practices: Tips & tricks (login required)
Master proven techniques for writing high-quality prompts

11. Quantization: Shrinking models (login required)
Reduce model size with FP16, INT8, or INT4 while preserving quality

12. Fine-tuning vs Prompting: When to train your model (login required)
Learn when to use prompting, LoRA, or full fine-tuning for your use case

