Prompt Battle
Compare prompting techniques side by side
Summarize a complex text into key points
Zero-Shot (ZS): Direct question without examples
Few-Shot (FS): Provide worked examples with the question
Chain of Thought (CoT): Step-by-step reasoning
Tree of Thoughts (ToT): Explore multiple reasoning paths
Self-Consistency (SC): Multiple attempts, pick the best
Least-to-Most (L2M): Break the task into sub-problems
Reflexion (REF): Self-critique and improve
Role Play (RP): Answer as an expert persona
Step-Back (SB): Abstract first, then answer
Chain of Verification (CoV): Verify its own answer
Program of Thought (PoT): Write code to solve the problem
Structured Output (SO): Respond in JSON or another structured format
Analogical Reasoning (AR): Use analogies to reason
Socratic Method (SM): Reason through questions
Contrastive (CON): Define what NOT to do
Rephrase & Respond (RaR): Rephrase the question first
Multi-Persona Debate (MPD): Experts debate, then conclude
Constraint-Based (CB): Apply strict constraints to the output
Emotional Prompting (EP): Add emotional context
Chain of Density (CoD): Iteratively condense the summary
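To make the comparison concrete, here is a minimal sketch of how two of the techniques above, Zero-Shot and Chain of Thought, could be phrased as prompts for the same summarization task. The prompt wording and the helper functions are illustrative assumptions, not the tool's actual templates.

    # Hypothetical prompt builders for the summarization task; the wording is
    # an assumption, not this page's real templates.
    SAMPLE_TEXT = "The transformer architecture, introduced in 2017, ..."  # stand-in for the sample text below

    def zero_shot_prompt(text: str) -> str:
        # Zero-Shot (ZS): ask directly, with no examples or reasoning scaffold.
        return f"Summarize the following text into key points:\n\n{text}"

    def chain_of_thought_prompt(text: str) -> str:
        # Chain of Thought (CoT): ask the model to reason step by step
        # before writing the final summary.
        return (
            "Read the following text. First, think step by step about its main "
            "claims and how they relate. Then summarize the key points.\n\n"
            f"{text}"
        )

    if __name__ == "__main__":
        for name, build in [("ZS", zero_shot_prompt), ("CoT", chain_of_thought_prompt)]:
            print(f"--- {name} ---")
            print(build(SAMPLE_TEXT))

Sending both prompts to the same model and comparing the outputs side by side is the kind of head-to-head comparison this page is set up for.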
Sample text to summarize:

The transformer architecture, introduced in the 2017 paper "Attention Is All You Need," revolutionized natural language processing. Unlike earlier sequence-to-sequence models that relied on recurrent neural networks, transformers use self-attention to process entire sequences in parallel, which enables much faster training and better handling of long-range dependencies. The architecture consists of an encoder and a decoder, each built from stacked layers of self-attention and feed-forward networks. Key innovations include multi-head attention, which lets the model attend to different positions simultaneously, and positional encoding, which preserves sequence-order information. Transformers form the basis of models like BERT, GPT, and T5, which have achieved state-of-the-art results across many NLP benchmarks.
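For readers who want to see the mechanisms this text names, here is a minimal NumPy sketch of single-head scaled dot-product attention plus the sinusoidal positional encoding from the 2017 paper. It is an illustration under simplifying assumptions (no learned projections, no masking, a single head), not a full transformer implementation.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Core of self-attention: every position attends to every other
        # position in parallel, weighted by query-key similarity.
        d_k = Q.shape[-1]
        scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V

    def sinusoidal_positional_encoding(seq_len, d_model):
        # Injects sequence-order information that the order-agnostic
        # attention operation would otherwise lose.
        positions = np.arange(seq_len)[:, None]
        dims = np.arange(d_model)[None, :]
        angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
        angles = positions * angle_rates
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles[:, 0::2])
        pe[:, 1::2] = np.cos(angles[:, 1::2])
        return pe

    # Tiny demo: 4 tokens with 8-dimensional embeddings.
    x = np.random.randn(4, 8) + sinusoidal_positional_encoding(4, 8)
    out = scaled_dot_product_attention(x, x, x)  # single-head self-attention
    print(out.shape)  # (4, 8)

Multi-head attention repeats this operation over several learned projections of the queries, keys, and values and concatenates the results, which is what lets the model attend to different positions simultaneously.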