LangChain & LlamaIndex
LLM Application Frameworks
The Problem: Building agents from scratch means reinventing the wheel every time. You need to write code for tool calling, memory, chains, and error handling. And when you switch to a different LLM provider, you rewrite much of it again.
The Solution: Use a Framework
Frameworks like LangChain give you pre-built components for common agent patterns. Out of the box you get function-calling adapters, ReAct agents, and RAG pipelines.
Think of it like LEGO vs. wooden blocks:
- 1. Wooden blocks (from scratch): carve each piece yourself and design your own connectors; total flexibility, but slow and hard to share with others
- 2. LEGO (framework): standard pieces that snap together; fast to build, still flexible, and everyone understands the system
What Frameworks Provide
- Abstractions: Unified interface for different LLMs (OpenAI, Claude, etc.)
- Components: Ready-made tools, memory systems, retrievers
- Chains: Ways to combine LLM calls in sequences
- Agents: Pre-built patterns like ReAct, plan-and-execute
- Integrations: Connect to databases, APIs, vector stores
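The "Abstractions" bullet can be sketched without any framework: one wrapper class exposes a single `invoke` method over two incompatible provider clients. The provider classes below are entirely hypothetical stand-ins, not real SDK calls; real frameworks achieve the same effect with a shared base class.

```python
from dataclasses import dataclass

class FakeOpenAIChat:
    """Hypothetical stand-in for one provider's SDK; returns a canned reply."""
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class FakeClaudeChat:
    """Hypothetical stand-in for another provider with a different method name."""
    def send_message(self, text: str) -> str:
        return f"[claude] {text}"

@dataclass
class UnifiedLLM:
    """One interface regardless of which provider sits underneath."""
    backend: object

    def invoke(self, prompt: str) -> str:
        # Adapt each backend's native method to the shared one
        if isinstance(self.backend, FakeOpenAIChat):
            return self.backend.complete(prompt)
        return self.backend.send_message(prompt)

print(UnifiedLLM(FakeOpenAIChat()).invoke("hello"))  # [openai] hello
print(UnifiedLLM(FakeClaudeChat()).invoke("hello"))  # [claude] hello
```

Swapping providers means constructing a different backend; the calling code never changes.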
Fun Fact: LangChain started as a weekend project and became one of the most-starred AI repositories on GitHub within months. It proved that developers needed standardized building blocks for AI apps. There are now many alternatives, each with different strengths: LlamaIndex, Semantic Kernel, Haystack.
The task: create a chain of LLM calls that generates an idea → writes a plan → creates a draft. First, the from-scratch version with the raw OpenAI SDK:
```python
from openai import OpenAI

client = OpenAI()

# Step 1: generate an idea
idea = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Come up with an article idea"}],
).choices[0].message.content

# Step 2: turn the idea into a plan
plan = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Write a plan for: {idea}"}],
).choices[0].message.content

# Step 3: write the draft from the plan
draft = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Write a draft following: {plan}"}],
).choices[0].message.content

print(draft)
```
Now the same pipeline with LangChain:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()

# Define steps as prompt templates
idea_prompt = ChatPromptTemplate.from_template(
    "Come up with an article idea about: {topic}"
)
plan_prompt = ChatPromptTemplate.from_template(
    "Write an article plan based on the idea:\n{idea}"
)
draft_prompt = ChatPromptTemplate.from_template(
    "Write a draft following this plan:\n{plan}"
)

# Compose the chain using LCEL; each lambda re-wraps the previous
# string output into the dict the next prompt template expects
chain = (
    idea_prompt | llm | parser
    | (lambda idea: {"idea": idea})
    | plan_prompt | llm | parser
    | (lambda plan: {"plan": plan})
    | draft_prompt | llm | parser
)

# Run the whole pipeline with a single call
result = chain.invoke({"topic": "AI in education"})
print(result)
```
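The `|` operator that builds the chain is LCEL composition. Its core behavior can be mimicked in a few lines of plain Python (a toy sketch for intuition, not LangChain's actual internals):

```python
class Runnable:
    """Toy composable step; LangChain's Runnable works in a similar spirit."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chain two steps: the output of self feeds into other
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

fmt = Runnable(lambda d: f"Idea about {d['topic']}")
upper = Runnable(str.upper)
chain = fmt | upper
print(chain.invoke({"topic": "AI"}))  # IDEA ABOUT AI
```

Each `|` just means "feed my output into the next step's input," which is why prompts, models, parsers, and plain lambdas can all sit in one chain.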
LangChain advantages:
- Declarative chain instead of imperative glue code
- Built-in streaming, logging, and retries
- Easy to add a new step or swap models
A framework does not change the logic; it simplifies orchestration. Instead of manually "gluing" calls together, you describe the chain declaratively and get streaming, logging, and retries for free.
This lesson is part of a structured LLM course.