Multi-Agent Research Team with CrewAI
CrewAI is a framework where multiple AI agents work as a real team: each with a dedicated role, tools, and area of responsibility. We break down how to build a research crew with a coordinator and specialists — from role design to debugging your first real scenario.
Intermediate · AI Agents · 25 min · CrewAI, Python, SerperDev API
1
Do you actually need a team?
Every extra agent adds execution time, token cost, and debugging complexity. So the first question is not "how to set up CrewAI" but "should I bother?" A team is justified in three cases: the task exceeds a single context window, different parts need different tools, or you need built-in review where one agent checks another. In all other cases, a single agent with a good prompt will be cheaper and faster.
Do you need a multi-agent team?
Task does not fit in a single context window
Different parts need different tools (search + code + text)
Built-in review needed: one agent critiques another
Simple linear task with a single tool
Speed matters more than quality
Two-attempt rule: first solve it with one agent. If the result is bad due to complexity (not bad prompting) — then move to a team.
2
Roles are job descriptions, not decorations
Each agent has three components: goal (what it maximizes), backstory (how it thinks), tools (what it can call). The most underrated one is backstory — it is not a pretty log line, it is a hidden system prompt. "You are a critical editor who looks for weak spots" gives very different results than "You are a helpful assistant who improves text." The most common mistake is not describing what an agent does NOT do. Without that, agents creep into each other's territory and duplicate work.
| Role | Does | Does NOT do | Tools |
|---|---|---|---|
| Researcher | Finds facts, sources, data | Does not interpret or conclude | SerperDev, web scraper |
| Analyst | Structures and compares data | Does not search new sources | Context only |
| Writer | Writes the final text | Does not fact-check | Context only |
Researcher:
goal: find verified facts and primary sources
persona: "meticulous fact-checker, does not interpret"
tools: [search]
does NOT do: conclusions, recommendations, prose
Writer:
goal: write text based on someone else's analysis
persona: "trusts the analyst's data completely"
tools: [] (context from the previous step only)
does NOT do: fact search, source verification
3
Sequential, not parallel — and here is why
Intuition says parallel = faster. But in research tasks each step depends on the previous one, and parallelism here is a source of conflicts, not speedup. Process.sequential in CrewAI passes each agent's output to the next via context automatically. Predictable, easy to debug, powerful enough for 80% of use cases. Try parallelism only when you have truly independent branches — e.g. a market researcher and a competitor researcher working simultaneously.
Researcher → facts → Analyst → analysis → Writer → text → Output
task_1: "Find 5 facts about {topic} with sources"
assignee: researcher
expected result: "fact + URL + date"
task_2: "Write a 300-word summary"
assignee: writer
context: result of task_1
expected result: "text with no invented facts"
order: sequential (result → context)
Always specify expected_output — CrewAI uses it as the success criterion. Without it, the agent does not know when to stop and may loop forever.
4
Extra tools, extra problems
Every tool you give an agent is another fork where it spends tokens and might take a wrong turn. Researcher gets SerperDev — it needs search. Analyst and writer get nothing — they work only with context from the previous step. If you give the writer a search tool, it will start googling instead of writing from the analyst's data.
A separate trap: combining SerperDev and WebsiteSearchTool in one agent. Sounds like "more capabilities," but in practice the agent gets confused about which to call and often picks the suboptimal one. Better to have two specialized agents with one tool each — one searches, the other scrapes specific pages.
If an agent hallucinates facts — first check: does it have a search tool? An agent without search, tasked with "find facts", will invent them from training data. Not an agent bug — a configuration mistake.
5
Verbose is your X-ray. Without it you are blind
First run — always with verbose=True. This is not a debug option but your eyes inside the system: how each agent reasons, which tools it calls, what it passes along. Three signals to watch: agent "thinks" for a long time without calling tools — task too abstract; calls the same tool in a loop — expected_output is unstructured; passes huge text as context — next agent will get confused.
❌ Symptom
- Agent invents facts
- Ignores previous context
- Calls tool in a loop
- Result worse than single agent
✅ Fix
- Add search tool to agent
- Specify format in expected_output
- Make task description concrete
- Revisit role boundaries (step 2)
run with verbose=True → watch:
which tool each agent called
how many iterations each agent used
what it passed to the next agent's context
three warning signals:
thinks for a long time without calling tools → task too abstract
calls a tool in a loop → expected_output is vague
passes too much text → filter the context
Set max_iter=5 during development. Default is 15 — an infinite loop will cost you. A typical first run with 3-4 agents takes 2-5 minutes.
Result
A working multi-agent team: researcher finds facts, analyst structures them, writer drafts. You understand when a team is needed versus a single agent, how to design roles, and how to read verbose output.