Building a multi-source autonomous research agent with LangGraph, ThreadPoolExecutor and Ollama

Source: DEV Community
I wanted a tool that could research any topic deeply: not just one web search, but Wikipedia, arXiv, Semantic Scholar, GitHub, Hacker News, Stack Overflow, Reddit, YouTube and local documents, all at once. So I built it. This post covers the architecture decisions, the parallel execution model, the self-correction loop, and a few things that didn't work before I got it right.

Live demo: https://huggingface.co/spaces/ecerocg/research-agent
Source: https://github.com/RobertoDeLaCamara/Research-Agent

The problem with sequential research agents

Most agent examples I found do this:

search web → process → search wiki → process → search arxiv → process → synthesize

If each source takes 5–10 seconds (network + LLM processing), a 10-source agent takes 50–100 seconds minimum, before synthesis even starts. The fix is obvious: run everything in parallel.

Architecture overview

initialize_state
       │
plan_research ←──────────────────────┐
       │                             │
parallel_search                   re-plan
       │                             │
consolidate ──→ evaluate ────────────┘
       │
       │
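The loop above maps naturally onto a LangGraph `StateGraph`. Here is a minimal sketch of that wiring; the node bodies, the 0.7 quality threshold, and the 3-iteration cap are illustrative stand-ins, not the repo's actual values.

```python
# Sketch: the plan → search → consolidate → evaluate → re-plan loop as a
# LangGraph StateGraph. Node names mirror the diagram; node logic is stubbed.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class ResearchState(TypedDict):
    topic: str
    findings: List[str]
    quality: float
    iterations: int


def initialize_state(state: ResearchState) -> dict:
    return {"findings": [], "quality": 0.0, "iterations": 0}


def plan_research(state: ResearchState) -> dict:
    # Real version: ask the LLM to break the topic into sub-queries.
    return {"iterations": state["iterations"] + 1}


def parallel_search(state: ResearchState) -> dict:
    # Real version: fan out to all sources in parallel.
    return {"findings": state["findings"] + [f"pass {state['iterations']}"]}


def consolidate(state: ResearchState) -> dict:
    return {"findings": state["findings"]}


def evaluate(state: ResearchState) -> dict:
    # Real version: LLM grades coverage; here quality just grows per pass.
    return {"quality": min(1.0, 0.4 * state["iterations"])}


def should_replan(state: ResearchState) -> str:
    # Re-plan until quality is good enough or the iteration cap is hit.
    if state["quality"] >= 0.7 or state["iterations"] >= 3:
        return "done"
    return "re-plan"


g = StateGraph(ResearchState)
g.add_node("initialize_state", initialize_state)
g.add_node("plan_research", plan_research)
g.add_node("parallel_search", parallel_search)
g.add_node("consolidate", consolidate)
g.add_node("evaluate", evaluate)
g.set_entry_point("initialize_state")
g.add_edge("initialize_state", "plan_research")
g.add_edge("plan_research", "parallel_search")
g.add_edge("parallel_search", "consolidate")
g.add_edge("consolidate", "evaluate")
g.add_conditional_edges("evaluate", should_replan,
                        {"re-plan": "plan_research", "done": END})
app = g.compile()

result = app.invoke({"topic": "x", "findings": [],
                     "quality": 0.0, "iterations": 0})
```

The key piece is `add_conditional_edges`: the evaluate node's verdict decides whether control loops back to `plan_research` or exits the graph.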
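The parallel_search node is where the sequential 50–100 seconds collapses to roughly the latency of the slowest source. A minimal stdlib sketch with `concurrent.futures`, assuming each source exposes a search callable (the names and the placeholder `search_source` are illustrative, not the repo's API):

```python
# Fan out one query to every source at once; wall time ≈ slowest source,
# not the sum of all sources.
from concurrent.futures import ThreadPoolExecutor, as_completed


def search_source(name: str, query: str) -> tuple[str, str]:
    # Placeholder standing in for a network call + LLM summarisation.
    return name, f"{name} results for {query!r}"


SOURCES = ["web", "wikipedia", "arxiv", "semantic_scholar", "github",
           "hackernews", "stackoverflow", "reddit", "youtube", "local_docs"]


def parallel_search(query: str, timeout: float = 30.0) -> dict[str, str]:
    results: dict[str, str] = {}
    # One worker per source so nothing queues behind a slow neighbour.
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = {pool.submit(search_source, s, query): s for s in SOURCES}
        for fut in as_completed(futures, timeout=timeout):
            source = futures[fut]
            try:
                name, payload = fut.result()
                results[name] = payload
            except Exception as exc:
                # One failing source must not sink the whole research run.
                results[source] = f"error: {exc}"
    return results


found = parallel_search("LangGraph agents")
```

Threads (rather than processes) are the right fit here because the work is I/O-bound: the workers spend nearly all their time waiting on network responses, so the GIL is not a bottleneck.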