If you build LLM-powered apps, you’ll see two patterns again and again: sequential chains (LangChain) and stateful agent graphs (LangGraph). Both let large language models (LLMs) solve problems, but they target different levels of complexity and workflow needs. Here’s a compact guide to what each does, its core components, and when to use which.
What is LangChain (the chain pattern)?
LangChain is the classic pattern for building LLM-powered applications and chatbots. At its heart, a LangChain app follows a sequential flow: a chain of steps that execute one after another. Formally, the flow is a directed acyclic graph, so there are no loops or retries built in.
Key components:
- Retriever / Data ingestion: Load documents from PDFs, CSVs, web pages, or APIs. Use document loaders to parse and normalize input.
- Text splitting: Chunk large documents to respect LLM context windows.
- Vector DB + embeddings: Convert chunks to vectors and store them for semantic search and context retrieval.
- Context → Prompt → LLM: The chain injects retrieved context into the prompt, then calls the LLM; each step runs in a fixed sequence.
- Memory / Output handling: Optionally persist memory and post-process outputs. (A minimal end-to-end sketch of this flow follows the list.)
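The components above compose into a short script. Here is a minimal sketch, assuming the `langchain-openai`, `langchain-community`, `langchain-text-splitters`, `pypdf`, and `faiss-cpu` packages, an `OPENAI_API_KEY` in the environment, and a hypothetical `handbook.pdf`; exact import paths vary across LangChain versions:

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Ingest: load the (hypothetical) PDF and split it into chunks
#    small enough to fit the model's context window.
docs = PyPDFLoader("handbook.pdf").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 2. Embed the chunks and index them in a vector store for semantic search.
retriever = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever()

# 3. Retrieve context, fill the prompt, call the LLM; each step runs in order.
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
question = "What is the refund policy?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
print((prompt | llm).invoke({"context": context, "question": question}).content)
```

Notice that nothing branches: each line depends only on the one before it, which is exactly what makes chains predictable and easy to debug.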
Use LangChain when you need reliable, straightforward RAG-style retrieval and a predictable, linear execution path — for example, FAQ bots, document Q&A, or any app where tasks run in a fixed order.
What is LangGraph (the agentic graph pattern)?
LangGraph is centered on building stateful, multi-agent workflows. Instead of one long chain, you model tasks as graph nodes and let multiple AI agents interact, communicate, and re-execute nodes. Edges represent data flow and conditions, so execution doesn’t have to be strictly linear.
Key components:
- Nodes (tasks): Each node runs a distinct agent or function (e.g., requirement analysis, code generation, testing).
- Edges (flows): Edges route outputs between nodes; conditional edges enable branching and feedback loops.
- Shared persistent memory / state: Memory is accessible across nodes, enabling richer context sharing and incremental updates.
- Agentic decision-making: Agents decide whether to call tools, query DBs, or ask for human feedback. (A minimal graph sketch follows the list.)
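As a minimal sketch of these pieces, assuming only the `langgraph` package: two stub nodes share a typed state, and a conditional edge loops back until a review step approves. Real agents (LLM calls, tool use) would replace the placeholder functions.

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class State(TypedDict):
    draft: str
    attempts: int
    approved: bool

def generate(state: State) -> dict:
    # Placeholder for a generator agent; nodes read and update shared state.
    n = state["attempts"] + 1
    return {"draft": f"draft v{n}", "attempts": n}

def review(state: State) -> dict:
    # Placeholder for a reviewer agent: approve after two attempts.
    return {"approved": state["attempts"] >= 2}

def route(state: State) -> str:
    # Conditional edge: loop back to "generate" until the draft is approved.
    return "done" if state["approved"] else "retry"

graph = StateGraph(State)
graph.add_node("generate", generate)
graph.add_node("review", review)
graph.add_edge(START, "generate")
graph.add_edge("generate", "review")
graph.add_conditional_edges("review", route, {"retry": "generate", "done": END})

app = graph.compile()
print(app.invoke({"draft": "", "attempts": 0, "approved": False}))
# {'draft': 'draft v2', 'attempts': 2, 'approved': True}
```

The cycle between `review` and `generate` is precisely what a chain's acyclic flow cannot express.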
Choose LangGraph for complex workflows that need branching, retries, parallel agents, or human-in-the-loop checks — e.g., multi-step software development flows, orchestration across tools, or any system that benefits from agent collaboration.
RAG vs Agentic RAG — practical difference
- Traditional RAG (LangChain style): The app retrieves context from a vector store and the LLM generates an answer from it. The flow is simple and sequential.
- Agentic RAG (LangGraph style): One or more agents decide whether to fetch data, call tools, update memory, or re-query other agents. This is richer and better suited to workflows that require conditional actions; the sketch below contrasts the two.
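A hedged sketch of the difference, reusing the `llm` and `retriever` objects from the LangChain example above; the YES/NO routing prompt is illustrative, not a library API:

```python
def plain_rag(question: str) -> str:
    # Traditional RAG: always retrieve, then answer from the retrieved context.
    context = "\n\n".join(d.page_content for d in retriever.invoke(question))
    return llm.invoke(f"Context:\n{context}\n\nQuestion: {question}").content

def agentic_rag(question: str) -> str:
    # Agentic RAG: the model first decides whether retrieval is needed at all.
    decision = llm.invoke(
        f"Does answering this require a document lookup? Reply YES or NO.\n{question}"
    ).content
    if "YES" in decision.upper():
        return plain_rag(question)   # conditional action: fetch, then answer
    return llm.invoke(question).content  # answer directly, skip retrieval
```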
When to pick which (quick checklist)
- Use LangChain if:
  - Your app is mostly document retrieval → answer.
  - You need a predictable, easy-to-debug sequential flow.
  - You want fast MVPs with standard RAG patterns.
- Use LangGraph if:
  - Your application requires multiple agents, branching, or retries.
  - You need shared, stateful memory across tasks.
  - You’re building orchestration across tools, tests, or human feedback loops.
Final takeaway
LangChain = sequential RAG & chains (simple, reliable).
LangGraph = agentic graphs & stateful workflows (powerful, flexible).
Pick the pattern to match your problem complexity: start with LangChain for document Q&A and graduate to LangGraph when you need agent collaboration, conditional flows, or persistent cross-task state.