kritak
Frameworks & SDKs · framework · Open source

LangGraph

by LangChain

LangChain's graph-based framework for building stateful, multi-actor AI agents — control flow, memory, human-in-the-loop, and durable execution as first-class concepts.

Notable for
Made checkpointing and human-in-the-loop first-class instead of afterthoughts, which turned out to be the bottleneck for production agents.

$ cat curator-note.md

LangGraph's core insight is that production agents need state machines, not chains. Real agents need to pause for human approval, persist across crashes, replay from a known good checkpoint, and stream their reasoning to a UI — all of which are awkward bolt-ons in older agent frameworks but native primitives in LangGraph. The checkpointing system in particular is a quiet superpower: you can pause an agent mid-run, store the state in Postgres, restart it days later, and have it pick up exactly where it left off. That's the difference between a demo and a system.
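The pause-persist-resume pattern described above can be sketched in plain Python. This is not LangGraph's actual API — it is a stdlib-only illustration of the checkpointing idea, with a made-up `agent_state.json` file standing in for a real backend like Postgres:

```python
import json
from pathlib import Path

# Hypothetical checkpoint location; a production system would use a
# durable store (e.g. Postgres) rather than a local file.
CHECKPOINT = Path("agent_state.json")

def load_state():
    # Resume from the last checkpoint if one exists, else start fresh.
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"step": 0, "messages": []}

def save_state(state):
    # Persist after every step so a crash or pause loses nothing.
    CHECKPOINT.write_text(json.dumps(state))

def run_step(state):
    # Stand-in for one LLM or tool call; a real agent does work here.
    state["messages"].append(f"completed step {state['step']}")
    state["step"] += 1
    return state

state = load_state()
while state["step"] < 3:
    state = run_step(state)
    save_state(state)  # the process can die here and resume days later
```

Because every step is checkpointed before the next one starts, killing the process mid-run and re-running the script picks up exactly where it left off — the property the note calls the difference between a demo and a system.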

The price for that power is a steeper learning curve than the alternatives. LangGraph thinking — nodes, edges, channels, reducers — takes a few days to internalize, and the docs assume you've already adopted that mental model. The framework is also tied to LangChain's broader ecosystem, which is a strength when you want pre-built integrations and a weakness when you want a small dependency surface. Some teams find it heavyweight for simple use cases; for those, a plain agent loop with tool calls is genuinely simpler.
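The mental model behind that learning curve — nodes that transform shared state, edges that decide what runs next — can be shown with a toy graph runner. The node and edge names here are invented for illustration; LangGraph's real abstraction (StateGraph, channels, reducers) is richer, but the shape is the same:

```python
# Each node is a function that takes the shared state and returns it updated.
def draft(state):
    state["text"] = "draft"
    return state

def review(state):
    state["approved"] = state["text"] == "draft"
    return state

nodes = {"draft": draft, "review": review}
edges = {"draft": "review", "review": None}  # None marks the end of the graph

def run(start, state):
    node = start
    while node is not None:
        state = nodes[node](state)  # the node transforms the shared state
        node = edges[node]          # the edge decides which node runs next
    return state

result = run("draft", {})
```

The plain agent loop mentioned above is what you get when the edge table collapses to "always call the model again until it stops asking for tools" — which is exactly why it is simpler, and why it stops being enough once you need conditional routing, pauses, or replay.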

Use LangGraph when you're building something that needs to run reliably in production — long-running agents, agents that pause for human input, agents that need to be debugged after the fact. The checkpointing alone is worth the adoption cost if you're shipping to real users. For prototypes, demos, or single-shot agents, CrewAI or a hand-rolled loop will get you to a working result faster. LangGraph's payoff is in the second month, not the first afternoon.