what is it?
beyond its official documentation, i see LangGraph as a framework for building single- or multi-agent systems with abstractions that make sense.
instead of spending time figuring out how to integrate with different model providers and tools, manage the agent's memory, add observability, and so much more, we can spend our time on the agent's logic and architecture.
the abstractions
the framework provides packages for both tools and model providers, letting developers swap between them with ease.
instead of building a model integration from scratch like so:
// Different auth, different formats, different protocols
const openaiResponse = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${OPENAI_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [...],
    tools: [...] // OpenAI format
  })
});

const anthropicResponse = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': ANTHROPIC_KEY,
    'anthropic-version': '2023-06-01', // required by Anthropic, not by OpenAI
  },
  body: JSON.stringify({
    model: 'claude-sonnet-4',
    max_tokens: 1024, // required here, optional elsewhere
    messages: [...],
    tools: [...] // Anthropic format - completely different
  })
});
we can use the packages LangGraph provides for the different model providers:
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// Swap providers with one line
const llm = new ChatOpenAI({ model: "gpt-4" });
// or
// const llm = new ChatAnthropic({ model: "claude-sonnet-4" });

// Everything else stays the same - LangGraph handles the rest
const llmWithTools = llm.bindTools(tools);
const response = await llmWithTools.invoke(messages);
and here is the full list of integrations: multiple providers and dozens of tools, all behind a consistent interface.
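to get a feel for that consistent interface, here is a rough sketch of defining a single tool and handing it to either provider. the weather tool and its behaviour are made up for illustration:

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// Hypothetical tool: the name, description, and schema are what the model sees.
const getWeather = tool(
  async ({ city }) => {
    // stand-in for a real weather API call
    return `sunny in ${city}`;
  },
  {
    name: "get_weather",
    description: "Look up the current weather for a city",
    schema: z.object({ city: z.string() }),
  }
);

// The same tool object works with either provider - no format translation needed.
const openaiLlm = new ChatOpenAI({ model: "gpt-4" }).bindTools([getWeather]);
const anthropicLlm = new ChatAnthropic({ model: "claude-sonnet-4" }).bindTools([getWeather]);
```

swapping providers now means changing one constructor, not rewriting the tool schema.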
the core components: nodes, edges, and state
LangGraph also provides the building blocks required to create an agent.
in the framework, an agent is made up of three components:
- nodes - functions that contain the actual llm or tool calls
- edges - the routes between nodes; a conditional edge is a function that decides which node runs next
- state - an object that represents the agent's short-term memory for the current session (a short sketch follows this list)
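to make the state idea concrete, here is a minimal sketch of how a node reads and updates that shared memory when using MessagesAnnotation. the node and its reply are made up:

```typescript
import { MessagesAnnotation } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

// A node receives the current state and returns a partial update.
// With MessagesAnnotation, any messages it returns are appended to
// state.messages - that running list is the agent's short-term memory.
const exampleNode = async (state: typeof MessagesAnnotation.State) => {
  const messageCount = state.messages.length; // read the memory so far
  return {
    messages: [new AIMessage(`hypothetical reply #${messageCount + 1}`)],
  };
};
```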
here is what wiring all three together looks like:
import { StateGraph, START, END, MessagesAnnotation } from "@langchain/langgraph";

// Define the nodes (we'll implement these in the next post)
const llmCallNode = async (state) => {
  // Call the LLM and return response
};

const toolNode = async (state) => {
  // Execute tools and return results
};

const shouldContinue = (state) => {
  // Decide whether to use tools or finish
};

const agent = new StateGraph(MessagesAnnotation)
  .addNode("llmCall", llmCallNode)
  .addNode("toolNode", toolNode)
  .addEdge(START, "llmCall")
  .addConditionalEdges("llmCall", shouldContinue, ["toolNode", END])
  .addEdge("toolNode", "llmCall")
  .compile();
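as a preview of the routing logic (the full node implementations come in the next post), shouldContinue usually just checks whether the last model message asked for a tool call, along these lines:

```typescript
import { END, MessagesAnnotation } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

// Sketch of the conditional edge: route to the tool node if the last
// AI message contains tool calls, otherwise end the run.
const shouldContinue = (state: typeof MessagesAnnotation.State) => {
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  return lastMessage.tool_calls?.length ? "toolNode" : END;
};
```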
next steps
now that you understand what LangGraph is and how it structures agents, let’s build one.
in the next post, we'll create a researcher agent that searches the web and consolidates its findings. you'll see how nodes, edges, and state work together.