# LangGraph Adapter
Use this adapter when your application organizes LLM work as LangGraph nodes or LangChain runnable flows.
## Integration shape
LangGraph is usually best integrated through one shared helper rather than by scattering gate logic across nodes.
That helper should:
- read request context from runnable metadata
- derive `feature_code`
- perform `authorize`
- execute the model call
- extract usage from the final response
- perform `commit`
- cancel best-effort on failure
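The helper steps above can be sketched as a single function. This is a minimal illustration, not the adapter's actual implementation: `authorize`, `commit`, and `cancel` are hypothetical stand-ins for your billing client, and the response shape (`usage.total_tokens`) is an assumption.

```python
from dataclasses import dataclass

# Hypothetical billing primitives; swap in your real client.
@dataclass
class Authorization:
    auth_id: str

def authorize(feature_code: str, tenant_id: str) -> Authorization:
    # Sketch: a real client would call the billing service here.
    return Authorization(auth_id=f"auth-{feature_code}-{tenant_id}")

def commit(auth: Authorization, tokens_used: int) -> None:
    pass  # record the spend against the authorization

def cancel(auth: Authorization) -> None:
    pass  # best-effort release of the authorization

def gated_model_call(call_model, metadata: dict) -> str:
    """Run one billable model call following the steps above."""
    # 1. read request context from runnable metadata
    tenant_id = metadata["tenant_id"]
    # 2. derive feature_code (here taken directly from metadata)
    feature_code = metadata["feature_code"]
    # 3. authorize before spending any tokens
    auth = authorize(feature_code, tenant_id)
    try:
        # 4. execute the model call
        response = call_model()
        # 5. extract usage from the final response (assumed shape)
        tokens_used = response.get("usage", {}).get("total_tokens", 0)
        # 6. commit the authorized spend
        commit(auth, tokens_used)
        return response["content"]
    except Exception:
        # 7. cancel best-effort on failure, then re-raise
        cancel(auth)
        raise
```

Every model node then routes through `gated_model_call` instead of calling the model directly, so the authorize/commit lifecycle stays identical across nodes.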
## Why this shape works
Graph-based systems often have:
- multiple semantic LLM nodes
- optional streaming
- tool nodes between model nodes
- graph-level retries and branching
One shared helper keeps the runtime semantics consistent across all nodes.
## Runtime rules
- treat each model node invocation as its own billable operation
- pass context through `RunnableConfig.metadata` or equivalent per-run state
- do not attach billing only at the outer graph entrypoint if multiple model calls happen inside
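A node that follows these rules might look like the sketch below. The config dict mirrors the shape of LangChain's `RunnableConfig` (`{"metadata": {...}}`); plain dicts are used so the example runs without LangGraph installed, and the node name and metadata keys are illustrative assumptions.

```python
def summarize_node(state: dict, config: dict) -> dict:
    """Hypothetical model node that reads billing context per invocation."""
    meta = config.get("metadata", {})
    # Each model node reads its own context rather than relying on
    # billing attached once at the outer graph entrypoint.
    tenant_id = meta["tenant_id"]
    feature_code = meta["feature_code"]
    # ... the shared helper would authorize, call the model, and commit here ...
    return {"summary": f"[{feature_code}/{tenant_id}] {state['text'][:20]}"}

# Per-run invocation: billing context travels with the config, not the state.
config = {"metadata": {"tenant_id": "t1", "feature_code": "summarize"}}
out = summarize_node({"text": "hello world"}, config)
```

Because the context rides on the per-run config, graph-level retries and branches each see the same metadata without any node mutating shared state.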
## Artifact roles
`vluna_adapter.*`
- This is the file to copy if your project already uses LangGraph or LangChain.
- It is intended as the reusable helper layer for gated node calls.

`example.*`
- This is a demo graph showing how to route node-level model calls through the adapter.
- Use it to understand invocation shape, not as the primary production file.
## Downloadable artifacts
Python:
TypeScript: