# OpenAI Agents SDK Adapter
Use this adapter when your application runs on the OpenAI Agents runtime rather than calling the OpenAI SDK directly.
## Integration shape
The key decision for this adapter is choosing the right injection level.
The common options are:
- global client installation
- run-scoped model provider
- agent-scoped model
## How to choose
Use global installation when:
- one process mostly shares one runtime shape
- you want the lowest-friction integration
Use run-scoped provider when:
- different runs may need different request context
- you want explicit per-run wiring without global mutation
Use agent-scoped model when:
- you want the clearest explicit ownership
- different agents may use different gated configurations
## Runtime rules
- create a fresh gate context per turn or per logical operation
- do not reuse one lease across a whole session
- keep session memory in the framework, but keep billing at the model and tool operation level
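The per-turn lease rule can be sketched with a context manager. The `gate_context` helper and lease shape here are hypothetical stand-ins for whatever gating primitive your product uses; the point is only that each logical turn acquires and releases its own lease instead of holding one for the session:

```python
import contextlib
import itertools

_lease_ids = itertools.count(1)

@contextlib.contextmanager
def gate_context(feature_code: str):
    """Hypothetical per-turn gate: acquire a fresh lease, release on exit."""
    lease = {"id": next(_lease_ids), "feature_code": feature_code, "open": True}
    try:
        yield lease
    finally:
        lease["open"] = False  # a lease is never reused across turns

def run_turn(user_msg: str) -> dict:
    # One fresh lease per logical operation, not one per session.
    with gate_context("chat.turn") as lease:
        # ... the gated model call would happen here, billed to this lease ...
        return {"lease_id": lease["id"], "reply": f"echo: {user_msg}"}

first = run_turn("hello")
second = run_turn("again")
assert first["lease_id"] != second["lease_id"]  # distinct lease per turn
```

Session memory can still live in the framework's session object; only the lease is per-turn.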
## Tool calls
If tools are billable in your product:
- gate them separately from the model call path
- use separate `feature_code` values for tool operations
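One way to keep tool gating separate from the model call path is a small decorator that tags each callable with its own feature code. The `gated` decorator and the feature-code strings below are hypothetical illustrations, not part of this adapter's API:

```python
import functools

CALLS: list[str] = []  # records which feature_code each gated call used

def gated(feature_code: str):
    """Hypothetical decorator: gates a callable under its own feature code."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            CALLS.append(feature_code)  # stand-in for a real gate/billing check
            return fn(*args, **kwargs)
        return inner
    return wrap

@gated("tool.web_search")  # tool gated separately from the model path
def web_search(query: str) -> str:
    return f"results for {query}"

@gated("model.chat")       # the model call path has its own feature code
def model_call(prompt: str) -> str:
    return f"answer to {prompt}"

model_call("What is 2+2?")
web_search("weather")
assert CALLS == ["model.chat", "tool.web_search"]
```

Because each feature code is attached at the call site, billing for tools and models stays independently auditable even when one agent turn triggers both.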
## Artifact roles
- `vluna_adapter.*`: the main integration template.
  - Copy it when your runtime already uses OpenAI Agents and you want a production-oriented starting point.
- `example.*`: a demo runner that shows how to install the gated client or model and execute turns.
  - Use it to understand the injection mode and request-context flow.
## Downloadable artifacts
Python:
TypeScript: