OpenAI Wrapping

How wrapOpenAI keeps your existing client while attaching policy, estimation, spend reconciliation, and spans.
wrapOpenAI() preserves the client shape you already use, then inserts Captar's
runtime control steps around each model request.
The main benefit is adoption speed: you keep your existing OpenAI integration and gain budget checks, policy enforcement, trace spans, and usage reconciliation without rebuilding your request layer.
```ts
const openai = captar.wrapOpenAI(new OpenAI({ apiKey }), { session });

await openai.responses.create({
  model: "gpt-4.1-mini",
  input: "Summarize this support thread.",
  max_output_tokens: 180,
});
```

What Captar adds
- Model and retry validation against call policy
- Estimated spend checks before execution
- Timeout-aware provider execution
- Request spans with provider, model, method, and request metadata
- Spend reconciliation after the provider response returns
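The pre-execution checks above can be pictured as a standalone validation step. This is a minimal sketch, not Captar's actual internals: the `CallPolicy`, `CallRequest`, and `validateCall` names are illustrative, and the field names simply mirror the policy example in these docs.

```typescript
// Hypothetical policy shape; field names mirror the call policy example
// in these docs, but this validation logic is an illustration only.
interface CallPolicy {
  allowedModels: string[];
  maxEstimatedCostUsd: number;
  maxOutputTokens: number;
}

interface CallRequest {
  model: string;
  maxOutputTokens: number;
  estimatedCostUsd: number;
}

// Returns a list of violations; an empty list means the call may proceed.
function validateCall(policy: CallPolicy, req: CallRequest): string[] {
  const violations: string[] = [];
  if (!policy.allowedModels.includes(req.model)) {
    violations.push(`model ${req.model} is not in allowedModels`);
  }
  if (req.maxOutputTokens > policy.maxOutputTokens) {
    violations.push(
      `max_output_tokens ${req.maxOutputTokens} exceeds cap ${policy.maxOutputTokens}`
    );
  }
  if (req.estimatedCostUsd > policy.maxEstimatedCostUsd) {
    violations.push(
      `estimated cost $${req.estimatedCostUsd} exceeds budget $${policy.maxEstimatedCostUsd}`
    );
  }
  return violations;
}
```

A wrapped client would run a check like this before dispatching the request, so a disallowed model or an over-budget estimate fails fast instead of spending tokens.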
Good wrapping habits
- Wrap the client once, near your app runtime.
- Pass the active session explicitly instead of relying on hidden globals.
- Keep request metadata descriptive enough to debug a bad trace later.
- Use the same wrapper for jobs, API routes, and background tasks when possible.
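The "wrap once, pass the session explicitly" habit can be sketched with a stand-in wrapper. The `wrapClient` function and `Session` type below are hypothetical placeholders so the shape is concrete; in real code, `captar.wrapOpenAI` and your OpenAI client take their place.

```typescript
// Hypothetical stand-in for captar.wrapOpenAI: attaches the session that
// was passed at wrap time, so every later call carries it explicitly.
type Session = { id: string };

function wrapClient<T extends object>(
  client: T,
  opts: { session: Session }
): T & { session: Session } {
  return Object.assign({}, client, { session: opts.session });
}

// Wrap once, near the app runtime, then share the wrapped instance with
// jobs, API routes, and background tasks (e.g. by exporting it from a
// single module) instead of re-wrapping per call site.
const session: Session = { id: "sess_123" };
const baseClient = { name: "openai" };
const openai = wrapClient(baseClient, { session });
```

Because the session rides on the wrapped client rather than a hidden global, a trace from any call site can be tied back to the session that produced it.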
Call policy example
```ts
policy: {
  call: {
    allowedModels: ["gpt-4.1-mini"],
    maxEstimatedCostUsd: 0.35,
    maxOutputTokens: 300,
    retriesCeiling: 1,
    timeoutMs: 15_000,
  },
}
```

Current provider scope
The v1 docs and SDK surface are optimized for OpenAI and OpenAI-compatible request flows. Do not treat this as a general multi-provider abstraction yet.
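Spend reconciliation, the step that runs after the provider response returns, can be pictured as a pure calculation over the usage block the provider reports. The rates below are illustrative placeholders, not real gpt-4.1-mini pricing, and the function names are not Captar's actual API.

```typescript
// Illustrative per-million-token rates; NOT real model pricing.
const RATES = { inputPerMTok: 0.4, outputPerMTok: 1.6 };

interface Usage {
  input_tokens: number;
  output_tokens: number;
}

// Actual spend computed from the usage block the provider returns.
function actualCostUsd(usage: Usage): number {
  return (
    (usage.input_tokens * RATES.inputPerMTok +
      usage.output_tokens * RATES.outputPerMTok) / 1_000_000
  );
}

// Reconciliation: replace the pre-flight estimate with the measured spend
// and report how far the estimate drifted.
function reconcile(estimatedUsd: number, usage: Usage) {
  const actualUsd = actualCostUsd(usage);
  return { actualUsd, driftUsd: actualUsd - estimatedUsd };
}
```

Recording the drift between estimated and actual spend is what lets a budget like maxEstimatedCostUsd stay honest over time: consistently large drift means the estimator, not the policy, needs tuning.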