WhyOps acts as a seamless proxy for the OpenAI API. You can continue using the official OpenAI SDKs: simply change the baseURL to point to the WhyOps proxy and provide your WhyOps API Key.
For the best results, combine this with Agent Init and Registration so traces are bound to versioned agent configs from day one.
Use the TypeScript SDK
Prefer @whyops/sdk if you want agent init, proxy patching, and runtime events from one package.

Use the Python SDK
Prefer whyops if you want the same proxy flow plus sync and async trace helpers.

Setup
- Obtain API Keys:
  - Get an API key from your WhyOps Dashboard.
  - Add your OpenAI API key in the Providers section of the WhyOps Dashboard.
- Configure the SDK:
  - TypeScript (see the sketch after this list)
  - Python
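
A minimal TypeScript sketch of the proxy configuration, using the official openai package. The base URL and agent name below are placeholder assumptions; substitute the values from your WhyOps Dashboard:

```typescript
import OpenAI from "openai";

// Point the official SDK at the WhyOps proxy instead of api.openai.com.
// NOTE: the baseURL here is a placeholder; use the one from your dashboard.
const client = new OpenAI({
  apiKey: process.env.WHYOPS_API_KEY, // your WhyOps key, not your OpenAI key
  baseURL: "https://proxy.whyops.com/v1",
  defaultHeaders: { "X-Agent-Name": "support-bot" }, // required by WhyOps
});
```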
Supported Endpoints
The proxy fully supports and parses the following OpenAI endpoints (examples follow the list):
- /chat/completions: Standard chat completions, including streaming and tool calling.
- /responses: The newer Responses API format.
- /embeddings: Generating embeddings for text.
- /models: Listing available models (proxies to OpenAI to verify your credentials).
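
For illustration, the endpoints are all reachable through the client configured above; the model names are ordinary OpenAI models, not WhyOps-specific:

```typescript
// Chat completions (streaming and tool calling also flow through the proxy)
const chat = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});

// Embeddings
const embedding = await client.embeddings.create({
  model: "text-embedding-3-small",
  input: "Hello",
});

// Models: proxied to OpenAI, so it doubles as a credentials check
const models = await client.models.list();
```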
Required and optional headers
| Header | Required | Purpose |
|---|---|---|
| Authorization: Bearer <WHYOPS_API_KEY> | Yes | Authenticates to WhyOps, not directly to OpenAI |
| X-Agent-Name | Yes | Tells WhyOps which registered agent is making the request |
| X-Trace-ID | No | Explicit trace continuity across requests |
| X-Thread-ID | No | Alternative continuity header; treated similarly to X-Trace-ID |
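
If you call the proxy without an SDK, the raw HTTP contract looks roughly like this sketch; the proxy URL is again a placeholder:

```typescript
const res = await fetch("https://proxy.whyops.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.WHYOPS_API_KEY}`, // WhyOps key
    "X-Agent-Name": "support-bot", // required: your registered agent
    "X-Trace-ID": "trace-123",     // optional: continue an existing trace
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  }),
});
```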
What the proxy adds to responses
WhyOps preserves upstream responses but adds trace continuity metadata:
- X-Trace-ID and X-Thread-ID response headers
- a _whyops_trace_id field injected into returned tool call arguments so your runtime can propagate the same trace into downstream tool execution
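
Continuing the fetch sketch above, trace continuity can be carried forward by reading the response headers and echoing them on the next request:

```typescript
// Read the continuity metadata WhyOps attached to the upstream response.
const traceId = res.headers.get("X-Trace-ID");
const threadId = res.headers.get("X-Thread-ID");

// Send traceId back as the X-Trace-ID request header on the follow-up
// call so both requests land in the same trace.
```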
Event pipeline created automatically
When you send an OpenAI request through the proxy, WhyOps emits the following event flow to whyops-analyse:
- user_message or tool_result
- llm_response
- error if the upstream provider fails
- embedding_request and embedding_response for /embeddings
How WhyOps handles OpenAI data
- Invisible Signatures: WhyOps injects zero-width Unicode characters into the content of the assistant's response. When your agent sends the next message in the conversation, WhyOps strips these characters before forwarding the request to OpenAI.
- Tool Call Tracing: For non-streaming requests, WhyOps injects a _whyops_trace_id argument into the JSON arguments of tool calls (a handler sketch follows this list).
- Streaming Parsing: If you set stream: true, WhyOps uses a custom SSE parser (OpenAIParser) to silently read the stream as it passes through to your application, capturing complete telemetry without adding latency.
- Provider Resolution: WhyOps resolves the correct upstream provider credentials from your configured provider records based on the requested model and environment.
- Server-Timing: WhyOps adds Server-Timing headers, including auth time and total request duration, for easier latency debugging.
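
As a sketch of what a tool-call handler might do with the injected argument (the handler shape here is an assumption, not a WhyOps API):

```typescript
type ToolCall = { function: { name: string; arguments: string } };

function executeToolCall(call: ToolCall) {
  // Separate the injected trace ID from the model's real arguments.
  const { _whyops_trace_id, ...toolArgs } = JSON.parse(call.function.arguments);

  // Propagate _whyops_trace_id into downstream telemetry, then invoke the
  // actual tool with only the arguments it expects.
  console.log(`running ${call.function.name} under trace ${_whyops_trace_id}`);
  return toolArgs;
}
```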
When to add manual events on top
OpenAI proxying is enough for prompt/response observability. Add manual events when you want visibility into:
- actual tool execution latency and outputs
- runtime retries not visible to the model
- framework state changes
- app-side failures after an LLM returns a tool call
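
As a hypothetical sketch only: trackEvent and its payload shape below are assumptions rather than the documented @whyops/sdk surface, so check the SDK reference for the real manual-event helpers:

```typescript
import { trackEvent } from "@whyops/sdk"; // hypothetical helper name

async function fetchWeather(traceId: string, city: string) {
  const start = Date.now();
  const output = await fetch(`https://wttr.in/${city}?format=3`).then((r) => r.text());

  // Record actual tool execution latency and output, which the proxy
  // alone cannot see.
  await trackEvent({
    type: "tool_result",          // mirrors the automatic event types
    traceId,                      // propagated from _whyops_trace_id
    durationMs: Date.now() - start,
    output,
  });
  return output;
}
```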