Documentation Index
Fetch the complete documentation index at: https://whyops.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
The whyops package gives Python services a single integration surface for agent registration, OpenAI or Anthropic proxying, and direct runtime event emission. This page covers the basic setup path.
Proxy Helpers
Read this next if you want the exact API-key flow and what
sdk.openai() or sdk.anthropic() changes.

Runtime Events
Add sync or async trace events for tools, thinking blocks, embeddings, and errors.
Advanced Patterns
Hybrid tracing, self-hosted overrides, prompt caching, and common mistakes.
Before you start
| You need | Why |
|---|---|
| WHYOPS_API_KEY | Authenticates agent init, manual events, and proxied model traffic to WhyOps |
| A stable agent_name | Keeps proxy traffic and runtime events attached to the same agent identity |
| systemPrompt and tool metadata | Lets WhyOps version the agent and show the correct configuration in the UI |
| Your provider key in the WhyOps dashboard | Lets WhyOps authenticate upstream when it forwards OpenAI or Anthropic traffic |
| A stable session or trace ID | Gives explicit continuity between proxied model calls and later tool or runtime events |
In Python, the cleanest flow is: create the WhyOps client, call
init_agent_sync() or await init_agent() during startup, then patch OpenAI or Anthropic, and only after that add manual runtime events.

1. Install the package
- pip
- uv
- poetry
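The exact install commands depend on your package manager; assuming the PyPI package name matches the import name `whyops`, a typical install looks like:

```shell
pip install whyops     # pip
uv add whyops          # uv
poetry add whyops      # poetry
```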
2. Create the WhyOps client once
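If the agent exposes tools, their schemas travel as JSON strings rather than dicts. A minimal sketch of building that tool metadata; the tool name, description, and field names here are illustrative assumptions, not the SDK's confirmed schema:

```python
import json

# inputSchema/outputSchema must be JSON *strings*, not Python dicts,
# so serialize the JSON Schema before attaching it to the tool entry.
input_schema = json.dumps({
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
})

lookup_tool = {
    "name": "lookup_order",                            # hypothetical tool
    "description": "Look up an order by free-text query",
    "inputSchema": input_schema,                       # JSON string
}

tools = [lookup_tool]  # or tools = [] when the agent has no tools
print(tools[0]["inputSchema"])
```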
If you include
inputSchema or outputSchema, pass JSON strings. If the agent has no tools, set tools: [] explicitly so the registered definition stays clear.

3. Initialize the agent during startup
- Sync
- Async
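The startup ordering can be sketched with a stand-in coroutine in place of the SDK's init_agent(); the call name and arguments are assumptions based on this page, and a sync service would call init_agent_sync() at the same point instead:

```python
import asyncio

async def init_agent(agent_name: str) -> dict:
    # Stand-in for the SDK's awaitable init_agent(); it registers the
    # agent identity that later proxy traffic and events attach to.
    await asyncio.sleep(0)
    return {"agent_name": agent_name, "registered": True}

async def startup() -> dict:
    # Register the agent before patching provider clients or emitting
    # runtime events, matching the flow described above.
    return await init_agent("billing-agent")

registration = asyncio.run(startup())
print(registration)
```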
4. Make your first proxied model call
- OpenAI
- Anthropic
The helper mutates the provider client in place. Go to Python SDK Proxy Helpers for the exact key flow, header behavior, and sync versus async details.
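"Mutates the provider client in place" means the helper rewires the existing client object rather than returning a new one. A toy version of that pattern, not the real helper, looks like:

```python
class FakeProviderClient:
    # Minimal stand-in for an OpenAI or Anthropic client object.
    def create(self, prompt: str) -> str:
        return f"response to {prompt!r}"

def patch_client(client) -> None:
    # Wrap the bound method on the same object; identity is unchanged,
    # which is what "in place" means here.
    original = client.create

    def traced_create(prompt: str) -> str:
        result = original(prompt)
        print(f"traced call: {prompt!r}")
        return result

    client.create = traced_create

client = FakeProviderClient()
patch_client(client)  # same object, new behavior
print(client.create("hello"))
```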
The proxy can generate a trace automatically, but the backend checks
X-Trace-ID and X-Thread-ID first. If your app later emits tool or runtime events, reusing the same explicit trace ID is the cleaner and more reliable setup.

5. Add runtime traces only where you need more visibility
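Reusing one explicit trace ID across proxied calls and later runtime events might look like this; the header names come from the text above, while generating the ID with uuid4 and the thread identifier shown are assumptions:

```python
import uuid

# Generate once per logical request or session and reuse everywhere.
trace_id = str(uuid.uuid4())

trace_headers = {
    "X-Trace-ID": trace_id,
    "X-Thread-ID": "thread-checkout-1",  # hypothetical thread identifier
}

# Pass the same trace_id when emitting later tool or runtime events so the
# backend stitches the proxied model call and the manual events together.
print(trace_headers)
```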
Start with proxy-only instrumentation. Add trace() events when you need:
- tool execution latency and outputs
- retries inside your framework
- runtime failures after the model returns
- prompt-caching-aware usage on manual llm_response() calls
- exposed thinking blocks or orchestration milestones
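A stand-in for capturing tool execution latency and outcome before handing them to a trace() call; the event sink and field names here are placeholders, not the SDK's API:

```python
import time

events = []  # placeholder sink standing in for the SDK's trace() call

def run_traced_tool(name, fn, *args, **kwargs):
    # Time the tool, then record latency plus its output or error.
    start = time.monotonic()
    try:
        result = fn(*args, **kwargs)
        events.append({"tool": name, "status": "ok",
                       "latency_ms": (time.monotonic() - start) * 1000,
                       "output": result})
        return result
    except Exception as exc:
        events.append({"tool": name, "status": "error",
                       "latency_ms": (time.monotonic() - start) * 1000,
                       "error": repr(exc)})
        raise

print(run_traced_tool("add", lambda a, b: a + b, 2, 3))
```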
Next pages
Proxy Helpers
Understand which API key goes where and what the helper changes on the provider client.
Runtime Events
Add manual event coverage for tools, thinking blocks, embeddings, and errors.
Advanced Patterns
Finish with hybrid tracing, self-hosting, prompt caching, and event IDs.