WhyOps supports Anthropic’s Claude models via a drop-in API proxy. You can use the official Anthropic SDKs by pointing the baseURL to the WhyOps proxy and using your WhyOps API key.
Use the TypeScript SDK
Prefer @whyops/sdk if you want Anthropic proxy patching plus manual runtime traces from one client.

Use the Python SDK
Prefer whyops if you want the same Anthropic proxy flow with sync and async instrumentation helpers.

Setup
- Obtain API keys:
  - Get an API key from your WhyOps Dashboard.
  - Add your Anthropic API key in the Providers section of the WhyOps Dashboard.
- Configure the SDK in TypeScript or Python.
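The SDK configuration step above can be sketched in Python. This is a minimal sketch, not the documented setup: it assumes the official anthropic package, a placeholder proxy base URL, and a WHYOPS_API_KEY environment variable — none of these values come from this page, so substitute the ones shown in your WhyOps Dashboard.

```python
import os

import anthropic  # official Anthropic SDK

# Point the official SDK at the WhyOps proxy instead of api.anthropic.com.
# Both values below are placeholders -- use the base URL and key from
# your WhyOps Dashboard.
client = anthropic.Anthropic(
    base_url="https://proxy.whyops.com",    # hypothetical proxy base URL
    api_key=os.environ["WHYOPS_API_KEY"],   # WhyOps key, not your Anthropic key
)
```

Because only the base URL and key change, the rest of your Anthropic SDK code stays exactly as it is.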
Supported Endpoints
The proxy fully supports and parses the following Anthropic endpoints:

- /messages: Standard message creation, including streaming, tool calling, and Claude’s extended thinking (chain of thought).
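At the wire level, a proxied /messages call is an ordinary Anthropic request sent to the proxy host instead of api.anthropic.com. The sketch below builds (but does not send) such a request using only the standard library; the host, key, and version values are illustrative assumptions, not documented values.

```python
import json
import urllib.request

BASE_URL = "https://proxy.whyops.com"  # placeholder proxy base URL

body = {
    "model": "claude-3-5-sonnet-latest",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,  # streaming responses are parsed transparently by the proxy
}

# Build the request object; urlopen(req) would actually send it.
req = urllib.request.Request(
    f"{BASE_URL}/v1/messages",
    data=json.dumps(body).encode(),
    headers={
        "content-type": "application/json",
        "x-api-key": "YOUR_WHYOPS_API_KEY",   # placeholder WhyOps key
        "anthropic-version": "2023-06-01",
    },
    method="POST",
)

print(req.full_url)      # https://proxy.whyops.com/v1/messages
print(req.get_method())  # POST
```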
How WhyOps handles Anthropic Data
- Extended Thinking: WhyOps fully supports Anthropic’s <thinking> blocks. When using models like Claude 3.5 Sonnet or Opus, WhyOps parses and captures these internal reasoning steps as discrete llm_thinking events in your decision graph.
- Streaming Parsing: If you use the streaming API, WhyOps uses a custom SSE parser (AnthropicParser) to reconstruct the full message, tool calls, and thinking blocks as they stream back to your application, ensuring zero added latency.
- Trace IDs: WhyOps automatically handles trace grouping across Anthropic message arrays.
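To illustrate what an SSE parser like AnthropicParser has to do, the toy sketch below (not WhyOps’ implementation) folds streamed text_delta and thinking_delta events back into complete text and thinking blocks. The event shapes follow Anthropic’s streaming format.

```python
import json

def reconstruct(sse_stream: str) -> dict:
    """Toy sketch: rebuild full text and thinking blocks from SSE deltas."""
    parts = {"text": [], "thinking": []}
    for line in sse_stream.splitlines():
        if not line.startswith("data: "):
            continue  # skip event-name lines and blank separators
        event = json.loads(line[len("data: "):])
        if event.get("type") != "content_block_delta":
            continue
        delta = event["delta"]
        if delta["type"] == "text_delta":
            parts["text"].append(delta["text"])
        elif delta["type"] == "thinking_delta":
            parts["thinking"].append(delta["thinking"])
    return {k: "".join(v) for k, v in parts.items()}

stream = """\
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "thinking_delta", "thinking": "User greets"}}

event: content_block_delta
data: {"type": "content_block_delta", "index": 1, "delta": {"type": "text_delta", "text": "Hello"}}

event: content_block_delta
data: {"type": "content_block_delta", "index": 1, "delta": {"type": "text_delta", "text": " there"}}
"""

result = reconstruct(stream)
print(result)  # {'text': 'Hello there', 'thinking': 'User greets'}
```

The real parser also has to track content-block indices, tool-call argument deltas, and message-level events; this sketch only shows the delta-folding idea.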