@whyops/langchain-js is the WhyOps integration for LangChain.js. It plugs into LangChain’s native callback system via a single tracer class, so you get full observability across any model provider, chain, or agent without changing your application logic.
This package is in beta. The core API is stable but provider coverage and event mapping will expand. Please open an issue if you encounter a provider or pattern not yet handled.
- TypeScript SDK: Create and initialize the base WhyOps client first, then come back here to add the LangChain tracer.
- Runtime Events: Add manual runtime events when your app has steps outside LangChain that also need tracing.
What this package captures
When you pass a `WhyOpsLangChainTracer` in the `callbacks` array on any LangChain invocation, it captures:
- `user_message` — the initial human input (only on the first turn, not on tool-result continuation rounds)
- `llm_response` — model output, token usage, tool call declarations, finish reason, and latency
- `tool_call_request` — start of each tool execution with arguments
- `tool_call_response` — result of each tool execution with latency, paired to its request by span ID
- `error` — LLM errors, tool errors, and chain errors
`@whyops/langchain-js` does not replace `@whyops/sdk`. You still create the WhyOps client with `@whyops/sdk`, then pass it to the tracer. The tracer sends events to WhyOps on every LangChain callback.

Install
- npm
- pnpm
- yarn
- bun
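The install commands themselves were lost above; assuming the standard registry package names used throughout this page, the npm variant is:

```shell
# Install the tracer package alongside the base SDK it depends on.
npm install @whyops/langchain-js @whyops/sdk
# pnpm add / yarn add / bun add work the same way.
```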
Requires `@langchain/core` >= 0.3.0 as a peer dependency.
1. Create the WhyOps client and tracer
Call `initAgent()` once during startup. After that, create a tracer instance and pass it into any LangChain call via `{ callbacks: [tracer] }`.
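A minimal sketch of that setup. `initAgent()` and `WhyOpsLangChainTracer` come from this page, but the option names (`apiKey`, `client`) are assumptions about the constructor signatures:

```typescript
import { initAgent } from "@whyops/sdk";
import { WhyOpsLangChainTracer } from "@whyops/langchain-js";

// Call once during startup (the `apiKey` option name is illustrative).
const whyops = initAgent({ apiKey: process.env.WHYOPS_API_KEY });

// Create a tracer bound to the client; reuse this instance across calls.
const tracer = new WhyOpsLangChainTracer({ client: whyops });
```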
2. Wrap any LangChain call
callbacks is accepted by every LangChain primitive — ChatModel.invoke, chain.invoke, agent.invoke, tool.invoke, and so on. The tracer receives all events from the invocation and any nested calls.
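For example, tracing a plain chat-model call. The `ChatOpenAI` usage is standard LangChain; the tracer constructor options (`client`) are assumptions:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { initAgent } from "@whyops/sdk";
import { WhyOpsLangChainTracer } from "@whyops/langchain-js";

const tracer = new WhyOpsLangChainTracer({ client: initAgent({ apiKey: "..." }) });
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// The second argument's `callbacks` array delivers every event from this
// invocation (and any nested runs) to the tracer.
const result = await model.invoke("What is the capital of France?", {
  callbacks: [tracer],
});
```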
Optional tracer options
`traceId` is optional. If omitted, WhyOps uses the root LangChain run ID. Pass an explicit value when you want this trace to share a thread with other events emitted via `whyops.trace()` outside LangChain.
`externalUserId` is attached to every event so you can filter traces by user in the WhyOps dashboard.
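Both options go to the tracer constructor. `traceId` and `externalUserId` are named on this page; the `client` option and the exact shape of the options object are assumptions:

```typescript
import { WhyOpsLangChainTracer } from "@whyops/langchain-js";

const tracer = new WhyOpsLangChainTracer({
  client: whyops,            // WhyOps client from initAgent() (option name assumed)
  traceId: "thread-1234",    // optional: share a thread with whyops.trace() events
  externalUserId: "user-42", // attached to every event for per-user filtering
});
```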
3. Agent loops with tool calls
Pass the tracer on both the LLM invocation and each tool invocation so WhyOps sees the full execution: `tool_call_request` when a tool starts and `tool_call_response` when it completes. The two events are paired by a shared span ID so they appear as a single tool span in the trace inspector.
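A sketch of a manual agent loop with the tracer on both sides. The LangChain APIs (`tool()`, `bindTools`, `tool_calls`) are standard; the tracer's `client`/`apiKey` options are assumptions:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";
import { initAgent } from "@whyops/sdk";
import { WhyOpsLangChainTracer } from "@whyops/langchain-js";

const tracer = new WhyOpsLangChainTracer({ client: initAgent({ apiKey: "..." }) });

const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Look up current weather for a city",
  schema: z.object({ city: z.string() }),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([getWeather]);
const messages = [new HumanMessage("What's the weather in Paris?")];

// Tracer on the LLM call: captures user_message and llm_response.
const aiMsg = await llm.invoke(messages, { callbacks: [tracer] });
messages.push(aiMsg);

for (const toolCall of aiMsg.tool_calls ?? []) {
  // Tracer on each tool call: captures the paired tool_call_request/response.
  messages.push(await getWeather.invoke(toolCall, { callbacks: [tracer] }));
}
```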
When the model requests multiple tools in one step, the tracer awaits each `tool_call_request` before sending `tool_call_response`, guaranteeing correct ordering in the trace even when tools run concurrently.

4. Multi-turn conversations
Reuse the same tracer instance with a stable `traceId` across turns to group all events on one thread:
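For instance, keyed by a conversation ID. The `traceId` option is documented on this page; the `client` option and `conversationId` variable are illustrative:

```typescript
import { WhyOpsLangChainTracer } from "@whyops/langchain-js";

const conversationId = "conv-8f2a"; // stable per conversation (illustrative)
const tracer = new WhyOpsLangChainTracer({
  client: whyops,          // WhyOps client from initAgent() (option name assumed)
  traceId: conversationId, // every turn lands on the same thread
});

// Turn 1
await model.invoke(turnOneMessages, { callbacks: [tracer] });
// Turn 2, later: same tracer instance, same thread in WhyOps.
await model.invoke(turnTwoMessages, { callbacks: [tracer] });
```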
Provider notes
Provider name and model ID are extracted automatically from LangChain's serialized component metadata. Supported providers:

| LangChain class | Provider string |
|---|---|
| `ChatOpenAI` / `OpenAI` | `openai` |
| `AzureChatOpenAI` | `azure_openai` |
| `ChatAnthropic` / `Anthropic` | `anthropic` |
| `ChatGoogleGenerativeAI` / `ChatVertexAI` | `google` |
| `ChatMistralAI` | `mistral` |
| `ChatOllama` / `OllamaLLM` | `ollama` |
| `ChatBedrock` / `BedrockChat` | `bedrock` |
| `ChatCohere` | `cohere` |
| `ChatGroq` | `groq` |
| `ChatFireworks` | `fireworks` |
| `ChatTogether` | `together` |
| Any other class | `unknown` |
Token usage is read from `llmOutput.tokenUsage` (OpenAI style) first, then from `usage_metadata` on the response message (the standardized LangChain format, which includes prompt cache read and creation tokens when available).
API surface
| Export | Purpose |
|---|---|
| `WhyOpsLangChainTracer` | Main tracer class; extends `BaseTracer` from `@langchain/core` |
| `WhyOpsLangChainTracerOptions` | Constructor options type |
Pass `{ callbacks: [tracer] }` on any LangChain call. Events are sent in the background and never block your application. All network failures are logged with a `[whyops]` prefix and swallowed.