# Prompt Capture
Prompt Capture allows you to view the complete context of your AI interactions, including system prompts, input messages, and AI-generated responses. This feature is essential for debugging, auditing, and optimizing your AI workflows.

## What is Prompt Capture?
Prompt Capture stores and displays the full content of AI completions:

- **System Prompt**: The initial instructions given to the AI model
- **Input Messages**: User messages and conversation history sent to the model
- **Output Response**: The AI-generated response
This visibility helps you:

- Debug unexpected AI behavior
- Audit AI interactions for compliance
- Optimize prompts for better results
- Understand token usage and costs
## How to Enable Prompt Capture

### Step 1: Enable in Team Settings

**Required Role**: Only users with the Tenant Administrator role can enable or disable Prompt Capture.
1. Navigate to **Management → Teams**
2. Select your team
3. Find the **AI Prompt Capture** section
4. Toggle **Enable Prompt Capture** to ON
5. Click **Save Changes**
### Step 2: Configure Your SDK
Prompt capture must also be enabled in your SDK integration. Here's how to enable it for each SDK:
Set the environment variable before initializing your middleware:
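For example, in the shell that launches your Go application (`REVENIUM_CAPTURE_PROMPTS` is the variable named in the Troubleshooting section of this guide):

```shell
# Enable prompt capture before starting your application
export REVENIUM_CAPTURE_PROMPTS=true
```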
Then initialize your Go middleware as usual. This works with all Revenium Go middlewares:
- `revenium-middleware-openai-go`
- `revenium-middleware-anthropic-go`
Set the environment variable:
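As with the Go middleware, this is the `REVENIUM_CAPTURE_PROMPTS` variable, set in your shell or deployment environment:

```shell
# Enable prompt capture for this integration as well
export REVENIUM_CAPTURE_PROMPTS=true
```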
**Both settings are required**: Prompt capture must be enabled in your Team Settings AND in your SDK configuration for prompts to be stored.
## Viewing Prompt Data
Once enabled, you can view prompt data in two places:
### AI Transaction Log

1. Go to **Logs → AI Transaction Log**
2. Click the expand icon next to the delete button to open the transaction details modal
3. Look for the **Prompt Data** section
4. Click **View Prompt Data** to open the full viewer
### Traces Page

1. Go to **Traces**
2. Select a trace from the table
3. Click on a transaction in the Transaction Table
4. In the drawer, click **View Prompt Data**
## Understanding the Prompt Viewer

The Prompt Viewer modal displays your AI interaction in three tabs:

- **System Prompt**: The initial instructions that define the AI's behavior and context
- **Input Messages**: The conversation history, including user messages and any previous assistant responses
- **Output Response**: The AI-generated response for this completion
### Context Summary

At the top of the viewer, you'll see key metrics:

- **Model**: The AI model used (e.g., claude-3-5-sonnet, gpt-4o)
- **Provider**: The AI provider (e.g., Anthropic, OpenAI)
- **Cost**: Total cost of this completion
- **Tokens**: Input, output, and cached token counts
- **Duration**: Request processing time
## Prompt Truncation
To manage storage efficiently, prompts may be truncated if they exceed the configured limit.
### How to Identify Truncated Data

- A warning banner appears at the top of the Prompt Viewer
- The `promptsTruncated` field is set to `true` in the API response
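If you read transactions through the API, a truncated record might look like this (the `promptsTruncated` and `hasPromptData` fields are documented here; the other field names are illustrative assumptions):

```json
{
  "model": "gpt-4o",
  "hasPromptData": true,
  "promptsTruncated": true
}
```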
### Truncation Limits

Prompts exceeding 50,000 characters are automatically truncated. This is a system-wide limit.
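If you want to control where the cut happens rather than rely on server-side truncation, you could trim long prompts yourself before sending them. A minimal sketch (this helper is illustrative, not part of any Revenium SDK):

```python
MAX_PROMPT_CHARS = 50_000  # Revenium's system-wide prompt storage limit


def truncate_prompt(text: str, limit: int = MAX_PROMPT_CHARS) -> str:
    """Trim a prompt to the storage limit, marking the cut point."""
    if len(text) <= limit:
        return text
    marker = "\n...[truncated]"
    return text[: limit - len(marker)] + marker


long_prompt = "x" * 60_000
print(len(truncate_prompt(long_prompt)))  # 50000
```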
## Privacy and Security

**Data Security**: All prompt data is encrypted at rest and in transit. Prompts are stored securely and accessible only to authorized team members.
## Troubleshooting

### Prompts Not Appearing?

1. **Check Team Settings**: Ensure Prompt Capture is enabled
2. **Check SDK Config**: Verify the capture prompts setting is enabled in your SDK:
   - Python: `capture_prompts=True`
   - Node.js: `capturePrompts: true`
   - Go/MCP: `REVENIUM_CAPTURE_PROMPTS=true` environment variable
3. **Check hasPromptData**: The transaction must have `hasPromptData: true`
4. **Recent Transactions**: Only transactions after enabling capture will have data
### "No Prompt Data Available" Message

This appears when:

- Prompt capture was not enabled when the transaction occurred
- The SDK did not send prompt data
- The transaction is from before prompt capture was enabled