# Prompt Capture

Prompt Capture allows you to view the complete context of your AI interactions, including system prompts, input messages, and AI-generated responses. This feature is essential for debugging, auditing, and optimizing your AI workflows.

<figure><img src="/files/iZ3VDgWyUgc91zHCqoey" alt=""><figcaption></figcaption></figure>

## What is Prompt Capture?

Prompt Capture stores and displays the full content of AI interactions across all modalities:

**Text Completions:**

* **System Prompt**: The initial instructions given to the AI model
* **Input Messages**: User messages and conversation history sent to the model
* **Output Response**: The AI-generated response

**Multimodal Transactions:**

* **Audio**: Transcription inputs (STT) and speech outputs (TTS)
* **Image**: Image generation prompts and parameters
* **Video**: Video generation prompts and configuration

This visibility helps you:

* Debug unexpected AI behavior
* Audit AI interactions for compliance
* Optimize prompts for better results
* Understand token usage and costs
* Review multimodal generation parameters
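As a mental model, a captured text completion bundles the three parts listed above. The field names in this sketch are illustrative only, not Revenium's actual storage schema:

```python
# Illustrative shape of a captured text completion.
# Field names are examples, NOT Revenium's storage schema.
captured = {
    "system_prompt": "You are a concise support assistant.",
    "input_messages": [
        {"role": "user", "content": "How do I rotate my API key?"},
    ],
    "output_response": "Open Settings, then API Keys, then click Rotate.",
}

# Each part corresponds to a tab in the Prompt Viewer.
for part in ("system_prompt", "input_messages", "output_response"):
    assert part in captured
```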

## How to Enable Prompt Capture

### Step 1: Enable in Team Settings

{% hint style="info" %}
**Required Role**: Only users with the **Tenant Administrator** role can enable or disable Prompt Capture.
{% endhint %}

1. Navigate to **Management** → **Teams**
2. Select your team
3. Find the **AI Prompt Capture** section
4. Toggle **Enable Prompt Capture** to ON
5. Click **Save Changes**

### Step 2: Configure Your SDK

Prompt capture must also be enabled in your SDK integration. Here's how to enable it for each SDK:

{% tabs %}
{% tab title="Python" %}

```python
from revenium import Revenium

client = Revenium(
    api_key="your-api-key",
    capture_prompts=True  # Enable prompt capture
)
```

{% endtab %}

{% tab title="Node.js" %}

```javascript
const Revenium = require("@revenium/sdk");

const client = new Revenium({
  apiKey: "your-api-key",
  capturePrompts: true, // Enable prompt capture
});
```

{% endtab %}

{% tab title="Go" %}
Set the environment variable before initializing your middleware:

```bash
export REVENIUM_CAPTURE_PROMPTS=true
```

Then initialize your Go middleware as usual. This works with all providers in the [Revenium Go SDK](https://github.com/revenium/revenium-go-sdk):

```go
package main

import (
    "log"

    reveniumopenai "github.com/revenium/revenium-go-sdk/openai"
)

func main() {
    if err := reveniumopenai.Initialize(); err != nil {
        log.Fatalf("Failed to initialize: %v", err)
    }
    // Your AI calls will now capture prompts
}
```

{% endtab %}

{% tab title="MCP Server" %}
Set the environment variable:

```bash
export REVENIUM_CAPTURE_PROMPTS=true
```

{% endtab %}
{% endtabs %}
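If you prefer driving the flag from the environment everywhere, a small helper can mirror the `REVENIUM_CAPTURE_PROMPTS` convention used by the Go and MCP SDKs. The helper name and the set of accepted truthy values are assumptions for illustration, not part of any SDK:

```python
import os

def capture_prompts_enabled(env=os.environ) -> bool:
    """Read the REVENIUM_CAPTURE_PROMPTS env var (the convention used by
    the Go and MCP SDKs) and return a boolean suitable for passing to
    other SDK constructors, e.g. capture_prompts= in Python.
    Illustrative helper; accepted values here are an assumption."""
    value = str(env.get("REVENIUM_CAPTURE_PROMPTS", "")).strip().lower()
    return value in {"1", "true", "yes"}
```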

{% hint style="warning" %}
**Both settings are required**: Prompt capture must be enabled in your Team Settings AND in your SDK configuration for prompts to be stored.
{% endhint %}

## Viewing Prompt Data

Once enabled, you can view prompt data in two places:

### AI Transaction Log

1. Go to **Logs** → **AI Transaction Log**
2. Click the **expand icon** (↔) next to the delete button to open the transaction details modal
3. Look for the **Prompt Data** section
4. Click **View Prompt Data** to open the full viewer

### Traces Page

1. Go to **Traces**
2. Select a trace from the table
3. Click on a transaction in the Transaction Table
4. In the drawer, click **View Prompt Data**

### Multimodal Transactions

For audio, image, and video transactions, prompt data is available from the same locations:

1. Find the multimodal transaction in the AI Transaction Log or Traces page
2. Click **View Prompt Data** to see the generation parameters

The viewer displays modality-specific information:

| Modality        | Captured Data                                             |
| --------------- | --------------------------------------------------------- |
| **Audio (STT)** | Input audio metadata, transcription settings              |
| **Audio (TTS)** | Input text, voice settings, output format                 |
| **Image**       | Generation prompt, model parameters, style settings       |
| **Video**       | Generation prompt, duration, resolution, style parameters |
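For intuition, a captured image transaction might carry data along these lines. The keys are purely illustrative; the viewer displays whatever generation parameters the SDK reported:

```python
# Illustrative (not schema-accurate) capture for an image transaction.
image_capture = {
    "modality": "image",
    "generation_prompt": "A watercolor lighthouse at dusk",
    "model_parameters": {"size": "1024x1024"},  # example parameter only
    "style_settings": {"style": "vivid"},       # example parameter only
}
```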

## Understanding the Prompt Viewer

The Prompt Viewer modal displays your AI interaction in three tabs:

| Tab                 | Description                                                                           |
| ------------------- | ------------------------------------------------------------------------------------- |
| **System Prompt**   | The initial instructions that define the AI's behavior and context                    |
| **Input Messages**  | The conversation history including user messages and any previous assistant responses |
| **Output Response** | The AI-generated response for this completion                                         |

### Context Summary

At the top of the viewer, you'll see key metrics:

* **Model**: The AI model used (e.g., claude-3-5-sonnet, gpt-4o)
* **Provider**: The AI provider (e.g., Anthropic, OpenAI)
* **Cost**: Total cost of this completion
* **Tokens**: Input, output, and cached token counts
* **Duration**: Request processing time

## Prompt Truncation

To manage storage efficiently, prompts are truncated if they exceed the capture limit.

### How to Identify Truncated Data

* A warning banner appears at the top of the Prompt Viewer
* The `promptsTruncated` field is set to `true` in the API response

### Truncation Limits

Prompts exceeding 50,000 characters are automatically truncated. This limit applies system-wide.
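If you want to anticipate truncation client-side, a length check against the documented limit is enough. Whether the stored copy carries a truncation marker is not specified here, so this sketch only compares lengths:

```python
PROMPT_CHAR_LIMIT = 50_000  # system-wide limit documented above

def will_truncate(prompt: str, limit: int = PROMPT_CHAR_LIMIT) -> bool:
    """Return True if the prompt exceeds the capture limit and
    would therefore be stored truncated."""
    return len(prompt) > limit
```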

## Privacy and Security

{% hint style="info" %}
**Data Security**: All prompt data is encrypted at rest and in transit. Prompts are stored securely and only accessible to authorized team members.
{% endhint %}

### User-Level Viewing Permissions

Prompt data may contain sensitive information. Administrators can control which users are allowed to view prompt data on a per-user basis.

#### Configuring Prompt Viewing Permission

1. Navigate to **Management** → **Users**
2. Select the user you want to configure
3. Find the **Can View Prompt Data** toggle
4. Enable or disable as appropriate
5. Click **Save Changes**

{% hint style="warning" %}
**Default Behavior**: By default, users cannot view prompt data. Administrators must explicitly grant this permission to each user who needs access.
{% endhint %}

#### What Users See Without Permission

Users without the "Can View Prompt Data" permission:

* Still see all transaction metadata (cost, tokens, model, duration, etc.)
* See a message indicating that prompt data is restricted
* Cannot open the Prompt Viewer modal

This allows you to share cost and usage analytics broadly while restricting access to potentially sensitive prompt content.

## Troubleshooting

### Prompts Not Appearing?

1. **Check Team Settings**: Ensure Prompt Capture is enabled
2. **Check SDK Config**: Verify the capture prompts setting is enabled in your SDK:
   * Python: `capture_prompts=True`
   * Node.js: `capturePrompts: true`
   * Go/MCP: `REVENIUM_CAPTURE_PROMPTS=true` environment variable
3. **Check hasPromptData**: The transaction must have `hasPromptData: true`
4. **Recent Transactions**: Only transactions after enabling capture will have data
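The checklist above can be folded into a small diagnostic using the `hasPromptData` and `promptsTruncated` fields described on this page. The transaction dict shape is an assumption; adapt it to your actual API response:

```python
def diagnose_prompt_data(transaction: dict) -> str:
    """Classify a transaction's prompt-data state. The field names
    come from this page; the surrounding dict shape is assumed."""
    if not transaction.get("hasPromptData", False):
        return "missing: capture was disabled (team setting or SDK) when this transaction ran"
    if transaction.get("promptsTruncated", False):
        return "truncated: prompt exceeded the 50,000-character limit"
    return "ok: full prompt data available"
```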

### "No Prompt Data Available" Message

This appears when:

* Prompt capture was not enabled (in Team Settings or your SDK) at the time the transaction occurred
* The SDK did not send prompt data with the transaction

### "Prompt Data Restricted" Message

This appears when your user account does not have the "Can View Prompt Data" permission. Contact your administrator to request access if needed.

