# AI Assistants Dashboard

The AI Assistants Dashboard helps engineering leaders understand how their team is adopting AI-assisted development. Track usage patterns across multiple AI coding assistants and agentic tools, identify productivity opportunities, and ensure your subscriptions are delivering value—all from a single view designed for measuring AI adoption at scale.

***

## Overview

You've invested in AI assistant subscriptions for your team. But are developers actually using them? Who's getting the most value? Are there adoption gaps you should address?

The AI Assistants Dashboard answers these questions:

* **Who's using AI assistants?** See adoption rates across your team
* **How deeply are they using them?** Measure session frequency, duration, and complexity
* **Which tools are being used?** Compare adoption across Claude Code, Claude Cowork, Gemini CLI, and other assistants
* **Where is AI making an impact?** Understand which workflows benefit most
* **Are we getting value from our subscriptions?** Benchmark usage against subscription costs

{% hint style="info" %}
Unlike usage-based AI services, AI coding assistants are typically fixed subscription costs. The goal isn't minimizing usage—it's maximizing adoption and ensuring your team gets full value from the investment.
{% endhint %}

***

## Supported AI Coding Assistants

The AI Assistants Dashboard currently supports:

| Tool              | Description                                     |
| ----------------- | ----------------------------------------------- |
| **Claude Code**   | Anthropic's AI coding assistant                 |
| **Claude Cowork** | Anthropic's agentic companion in Claude Desktop |
| **Gemini CLI**    | Google's command-line AI coding assistant       |
| **Cursor**        | Cursor IDE with Admin API sync                  |

***

## Dashboard Views

### Organization vs. User View

Toggle between two perspectives using the view selector at the top of the dashboard:

* **User View**: See metrics broken down by individual developers (default)
* **Organization View**: Aggregate metrics across your entire organization for executive-level reporting

{% hint style="info" %}
Organization View is useful for comparing AI adoption across teams, departments, or business units without drilling into individual user data.
{% endhint %}

### Adoption Overview

Monitor how AI-assisted development is spreading across your organization:

* **Active Users**: Count of developers using AI coding assistants in the selected period
* **Adoption Rate**: Percentage of licensed users who are actively using the tools
* **New Users**: Developers who started using AI assistants recently
* **Usage Frequency**: How often each developer engages with AI coding tools

Spot adoption gaps early. If certain teams or individuals aren't using their subscriptions, you can provide training, share best practices, or reallocate licenses.

#### Developer Segments

The Value by User table organizes developers into five segments based on their engagement level and output:

| Segment                    | What it means                                              |
| -------------------------- | ---------------------------------------------------------- |
| **Power Users**            | High output with strong engagement                         |
| **Efficient Contributors** | Good output relative to spend                              |
| **Cost Concerns**          | High spend with lower output — candidates for optimization |
| **Coaching Targets**       | Low engagement or output — candidates for onboarding help  |
| **Standard**               | Typical usage patterns                                     |

An **efficiency quadrant** scatter plot on the Adoption tab shows all developers plotted by AI cost versus merged PRs, giving an at-a-glance view of how the team distributes across these segments.
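The quadrant-style segmentation can be sketched as a simple threshold rule. The function and cutoffs below are purely illustrative assumptions, not the dashboard's actual classification logic:

```python
def classify_developer(ai_cost, merged_prs, sessions,
                       cost_threshold=200.0, pr_threshold=5, session_threshold=3):
    """Place a developer in one of the five segments.

    Thresholds here are made up for illustration; the dashboard
    derives its cutoffs from your organization's usage distribution.
    """
    if sessions < session_threshold:
        return "Coaching Targets"        # low engagement
    if merged_prs >= pr_threshold and ai_cost >= cost_threshold:
        return "Power Users"             # high output with strong engagement
    if merged_prs >= pr_threshold:
        return "Efficient Contributors"  # good output relative to spend
    if ai_cost >= cost_threshold:
        return "Cost Concerns"           # high spend with lower output
    return "Standard"

print(classify_developer(ai_cost=350.0, merged_prs=12, sessions=40))  # Power Users
```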

#### Session Health

Session metrics surface how deeply developers are engaging in individual AI conversations:

* **Tokens per Session**: Average conversation depth — low values may indicate shallow or interrupted sessions
* **Requests per Session**: Average number of back-and-forth exchanges in a session
* **Session Duration**: How long developers spend in AI-assisted work

#### Productivity Metrics (requires GitHub integration)

When your GitHub organization is connected, the Claude Code Adoption tab adds output-side metrics:

* **PRs Merged**: Pull requests merged by each developer in the period
* **PRs with Coding Tool**: PRs where at least one commit includes a co-authored AI signature
* **Cost per PR**: AI spend divided by PRs merged — a basic productivity ratio

These metrics let you cross-reference subscription spend against delivered code. See [GitHub Integration](/ai-coding-dashboard/github-integration.md) for setup.
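Cost per PR is a straightforward ratio. A minimal sketch; the handling of developers with zero merged PRs is our assumption, not documented dashboard behavior:

```python
def cost_per_pr(ai_spend, prs_merged):
    """AI spend divided by PRs merged in the period.

    Returns None when no PRs were merged (the ratio is undefined) --
    an illustrative edge-case choice, not the dashboard's.
    """
    if prs_merged == 0:
        return None
    return ai_spend / prs_merged

print(cost_per_pr(420.0, 14))  # 30.0
```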

### Usage Patterns

Understand how your team works with AI coding assistants:

* **Sessions per User**: Average number of AI conversations per developer
* **Session Depth**: Token consumption as a proxy for conversation complexity
* **Peak Activity Times**: When your team relies on AI assistance most
* **Workflow Distribution**: Code generation, debugging, code review, documentation, etc.
* **Tool Distribution**: Usage split across Claude Code, Gemini CLI, Cursor, and other supported tools

These patterns help you understand where AI is adding the most value and identify opportunities to expand usage to other workflows.

### Developer Usage Breakdown

See sessions, tokens, and cost broken down by individual developer — giving engineering managers visibility into where spend is coming from, who is engaging deeply, and where coaching opportunities exist.

* **Usage by developer**: Sessions, token consumption, and cost per developer in the selected period
* **Segment classification**: Each developer is placed in a segment (Power Users, Efficient Contributors, Cost Concerns, Coaching Targets, Standard) based on their usage and output patterns
* **Usage trends**: How individual engagement is changing over time — useful for spotting developers who are ramping up or disengaging
* **Team rollups**: Aggregate by team or department for manager-level reporting

{% hint style="info" %}
Efficient Contributors are good candidates to share their workflow with the broader team. Developers in Cost Concerns may be getting less out of the tool than their spend suggests and may benefit from pair programming with more efficient developers.
{% endhint %}

### Model Usage

Track which AI models your team uses:

* **Model Distribution**: Usage breakdown by model (Sonnet, Opus, Haiku, Gemini Pro, etc.)
* **Model by Workflow**: Which models are used for which types of tasks
* **Model Trends**: How model preferences evolve over time

This helps you understand whether your team is using the right models for their tasks and can inform decisions about which model tiers to include in subscriptions.

***

## Measuring Subscription Value

### Are We Getting ROI?

For fixed-cost subscriptions, the value equation is simple: **more usage = better ROI**.

The dashboard helps you calculate value by showing:

* **Cost per Active User**: Subscription cost divided by developers actually using the tools
* **Usage Intensity**: Tokens and sessions per subscription dollar
* **Adoption Trajectory**: Is usage growing, flat, or declining?

If half your licensed users never open their AI coding assistant, you're paying double the effective per-user cost. The dashboard surfaces these gaps so you can address them.
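The "paying double" arithmetic works out like this. A hedged sketch with made-up subscription numbers:

```python
def effective_cost_per_user(total_subscription_cost, licensed_users, active_users):
    """Nominal vs. effective per-user cost for a fixed-price subscription.

    When only half the licensed users are active, the effective
    per-user cost is double the nominal one.
    """
    nominal = total_subscription_cost / licensed_users
    effective = total_subscription_cost / active_users if active_users else float("inf")
    return nominal, effective

# Illustrative numbers: 100 licenses, 50 active users.
nominal, effective = effective_cost_per_user(30_000, 100, 50)
print(nominal, effective)  # 300.0 600.0
```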

### Identifying Underutilization

Watch for warning signs:

* **Zero-session users**: Licensed developers with no activity
* **Declining usage**: Users whose engagement is dropping off
* **Shallow sessions**: Very low token counts may indicate limited adoption

These signals help you target training, share success stories, or right-size your license count.

***

## Key Metrics

| Metric               | Description                                                                    |
| -------------------- | ------------------------------------------------------------------------------ |
| Active Users         | Developers who used AI coding assistants in the selected period                |
| Adoption Rate        | Active users as a percentage of total licensed users                           |
| Total Sessions       | Number of AI conversations across all users                                    |
| Sessions/User        | Average sessions per active developer                                          |
| Total Tokens         | Combined input and output tokens processed                                     |
| Tokens/Session       | Average conversation depth per session                                         |
| Requests/Session     | Average number of exchanges per coding session                                 |
| Session Duration     | Average length of a coding assistant session                                   |
| Session Count        | Number of coding sessions (Organization View)                                  |
| PRs Merged           | Pull requests merged by developer in the period (requires GitHub integration)  |
| PRs with Coding Tool | PRs with at least one AI co-authored commit (requires GitHub integration)      |
| Cost per PR          | AI spend ÷ PRs merged — output efficiency metric (requires GitHub integration) |

***

## Getting Started

### Prerequisites

To use the AI Assistants Dashboard, you need:

1. **Revenium SDK Integration**: Your AI coding assistant usage must be metered through Revenium's SDK
2. **User Identification**: Configure your integration to pass user identifiers for attribution
3. **Session Tracking**: Enable session-level tracking for conversation analytics

### Connecting Claude Code

Claude Code exports OpenTelemetry (OTLP) telemetry to Revenium directly — no proprietary agents or background processes. There are two ways to configure it:

| Approach                            | Best for                                                                                                               | Setup                                      |
| ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| **Organization-wide** (recommended) | Teams or Enterprise customers who want one-time central configuration delivered to every developer                     | Configure once in the Claude admin console |
| **Per-developer**                   | Individual developers, environments that don't qualify for organization-wide delivery, or backfilling historical usage | Install the Revenium CLI on each machine   |

{% hint style="info" %}
**Both approaches produce identical Revenium data.** You can also mix them — for example, use organization-wide delivery for steady-state metering and the Revenium CLI for one-time backfills.
{% endhint %}

***

#### Option 1: Organization-Wide Setup (Recommended)

Claude Code supports centrally managed configuration through the Claude admin console. An administrator defines the settings once; Anthropic delivers them to every authenticated user on next startup. No per-developer install required.

**Requirements:**

* Claude for Teams or Claude for Enterprise plan
* Claude Code 2.1.38+ (Teams) or 2.1.30+ (Enterprise)
* Administrator role of **Primary Owner** or **Owner** in the Claude admin console
* Direct connection to `api.anthropic.com` (see compatibility note at the end of this section)

{% hint style="info" %}
**Claude Code evolves quickly.** Anthropic updates environment variables, managed-settings fields, and admin console paths frequently. We keep this page current, but if something doesn't match what you see in the Claude admin console, cross-reference Anthropic's [Claude Code monitoring docs](https://code.claude.com/docs/en/monitoring-usage) and [server-managed settings docs](https://code.claude.com/docs/en/server-managed-settings) for the latest. The Revenium-specific values (endpoint URL, API key header, resource attributes) are stable — the Claude-side wrapping around them is what changes.
{% endhint %}

**Steps:**

1. Sign in to [Claude.ai](https://claude.ai) as Primary Owner or Owner.
2. Navigate to the Managed settings screen (the path differs by plan):
   * *Teams plan:* **Admin Settings → Claude Code → Managed settings**
   * *Enterprise plan:* **Organization settings → Claude Code → Managed settings (settings.json)**
3. Paste the following JSON, substituting your Revenium API key:

```json
{
  "env": {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://api.revenium.ai/meter/v2/otlp",
    "OTEL_EXPORTER_OTLP_HEADERS": "x-api-key=YOUR_KEY_HERE",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "http/json",
    "OTEL_LOGS_EXPORTER": "otlp",
    "OTEL_METRICS_EXPORTER": "none",
    "OTEL_LOGS_EXPORT_INTERVAL": "60000"
  }
}
```

4. Click **Add settings**.
5. Wait for the next config push from Anthropic to developer machines.

On first launch after the settings are picked up, each developer sees a one-time security approval dialog listing the managed environment variables — they click to approve. This is a Claude Code security feature that ensures users are aware of administrator-delivered configuration.

**Verifying Your Configuration**

Ask a developer to run a brief Claude Code session. Within approximately an hour (propagation times depend on Anthropic and the availability of each target machine), their usage should appear in your Revenium dashboard. Your Revenium tenant is identified automatically by the API key, and individual developers are distinguished by the email address Claude Code attaches to every event. **No additional attribution configuration is required for the dashboard to work correctly.**

If no sessions appear after configuring per the steps above, check:

* The API key you've used is active in your Revenium account
* The OTLP endpoint matches your Revenium environment
* The developer fully quit and relaunched Claude Code after the config push (a full restart guarantees the new settings are loaded)

{% hint style="info" %}
**Field reference**

* **`OTEL_EXPORTER_OTLP_ENDPOINT`** — Revenium's OTLP endpoint. For most customers this is `https://api.revenium.ai/meter/v2/otlp`. If your organization uses a dedicated or non-standard Revenium instance, your endpoint is listed alongside your API keys under **Settings → API Keys** in Revenium.
* **`OTEL_EXPORTER_OTLP_HEADERS`** — Your Revenium API key, prefixed with `x-api-key=`. Find your key under **Settings → API Keys** in Revenium.
* **`OTEL_EXPORTER_OTLP_PROTOCOL`** — Must be `http/json`.
* **`OTEL_LOGS_EXPORTER`** — Must be `otlp`. This enables log export, which is how Revenium receives per-call telemetry.
* **`OTEL_METRICS_EXPORTER`** — Must be `none`. Revenium bills from log events only; leaving metrics enabled is unnecessary and increases HTTP traffic without adding data.
* **`OTEL_LOGS_EXPORT_INTERVAL`** — Milliseconds between log-batch flushes. `60000` (60 seconds) is the recommended production value. Lower values add HTTP overhead without providing meaningfully fresher data.
{% endhint %}

***

**Optional: Tag events with your own organization or product labels**

The required configuration above is enough for correct metering and dashboard attribution — you don't need to look up or define anything else. If you want to sub-segment your own Claude Code usage within your Revenium tenant (for example, to compare spend across business units, teams, or internal product lines), add an `OTEL_RESOURCE_ATTRIBUTES` line to the `env` block with labels of your choosing:

```json
{
  "env": {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://api.revenium.ai/meter/v2/otlp",
    "OTEL_EXPORTER_OTLP_HEADERS": "x-api-key=YOUR_KEY_HERE",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "http/json",
    "OTEL_LOGS_EXPORTER": "otlp",
    "OTEL_METRICS_EXPORTER": "none",
    "OTEL_LOGS_EXPORT_INTERVAL": "60000",
    "OTEL_RESOURCE_ATTRIBUTES": "organization.name=Engineering,product.name=internal-claude-code"
  }
}
```

These values are **yours to define** — they are not matched against any existing list in Revenium. Choose whatever labels make sense for your internal reporting. Typical uses:

* `organization.name` — a business unit, department, or cost center (e.g., `Engineering`, `DataPlatform`, `Marketing`)
* `product.name` — the application or use case the usage is attributed to (e.g., `internal-claude-code` to distinguish internal developer productivity from customer-facing AI products)

If you omit `OTEL_RESOURCE_ATTRIBUTES` entirely, all events land under a default product within your tenant, which is fine for most customers.

{% hint style="warning" %}
**`OTEL_RESOURCE_ATTRIBUTES` formatting constraints:**

* **No spaces in values.** Claude Code's OTLP handling silently truncates values with spaces. Use underscores, camelCase, or percent-encoding (`%20`) instead — for example, `organization.name=Data_Platform` or `organization.name=DataPlatform`, not `organization.name=Data Platform`.
* **Values are case-sensitive.** `Engineering` and `engineering` are treated as separate labels. Pick a canonical form and use it consistently across all configuration and any other integrations.
{% endhint %}
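A small helper that enforces both constraints before you paste values into the config. The function is ours for illustration only, not part of any Revenium or Anthropic tooling:

```python
import urllib.parse

def make_resource_attributes(**labels):
    """Build an OTEL_RESOURCE_ATTRIBUTES string, percent-encoding spaces
    so Claude Code's OTLP handling does not silently truncate values.
    Underscores in keyword names become dots (organization_name -> organization.name)."""
    parts = []
    for key, value in labels.items():
        safe = urllib.parse.quote(str(value), safe="")  # encodes spaces as %20
        parts.append(f"{key.replace('_', '.')}={safe}")
    return ",".join(parts)

print(make_resource_attributes(organization_name="Data Platform",
                               product_name="internal-claude-code"))
# organization.name=Data%20Platform,product.name=internal-claude-code
```

Remember that values are case-sensitive downstream, so pick one canonical spelling and generate the string the same way everywhere.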

{% hint style="info" %}
**Third-party provider compatibility.** Managed settings are delivered from Anthropic's servers and require a direct connection to `api.anthropic.com`. They are **not** delivered when Claude Code is routed through Amazon Bedrock, Google Vertex AI, Microsoft Foundry, or any custom endpoint via `ANTHROPIC_BASE_URL` or an LLM gateway. If your organization uses one of these providers, use the per-developer setup described below.
{% endhint %}

{% hint style="info" %}
**Managed settings override user-level configuration.** Values defined in the managed settings `env` block take precedence over any shell-exported environment variables on each user's machine, including any the Revenium CLI may have previously set. Users cannot override these values. Claude Code refreshes managed settings at startup and polls for updates hourly during long-running sessions.
{% endhint %}

***

#### Option 2: Per-Developer Setup via the Revenium CLI

The Revenium CLI (`@revenium/cli`) configures Claude Code on a per-machine basis. Use this approach when organization-wide setup is not available (non-Teams/Enterprise plans, third-party Anthropic providers) or when you want individual developers to manage their own Revenium configuration.

Install the Revenium CLI globally:

```bash
npm install -g @revenium/cli
```

{% hint style="warning" %}
**Migration from `@revenium/claude-code-metering`:** If you previously installed that package, uninstall it and install `@revenium/cli` instead. All commands remain the same.
{% endhint %}

Then run the interactive setup wizard:

```bash
revenium-metering setup
```

The wizard prompts for your Revenium API key, email address, and subscription tier, then configures Claude Code to export OpenTelemetry data to Revenium automatically.

**Non-interactive setup:**

```bash
revenium-metering setup \
  --api-key XXXX_your_key_here \
  --email you@company.com \
  --tier max_20x \
  --endpoint https://api.revenium.ai
```

| Option                      | Description                                                                        |
| --------------------------- | ---------------------------------------------------------------------------------- |
| `-k, --api-key <key>`       | Revenium API key                                                                   |
| `-e, --email <email>`       | Email address for usage attribution                                                |
| `-t, --tier <tier>`         | Subscription tier: `pro`, `max_5x`, `max_20x`, `team_premium`, `enterprise`, `api` |
| `--endpoint <url>`          | Revenium API endpoint URL (defaults to production)                                 |
| `-o, --organization <name>` | Organization name for cost attribution                                             |
| `-p, --product <name>`      | Product name for cost attribution                                                  |
| `--skip-shell-update`       | Skip automatic shell profile update                                                |

**Verify the installation:**

```bash
revenium-metering status   # Check configuration and endpoint connectivity
revenium-metering test     # Send a test metric
```

**Organization and product attribution:**

Pass during setup:

```bash
revenium-metering setup --organization my-org --product my-product
```

Or edit `~/.claude/revenium.env` (generated by setup) to adjust `OTEL_RESOURCE_ATTRIBUTES` directly. Your shell profile sources this file automatically, so changes take effect in the next Claude Code session.

{% hint style="info" %}
**How it works:** `revenium-metering setup` writes configuration to `~/.claude/revenium.env` and appends a line to your shell profile (`~/.zshrc`, `~/.bashrc`, or `~/.config/fish/config.fish`) that sources it. Once loaded, Claude Code's built-in OpenTelemetry exporter sends usage data continuously — no polling, no background process.
{% endhint %}

***

#### Backfilling Historical Claude Code Usage

Both setup paths capture usage from the moment they're activated. If your team has weeks or months of prior Claude Code usage, the Revenium CLI can import that history.

The backfill command scans local Claude Code session files (stored in `~/.claude/projects/`) and sends historical usage data to Revenium. Records are deduplicated by deterministic transaction ID, so running backfill multiple times will not create duplicates.
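Deterministic transaction IDs are what make the backfill idempotent. The sketch below shows the general technique (hashing the stable fields of a usage record); the specific field choice is hypothetical, not the CLI's actual scheme:

```python
import hashlib
import json

def transaction_id(record):
    """Derive a stable ID from fields that uniquely identify a usage event.

    The same record always hashes to the same ID, so re-sending it is
    recognized as a duplicate. Field selection here is illustrative.
    """
    stable = {k: record[k] for k in ("session_id", "timestamp", "model")}
    payload = json.dumps(stable, sort_keys=True)  # key order never affects the hash
    return hashlib.sha256(payload.encode()).hexdigest()

rec = {"session_id": "abc123", "timestamp": "2025-01-15T10:00:00Z",
       "model": "claude-sonnet", "tokens": 512}
assert transaction_id(rec) == transaction_id(dict(rec))  # stable across runs
```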

**Requirements:**

* `@revenium/cli` installed (see Option 2 above) — backfill is a CLI-only capability
* The session files must exist on the machine where you run the command
* A valid Revenium API key

**Running a backfill:**

```bash
# Preview what would be sent, without sending anything
revenium-metering backfill --dry-run

# Import the last 30 days
revenium-metering backfill --since 30d

# Import everything
revenium-metering backfill
```

| Option             | Description                                                                                   |
| ------------------ | --------------------------------------------------------------------------------------------- |
| `--dry-run`        | Preview what would be sent without sending any data                                           |
| `--since <date>`   | Only backfill records after this date — `7d`, `2w`, `1m`, `1y`, or ISO date like `2025-01-15` |
| `--batch-size <n>` | Records per API batch (default: 100)                                                          |
| `-v, --verbose`    | Show detailed progress and sample payloads                                                    |
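The `--since` formats map to offsets roughly like this. A hedged sketch of the documented forms; treating `m` as 30 days and `y` as 365 is our simplification, not necessarily the CLI's:

```python
import re
from datetime import datetime, timedelta

UNIT_DAYS = {"d": 1, "w": 7, "m": 30, "y": 365}  # month/year approximations are ours

def parse_since(value, now=None):
    """Turn '7d', '2w', '1m', '1y', or an ISO date into a cutoff datetime."""
    now = now or datetime.now()
    match = re.fullmatch(r"(\d+)([dwmy])", value)
    if match:
        count, unit = int(match.group(1)), match.group(2)
        return now - timedelta(days=count * UNIT_DAYS[unit])
    return datetime.fromisoformat(value)  # e.g. '2025-01-15'

print(parse_since("7d", now=datetime(2025, 3, 8)))  # 2025-03-01 00:00:00
```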

{% hint style="info" %}
**When to run backfill:**

* After enabling organization-wide setup, to capture usage from before the rollout
* When onboarding a new team member with historical Claude Code usage
* After a gap in data (SDK temporarily uninstalled, managed settings disabled, etc.)
{% endhint %}

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts, code content, and conversation text are **never** transmitted. See the [AI Coding Data Reference](/ai-coding-dashboard/ai-coding-data-reference.md) for the complete list of collected data points.
{% endhint %}

### Connecting Claude Cowork

Claude Cowork is Anthropic's agentic companion that lives inside Claude Desktop. Unlike Claude Code, Gemini CLI, or Cursor, Cowork is configured **entirely through the Claude Desktop admin UI** — your developers do not need to install anything on their machines.

#### Setup

All configuration happens once, in Claude Desktop's admin console. Anthropic propagates these settings to every developer's Claude Desktop automatically.

1. In Claude Desktop, open **Admin → Cowork → Monitoring**.
2. Configure these three fields:

| Field             | Value                                   |
| ----------------- | --------------------------------------- |
| **OTLP endpoint** | `https://api.revenium.ai/meter/v2/otlp` |
| **OTLP protocol** | `http/json`                             |
| **OTLP headers**  | `x-api-key=your_key_here`               |

3. Save the configuration.
4. Wait for the next config push from Anthropic to employee machines.

{% hint style="info" %}
**No employee setup needed.** You do not need to install the Revenium CLI, distribute environment files, or ask employees to run any commands. User attribution (email) comes from each developer's authenticated Claude Desktop account automatically.
{% endhint %}

#### Verifying Your Configuration

Send a prompt in a Cowork session. Within approximately an hour (propagation times depend on Anthropic and the availability of each target machine), users' sessions should appear in the AI Assistants Dashboard filtered to **Claude Cowork**.

If no sessions appear after configuring per the steps above, check:

* The API key you've used is active in your Revenium account
* The OTLP endpoint matches your Revenium environment
* Claude Desktop was fully quit and relaunched after the admin UI change (a full restart guarantees the new settings are loaded)

#### Organization and Product Attribution

To attribute Cowork usage to a specific organization or product in your Revenium dashboard, add them to the admin UI's **Resource attributes** field (same Monitoring panel). Use comma-separated `key=value` syntax:

```
organization.name=acme-corp,product.name=mobile-app
```

{% hint style="warning" %}
**Cowork attribution is central only.** Unlike Claude Code CLI — which supports per-developer attribution by editing `~/.claude/revenium.env` — Cowork reads attribution **only** from the admin UI. Process-level environment variables (`OTEL_RESOURCE_ATTRIBUTES`) are ignored by the Cowork exporter.

Every developer using Cowork will share the same `organization.name` and `product.name` values. If you need per-team or per-business-unit attribution for Cowork, check with Anthropic whether their admin console supports separate Cowork configurations per team or business unit.
{% endhint %}

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts and code content are **never** ingested by Revenium.
{% endhint %}

***

### Connecting Gemini CLI

Install the Revenium CLI globally:

```bash
npm install -g @revenium/cli
```

Then run the interactive setup wizard:

```bash
revenium-gemini setup
```

The wizard prompts for your Revenium API key, email address, and optional organization details, then configures Gemini CLI to export OpenTelemetry data to your Revenium endpoint automatically.

#### Setup Options

For non-interactive or scripted installs, you can pass options directly:

```bash
revenium-gemini setup \
  --api-key XXXX_your_key_here \
  --email you@company.com \
  --endpoint https://api.revenium.ai
```

| Option                      | Description                                        |
| --------------------------- | -------------------------------------------------- |
| `-k, --api-key <key>`       | Revenium API key                                   |
| `-e, --email <email>`       | Email address for usage attribution                |
| `--endpoint <url>`          | Revenium API endpoint URL (defaults to production) |
| `-o, --organization <name>` | Organization name for cost attribution             |
| `-p, --product <name>`      | Product name for cost attribution                  |
| `--skip-shell-update`       | Skip automatic shell profile update                |

#### Verifying Your Configuration

After setup, verify connectivity and check your current configuration:

```bash
revenium-gemini status

revenium-gemini test
```

{% hint style="info" %}
**How it works:** The `revenium-gemini setup` command creates `~/.gemini/revenium.env` with the required telemetry environment variables and updates your shell profile (`.zshrc`, `.bashrc`, or `config.fish`) to source it automatically. Use `--skip-shell-update` if you prefer to source the file manually. Once loaded, Gemini CLI exports usage data continuously to Revenium — no sync or polling needed.
{% endhint %}

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts and code content are **never** transmitted.
{% endhint %}

### Connecting Cursor

Install the Revenium CLI globally:

```bash
npm install -g @revenium/cli
```

Then run the interactive setup wizard:

```bash
revenium-cursor setup
```

The wizard prompts for your Cursor Admin API key, Revenium API key, email address, and subscription tier, then configures metering for your Cursor IDE usage.

#### Setup Options

For non-interactive or scripted installs, you can pass options directly:

```bash
revenium-cursor setup \
  --cursor-api-key cur_XXXX_your_key_here \
  --api-key your_key_here \
  --email you@company.com \
  --subscription-tier business \
  --endpoint https://api.revenium.ai
```

| Option                       | Description                                                      |
| ---------------------------- | ---------------------------------------------------------------- |
| `--cursor-api-key <key>`     | Cursor Admin API key                                             |
| `-k, --api-key <key>`        | Revenium API key                                                 |
| `-e, --email <email>`        | Email address for usage attribution                              |
| `--subscription-tier <tier>` | Cursor subscription tier: `pro`, `business`, `enterprise`, `api` |
| `--endpoint <url>`           | Revenium API endpoint URL (defaults to production)               |
| `-o, --organization <name>`  | Organization name for cost attribution                           |
| `-p, --product <name>`       | Product name for cost attribution                                |
| `--sync-interval <minutes>`  | Sync interval in minutes (default: 5)                            |

#### Verifying Your Configuration

After setup, verify connectivity and check your current configuration:

```bash
revenium-cursor status

revenium-cursor test
```

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts and code content are **never** transmitted.
{% endhint %}

#### Syncing Cursor Usage

Unlike Claude Code and Gemini CLI, Cursor requires active syncing to pull usage data from the Cursor Admin API. Run a one-time sync:

```bash
revenium-cursor sync
```

Or run continuously with automatic polling:

```bash
revenium-cursor sync --watch
```

| Option          | Description                                      |
| --------------- | ------------------------------------------------ |
| `-w, --watch`   | Run continuously at the configured sync interval |
| `--from <date>` | Start date for sync range (ISO 8601)             |
| `--to <date>`   | End date for sync range (ISO 8601)               |

{% hint style="info" %}
Only one sync process can run at a time. A lock file prevents concurrent syncs. Events are deduplicated automatically using SHA-256 hashing.
{% endhint %}
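The lock-and-dedup pattern described in the hint can be sketched as follows. The lock path, the in-memory seen-set, and the `sync` function are all hypothetical, shown only to illustrate the technique:

```python
import hashlib
import json
import os

LOCK = "cursor-sync.lock"  # hypothetical path; the CLI manages its own lock file
SEEN = set()               # in practice this would be persisted between runs

def event_hash(event):
    """SHA-256 content hash used to skip events that were already sent."""
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def sync(events):
    """Send only unseen events, guarded by a simple exclusive lock file."""
    fd = os.open(LOCK, os.O_CREAT | os.O_EXCL)  # fails if another sync holds the lock
    try:
        fresh = [e for e in events if event_hash(e) not in SEEN]
        SEEN.update(event_hash(e) for e in fresh)
        return fresh  # stand-in for the actual upload
    finally:
        os.close(fd)
        os.remove(LOCK)

print(len(sync([{"id": 1}, {"id": 2}])), len(sync([{"id": 2}, {"id": 3}])))  # 2 1
```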

To reset sync state and force a fresh sync:

```bash
revenium-cursor reset
```

#### Backfilling Historical Cursor Usage

After installing `@revenium/cli`, sync cycles capture data going forward. To include historical Cursor usage in your dashboard, run the backfill command:

```bash
revenium-cursor backfill --dry-run

revenium-cursor backfill --since 30d
```

| Option             | Description                                                                                                            |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------- |
| `--dry-run`        | Preview what would be sent without sending any data                                                                    |
| `--since <date>`   | Only backfill records after this date (e.g., `30d`, `2w`, `2025-01-15`)                                                |
| `--to <date>`      | End date for backfill range (defaults to now)                                                                          |
| `--batch-size <n>` | Events per API batch, 1-100 (default: 10, lower than Claude Code's default of 100 due to Cursor Admin API rate limits) |
| `-v, --verbose`    | Show detailed progress and sample payloads                                                                             |

{% hint style="info" %}
Backfill defaults to the last 30 days if `--since` is not specified. It is idempotent — running it multiple times won't create duplicate records thanks to SHA-256 deduplication.
{% endhint %}
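Options combine as you would expect. For instance, a backfill of the last two weeks with a larger batch size and verbose progress output (values are illustrative; keep `--batch-size` within the 1-100 range):

```bash
revenium-cursor backfill --since 2w --batch-size 25 --verbose
```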

***

## Filtering and Analysis

### Time Range Options

* Last 24 hours
* Last 7 days
* Last 30 days
* Last 90 days
* Last 6 months
* Last 12 months
* Custom date range

{% hint style="info" %}
**Custom Date Ranges**: Use the date picker to select any date range for historical analysis or period-over-period comparisons.
{% endhint %}

### Filter Dimensions

Slice AI coding data by:

* **User**: Individual developers or teams
* **Team/Department**: Compare adoption across groups
* **Tool**: Filter to specific AI coding assistants (Claude Code, Claude Cowork, Gemini CLI, Cursor)
* **Model**: Specific model versions
* **Project**: If project metadata is passed via SDK

***

## Best Practices

### Track Adoption, Not Just Usage

Focus on metrics that indicate healthy adoption:

* Is the number of active users growing?
* Are new hires onboarding with AI coding assistants?
* Are power users emerging across different teams?

### Share Success Stories

When you identify power users:

* Learn what workflows they're using AI assistants for
* Share their techniques with the broader team
* Consider having them lead internal AI adoption sessions

### Right-Size Your Licenses

Use the dashboard to inform licensing decisions:

* If adoption is low, invest in training before buying more seats
* If everyone is active and hitting limits, consider expanding
* Reallocate unused licenses to teams showing interest

### Set Adoption Goals

Combine the dashboard with [Cost & Performance Alerts](/cost-and-performance-alerts.md) to:

* Get notified when adoption drops below target thresholds
* Alert when new users haven't engaged within their first week
* Track progress toward team-wide adoption goals

***

## Summary

The AI Coding Dashboard shifts the conversation from "how much are we spending?" to "are we getting value from our investment?" By measuring adoption, usage depth, and user engagement across multiple AI coding tools, engineering leaders can ensure their subscriptions translate into real productivity gains—and identify opportunities to help more developers benefit from AI-assisted development.


***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.revenium.io/ai-coding-dashboard.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
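As a sketch, such a query can be issued with `curl`, using `--get` together with `--data-urlencode` so the question is URL-encoded automatically (the question text is illustrative):

```bash
curl --get \
  --data-urlencode "ask=How do I backfill historical Cursor usage?" \
  "https://docs.revenium.io/ai-coding-dashboard.md"
```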
