# AI Coding Dashboard

The AI Coding Dashboard helps engineering leaders understand how their team is adopting AI-assisted development. Track usage patterns across multiple AI coding tools, identify productivity opportunities, and ensure your subscriptions are delivering value—all from a single view designed for measuring AI adoption at scale.

***

## Overview

You've invested in AI coding assistant subscriptions for your team. But are developers actually using them? Who's getting the most value? Are there adoption gaps you should address?

The AI Coding Dashboard answers these questions:

* **Who's using AI coding assistants?** See adoption rates across your team
* **How deeply are they using them?** Measure session frequency, duration, and complexity
* **Which tools are being used?** Compare adoption across Claude Code, Gemini CLI, and other assistants
* **Where is AI making an impact?** Understand which workflows benefit most
* **Are we getting value from our subscriptions?** Benchmark usage against subscription costs

{% hint style="info" %}
Unlike usage-based AI services, AI coding assistants are typically fixed subscription costs. The goal isn't minimizing usage—it's maximizing adoption and ensuring your team gets full value from the investment.
{% endhint %}

***

## Supported AI Coding Assistants

The AI Coding Dashboard currently supports:

| Tool            | Description                               |
| --------------- | ----------------------------------------- |
| **Claude Code** | Anthropic's AI coding assistant           |
| **Gemini CLI**  | Google's command-line AI coding assistant |
| **Cursor**      | Cursor IDE with Admin API sync            |

***

## Dashboard Views

### Organization vs. User View

Toggle between two perspectives using the view selector at the top of the dashboard:

* **User View**: See metrics broken down by individual developers (default)
* **Organization View**: Aggregate metrics across your entire organization for executive-level reporting

{% hint style="info" %}
Organization View is useful for comparing AI adoption across teams, departments, or business units without drilling into individual user data.
{% endhint %}

### Adoption Overview

Monitor how AI-assisted development is spreading across your organization:

* **Active Users**: Count of developers using AI coding assistants in the selected period
* **Adoption Rate**: Percentage of licensed users who are actively using the tools
* **New Users**: Developers who started using AI assistants recently
* **Usage Frequency**: How often each developer engages with AI coding tools

Spot adoption gaps early. If certain teams or individuals aren't using their subscriptions, you can provide training, share best practices, or reallocate licenses.

### Usage Patterns

Understand how your team works with AI coding assistants:

* **Sessions per User**: Average number of AI conversations per developer
* **Session Depth**: Token consumption as a proxy for conversation complexity
* **Peak Activity Times**: When does your team rely on AI assistance most?
* **Workflow Distribution**: Code generation, debugging, code review, documentation, etc.
* **Tool Distribution**: Usage split across Claude Code, Gemini CLI, Cursor, and other supported tools

These patterns help you understand where AI is adding the most value and identify opportunities to expand usage to other workflows.

### User Leaderboard

See which developers are getting the most from their AI coding subscriptions:

* **Top Users by Sessions**: Most active AI coding assistant users
* **Top Users by Tokens**: Developers with the deepest AI-assisted workflows
* **Usage Trends**: Individual adoption trajectories over time
* **Team Comparisons**: Compare adoption across teams or departments

{% hint style="info" %}
High usage often correlates with productivity gains. Consider pairing power users with teammates who are still ramping up on AI-assisted development.
{% endhint %}

### Model Usage

Track which AI models your team uses:

* **Model Distribution**: Usage breakdown by model (Sonnet, Opus, Haiku, Gemini Pro, etc.)
* **Model by Workflow**: Which models are used for which types of tasks
* **Model Trends**: How model preferences evolve over time

This helps you understand whether your team is using the right models for their tasks and can inform decisions about which model tiers to include in subscriptions.

***

## Measuring Subscription Value

### Are We Getting ROI?

For fixed-cost subscriptions, the value equation is simple: **more usage = better ROI**.

The dashboard helps you calculate value by showing:

* **Cost per Active User**: Subscription cost divided by developers actually using the tools
* **Usage Intensity**: Tokens and sessions per subscription dollar
* **Adoption Trajectory**: Is usage growing, flat, or declining?

If half your licensed users never open their AI coding assistant, you're paying double the effective per-user cost. The dashboard surfaces these gaps so you can address them.
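As a quick worked example (illustrative numbers only, not Revenium pricing): with 50 licensed seats at $30/month but only 25 active users, the effective cost per active user is double the sticker price.

```bash
# 50 seats x $30/month = $1,500 total spend, but only 25 developers are active.
seats=50; price=30; active=25
echo $(( seats * price / active ))   # effective cost per active user: 60
```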

### Identifying Underutilization

Watch for warning signs:

* **Zero-session users**: Licensed developers with no activity
* **Declining usage**: Users whose engagement is dropping off
* **Shallow sessions**: Very low token counts may indicate limited adoption

These signals help you target training, share success stories, or right-size your license count.

***

## Key Metrics

| Metric         | Description                                                     |
| -------------- | --------------------------------------------------------------- |
| Active Users   | Developers who used AI coding assistants in the selected period |
| Adoption Rate  | Active users as a percentage of total licensed users            |
| Total Sessions | Number of AI conversations across all users                     |
| Sessions/User  | Average sessions per active developer                           |
| Total Tokens   | Combined input and output tokens processed                      |
| Tokens/Session | Average conversation depth                                      |
| Session Count  | Number of coding sessions (Organization View)                   |

***

## Getting Started

### Prerequisites

To use the AI Coding Dashboard, you need:

1. **Revenium SDK Integration**: Your AI coding assistant usage must be metered through Revenium's SDK
2. **User Identification**: Configure your integration to pass user identifiers for attribution
3. **Session Tracking**: Enable session-level tracking for conversation analytics

### Connecting Claude Code

Install the Revenium CLI globally:

```bash
npm install -g @revenium/cli
```

{% hint style="warning" %}
**Migration:** If you previously installed `@revenium/claude-code-metering`, uninstall it and install `@revenium/cli` instead. All commands remain the same.
{% endhint %}

Then run the interactive setup wizard:

```bash
revenium-metering setup
```

The wizard prompts for your Revenium API key, email address, and subscription tier, then configures Claude Code to export usage telemetry via OpenTelemetry to your Revenium endpoint automatically.
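Under the hood, the wizard writes an environment file that enables Claude Code's OpenTelemetry export. The snippet below is an illustrative sketch using the standard OpenTelemetry variable names; the actual file contents and values are written by the wizard and may differ.

```bash
# Illustrative sketch only — the wizard writes the real values to
# ~/.claude/revenium.env; all values shown here are placeholders.
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.revenium.ai"           # placeholder endpoint
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=hak_XXXX_your_key_here"   # placeholder key
```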

#### Setup Options

For non-interactive or scripted installs, you can pass options directly:

```bash
revenium-metering setup \
  --api-key hak_XXXX_your_key_here \
  --email you@company.com \
  --tier max_20x \
  --endpoint https://api.revenium.ai
```

| Option                      | Description                                                                        |
| --------------------------- | ---------------------------------------------------------------------------------- |
| `-k, --api-key <key>`       | Revenium API key (must start with `hak_`)                                          |
| `-e, --email <email>`       | Email address for usage attribution                                                |
| `-t, --tier <tier>`         | Subscription tier: `pro`, `max_5x`, `max_20x`, `team_premium`, `enterprise`, `api` |
| `--endpoint <url>`          | Revenium API endpoint URL (defaults to production)                                 |
| `-o, --organization <name>` | Organization name for cost attribution                                             |
| `-p, --product <name>`      | Product name for cost attribution                                                  |
| `--skip-shell-update`       | Skip automatic shell profile update                                                |

#### Verifying Your Configuration

After setup, verify connectivity and check your current configuration:

```bash
# Check configuration and endpoint connectivity
revenium-metering status

# Send a test metric to verify the integration
revenium-metering test
```

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts and code content are **never** transmitted. See the [AI Coding Data Reference](https://docs.revenium.io/ai-coding-data-reference) for the complete list of collected data points, or the [package documentation](https://www.npmjs.com/package/@revenium/cli) for CLI options and supported tiers.
{% endhint %}

#### Organization and Product Attribution

To attribute Claude Code usage to a specific organization or product in your Revenium dashboard, either pass them during setup:

```bash
revenium-metering setup --organization my-org --product my-product
```

Or edit the `OTEL_RESOURCE_ATTRIBUTES` line in `~/.claude/revenium.env` (generated by setup) to include `organization.name` and/or `product.name`:

```bash
# In ~/.claude/revenium.env
export OTEL_RESOURCE_ATTRIBUTES="cost_multiplier=0.08,organization.name=my-org,product.name=my-product"
```

Your shell profile sources this file automatically, so changes take effect in the next Claude Code session.
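If you ran setup with `--skip-shell-update`, add a line like the following to your shell profile yourself (illustrative; the exact line setup would have written may differ):

```bash
# In ~/.zshrc or ~/.bashrc: source the Revenium environment file if it exists
if [ -f "$HOME/.claude/revenium.env" ]; then
  source "$HOME/.claude/revenium.env"
fi
```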

{% hint style="warning" %}
**Important:** The SDK only captures usage **from the moment it's installed**. To include historical Claude Code usage in your dashboard, you must run the backfill command (see below).
{% endhint %}

***

### Backfilling Historical Claude Code Usage

After installing `@revenium/cli`, new sessions are captured automatically. But your team likely has weeks or months of prior usage that won't appear in the dashboard without a backfill.

The backfill command scans local Claude Code session files and sends historical usage data to Revenium.

#### Running a Backfill

```bash
# Step 1: Preview what will be sent (recommended first)
revenium-metering backfill --dry-run

# Step 2: Backfill the last 30 days
revenium-metering backfill --since 30d

# Or backfill everything
revenium-metering backfill
```

#### Backfill Options

| Option             | Description                                                             |
| ------------------ | ----------------------------------------------------------------------- |
| `--dry-run`        | Preview what would be sent without sending any data                     |
| `--since <date>`   | Only backfill records after this date (e.g., `30d`, `2w`, `2025-01-15`) |
| `--batch-size <n>` | Records per API batch (default: 100)                                    |
| `-v, --verbose`    | Show detailed progress and sample payloads                              |

{% hint style="info" %}
**Date formats:** Use relative formats like `7d` (7 days), `2w` (2 weeks), `1m` (1 month), `1y` (1 year), or ISO dates like `2025-01-15`.
{% endhint %}

#### Prerequisites for Backfill

* Initial setup must be completed via `revenium-metering setup`
* A valid Revenium API key must be configured
* Claude Code session data must exist locally (stored in `~/.claude/projects/`)

#### When to Run Backfill

* **After initial SDK install** — to capture all prior usage
* **When onboarding a new team member** — run on their machine after setup to include their history
* **After a gap in data** — if the SDK was temporarily uninstalled or misconfigured

{% hint style="info" %}
Backfill is idempotent — running it multiple times won't create duplicate records. It uses deterministic transaction IDs based on session content.
{% endhint %}
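The idempotency mechanism can be illustrated with a small shell sketch (conceptual only; the CLI's actual ID scheme is internal): hashing the same session content always yields the same ID, so a re-sent record is recognizable as a duplicate rather than a new event.

```bash
# Hash the same session content twice — the resulting IDs are identical,
# so a second backfill of the same session produces the same transaction ID.
id1=$(printf 'session:2025-01-15:example-content' | sha256sum | cut -c1-16)
id2=$(printf 'session:2025-01-15:example-content' | sha256sum | cut -c1-16)
[ "$id1" = "$id2" ] && echo "deterministic"
```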

### Connecting Gemini CLI

Install the Revenium CLI globally:

```bash
npm install -g @revenium/cli
```

Then run the interactive setup wizard:

```bash
revenium-gemini setup
```

The wizard prompts for your Revenium API key, email address, and optional organization details, then configures Gemini CLI to export usage telemetry via OpenTelemetry to your Revenium endpoint automatically.

#### Setup Options

For non-interactive or scripted installs, you can pass options directly:

```bash
revenium-gemini setup \
  --api-key hak_XXXX_your_key_here \
  --email you@company.com \
  --endpoint https://api.revenium.ai
```

| Option                      | Description                                        |
| --------------------------- | -------------------------------------------------- |
| `-k, --api-key <key>`       | Revenium API key (must start with `hak_`)          |
| `-e, --email <email>`       | Email address for usage attribution                |
| `--endpoint <url>`          | Revenium API endpoint URL (defaults to production) |
| `-o, --organization <name>` | Organization name for cost attribution             |
| `-p, --product <name>`      | Product name for cost attribution                  |
| `--skip-shell-update`       | Skip automatic shell profile update                |

#### Verifying Your Configuration

After setup, verify connectivity and check your current configuration:

```bash
revenium-gemini status

revenium-gemini test
```

{% hint style="info" %}
**How it works:** The `revenium-gemini setup` command creates `~/.gemini/revenium.env` with the required telemetry environment variables and updates your shell profile (`.zshrc`, `.bashrc`, or `config.fish`) to source it automatically. Use `--skip-shell-update` if you prefer to source the file manually. Once loaded, Gemini CLI exports usage data continuously to Revenium — no sync or polling needed.
{% endhint %}

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts and code content are **never** transmitted.
{% endhint %}

### Connecting Cursor

Install the Revenium CLI globally:

```bash
npm install -g @revenium/cli
```

Then run the interactive setup wizard:

```bash
revenium-cursor setup
```

The wizard prompts for your Cursor Admin API key, Revenium API key, email address, and subscription tier, then configures metering for your Cursor IDE usage.

#### Setup Options

For non-interactive or scripted installs, you can pass options directly:

```bash
revenium-cursor setup \
  --cursor-api-key cur_XXXX_your_key_here \
  --api-key hak_XXXX_your_key_here \
  --email you@company.com \
  --subscription-tier business \
  --endpoint https://api.revenium.ai
```

| Option                       | Description                                                      |
| ---------------------------- | ---------------------------------------------------------------- |
| `--cursor-api-key <key>`     | Cursor Admin API key                                             |
| `-k, --api-key <key>`        | Revenium API key (must start with `hak_`)                        |
| `-e, --email <email>`        | Email address for usage attribution                              |
| `--subscription-tier <tier>` | Cursor subscription tier: `pro`, `business`, `enterprise`, `api` |
| `--endpoint <url>`           | Revenium API endpoint URL (defaults to production)               |
| `-o, --organization <name>`  | Organization name for cost attribution                           |
| `-p, --product <name>`       | Product name for cost attribution                                |
| `--sync-interval <minutes>`  | Sync interval in minutes (default: 5)                            |

#### Verifying Your Configuration

After setup, verify connectivity and check your current configuration:

```bash
revenium-cursor status

revenium-cursor test
```

{% hint style="info" %}
**Privacy:** Only usage metadata is sent — token counts, model names, timestamps, and session identifiers. Prompts and code content are **never** transmitted.
{% endhint %}

#### Syncing Cursor Usage

Unlike Claude Code and Gemini CLI, Cursor requires active syncing to pull usage data from the Cursor Admin API. Run a one-time sync:

```bash
revenium-cursor sync
```

Or run continuously with automatic polling:

```bash
revenium-cursor sync --watch
```

| Option          | Description                                      |
| --------------- | ------------------------------------------------ |
| `-w, --watch`   | Run continuously at the configured sync interval |
| `--from <date>` | Start date for sync range (ISO 8601)             |
| `--to <date>`   | End date for sync range (ISO 8601)               |

{% hint style="info" %}
Only one sync process can run at a time. A lock file prevents concurrent syncs. Events are deduplicated automatically using SHA-256 hashing.
{% endhint %}

To reset sync state and force a fresh sync:

```bash
revenium-cursor reset
```

#### Backfilling Historical Cursor Usage

After installing `@revenium/cli`, new sync cycles capture data going forward. To include historical Cursor usage in your dashboard, run the backfill command.

```bash
revenium-cursor backfill --dry-run

revenium-cursor backfill --since 30d
```

| Option             | Description                                                                                                            |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------- |
| `--dry-run`        | Preview what would be sent without sending any data                                                                    |
| `--since <date>`   | Only backfill records after this date (e.g., `30d`, `2w`, `2025-01-15`)                                                |
| `--to <date>`      | End date for backfill range (defaults to now)                                                                          |
| `--batch-size <n>` | Events per API batch, 1-100 (default: 10, lower than Claude Code's default of 100 due to Cursor Admin API rate limits) |
| `-v, --verbose`    | Show detailed progress and sample payloads                                                                             |

{% hint style="info" %}
Backfill defaults to the last 30 days if `--since` is not specified. It is idempotent — running it multiple times won't create duplicate records thanks to SHA-256 deduplication.
{% endhint %}

***

## Filtering and Analysis

### Time Range Options

* Last 24 hours
* Last 7 days
* Last 30 days
* Last 90 days
* Last 6 months
* Last 12 months
* Custom date range

{% hint style="info" %}
**Custom Date Ranges**: Use the date picker to select any arbitrary date range for historical analysis or period-over-period comparisons.
{% endhint %}

### Filter Dimensions

Slice AI coding data by:

* **User**: Individual developers or teams
* **Team/Department**: Compare adoption across groups
* **Tool**: Filter to specific AI coding assistants (Claude Code, Gemini CLI, Cursor)
* **Model**: Specific model versions
* **Project**: If project metadata is passed via SDK

***

## Best Practices

### Track Adoption, Not Just Usage

Focus on metrics that indicate healthy adoption:

* Is the number of active users growing?
* Are new hires onboarding with AI coding assistants?
* Are power users emerging across different teams?

### Share Success Stories

When you identify power users:

* Learn what workflows they're using AI assistants for
* Share their techniques with the broader team
* Consider having them lead internal AI adoption sessions

### Right-Size Your Licenses

Use the dashboard to inform licensing decisions:

* If adoption is low, invest in training before buying more seats
* If everyone is active and hitting limits, consider expanding
* Reallocate unused licenses to teams showing interest

### Set Adoption Goals

Combine the dashboard with [Cost & Performance Alerts](https://docs.revenium.io/cost-and-performance-alerts) to:

* Get notified when adoption drops below target thresholds
* Alert when new users haven't engaged within their first week
* Track progress toward team-wide adoption goals

***

## Summary

The AI Coding Dashboard shifts the conversation from "how much are we spending?" to "are we getting value from our investment?" By measuring adoption, usage depth, and user engagement across multiple AI coding tools, engineering leaders can ensure their subscriptions translate into real productivity gains—and identify opportunities to help more developers benefit from AI-assisted development.
