# Provider Dashboard

The Provider Dashboard gives teams complete visibility into AI spending across all providers, workspaces, API keys, and models. Track costs in real-time, identify efficiency opportunities, and manage workspace configurations—all from a unified interface designed for both technical and financial stakeholders.

Unlike the AI Analytics module, which analyzes transaction-level data captured by Revenium's SDK integrations across all providers, the Provider Dashboard is purpose-built for provider-specific cost tracking and workspace management, using data synced automatically from your AI provider accounts.

The data here also shows what percentage of your spending is currently captured via Revenium's SDK integrations and what percentage comes from other sources.

***

## How It Works

The Provider Dashboard automatically syncs billing and usage data directly from your AI provider accounts to provide accurate, provider-native cost tracking.

### Supported Providers

* **OpenAI** – GPT models, DALL-E, embeddings, and all OpenAI API services
* **Anthropic** – Claude models and all Anthropic API services
* **AWS Bedrock** – Amazon's managed AI service with access to multiple foundation models (synced via AWS Cost Explorer API, requires IAM credentials)
* **Google Vertex AI** – Google Cloud's AI platform with Gemini and other models (synced via BigQuery Billing Export)
* **fal.ai** – 600+ AI models spanning image, video, audio, and LLMs (synced via fal.ai Platform API, requires admin key)
* **Runway** – Video and image generation (Gen-4, Gen-4 Turbo, etc.), with usage and cost tracking (synced via Runway API key)
* **OpenRouter** – Multi-provider AI gateway with access to 200+ models from OpenAI, Anthropic, Google, Meta, and more (synced via OpenRouter API)
* **LiteLLM** – Self-hosted proxy for unified access to 100+ LLM providers with cost tracking (synced via LiteLLM Proxy API)

{% hint style="info" %}
**Note:** Google Vertex AI cost data has a 24-48 hour delay due to BigQuery export processing time. This is expected behavior, not a sync issue.
{% endhint %}

### Data Synchronization

1. **Connect Provider Accounts**: Link your AI provider accounts via the "Manage AI Accounts" button
2. **Automatic Sync**: Revenium syncs workspace, API key, and usage data from provider billing systems
3. **Real-Time Updates**: Data refreshes automatically, with manual refresh available
4. **Historical Tracking**: Compare current period costs against previous periods to identify trends

***

## Connecting Provider Accounts

To start syncing cost data, connect your AI provider accounts via **Settings → Manage AI Accounts → AI Platforms**. Each provider requires specific credentials.

### OpenAI Setup

{% hint style="warning" %}
**Important:** You need an **Organization Admin API key**, not a project-scoped key. Project keys can only make inference calls—they cannot access billing or usage data.
{% endhint %}

**Credential Format:** `sk-xxxxxxxxxxxxxxxxxxxxxxxx`

**Setup Steps:**

1. Go to [OpenAI API Keys](https://platform.openai.com/api-keys)
2. Click **"Create new secret key"**
3. Ensure you have **Admin** permissions on your OpenAI organization
4. Copy the key (starts with `sk-`)
5. In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
6. Click **Add Provider** and select **OpenAI**
7. Paste your API key and save

**What Gets Synced:**

* All projects/workspaces in your organization
* API key usage and costs
* Model-level spending (GPT-4, GPT-4o, DALL-E, etc.)
* Token counts and request volumes

***

### Anthropic Setup

**Credential Format:** `sk-ant-xxxxxxxxxxxxxxxxxxxxxxxx`

**Setup Steps:**

1. Go to [Anthropic Console → API Keys](https://console.anthropic.com/settings/keys)
2. Click **"Create Key"**
3. Copy the key (starts with `sk-ant-`)
4. In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
5. Click **Add Provider** and select **Anthropic**
6. Paste your API key and save

**What Gets Synced:**

* All workspaces in your organization
* API key usage and costs
* Model-level spending (Claude 3.5, Claude 3, etc.)
* Token counts

{% hint style="info" %}
**Note:** Request counts may show as "N/A" for Anthropic data. While Anthropic tracks requests in their console, this metric is not available through their billing API.
{% endhint %}

***

### AWS Bedrock Setup

{% hint style="warning" %}
**Critical:** AWS Bedrock requires **IAM user credentials** (Access Key ID + Secret Access Key), not Bedrock API keys.

AWS has two credential types:

* **Bedrock API keys** (start with `ABSK...`) — For model invocation only, cannot access billing
* **IAM credentials** (Access Key ID starts with `AKIA...`) — Required for Cost Explorer API access

You need IAM credentials to sync billing data.
{% endhint %}

**Setup Steps:**

1. **Create an IAM User with Billing Access**
   * Go to [AWS Console → IAM → Users](https://console.aws.amazon.com/iam/home#/users)
   * Click **"Create user"**
   * Name it something like `revenium-billing-reader`
2. **Attach the Required Policy**
   * Attach the `AWSBillingReadOnlyAccess` managed policy
   * Or create a custom policy with `ce:GetCostAndUsage` permission
3. **Create Access Keys**
   * Go to the user's **Security credentials** tab
   * Click **"Create access key"**
   * Select **"Third-party service"** as the use case
   * Copy both the **Access Key ID** (starts with `AKIA`) and **Secret Access Key**
4. **Add to Revenium**
   * In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
   * Click **Add Provider** and select **AWS Bedrock**
   * Fill in the three required fields:
     * **Access Key ID**: Enter your IAM Access Key ID (starts with `AKIA`)
     * **Secret Access Key**: Enter your IAM Secret Access Key
     * **Region**: Select `us-east-1` from the dropdown

{% hint style="info" %}
**Why us-east-1?** The AWS Cost Explorer API only operates in the us-east-1 region, regardless of where your Bedrock resources are deployed.
{% endhint %}
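If you prefer the custom-policy route from step 2, a minimal policy granting only the Cost Explorer read permission described above might look like this (a sketch; adapt it to your organization's IAM standards):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ce:GetCostAndUsage"],
      "Resource": "*"
    }
  ]
}
```

Cost Explorer actions do not support resource-level scoping, so `"Resource": "*"` is required here.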

**What Gets Synced:**

* Bedrock model usage costs from AWS Cost Explorer
* Model-level spending breakdown
* Daily cost aggregations

{% hint style="info" %}
**Why IAM Credentials?** Revenium uses the AWS Cost Explorer API to retrieve billing data. This API requires IAM credentials with billing permissions—Bedrock's model invocation keys cannot access Cost Explorer.
{% endhint %}

***

### Google Vertex AI Setup

{% hint style="warning" %}
**Prerequisites Required:** Before connecting Vertex AI, you must enable BigQuery Billing Export in your Google Cloud project. This is how Google makes billing data available programmatically.
{% endhint %}

**Credential Format:** Google Cloud Service Account JSON key

**Setup Steps:**

1. **Enable BigQuery Billing Export** (if not already enabled)
   * Go to [Google Cloud Console → Billing → Billing export](https://console.cloud.google.com/billing)
   * Select your billing account
   * Under **BigQuery export**, click **Edit settings**
   * Enable **Standard usage cost** export (recommended)
   * Select or create a BigQuery dataset (default name: `billing_export`)
   * Save and wait 24-48 hours for data to populate
2. **Create a Service Account**
   * Go to [IAM & Admin → Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts)
   * Select the project you want to connect. You may reuse an existing service account if you have one, but creating a dedicated one is recommended for security.
   * Click **"Create Service Account"**
   * Name it something like `revenium-billing-reader`
3. **Grant Required Roles**
   * **BigQuery Data Viewer** (`roles/bigquery.dataViewer`) – to read billing data
   * **BigQuery Job User** (`roles/bigquery.jobUser`) – to run queries
4. **Create and Download JSON Key**
   * Click on your new service account
   * Go to the **Keys** tab
   * Click **"Add Key" → "Create new key"**
   * Select **JSON** format
   * Download the key file
5. **Add to Revenium**
   * In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
   * Click **Add Provider** and select **Google Vertex AI**
   * Paste the entire contents of the JSON key file
   * Save

**What Gets Synced:**

* Vertex AI model usage from BigQuery billing export
* Model-level spending (Gemini, PaLM, etc.)
* Project-level cost aggregations

{% hint style="info" %}
**Auto-Discovery:** Revenium automatically finds your billing export table (tables starting with `gcp_billing_export_v1_`). If you use a non-default dataset name, you can add `"billing_dataset": "your_dataset_name"` to your service account JSON before pasting.
{% endhint %}
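The dataset override described in the hint above is just an extra field in the key file. A sketch in Python (the file and dataset names are placeholders):

```python
import json


def add_billing_dataset(key_json: str, dataset: str) -> str:
    """Add a billing_dataset override to a service account key JSON string.

    Use this when your BigQuery billing export lives in a non-default dataset.
    """
    creds = json.loads(key_json)
    creds["billing_dataset"] = dataset
    return json.dumps(creds, indent=2)
```

Paste the returned JSON (rather than the original file contents) into Revenium when adding the Google Vertex AI provider.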

{% hint style="info" %}
**24-48 Hour Delay:** Google Vertex AI cost data has a 24-48 hour delay because it comes from BigQuery billing exports, which are processed in batch. This is expected behavior—if you don't see recent data immediately, wait a day and check again.
{% endhint %}

***

### fal.ai Setup

**Credential Format:** `{uuid}:{hex}` (e.g., `6efcdb9c-a9ce-464b-adc3-2561380ac473:38cab27415bf376b8e0e63445a5d92cb`)

{% hint style="warning" %}
**IMPORTANT:** You must use an **ADMIN API key** from fal.ai for billing and usage data access. Regular user keys **cannot** retrieve cost or usage breakdowns.
{% endhint %}

**Setup Steps:**

1. Go to [fal.ai dashboard → API Keys](https://fal.ai/dashboard/keys)
2. Click **"Create new key"** and select **ADMIN** permissions
3. Copy your new key (format: `uuid:hex`)
4. In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
5. Click **Add Provider** and select **fal.ai**
6. Paste your admin API key and save

**What Gets Synced:**

* Usage logs and costs for all supported fal.ai endpoints—image, video, audio, and LLMs
* Per-endpoint, per-day cost and usage breakdown
* Time-series and summary usage per workspace (based on credential metadata)

***

### Runway Setup

**Credential Format:** `key_xxxxxxxxxxxxxxxxxxxxxxxx` (API key from Runway)

**Setup Steps:**

1. Go to [Runway Dashboard → Settings → API](https://runwayml.com/dashboard)
2. Click **"Create new API key"** (keys start with `key_`)
3. Copy your API key
4. In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
5. Click **Add Provider** and select **Runway**
6. Paste your API key and save

**What Gets Synced:**

* Usage and cost breakdown by model (e.g., Gen-4, Gen-4 Turbo)
* Video and image generation counts
* Per-day and per-model credit usage (converted to dollars—1 credit = $0.01)
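The credit-to-dollar conversion noted above is a fixed rate; a minimal sketch:

```python
def runway_credits_to_dollars(credits: float) -> float:
    """Convert Runway credits to dollars at the documented rate of 1 credit = $0.01."""
    return round(credits * 0.01, 2)


# Example: 2,250 credits of Gen-4 usage
# runway_credits_to_dollars(2250) -> 22.5
```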

***

### OpenRouter Setup

OpenRouter is a multi-provider AI gateway that gives you access to 200+ models from OpenAI, Anthropic, Google, Meta, Mistral, and more through a single API.

{% hint style="info" %}
**Two Types of API Keys:** OpenRouter supports two key types with different access levels:

* **API Key** (required) – Standard key for basic usage data and model access
* **Provisioning Key** (optional) – Provides detailed usage breakdowns (Team/Enterprise plans only)
{% endhint %}

**Credential Format:** JSON object

```json
{
  "apiKey": "sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "provisioningKey": "sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
```

**Setup Steps:**

1. **Get Your API Key**
   * Go to [OpenRouter Keys](https://openrouter.ai/keys)
   * Click **"Create Key"**
   * Copy the key (starts with `sk-or-v1-`)
2. **Get Your Provisioning Key (Optional – Team/Enterprise Only)**
   * If you have a Team or Enterprise plan, you can create a provisioning key for detailed usage data
   * Go to [OpenRouter Keys](https://openrouter.ai/keys)
   * Create a key with provisioning permissions
   * This enables per-user and per-request usage breakdowns
3. **Add to Revenium**
   * In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
   * Click **Add Provider** and select **OpenRouter**
   * Paste your credentials as JSON (format shown above)
   * The `provisioningKey` field is optional—include it only if you have Team/Enterprise access
   * Save

**What Gets Synced:**

* Usage and cost data across all models
* Credit balance and spending
* Model-level cost breakdowns
* Request counts and token usage

***

### LiteLLM Setup

LiteLLM is a self-hosted proxy that provides unified access to 100+ LLM providers through a single API. Connect your LiteLLM proxy to Revenium for centralized cost tracking across all your proxied providers.

{% hint style="warning" %}
**Prerequisite:** You must have a running LiteLLM proxy server. Revenium connects to your proxy's API to retrieve usage data—it does not host LiteLLM for you.
{% endhint %}

**Credential Format:** JSON object with API key and proxy URL

```json
{
  "apiKey": "sk-your-litellm-api-key",
  "baseUrl": "https://your-litellm-proxy.com"
}
```

**Setup Steps:**

1. **Get Your LiteLLM Proxy URL and API Key**
   * Identify your LiteLLM proxy URL (e.g., `https://litellm.yourcompany.com`)
   * Get or create an API key from your LiteLLM proxy admin settings
   * Keys typically start with `sk-`
2. **Ensure Your Proxy Has Usage Tracking Enabled**
   * LiteLLM must be configured to track usage data
   * Verify your proxy settings include spend tracking and logging
3. **Add to Revenium**
   * In Revenium, go to **Settings → Manage AI Accounts → AI Platforms**
   * Click **Add Provider** and select **LiteLLM**
   * Paste your credentials as JSON (format shown above)
   * Save

**What Gets Synced:**

* Usage and cost data across all proxied providers
* API key-level spending
* Model-level cost breakdowns
* Request counts and token usage

{% hint style="info" %}
**Data Retention:** The amount of historical data available depends on your LiteLLM proxy configuration. Revenium syncs whatever usage data your proxy exposes through its API.
{% endhint %}

***

### Credential Quick Reference

| Provider         | Format          | Key Pattern                                 | Where to Get                                                                       |
| ---------------- | --------------- | ------------------------------------------- | ---------------------------------------------------------------------------------- |
| OpenAI           | API Key         | `sk-...`                                    | [platform.openai.com/api-keys](https://platform.openai.com/api-keys)               |
| Anthropic        | API Key         | `sk-ant-...`                                | [console.anthropic.com/settings/keys](https://console.anthropic.com/settings/keys) |
| AWS Bedrock      | IAM Credentials | Access Key ID (`AKIA...`) + Secret + Region | AWS Console → IAM → Users                                                          |
| Google Vertex AI | JSON            | `{"type":"service_account"...}`             | GCP Console → IAM → Service Accounts                                               |
| fal.ai           | API Key         | `{uuid}:{hex}`                              | [fal.ai dashboard/keys](https://fal.ai/dashboard/keys)                             |
| Runway           | API Key         | `key_...`                                   | [runwayml.com/dashboard](https://runwayml.com/dashboard)                           |
| OpenRouter       | JSON            | `{"apiKey":"sk-or-v1-..."}`                 | [openrouter.ai/keys](https://openrouter.ai/keys)                                   |
| LiteLLM          | JSON            | `{"apiKey":"sk-...","baseUrl":"..."}`       | Your LiteLLM proxy admin                                                           |
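As a quick pre-flight check before pasting a credential, the key patterns in the table can be expressed as regular expressions. A sketch (these patterns approximate the documented prefixes; they are not official validation rules from any provider):

```python
import re

# Approximate patterns based on the documented key prefixes above
KEY_PATTERNS = {
    "openai": r"^sk-[A-Za-z0-9_-]+$",
    "anthropic": r"^sk-ant-[A-Za-z0-9_-]+$",
    "aws_access_key_id": r"^AKIA[A-Z0-9]{16}$",
    "fal": r"^[0-9a-f-]{36}:[0-9a-f]+$",        # {uuid}:{hex}
    "runway": r"^key_[A-Za-z0-9]+$",
    "openrouter": r"^sk-or-v1-[0-9a-f]+$",
}


def looks_valid(provider: str, key: str) -> bool:
    """Return True if the key matches the documented prefix pattern for the provider."""
    return bool(re.match(KEY_PATTERNS[provider], key))
```

A match only confirms the key *looks* right; Revenium still validates credentials against the provider when you save them.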

***

### Core Capabilities

* **Multi-Provider Visibility**: Track spending across OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, fal.ai, Runway, OpenRouter, and LiteLLM in one place
* **Workspace-Level Analytics**: Monitor costs by provider workspace or project
* **API Key Tracking**: Analyze spending and efficiency by individual API keys
* **Model Efficiency**: Compare cost-per-token or per-unit across models to optimize model selection
* **Workspace Management**: Rename workspaces, view metadata, and track usage history
* **Trend Analysis**: Period-over-period comparisons with percentage change indicators
* **Cost Filtering**: Filter by cost ranges, search by name, and export data to CSV

***

## Dashboard Tabs

The Provider Dashboard is organized into four specialized views:

### 1. Workspaces

Monitor AI spending organized by provider workspaces (OpenAI projects, Anthropic workspaces, AWS Bedrock models, Google Vertex AI projects, fal.ai endpoints, Runway organization accounts, OpenRouter accounts, LiteLLM proxy instances, etc.).

<figure><img src="https://2470865788-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSUfCzMW8qWeXstipFXEh%2Fuploads%2Fgit-blob-3ec0ba4f805df6b9aa93e4b8c1b7a129d30b9630%2FProvider%20Dashboard%20-%20Workspace%20Overview.png?alt=media" alt=""><figcaption><p>Workspaces tab showing cost distribution, period comparison, and detailed workspace analytics</p></figcaption></figure>

**Summary Metrics:**

* Total Cost across all workspaces
* Active Workspace count
* Per-Provider cost breakdowns (e.g., Anthropic, OpenAI, fal.ai, Runway, OpenRouter, LiteLLM)

**Visualizations:**

* **Cost Distribution by Provider**: Pie chart showing spending allocation across providers
* **Current vs Previous Period**: Bar chart comparing workspace costs period-over-period

**Workspace Table Columns:**

* **Name**: Workspace or project name (customizable via Workspace Management)
* **Provider**: AI provider (with logo badge)
* **Cost**: Total spending for the selected period
* **% of Total**: Percentage of overall spending
* **Requests**: Number of API requests or other usage units (e.g., credits, images)
* **Tokens**: Total tokens processed (where applicable)
* **Trend**: Period-over-period cost change with visual indicator (↑ red for increases, ↓ green for decreases)

**Features:**

* Search by workspace name
* Filter by cost range (min/max)
* Sort by any column
* Export to CSV
* Refresh data on demand

***

### 2. API Key Analytics

Track spending and efficiency metrics for individual API keys across all providers.

<figure><img src="https://2470865788-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSUfCzMW8qWeXstipFXEh%2Fuploads%2Fgit-blob-fbaa0ab0cc2008435ac30043bc2a81767564fc79%2FProvider%20Dashboard%20-%20API%20Keys.png?alt=media" alt=""><figcaption><p>API Key Analytics tab displaying cost distribution, key comparison, and detailed API key metrics</p></figcaption></figure>

**Summary Metrics:**

* Total Cost across all API keys
* Active API Key count
* Total Requests across all keys (may reflect custom units for fal.ai/Runway)
* Average Cost per 1M Tokens or applicable unit

**Visualizations:**

* **Cost Distribution by API Key**: Pie chart showing top spending API keys
* **API Key Cost Comparison**: Bar chart comparing current vs previous period costs by key

**API Key Table Columns:**

* **API Key**: Key name and associated workspace/project
* **Key Hint**: Partial key identifier (e.g., "sk-ant-api03-Z4C...EQAA")
* **Provider**: AI provider with logo badge
* **Status**: Key status (active, inactive)
* **Cost**: Total spending for the period
* **Tokens**: Total tokens processed (if applicable, otherwise credits/quantity)
* **Requests**: Number of API requests or per-model usages
* **Cost/1M**: Cost per million tokens or per 1,000 credits (see provider-specific unit)
* **Trend**: Period-over-period cost change

**Use Cases:**

* Identify high-cost API keys that may need optimization
* Track API key usage across teams, projects, or individuals
* Monitor cost efficiency (cost per 1M tokens/credits) by key
* Detect unused or underutilized keys

***

### 3. Model Efficiency

Compare cost efficiency across AI models to optimize model selection and reduce spending.

**Summary Metrics:**

* Total Cost across all models
* Model count
* Total Requests (unit varies by provider)
* Most Efficient Model (lowest cost per 1M tokens or per-usage unit)

**Visualizations:**

* **Cost Distribution by Model**: Pie chart showing spending by model
* **Model Cost Comparison**: Bar chart comparing current vs previous period costs

**Model Table Columns:**

* **Model**: Model name with ⭐ star indicator for most efficient model
* **Provider**: AI provider
* **Cost/1M**: Cost per million tokens or provider-specific unit
* **Cost**: Total spending
* **Requests**: Number of requests (note: Anthropic does not provide request counts via their API; fal.ai/Runway may show per-usage units)
* **Avg Tokens/Req**: Average tokens per request (only available for providers that report request counts)
* **Trend**: Period-over-period cost change
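Cost/1M is total spend normalized to a million tokens, which makes models with very different volumes comparable. A sketch of the metric and the "most efficient" ranking (an illustration of the calculation, not the dashboard's exact code):

```python
def cost_per_million_tokens(total_cost: float, total_tokens: int) -> float:
    """Normalize spend to cost per 1M tokens, the metric used to compare models."""
    if total_tokens == 0:
        raise ValueError("no tokens processed")
    return total_cost / total_tokens * 1_000_000


def most_efficient(models: dict[str, tuple[float, int]]) -> str:
    """Return the model with the lowest cost per 1M tokens (the starred row).

    models maps model name -> (total_cost, total_tokens).
    """
    return min(models, key=lambda m: cost_per_million_tokens(*models[m]))
```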

**Use Cases:**

* Compare GPT-5 vs Claude costs for similar tasks
* Identify opportunities to switch to more efficient models
* Track model cost trends over time
* Optimize model selection based on cost-per-token or cost-per-credit metrics

***

### 4. Workspace Management

Manage workspace names, view detailed metadata, and track workspace history.

<figure><img src="https://2470865788-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSUfCzMW8qWeXstipFXEh%2Fuploads%2Fgit-blob-6546ab802489d30d4e5548f952b09c9ffb7d9c29%2FProvider%20Dashboard%20-%20Workspace%20Management.png?alt=media" alt=""><figcaption><p>Workspace Management tab for renaming workspaces and viewing metadata</p></figcaption></figure>

**Features:**

* **Rename Workspaces**: Assign custom names to provider workspaces for easier identification
* **View Metadata**: See workspace IDs, provider names, creation dates, and activity status
* **Track History**: View name change history and revert to previous names if needed
* **Search & Filter**: Find workspaces by name
* **Bulk Management**: Manage multiple workspaces efficiently

**Workspace Metadata Table:**

* **Workspace Name**: Current display name (editable inline)
* **Provider**: AI provider
* **Workspace ID**: Provider's internal workspace identifier
* **Provider Name**: Original name from provider
* **Status**: Active or inactive
* **First Seen**: Date workspace was first detected
* **Last Seen**: Most recent activity date
* **Actions**: Edit name, view history, revert changes

**Workspace History:** Click "View History" to see all name changes for a workspace, including:

* Previous names
* Change timestamps
* Ability to revert to any previous name

**Why Rename Workspaces?**

1. Provider-generated workspace names (like "proj\_abc123xyz") are often cryptic. Custom names like "Production API" or "Customer Support Bot" make cost tracking more intuitive for your team.
2. Combine workspaces across providers under one name. For example, you could rename an OpenAI project and an Anthropic workspace both to "Customer Support" to see their combined costs and usage.

***

## Provider Filtering

Filter data by specific AI providers or view all providers combined:

* **All Providers** (default): Aggregate view across all connected providers
* **Anthropic**: Claude models and workspaces
* **OpenAI**: GPT models, DALL-E images, and projects
* **AWS Bedrock**: Foundation models accessed via Amazon's managed service
* **Google Vertex AI**: Gemini models and Google Cloud AI services
* **fal.ai**: 600+ models including image, video, and LLMs
* **Runway**: Video and image generation models
* **OpenRouter**: Multi-provider gateway with 200+ models
* **LiteLLM**: Self-hosted proxy usage across all configured providers

Provider filtering is available on Workspaces, API Key Analytics, and Model Efficiency tabs.

***

## View Sync Logs

The Provider Dashboard includes a **View Sync Logs** feature to help you troubleshoot data synchronization issues and verify that your provider connections are working correctly.

### Accessing Sync Logs

Click the **View Sync Logs** button in the Provider Dashboard header to open the sync log viewer.

### What Sync Logs Show

* **Sync Timestamps**: When each sync occurred
* **Provider Status**: Success or failure status for each provider
* **Data Retrieved**: Summary of workspaces, API keys, and usage data synced
* **Error Details**: Specific error messages when syncs fail

### Common Sync Issues

| Issue                    | Possible Cause             | Resolution                                     |
| ------------------------ | -------------------------- | ---------------------------------------------- |
| No data synced           | Invalid API credentials    | Re-authenticate via Manage AI Accounts         |
| Partial data             | Rate limiting              | Wait and retry, or contact support             |
| Stale data               | Sync not running           | Click Refresh Data or check account connection |
| Missing provider         | Not connected              | Add provider via Manage AI Accounts            |
| Delayed data (Vertex AI) | BigQuery export processing | Expected 24-48h delay for Google Vertex AI     |

***

## OpenAI Image Cost Tracking

The Provider Dashboard includes specialized cost tracking for OpenAI's image generation services (DALL-E).

### Image Cost Visibility

* **Dedicated Image Costs**: View image generation costs separately from text completion costs
* **Model Breakdown**: See costs by DALL-E model version (DALL-E 2, DALL-E 3, etc.)
* **Resolution Tracking**: Track costs by image resolution and quality settings
* **Usage Trends**: Monitor image generation volume and spending over time

### Where to Find Image Costs

Image costs appear in:

* **Model Efficiency tab**: DALL-E models listed with cost-per-image metrics
* **Workspace tab**: Image costs included in workspace totals
* **API Key Analytics**: Image generation tracked by API key

### Image Cost Metrics

| Metric           | Description                           |
| ---------------- | ------------------------------------- |
| Cost per Image   | Average cost per generated image      |
| Total Image Cost | Sum of all image generation costs     |
| Image Count      | Number of images generated            |
| Resolution Mix   | Distribution of standard vs HD images |

***

## Data Refresh

### Automatic Refresh

Provider data syncs automatically from connected AI provider accounts on a regular schedule.

### Manual Refresh

Click the "Refresh Data" button to trigger an immediate sync from provider billing systems. This is useful when:

* You've just added new API keys or workspaces
* You want the latest cost data before making decisions
* You're troubleshooting discrepancies

**Note**: Manual refresh may take 30-60 seconds depending on the amount of data being synced.

***

## Exporting Data

Export any table view to CSV for further analysis, reporting, or integration with other tools:

1. Apply desired filters (time period, provider, search, cost range)
2. Click "Export CSV" button
3. CSV file downloads with current filtered data

CSV exports include all visible columns and respect current sort order.

***

## Best Practices

### Start with Workspace Overview

Begin by reviewing the Workspaces tab to understand overall spending patterns and identify high-cost workspaces.

### Rename Workspaces Early

Use Workspace Management to assign meaningful names to provider workspaces as soon as they're created. This makes cost tracking more intuitive for your entire team.

### Monitor Model Efficiency Regularly

Check the Model Efficiency tab weekly to identify opportunities to switch to more cost-effective models without sacrificing quality.

### Track API Key Usage

Use API Key Analytics to ensure API keys are being used as intended and to identify keys that may need rotation or deactivation.

### Set Up Alerts

Combine Provider Dashboard insights with [Cost & Performance Alerts](https://docs.revenium.io/cost-and-performance-alerts) to get notified when spending exceeds thresholds.

### Export for Reporting

Export data to CSV for monthly cost reports, budget planning, or sharing with finance teams.

***

## Common Scenarios

### Scenario 1: Identifying Cost Spikes

**Goal**: Understand why AI costs increased 50% this month

**Workflow**:

1. Open Workspaces tab, set period to "Last 30 days"
2. Review "Current vs Previous Period" chart to identify which workspaces increased
3. Click on high-growth workspace to see details
4. Switch to API Key Analytics tab, filter by that workspace
5. Identify specific API keys driving the increase
6. Switch to Model Efficiency to see if model mix changed

### Scenario 2: Managing Team API Keys

**Goal**: Track which team members or projects are using which API keys

**Workflow**:

1. Use Workspace Management to rename workspaces by team/project
2. Open API Key Analytics tab
3. Search for specific team names or projects
4. Review cost and usage by API key
5. Identify unused keys (0 requests) for potential deactivation
6. Export data for team cost allocation

### Scenario 3: Monthly Cost Reporting

**Goal**: Generate monthly AI spending report for finance team

**Workflow**:

1. Set time period to previous month (custom date range)
2. Export Workspaces data to CSV
3. Export Model Efficiency data to CSV
4. Review trend indicators for period-over-period changes
5. Take screenshots of key charts for presentation
6. Combine with Budget Monitoring data for complete picture

***

## Summary

The Provider Dashboard provides comprehensive visibility into AI provider spending with workspace-level granularity, API key tracking, model efficiency analysis, and workspace management capabilities. By syncing directly with provider billing systems, it ensures accurate cost tracking and enables teams to optimize spending, manage workspaces effectively, and make data-driven decisions about model selection and API key usage.
