📊 Provider Dashboard

Comprehensive cost analytics and workspace management for AI provider spending across OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, fal.ai, and Runway.

The Provider Dashboard gives teams complete visibility into AI spending across all providers, workspaces, API keys, and models. Track costs in real time, identify efficiency opportunities, and manage workspace configurations, all from a unified interface designed for both technical and financial stakeholders.

Unlike the AI Analytics module, which provides transaction-level analysis across all providers based on data captured by our SDK integrations, the Provider Dashboard is purpose-built for provider-specific cost tracking and workspace management, using data synced automatically from your AI provider accounts.

The data here will help you see what percentage of spending is currently captured via Revenium's SDK integrations and what percentage is coming from other sources.


How It Works

The Provider Dashboard automatically syncs billing and usage data directly from your AI provider accounts to provide accurate, provider-native cost tracking.

Supported Providers

  • OpenAI – GPT models, DALL-E, embeddings, and all OpenAI API services

  • Anthropic – Claude models and all Anthropic API services

  • AWS Bedrock – Amazon's managed AI service with access to multiple foundation models (synced via AWS Cost Explorer API, requires IAM credentials)

  • Google Vertex AI – Google Cloud's AI platform with Gemini and other models (synced via BigQuery Billing Export)

  • fal.ai – 600+ AI models, including image, video, audio, and language models (synced via the fal.ai Platform API; requires an admin key)

  • Runway – Video and image generation (Gen-4, Gen-4 Turbo, etc.), with usage and cost tracking (synced via Runway API key)

Note: Google Vertex AI cost data has a 24-48 hour delay due to BigQuery export processing time. This is expected behavior, not a sync issue.

Data Synchronization

  1. Connect Provider Accounts: Link your AI provider accounts via the "Manage AI Accounts" button

  2. Automatic Sync: Revenium syncs workspace, API key, and usage data from provider billing systems

  3. Real-Time Updates: Data refreshes automatically, with manual refresh available

  4. Historical Tracking: Compare current period costs against previous periods to identify trends


Connecting Provider Accounts

To start syncing cost data, connect your AI provider accounts via Settings → Manage AI Accounts → AI Platforms. Each provider requires specific credentials.

OpenAI Setup

Credential Format: sk-xxxxxxxxxxxxxxxxxxxxxxxx

Setup Steps:

  1. Ensure you have Admin permissions on your OpenAI organization

  2. Click "Create new secret key"

  3. Copy the key (starts with sk-)

  4. In Revenium, go to Settings → Manage AI Accounts → AI Platforms

  5. Click Add Provider and select OpenAI

  6. Paste your API key and save

What Gets Synced:

  • All projects/workspaces in your organization

  • API key usage and costs

  • Model-level spending (GPT-4, GPT-4o, DALL-E, etc.)

  • Token counts and request volumes


Anthropic Setup

Credential Format: sk-ant-xxxxxxxxxxxxxxxxxxxxxxxx

Setup Steps:

  1. Click "Create Key"

  2. Copy the key (starts with sk-ant-)

  3. In Revenium, go to Settings → Manage AI Accounts → AI Platforms

  4. Click Add Provider and select Anthropic

  5. Paste your API key and save

What Gets Synced:

  • All workspaces in your organization

  • API key usage and costs

  • Model-level spending (Claude 3.5, Claude 3, etc.)

  • Token counts

Note: Request counts may show as "N/A" for Anthropic data. While Anthropic tracks requests in their console, this metric is not available through their billing API.


AWS Bedrock Setup

Credential Format: JSON object with IAM credentials
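A minimal sketch of the likely JSON shape. Only the AKIA-prefixed accessKeyId (see the Credential Quick Reference) and the us-east-1 region are documented here; the secretAccessKey field name is an assumption, so confirm the exact payload in the Add Provider dialog:

```json
{
  "accessKeyId": "AKIA...",
  "secretAccessKey": "your-secret-access-key",
  "region": "us-east-1"
}
```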

Setup Steps:

  1. Create an IAM User with Billing Access

  2. Attach the Required Policy

    • Attach the AWSBillingReadOnlyAccess managed policy

    • Or create a custom policy with ce:GetCostAndUsage permission

  3. Create Access Keys

    • Go to the user's Security credentials tab

    • Click "Create access key"

    • Select "Third-party service" as the use case

    • Copy both the Access Key ID (starts with AKIA) and Secret Access Key

  4. Add to Revenium

    • In Revenium, go to Settings → Manage AI Accounts → AI Platforms

    • Click Add Provider and select AWS Bedrock

    • Paste the JSON credentials

    • Always use us-east-1 for the region field

Why us-east-1? The AWS Cost Explorer API only operates in the us-east-1 region, regardless of where your Bedrock resources are deployed.
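For the custom-policy route in step 2, a minimal least-privilege policy granting only the ce:GetCostAndUsage permission named above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ce:GetCostAndUsage",
      "Resource": "*"
    }
  ]
}
```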

What Gets Synced:

  • Bedrock model usage costs from AWS Cost Explorer

  • Model-level spending breakdown

  • Daily cost aggregations

Why IAM Credentials? Revenium uses the AWS Cost Explorer API to retrieve billing data. This API requires IAM credentials with billing permissions; Bedrock's model invocation keys cannot access Cost Explorer.


Google Vertex AI Setup

Credential Format: Google Cloud Service Account JSON key

Setup Steps:

  1. Enable BigQuery Billing Export (if not already enabled)

    • Select your billing account

    • Under BigQuery export, click Edit settings

    • Enable Standard usage cost export (recommended)

    • Select or create a BigQuery dataset (default name: billing_export)

    • Save and wait 24-48 hours for data to populate

  2. Create a Service Account

    • Select the project you want to connect. You can reuse an existing credential if you have one, but creating a new one is recommended for security.

    • Click "Create Service Account"

    • Name it something like revenium-billing-reader

  3. Grant Required Roles

    • BigQuery Data Viewer (roles/bigquery.dataViewer) – to read billing data

    • BigQuery Job User (roles/bigquery.jobUser) – to run queries

  4. Create and Download JSON Key

    • Click on your new service account

    • Go to the Keys tab

    • Click "Add Key" β†’ "Create new key"

    • Select JSON format

    • Download the key file

  5. Add to Revenium

    • In Revenium, go to Settings → Manage AI Accounts → AI Platforms

    • Click Add Provider and select Google Vertex AI

    • Paste the entire contents of the JSON key file

    • Save

What Gets Synced:

  • Vertex AI model usage from BigQuery billing export

  • Model-level spending (Gemini, PaLM, etc.)

  • Project-level cost aggregations

Auto-Discovery: Revenium automatically finds your billing export table (tables starting with gcp_billing_export_v1_). If you use a non-default dataset name, you can add "billing_dataset": "your_dataset_name" to your service account JSON before pasting.
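For example, a service account key with the optional dataset hint added might look like this (standard key fields abbreviated; my-gcp-project is a placeholder project ID):

```json
{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...",
  "client_email": "revenium-billing-reader@my-gcp-project.iam.gserviceaccount.com",
  "billing_dataset": "your_dataset_name"
}
```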

24-48 Hour Delay: Google Vertex AI cost data has a 24-48 hour delay because it comes from BigQuery billing exports, which are processed in batch. This is expected behavior; if you don't see recent data immediately, wait a day and check again.


fal.ai Setup

Credential Format: {uuid}:{hex} (e.g., 6efcdb9c-a9ce-464b-adc3-2561380ac473:38cab27415bf376b8e0e63445a5d92cb)
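This shape can be checked with a simple regex. A hypothetical helper, not part of Revenium; the 32-character hex length is inferred from the example above:

```python
import re

# fal.ai admin keys look like a UUID, a colon, then a hex string
# (32 characters in the documented example)
FAL_KEY_PATTERN = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}:[0-9a-f]{32}$"
)

def looks_like_fal_key(key: str) -> bool:
    """Return True if the string matches the {uuid}:{hex} shape."""
    return FAL_KEY_PATTERN.fullmatch(key) is not None
```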

Setup Steps:

  1. Click "Create new key" and select ADMIN permissions

  2. Copy your new key (format: uuid:hex)

  3. In Revenium, go to Settings → Manage AI Accounts → AI Platforms

  4. Click Add Provider and select fal.ai

  5. Paste your admin API key and save

What Gets Synced:

  • Usage logs and costs for all supported fal.ai endpoints: image, video, audio, and LLMs

  • Per-endpoint, per-day cost and usage breakdown

  • Time-series and summary usage per workspace (based on credential metadata)


Runway Setup

Credential Format: key_xxxxxxxxxxxxxxxxxxxxxxxx (API key from Runway)

Setup Steps:

  1. Click "Create new API key" (keys start with key_)

  2. Copy your API key

  3. In Revenium, go to Settings → Manage AI Accounts → AI Platforms

  4. Click Add Provider and select Runway

  5. Paste your API key and save

What Gets Synced:

  • Usage and cost breakdown by model (e.g., Gen-4, Gen-4 Turbo)

  • Video and image generation counts

  • Per-day and per-model credit usage (converted to dollars; 1 credit = $0.01)
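The credit-to-dollar conversion above is simple arithmetic, sketched here as a hypothetical helper:

```python
RUNWAY_DOLLARS_PER_CREDIT = 0.01  # 1 credit = $0.01, per the conversion above

def runway_credits_to_dollars(credits: int) -> float:
    """Convert Runway credit usage to a dollar cost."""
    return credits * RUNWAY_DOLLARS_PER_CREDIT
```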


Credential Quick Reference

| Provider | Format | Key Pattern | Where to Get |
| --- | --- | --- | --- |
| OpenAI | API Key | sk-... | – |
| Anthropic | API Key | sk-ant-... | – |
| AWS Bedrock | JSON | {"accessKeyId":"AKIA..."} | AWS Console → IAM → Users |
| Google Vertex AI | JSON | {"type":"service_account"...} | GCP Console → IAM → Service Accounts |
| fal.ai | API Key | {uuid}:{hex} | – |
| Runway | API Key | key_... | – |
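The key patterns above can be spot-checked with simple shape tests. A hypothetical helper that validates only the documented prefixes, not whether a key is actually live:

```python
def credential_shape_ok(provider: str, credential: str) -> bool:
    """Cheap shape check against the documented key patterns."""
    checks = {
        "openai": lambda c: c.startswith("sk-") and not c.startswith("sk-ant-"),
        "anthropic": lambda c: c.startswith("sk-ant-"),
        "runway": lambda c: c.startswith("key_"),
        "aws_bedrock": lambda c: c.lstrip().startswith("{"),            # JSON blob
        "google_vertex": lambda c: '"service_account"' in c,            # key-file JSON
        "fal": lambda c: c.count(":") == 1 and "-" in c.split(":")[0],  # {uuid}:{hex}
    }
    check = checks.get(provider)
    return check(credential) if check else False
```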


Core Capabilities

  • Multi-Provider Visibility: Track spending across OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, fal.ai, and Runway in one place

  • Workspace-Level Analytics: Monitor costs by provider workspace or project

  • API Key Tracking: Analyze spending and efficiency by individual API keys

  • Model Efficiency: Compare cost-per-token or per-unit across models to optimize model selection

  • Workspace Management: Rename workspaces, view metadata, and track usage history

  • Trend Analysis: Period-over-period comparisons with percentage change indicators

  • Cost Filtering: Filter by cost ranges, search by name, and export data to CSV


Dashboard Tabs

The Provider Dashboard is organized into four specialized views:

1. Workspaces

Monitor AI spending organized by provider workspaces (OpenAI projects, Anthropic workspaces, AWS Bedrock models, Google Vertex AI projects, fal.ai endpoints, Runway organization accounts, etc.).

Workspaces tab showing cost distribution, period comparison, and detailed workspace analytics

Summary Metrics:

  • Total Cost across all workspaces

  • Active Workspace count

  • Per-Provider cost breakdowns (e.g., Anthropic vs. OpenAI vs. fal.ai vs. Runway)

Visualizations:

  • Cost Distribution by Provider: Pie chart showing spending allocation across providers

  • Current vs Previous Period: Bar chart comparing workspace costs period-over-period

Workspace Table Columns:

  • Name: Workspace or project name (customizable via Workspace Management)

  • Provider: AI provider (with logo badge)

  • Cost: Total spending for the selected period

  • % of Total: Percentage of overall spending

  • Requests: Number of API requests or other usage units (e.g., credits, images)

  • Tokens: Total tokens processed (where applicable)

  • Trend: Period-over-period cost change with visual indicator (↑ red for increases, ↓ green for decreases)
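The trend column is a standard period-over-period percentage change; a sketch of how such an indicator can be computed (hypothetical helper, not Revenium's implementation):

```python
def cost_trend(current: float, previous: float) -> str:
    """Period-over-period change, formatted like the Trend column."""
    if previous == 0:
        return "new" if current > 0 else "flat"
    pct = (current - previous) / previous * 100
    arrow = "↑" if pct > 0 else ("↓" if pct < 0 else "→")
    return f"{arrow} {pct:+.1f}%"
```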

Features:

  • Search by workspace name

  • Filter by cost range (min/max)

  • Sort by any column

  • Export to CSV

  • Refresh data on demand


2. API Key Analytics

Track spending and efficiency metrics for individual API keys across all providers.

API Key Analytics tab displaying cost distribution, key comparison, and detailed API key metrics

Summary Metrics:

  • Total Cost across all API keys

  • Active API Key count

  • Total Requests across all keys (may reflect custom units for fal.ai/Runway)

  • Average Cost per 1M Tokens or applicable unit

Visualizations:

  • Cost Distribution by API Key: Pie chart showing top spending API keys

  • API Key Cost Comparison: Bar chart comparing current vs previous period costs by key

API Key Table Columns:

  • API Key: Key name and associated workspace/project

  • Key Hint: Partial key identifier (e.g., "sk-ant-api03-Z4C...EQAA")

  • Provider: AI provider with logo badge

  • Status: Key status (active, inactive)

  • Cost: Total spending for the period

  • Tokens: Total tokens processed (if applicable, otherwise credits/quantity)

  • Requests: Number of API requests or per-model usages

  • Cost/1M: Cost per million tokens or per 1,000 credits (see provider-specific unit)

  • Trend: Period-over-period cost change

Use Cases:

  • Identify high-cost API keys that may need optimization

  • Track API key usage across teams, projects, or individuals

  • Monitor cost efficiency (cost per 1M tokens/credits) by key

  • Detect unused or underutilized keys


3. Model Efficiency

Compare cost efficiency across AI models to optimize model selection and reduce spending.

Summary Metrics:

  • Total Cost across all models

  • Model count

  • Total Requests (unit varies by provider)

  • Most Efficient Model (lowest cost per 1M tokens or per-usage unit)

Visualizations:

  • Cost Distribution by Model: Pie chart showing spending by model

  • Model Cost Comparison: Bar chart comparing current vs previous period costs

Model Table Columns:

  • Model: Model name with ⭐ star indicator for the most efficient model

  • Provider: AI provider

  • Cost/1M: Cost per million tokens or provider-specific unit

  • Cost: Total spending

  • Requests: Number of requests (note: Anthropic does not provide request counts via their API; fal.ai/Runway may show per-usage units)

  • Avg Tokens/Req: Average tokens per request (only available for providers that report request counts)

  • Trend: Period-over-period cost change
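The Cost/1M figure and the most-efficient pick reduce to simple arithmetic; a sketch assuming each model reports a total cost and total token count (illustrative data shape, not Revenium's API):

```python
def cost_per_million(total_cost: float, total_tokens: int) -> float:
    """Dollars per one million tokens."""
    return total_cost / total_tokens * 1_000_000

def most_efficient(models: dict[str, tuple[float, int]]) -> str:
    """Name of the model with the lowest cost per 1M tokens.

    models maps name -> (total_cost, total_tokens).
    """
    return min(models, key=lambda m: cost_per_million(*models[m]))
```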

Use Cases:

  • Compare GPT-5 vs Claude costs for similar tasks

  • Identify opportunities to switch to more efficient models

  • Track model cost trends over time

  • Optimize model selection based on cost-per-token or cost-per-credit metrics


4. Workspace Management

Manage workspace names, view detailed metadata, and track workspace history.

Workspace Management tab for renaming workspaces and viewing metadata

Features:

  • Rename Workspaces: Assign custom names to provider workspaces for easier identification

  • View Metadata: See workspace IDs, provider names, creation dates, and activity status

  • Track History: View name change history and revert to previous names if needed

  • Search & Filter: Find workspaces by name

  • Bulk Management: Manage multiple workspaces efficiently

Workspace Metadata Table:

  • Workspace Name: Current display name (editable inline)

  • Provider: AI provider

  • Workspace ID: Provider's internal workspace identifier

  • Provider Name: Original name from provider

  • Status: Active or inactive

  • First Seen: Date workspace was first detected

  • Last Seen: Most recent activity date

  • Actions: Edit name, view history, revert changes

Workspace History: Click "View History" to see all name changes for a workspace, including:

  • Previous names

  • Change timestamps

  • Ability to revert to any previous name

Why Rename Workspaces?

  1. Provider-generated workspace names (like "proj_abc123xyz") are often cryptic. Custom names like "Production API" or "Customer Support Bot" make cost tracking more intuitive for your team.

  2. Combine workspaces across providers under one name. For example, you could rename both an OpenAI project and an Anthropic workspace to "Customer Support" to see combined costs and usage.


Provider Filtering

Filter data by specific AI providers or view all providers combined:

  • All Providers (default): Aggregate view across all connected providers

  • Anthropic: Claude models and workspaces

  • OpenAI: GPT models, DALL-E images, and projects

  • AWS Bedrock: Foundation models accessed via Amazon's managed service

  • Google Vertex AI: Gemini models and Google Cloud AI services

  • fal.ai: 600+ models including image, video, and LLMs

  • Runway: Video and image generation models

Provider filtering is available on Workspaces, API Key Analytics, and Model Efficiency tabs.


View Sync Logs

The Provider Dashboard includes a View Sync Logs feature to help you troubleshoot data synchronization issues and verify that your provider connections are working correctly.

Accessing Sync Logs

Click the View Sync Logs button in the Provider Dashboard header to open the sync log viewer.

What Sync Logs Show

  • Sync Timestamps: When each sync occurred

  • Provider Status: Success or failure status for each provider

  • Data Retrieved: Summary of workspaces, API keys, and usage data synced

  • Error Details: Specific error messages when syncs fail

Common Sync Issues

| Issue | Possible Cause | Resolution |
| --- | --- | --- |
| No data synced | Invalid API credentials | Re-authenticate via Manage AI Accounts |
| Partial data | Rate limiting | Wait and retry, or contact support |
| Stale data | Sync not running | Click Refresh Data or check account connection |
| Missing provider | Not connected | Add provider via Manage AI Accounts |
| Delayed data (Vertex AI) | BigQuery export processing | Expected 24-48h delay for Google Vertex AI |


OpenAI Image Cost Tracking

The Provider Dashboard includes specialized cost tracking for OpenAI's image generation services (DALL-E).

Image Cost Visibility

  • Dedicated Image Costs: View image generation costs separately from text completion costs

  • Model Breakdown: See costs by DALL-E model version (DALL-E 2, DALL-E 3, etc.)

  • Resolution Tracking: Track costs by image resolution and quality settings

  • Usage Trends: Monitor image generation volume and spending over time

Where to Find Image Costs

Image costs appear in:

  • Model Efficiency tab: DALL-E models listed with cost-per-image metrics

  • Workspace tab: Image costs included in workspace totals

  • API Key Analytics: Image generation tracked by API key

Image Cost Metrics

| Metric | Description |
| --- | --- |
| Cost per Image | Average cost per generated image |
| Total Image Cost | Sum of all image generation costs |
| Image Count | Number of images generated |
| Resolution Mix | Distribution of standard vs HD images |
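These metrics derive directly from per-image records; a sketch assuming each record carries a cost and a resolution label (hypothetical data shape):

```python
def image_cost_summary(images: list[dict]) -> dict:
    """Aggregate per-image records into the metrics above.

    Each record is assumed to look like {"cost": 0.04, "resolution": "standard"}.
    """
    total = sum(img["cost"] for img in images)
    count = len(images)
    mix = {}
    for img in images:
        mix[img["resolution"]] = mix.get(img["resolution"], 0) + 1
    return {
        "total_image_cost": total,
        "image_count": count,
        "cost_per_image": total / count if count else 0.0,
        "resolution_mix": mix,
    }
```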


Data Refresh

Automatic Refresh

Provider data syncs automatically from connected AI provider accounts on a regular schedule.

Manual Refresh

Click the "Refresh Data" button to trigger an immediate sync from provider billing systems. This is useful when:

  • You've just added new API keys or workspaces

  • You want the latest cost data before making decisions

  • You're troubleshooting discrepancies

Note: Manual refresh may take 30-60 seconds depending on the amount of data being synced.


Exporting Data

Export any table view to CSV for further analysis, reporting, or integration with other tools:

  1. Apply desired filters (time period, provider, search, cost range)

  2. Click "Export CSV" button

  3. CSV file downloads with current filtered data

CSV exports include all visible columns and respect current sort order.
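The export behavior described above (visible columns only, current sort order preserved) can be sketched with the standard library; the row/column shapes are hypothetical:

```python
import csv
import io

def export_csv(rows: list[dict], columns: list[str], sort_by: str) -> str:
    """Write only the visible columns, in the current sort order, as CSV text."""
    rows = sorted(rows, key=lambda r: r[sort_by], reverse=True)
    buf = io.StringIO()
    # extrasaction="ignore" drops any columns hidden from the current view
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```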


Best Practices

Start with Workspace Overview

Begin by reviewing the Workspaces tab to understand overall spending patterns and identify high-cost workspaces.

Rename Workspaces Early

Use Workspace Management to assign meaningful names to provider workspaces as soon as they're created. This makes cost tracking more intuitive for your entire team.

Monitor Model Efficiency Regularly

Check the Model Efficiency tab weekly to identify opportunities to switch to more cost-effective models without sacrificing quality.

Track API Key Usage

Use API Key Analytics to ensure API keys are being used as intended and to identify keys that may need rotation or deactivation.

Set Up Alerts

Combine Provider Dashboard insights with Cost & Performance Alerts to get notified when spending exceeds thresholds.

Export for Reporting

Export data to CSV for monthly cost reports, budget planning, or sharing with finance teams.


Common Scenarios

Scenario 1: Identifying Cost Spikes

Goal: Understand why AI costs increased 50% this month

Workflow:

  1. Open Workspaces tab, set period to "Last 30 days"

  2. Review "Current vs Previous Period" chart to identify which workspaces increased

  3. Click on high-growth workspace to see details

  4. Switch to API Key Analytics tab, filter by that workspace

  5. Identify specific API keys driving the increase

  6. Switch to Model Efficiency to see if model mix changed

Scenario 2: Managing Team API Keys

Goal: Track which team members or projects are using which API keys

Workflow:

  1. Use Workspace Management to rename workspaces by team/project

  2. Open API Key Analytics tab

  3. Search for specific team names or projects

  4. Review cost and usage by API key

  5. Identify unused keys (0 requests) for potential deactivation

  6. Export data for team cost allocation

Scenario 3: Monthly Cost Reporting

Goal: Generate monthly AI spending report for finance team

Workflow:

  1. Set time period to previous month (custom date range)

  2. Export Workspaces data to CSV

  3. Export Model Efficiency data to CSV

  4. Review trend indicators for period-over-period changes

  5. Take screenshots of key charts for presentation

  6. Combine with Budget Monitoring data for complete picture


Summary

The Provider Dashboard provides comprehensive visibility into AI provider spending with workspace-level granularity, API key tracking, model efficiency analysis, and workspace management capabilities. By syncing directly with provider billing systems, it ensures accurate cost tracking and enables teams to optimize spending, manage workspaces effectively, and make data-driven decisions about model selection and API key usage.
