For AI Agents

Access Revenium documentation from AI agents and coding assistants — via Context7 MCP for automatic retrieval, or llms.txt for direct access.

Revenium documentation is available in agent-friendly formats, so AI coding assistants can generate accurate integration code using current APIs — not outdated training data.

Available Resources

| Resource | Access Method | Best For |
| --- | --- | --- |
| Context7 MCP | Automatic via MCP protocol | Claude Code, Cursor, Windsurf, IDE agents |
| llms.txt | Paste URL into conversation | ChatGPT, Claude.ai, Gemini, browser-based AI |
| OpenAPI Specs | Fetch from readme.io | Schema validation, code generation |


Context7 MCP (Automatic)

Context7 provides automatic documentation retrieval via MCP (Model Context Protocol). Your AI agent pulls relevant Revenium docs on demand, with no manual steps needed.

Supported Agents

Any AI coding assistant that supports MCP:

  • Claude Code (Anthropic)

  • Cursor

  • Windsurf

  • Continue

  • Cline

  • GitHub Copilot (via MCP)

Setup

Add the Context7 MCP server to your agent's configuration. For example, in Claude Code:
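A minimal sketch of the server entry (for Claude Code this goes in a `.mcp.json` file in your project root; the package name `@upstash/context7-mcp` is Context7's published npm package, so verify it against Context7's own setup docs):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```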

Refer to your tool's MCP documentation for the configuration file location.

Usage

Once configured, mention Revenium in your prompt:

"Using context7, show me how to integrate Revenium metering with OpenAI"

"Use context7 to look up the Revenium AI completion metering API"

The agent retrieves current documentation snippets and uses them to generate accurate code.

Revenium Libraries on Context7

These are the official Revenium documentation sources indexed on Context7:

| Library | Context7 ID | Content |
| --- | --- | --- |
| API Reference | revenium/revenium_readme_io | Complete API reference with request/response schemas |
| Knowledge Base | revenium/revenium_io | Guides, quickstarts, and platform concepts |

When querying Context7, use revenium as the search term to find these libraries.


llms.txt (Direct Access)

For chat-based AI tools (ChatGPT, Claude.ai, Gemini) or when Context7 is unavailable, use the llms.txt URL directly:

https://revenium.readme.io/llms.txt

Paste this URL into your conversation to give the AI access to our full API reference in a single pass.

Agent-Friendly OpenAPI Specs

For programmatic access to full request/response schemas, fetch the OpenAPI specs from readme.io.


Agent Guidelines

The following sections provide context that helps AI agents work effectively with the Revenium platform. This content is indexed by Context7 and available automatically when agents query Revenium documentation.

What is Revenium?

Revenium is an AI Economic Control System — the system of record for AI usage, cost, and unit economics. It captures every AI transaction with full attribution and provides the controls to enforce pricing and economic guardrails in real time.

Three capability pillars:

| Pillar | Question It Answers | What It Does |
| --- | --- | --- |
| Economic Observability | What happened? | Captures every AI transaction (model calls, agent actions, tool invocations) with full attribution to customer, feature, and workflow |
| Economic Intelligence | Was it worth it? | Connects AI activity to business outcomes: cost per customer, margin per feature, ROI per workflow |
| Economic Control | What do we do about it? | Enforces profitability rules in real time: spending limits, alerts, blocking unprofitable usage |

Integration Hierarchy

When recommending a Revenium integration, start at the highest coverage level that fits the developer's stack:

| Level | What It Does | When to Use |
| --- | --- | --- |
| Gateway (LiteLLM) | One integration, every model metered | Already routing through a proxy |
| MCP Server | Zero-code metering via MCP protocol | Claude Desktop, Cursor, Windsurf, any MCP host |
| Unified Middleware (Node) | One package, multiple providers | Multi-provider Node.js app without a gateway |
| CLI Tools | Meter developer AI tools | Claude Code, Gemini CLI usage tracking |
| Framework Integration | Plug into LangChain, Griptape, n8n | Already using one of these frameworks |
| Provider Middleware | Drop-in wrapper per provider SDK | Single provider, maximum control |
| Media Generation | Image/video generation metering | Fal.ai or Runway ML |
| Core SDKs | Direct API metering, full payload control | No middleware for your stack |
| REST API | HTTP POST, no SDK needed | Any language, simplest possible integration |

SDKs are available on PyPI, npm, and GitHub. See Integration Options for AI Metering for the full SDK matrix.

API Quick Reference

Authentication: All API calls use the X-API-KEY header with a Revenium API key (prefix hak_).

Metering endpoints (data ingestion):

| Endpoint | Purpose |
| --- | --- |
| POST /meter/v2/ai/completions | LLM completions: tokens, cost, model, latency |
| POST /meter/v2/ai/images | Image generation: count, resolution, cost |
| POST /meter/v2/ai/audio | Audio processing: transcription, TTS, translation |
| POST /meter/v2/ai/video | Video processing operations |
| POST /meter/v2/ai/tools | Non-token AI costs (function calls, external tools) |

Key fields for AI completion metering:

| Field | Required | Purpose |
| --- | --- | --- |
| model | Yes | AI model identifier (e.g., gpt-4o, claude-sonnet-4-20250514) |
| inputTokenCount | Yes | Input/prompt tokens |
| outputTokenCount | Yes | Output/completion tokens |
| totalTokenCount | Yes | Sum of all token types |
| provider | Yes | AI provider (e.g., OpenAI, Anthropic, Google) |
| requestTime | Yes | ISO 8601 timestamp |
| organizationName | No | Customer attribution; always send for per-customer tracking |
| subscriber | No | User attribution: { id, email, credential: { name, value } } |
| traceId | No | Groups related calls in the same workflow |
| taskType | No | Freeform category (e.g., chat, summarization, code-generation) |

For the complete schema with all optional fields, see the AI Completion Metering API documentation.
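As a concrete sketch, the required and optional fields above map to a JSON payload like the following. The API host shown is an assumption (use the base URL for your Revenium account), and the token counts are illustrative:

```python
import json
import urllib.request
from datetime import datetime, timezone

# Assumed host -- substitute the base URL from your Revenium account.
API_URL = "https://api.revenium.io/meter/v2/ai/completions"
API_KEY = "hak_your_key_here"  # Revenium API keys use the hak_ prefix

payload = {
    # Required fields
    "model": "gpt-4o",
    "provider": "OpenAI",
    "inputTokenCount": 1200,
    "outputTokenCount": 350,
    "totalTokenCount": 1550,  # sum of all token types
    "requestTime": datetime.now(timezone.utc).isoformat(),
    # Optional attribution fields
    "organizationName": "acme-corp",  # per-customer cost tracking
    "subscriber": {"id": "user-42", "email": "dev@acme.example"},
    "traceId": "workflow-7f3a",       # groups related calls in one workflow
    "taskType": "chat",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-API-KEY": API_KEY},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to send with a real API key
```

The same header and payload shape applies to the other metering endpoints; only the path and service-specific fields change.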

Non-Token AI Cost Tracking

Revenium tracks costs beyond traditional token-based LLM usage:

| Service Type | Billing Unit | Supported Providers |
| --- | --- | --- |
| Image Generation | Per image, per resolution | DALL-E, Fal.ai, Midjourney |
| Video Generation | Per second, per credit | Runway, Kling, Mochi |
| Audio Processing | Per minute, per character | ElevenLabs, OpenAI TTS/Whisper |
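For non-token services the record shape is analogous to completion metering. The sketch below builds an image-generation payload; field names beyond model, provider, and requestTime (such as imageCount and resolution) are illustrative assumptions, so check the Image Metering API reference for the actual schema:

```python
import json
from datetime import datetime, timezone

# Illustrative image-generation metering record. imageCount and
# resolution are assumed field names, not the confirmed schema.
image_payload = {
    "model": "dall-e-3",
    "provider": "OpenAI",
    "requestTime": datetime.now(timezone.utc).isoformat(),
    "imageCount": 2,             # assumed: number of images generated
    "resolution": "1024x1024",   # assumed: supports per-resolution billing
    "organizationName": "acme-corp",
}
body = json.dumps(image_payload).encode("utf-8")
# POST body to /meter/v2/ai/images with the X-API-KEY header, as above
```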

For hands-on SDK integration, see Integration Options for AI Metering. For provider connections and API credentials, see Connections.
