For AI Agents
Access Revenium documentation from AI agents and coding assistants — via Context7 MCP for automatic retrieval, or llms.txt for direct access.
Revenium documentation is available in agent-friendly formats, so AI coding assistants can generate accurate integration code using current APIs — not outdated training data.
Available Resources
Context7 MCP: automatic retrieval via the MCP protocol. Best for Claude Code, Cursor, Windsurf, and other IDE agents.
llms.txt: paste the URL into a conversation. Best for ChatGPT, Claude.ai, Gemini, and browser-based AI.
OpenAPI Specs: fetch from readme.io. Best for schema validation and code generation.
Context7 (Recommended)
Context7 provides automatic documentation retrieval via MCP (Model Context Protocol). Your AI agent pulls relevant Revenium docs on demand — no manual steps needed.
Supported Agents
Any AI coding assistant that supports MCP:
Claude Code (Anthropic)
Cursor
Windsurf
Continue
Cline
GitHub Copilot (via MCP)
Setup
Add the Context7 MCP server to your agent's configuration. For example, in Claude Code:
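One common setup registers the community Context7 server via npx; the package name `@upstash/context7-mcp` is the commonly published one, but verify it against Context7's current documentation before running:

```shell
# Register the Context7 MCP server with Claude Code
# (package name assumed; other tools use a JSON MCP config instead of a CLI command)
claude mcp add context7 -- npx -y @upstash/context7-mcp
```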
Refer to your tool's MCP documentation for the configuration file location.
Usage
Once configured, mention Revenium in your prompt:
"Using context7, show me how to integrate Revenium metering with OpenAI"
"Use context7 to look up the Revenium AI completion metering API"
The agent retrieves current documentation snippets and uses them to generate accurate code.
Revenium Libraries on Context7
These are the official Revenium documentation sources indexed on Context7:
API Reference (revenium/revenium_readme_io): complete API reference with request/response schemas
Knowledge Base (revenium/revenium_io): guides, quickstarts, and platform concepts
When querying Context7, use revenium as the search term to find these libraries.
llms.txt (Direct Access)
For chat-based AI tools (ChatGPT, Claude.ai, Gemini) or when Context7 is unavailable, use the llms.txt URL directly:
https://revenium.readme.io/llms.txt
Paste this URL into your conversation to give the AI access to our full API reference in a single pass.
Agent-Friendly OpenAPI Specs
For programmatic access to full request/response schemas:
Platform API + Metering API specs — OpenAPI JSON, suitable for code generation and schema validation.
Agent Guidelines
The following sections provide context that helps AI agents work effectively with the Revenium platform. This content is indexed by Context7 and available automatically when agents query Revenium documentation.
What is Revenium?
Revenium is an AI Economic Control System — the system of record for AI usage, cost, and unit economics. It captures every AI transaction with full attribution and provides the controls to enforce pricing and economic guardrails in real time.
Three capability pillars:
Economic Observability ("What happened?"): captures every AI transaction (model calls, agent actions, tool invocations) with full attribution to customer, feature, and workflow.
Economic Intelligence ("Was it worth it?"): connects AI activity to business outcomes such as cost per customer, margin per feature, and ROI per workflow.
Economic Control ("What do we do about it?"): enforces profitability rules in real time, including spending limits, alerts, and blocking unprofitable usage.
Integration Hierarchy
When recommending a Revenium integration, start at the highest coverage level that fits the developer's stack:
Gateway (LiteLLM): one integration, every model metered. Use when you already route through a proxy.
MCP Server: zero-code metering via the MCP protocol. Use with Claude Desktop, Cursor, Windsurf, or any MCP host.
Unified Middleware (Node): one package, multiple providers. Use for a multi-provider Node.js app without a gateway.
CLI Tools: meter developer AI tools. Use for Claude Code and Gemini CLI usage tracking.
Framework Integration: plug into LangChain, Griptape, or n8n. Use when you already build on one of these frameworks.
Provider Middleware: drop-in wrapper per provider SDK. Use for a single provider with maximum control.
Media Generation: image/video generation metering. Use with Fal.ai or Runway ML.
Core SDKs: direct API metering with full payload control. Use when no middleware exists for your stack.
REST API: HTTP POST, no SDK needed. Works from any language; the simplest possible integration.
SDKs are available on PyPI, npm, and GitHub. See Integration Options for AI Metering for the full SDK matrix.
API Quick Reference
Authentication: All API calls use the X-API-KEY header with a Revenium API key (prefix hak_).
Metering endpoints (data ingestion):
POST /meter/v2/ai/completions: LLM completions (tokens, cost, model, latency)
POST /meter/v2/ai/images: image generation (count, resolution, cost)
POST /meter/v2/ai/audio: audio processing (transcription, TTS, translation)
POST /meter/v2/ai/video: video processing operations
POST /meter/v2/ai/tools: non-token AI costs (function calls, external tools)
Key fields for AI completion metering:
model (required): AI model identifier (e.g., gpt-4o, claude-sonnet-4-20250514)
inputTokenCount (required): input/prompt tokens
outputTokenCount (required): output/completion tokens
totalTokenCount (required): sum of all token types
provider (required): AI provider (e.g., OpenAI, Anthropic, Google)
requestTime (required): ISO 8601 timestamp
organizationName (optional): customer attribution; always send it for per-customer tracking
subscriber (optional): user attribution as { id, email, credential: { name, value } }
traceId (optional): groups related calls in the same workflow
taskType (optional): freeform category (e.g., chat, summarization, code-generation)
Common field name mistakes: Use actualImageCount (not imageCount) for image metering, and operationSubtype for audio direction. The metering API returns 201 Created even for unrecognized fields — always verify calculated costs, not just the HTTP status.
For the complete schema with all optional fields, see the AI Completion Metering API documentation.
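As a concrete sketch of the quick reference above, the snippet below assembles a minimal completion event and POSTs it with the X-API-KEY header. The helper names are our own, and the base URL shown in the usage comment is a placeholder; only the field names and the `hak_` key prefix come from the reference above.

```python
import json
import urllib.request
from datetime import datetime, timezone

def build_completion_event(model, provider, input_tokens, output_tokens, **optional):
    """Assemble the required fields for POST /meter/v2/ai/completions."""
    event = {
        "model": model,
        "provider": provider,
        "inputTokenCount": input_tokens,
        "outputTokenCount": output_tokens,
        "totalTokenCount": input_tokens + output_tokens,  # sum of all token types
        "requestTime": datetime.now(timezone.utc).isoformat(),  # ISO 8601
    }
    event.update(optional)  # e.g. organizationName, subscriber, traceId, taskType
    return event

def send_event(event, base_url, api_key):
    """POST the event. Note: a 201 alone does not prove every field was recognized."""
    req = urllib.request.Request(
        f"{base_url}/meter/v2/ai/completions",
        data=json.dumps(event).encode("utf-8"),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

event = build_completion_event(
    "gpt-4o", "OpenAI", 1200, 300,
    organizationName="acme-corp", taskType="chat",
)
# send_event(event, "https://<your-revenium-base-url>", "hak_...")  # placeholders
```

After sending, verify the calculated cost in the Revenium dashboard rather than trusting the HTTP status, per the note on unrecognized fields above.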
Non-Token AI Cost Tracking
Revenium tracks costs beyond traditional token-based LLM usage:
Image Generation: per image, per resolution (DALL-E, Fal.ai, Midjourney)
Video Generation: per second, per credit (Runway, Kling, Mochi)
Audio Processing: per minute, per character (ElevenLabs, OpenAI TTS/Whisper)
For hands-on SDK integration, see Integration Options for AI Metering. For provider connections and API credentials, see Connections.