# Integration Options for AI Metering

{% hint style="info" %}
**Python SDK:** `pip install revenium-python-sdk` — available on [PyPI](https://pypi.org/project/revenium-python-sdk/) with extras for each provider.\
**Node.js SDKs:** available on [npm](https://www.npmjs.com/org/revenium).\
**Go SDK:** available on [GitHub](https://github.com/revenium/revenium-go-sdk).
{% endhint %}

## Overview

Revenium SDKs wrap your AI provider’s client library to automatically capture token usage, costs, latencies, and metadata — no changes to your API logic required. Data flows to the AI Analytics and Alerts dashboards in real time.

***

### Python SDK

Install: `pip install revenium-python-sdk`

The unified Python SDK supports all major AI providers through optional extras. Install only the providers you need:

| Provider                              | Install Command                                        | Chat Completions |                    Embeddings                    |
| ------------------------------------- | ------------------------------------------------------ | :--------------: | :----------------------------------------------: |
| OpenAI                                | `pip install "revenium-python-sdk[openai]"`            |         ✅        |                         ✅                        |
| Azure OpenAI                          | `pip install "revenium-python-sdk[openai]"`            |         ✅        |                         ✅                        |
| Anthropic                             | `pip install "revenium-python-sdk[anthropic]"`         |         ✅        |         Anthropic has no embedding models        |
| Anthropic via AWS Bedrock             | `pip install "revenium-python-sdk[anthropic]"`         |         ✅        |         Anthropic has no embedding models        |
| Google Gemini (AI SDK)                | `pip install "revenium-python-sdk[google-genai]"`      |         ✅        |                         ✅                        |
| Google Vertex AI (Enterprise)         | `pip install "revenium-python-sdk[google-vertex]"`     |         ✅        |                         ✅                        |
| LiteLLM                               | `pip install "revenium-python-sdk[litellm]"`           |         ✅        |                         ✅                        |
| LiteLLM Proxy                         | `pip install "revenium-python-sdk[litellm-proxy]"`     |         ✅        |                         ✅                        |
| Ollama                                | `pip install "revenium-python-sdk[ollama]"`            |         ✅        |                         ✅                        |
| Perplexity (OpenAI-compatible)        | `pip install "revenium-python-sdk[perplexity-openai]"` |         ✅        | Perplexity does not currently support embeddings |
| Perplexity (Native)                   | `pip install "revenium-python-sdk[perplexity-native]"` |         ✅        | Perplexity does not currently support embeddings |
| Fal.ai (image, video, audio metering) | `pip install "revenium-python-sdk[fal]"`               |         –        |                         –                        |
| **LangChain**                         | `pip install "revenium-python-sdk[langchain]"`         |         ✅        |                         ✅                        |

You can install multiple providers at once: `pip install "revenium-python-sdk[openai,anthropic,langchain]"`

{% hint style="info" %}
The Python SDK is available on [PyPI](https://pypi.org/project/revenium-python-sdk/). Source code and examples are on [GitHub](https://github.com/revenium/revenium-python-sdk).
{% endhint %}

***

### Node.js SDKs

| Provider                      | Package                                                                                       | Chat Completions |                    Embeddings                    |
| ----------------------------- | --------------------------------------------------------------------------------------------- | :--------------: | :----------------------------------------------: |
| OpenAI                        | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/openai`)        |         ✅        |                         ✅                        |
| Azure OpenAI                  | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/openai`)        |         ✅        |                         ✅                        |
| Anthropic                     | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/anthropic`)     |         ✅        |         Anthropic has no embedding models        |
| Google Vertex AI (Enterprise) | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/google/vertex`) |         ✅        |                         ✅                        |
| Google AI SDK                 | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/google/genai`)  |         ✅        |                         ✅                        |
| Perplexity                    | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/perplexity`)    |         ✅        | Perplexity does not currently support embeddings |
| LiteLLM                       | [@revenium/middleware](https://www.npmjs.com/package/@revenium/middleware) (`/litellm`)       |         ✅        |                         ✅                        |

{% hint style="info" %}
All Node.js SDKs are available on [npm](https://www.npmjs.com/org/revenium).
{% endhint %}

***

### Go SDK

Install:

```bash
go get github.com/revenium/revenium-go-sdk
```

Minimal OpenAI example — wrap your provider client once and metering is captured automatically:

```go
import (
    "os"

    "github.com/revenium/revenium-go-sdk/openai"
    openai_sdk "github.com/openai/openai-go"
)

client := openai.Wrap(openai_sdk.NewClient(), openai.Config{
    APIKey: os.Getenv("REVENIUM_API_KEY"),
})
// Use `client` exactly like the underlying openai-go client.
```

The SDK includes a circuit breaker, automatic retry with backoff, prompt capture, and usage summary on top of payload metering. Provider-specific adapters follow the same `Wrap()` pattern.

| Provider     | Chat Completions |                    Embeddings                    | Image | Video | Audio |
| ------------ | :--------------: | :----------------------------------------------: | :---: | :---: | :---: |
| OpenAI       |         ✅        |                         ✅                        |   –   |   –   |   –   |
| Azure OpenAI |         ✅        |                         ✅                        |   –   |   –   |   –   |
| Anthropic    |         ✅        |         Anthropic has no embedding models        |   –   |   –   |   –   |
| Google       |         ✅        |                         ✅                        |   ✅   |   ✅   |   –   |
| Perplexity   |         ✅        | Perplexity does not currently support embeddings |   –   |   –   |   –   |
| Fal.ai       |         –        |                         –                        |   ✅   |   ✅   |   ✅   |
| LiteLLM      |         ✅        |                         ✅                        |   –   |   –   |   –   |
| Runway       |         –        |                         –                        |   –   |   ✅   |   –   |

{% hint style="info" %}
Full setup, per-provider examples, configuration reference, and release notes are on [GitHub](https://github.com/revenium/revenium-go-sdk).
{% endhint %}

***

### LangChain

```bash
pip install "revenium-python-sdk[langchain]"
```

```python
from revenium_middleware.openai.langchain import wrap  # bundled with the OpenAI module
from langchain_openai import ChatOpenAI

llm = wrap(ChatOpenAI(model="gpt-4o"))
response = llm.invoke("Hello from Revenium!")
```

The `wrap()` function attaches a callback handler that reports usage data to Revenium's metering API. See the [SDK README](https://github.com/revenium/revenium-python-sdk) for additional examples.

For LangChain apps already using OpenTelemetry, see [OpenTelemetry Integration](/opentelemetry-integration.md) as an alternative path.

***

### Other Framework Integrations

| Framework / Platform  | Language | Install                                                                 |
| --------------------- | -------- | ----------------------------------------------------------------------- |
| n8n – OpenAI Agent    | Node.js  | [GitHub](https://github.com/revenium/revenium-middleware-openai-n8n)    |
| n8n – Anthropic Agent | Node.js  | [GitHub](https://github.com/revenium/revenium-middleware-anthropic-n8n) |
| Griptape              | Python   | Via the Griptape driver                                                 |
| OpenTelemetry         | Any      | [See OTEL docs](/opentelemetry-integration.md)                          |

***

### AI Coding Assistants

Revenium ingests telemetry from AI coding assistants via OpenTelemetry, tracking adoption and usage across your engineering team.

| Tool        | Status      |
| ----------- | ----------- |
| Claude Code | ✅ Supported |
| Gemini CLI  | ✅ Supported |

Setup: configure the tool's OTLP exporter to point at Revenium — see [OpenTelemetry Integration](/opentelemetry-integration.md). Usage data appears in the [AI Coding Dashboard](/ai-coding-dashboard.md). For a full list of collected data points, see the [AI Coding Data Reference](/ai-coding-dashboard/ai-coding-data-reference.md).

***

### Multimodal Cost Tracking

Revenium tracks costs across all AI modalities in a single view: [Completions](https://revenium.readme.io/reference/meter_ai_completion), [Images](https://revenium.readme.io/reference/meter_ai_images), [Video](https://revenium.readme.io/reference/meter_ai_video), and [Audio](https://revenium.readme.io/reference/meter_ai_audio).

***

> **Using a different SDK or framework?** Revenium supports [direct API calls](#direct-api-integration) and [OpenTelemetry](/opentelemetry-integration.md) for any application. Contact <support@revenium.io> for integration assistance.
>
> **Using an AI agent?** Revenium docs are available in Claude Code, Cursor, Windsurf, and other MCP-compatible agents via Context7. See [For AI Agents](/for-ai-agents.md).

## Usage Metadata Reference

Each SDK accepts an optional `usage_metadata` object for billing, cost attribution, and alerting. The more fields you provide, the more granular your reporting.

Key fields: `traceId`, `taskType`, `organizationId`, `subscriptionId`, `productId`, `agent`, `responseQualityScore`, `subscriber` (with `id`, `email`, `credential`).
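As an illustrative sketch, a `usage_metadata` object can be assembled as a plain dictionary using the field names listed above. The example values are hypothetical, and the exact shape of `credential` varies by SDK, so treat each SDK's documentation as the authoritative schema:

```python
# Illustrative usage_metadata payload. Field names come from the list above;
# consult your SDK's documentation for the authoritative schema.
usage_metadata = {
    "traceId": "session-1234",            # correlate multi-step agent flows
    "taskType": "customer-support-chat",  # label the workload for reporting
    "organizationId": "org-acme",         # attribute cost to a customer
    "subscriptionId": "sub-789",
    "productId": "chat-pro",
    "agent": "support-bot-v2",
    "responseQualityScore": 0.9,          # quality signal for this response
    "subscriber": {
        "id": "user-42",
        "email": "jane@example.com",
        "credential": "api-key-alias",    # shape may vary by SDK; see docs
    },
}

# Only populated fields need to be sent; more fields yield finer-grained reports.
print(sorted(usage_metadata))
```

All fields are optional, so start with the handful you need for attribution (for example `organizationId` and `subscriber`) and add the rest as your reporting requirements grow.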

For field definitions, naming conventions, and examples, see each SDK's documentation on [npm](https://www.npmjs.com/org/revenium) or [PyPI](https://pypi.org/org/revenium/). For OTEL-based attribution, see the `revenium.*` attributes in the [OpenTelemetry Integration](/opentelemetry-integration.md#deep-attribution-with-revenium-attributes) docs.

***

### Direct API Integration

For complete customization, use the metering API directly:

* [Completions API](https://revenium.readme.io/reference/meter_ai_completion) | [Images API](https://revenium.readme.io/reference/meter_ai_images) | [Video API](https://revenium.readme.io/reference/meter_ai_video) | [Audio API](https://revenium.readme.io/reference/meter_ai_audio)
* [Agent-friendly API documentation](/for-ai-agents.md)

{% hint style="danger" %}
**A 201 response does NOT confirm your payload was processed correctly.** The metering API returns `201 Created` even when your payload contains unrecognized fields. Misspelled or incorrect field names are silently ignored, which can result in **$0 cost calculations**. Always verify the calculated cost in the AI Analytics dashboard after your first integration.

Common field-name mistakes:

* `imageCount` (ignored) vs `actualImageCount` (correct)
* `audioLength` (ignored) vs `audioDurationSeconds` (correct)
* `stop_reason` (ignored) vs `stopReason` (correct)
{% endhint %}

{% hint style="info" %}
**Required field: `stopReason`** — For completion metering, the `stopReason` field is **required**. Valid values: `END`, `END_SEQUENCE`, `TOKEN_LIMIT`, `COST_LIMIT`, `COMPLETION_LIMIT`, `ERROR`, `TIMEOUT`, `CANCELLED`. Omitting this field or sending an invalid value (e.g., `MAX_TOKENS`, `CONTENT_FILTER`, `STOP`) will result in a `400 Bad Request`.
{% endhint %}
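Because the API accepts payloads with unrecognized fields, a client-side check before posting can catch the silent-failure mode described above. This sketch validates only the field names and `stopReason` values documented on this page; the full payload schema is in the Completions API reference:

```python
# Valid stopReason values per the metering API documentation.
VALID_STOP_REASONS = {
    "END", "END_SEQUENCE", "TOKEN_LIMIT", "COST_LIMIT",
    "COMPLETION_LIMIT", "ERROR", "TIMEOUT", "CANCELLED",
}

def check_completion_payload(payload: dict) -> list:
    """Return problems that would cause a 400 or silent $0 metering."""
    problems = []
    if payload.get("stopReason") not in VALID_STOP_REASONS:
        problems.append("stopReason missing or invalid (400 Bad Request)")
    # Common misspellings that the API silently ignores:
    for wrong, right in [("imageCount", "actualImageCount"),
                         ("audioLength", "audioDurationSeconds"),
                         ("stop_reason", "stopReason")]:
        if wrong in payload:
            problems.append(f"'{wrong}' is ignored; use '{right}'")
    return problems

print(check_completion_payload({"stop_reason": "STOP"}))  # flags both mistakes
```

Running such a check in CI or at startup is cheaper than discovering $0 cost calculations in the dashboard after the fact.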

{% hint style="warning" %}
**Authentication errors return 403, not 401.** If your API key is missing or invalid, the metering API returns `403 Forbidden`. You will not receive a `401 Unauthorized` response.
{% endhint %}


***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.revenium.io/integration-options-for-ai-metering.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
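For example, the ask URL can be built with standard URL encoding. This sketch uses Python's standard library, but any HTTP client works:

```python
from urllib.parse import quote

# The current page's URL, as shown in the GET pattern above.
BASE = "https://docs.revenium.io/integration-options-for-ai-metering.md"

def ask_url(question: str) -> str:
    """Build a GET URL that queries this documentation page."""
    return f"{BASE}?ask={quote(question)}"

print(ask_url("Which Go providers support image metering?"))
```

The percent-encoding ensures spaces and punctuation in the natural-language question survive transport as a single query parameter.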
