🔭 OpenTelemetry Integration

Revenium accepts OTLP (OpenTelemetry Protocol) data directly from your instrumented applications. If your app already emits telemetry via an OTLP exporter (using LangChain, the OpenAI Python SDK with OTel instrumentation, or your own custom spans), you can point it at Revenium with a few lines of configuration and start seeing AI usage data immediately.

This page covers Revenium-specific configuration. It assumes you already understand OTLP and have an exporter set up.


Endpoints

Revenium accepts both OTLP/HTTP and OTLP/gRPC.

OTLP/HTTP

| Path | Accepts |
| --- | --- |
| https://api.revenium.io/v2/otlp/v1/traces | Traces (JSON or protobuf) |
| https://api.revenium.io/v2/otlp/v1/logs | Logs (JSON or protobuf) |
| https://api.revenium.io/v2/otlp/v1/metrics | Metrics (JSON or protobuf) |
| https://api.revenium.io/v2/otlp | Unified (traces + logs + metrics in one request) |

Both application/json and application/x-protobuf content types are accepted.
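To sanity-check connectivity and authentication before wiring up an exporter, you can POST an empty but schema-valid JSON payload with curl. This is a sketch; the REVENIUM_API_KEY variable name is chosen here for illustration:

```shell
# Empty (but valid) OTLP JSON trace payload as a connectivity check.
curl -sS -X POST "https://api.revenium.io/v2/otlp/v1/traces" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $REVENIUM_API_KEY" \
  -d '{"resourceSpans": []}'
```

A 2xx response confirms the endpoint and key are valid; no metering records are produced for an empty payload.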

OTLP/gRPC

| Host | Port | Service |
| --- | --- | --- |
| api.revenium.io | 4317 | opentelemetry.proto.collector.trace.v1.TraceService |

The gRPC endpoint accepts traces. For logs and metrics, use the HTTP endpoints.


Authentication

Send your Revenium API key in the Authorization header as a Bearer token; both Authorization: Bearer and x-api-key headers are accepted.

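If you configure the exporter through the standard environment variables, either header form looks like this (hak_&lt;tenant&gt;_&lt;secret&gt; is a placeholder for your key):

```shell
# Bearer form: per the OTLP exporter spec, the space after "Bearer"
# must be percent-encoded in OTEL_EXPORTER_OTLP_HEADERS.
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer%20hak_<tenant>_<secret>"

# Equivalent x-api-key form (no encoding concerns):
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=hak_<tenant>_<secret>"
```

Some SDKs accept an unencoded space in the Bearer value, but percent-encoding is the spec-compliant form.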

For frameworks or SDKs that cannot set custom headers (e.g., some CLI tools), you can pass the API key as an OTLP resource attribute instead:
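For example, via the standard resource-attributes environment variable. The attribute key shown below, revenium.api.key, is a placeholder for illustration; confirm the exact key name in your Revenium onboarding materials:

```shell
# Hypothetical attribute key shown for illustration only; check the
# Revenium docs for the exact resource attribute name your SDK should use.
export OTEL_RESOURCE_ATTRIBUTES="revenium.api.key=hak_<tenant>_<secret>"
```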

This travels inside the encrypted OTLP payload body and is secure over TLS. Note: this fallback is not available for OTLP/gRPC (interceptors run before message deserialization on the gRPC path).

You can find your API key in the Revenium app under Settings → API Keys.


Quick Start

1. Configure your OTLP exporter

Set the exporter endpoint and auth header. The exact mechanism depends on your framework, but the standard OpenTelemetry environment variables work with any compliant OTel SDK:
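A minimal sketch using the unified base endpoint (compliant SDKs append the per-signal /v1/traces, /v1/logs, and /v1/metrics paths automatically; the key is a placeholder):

```shell
# Unified base endpoint: the SDK appends /v1/traces, /v1/logs, /v1/metrics.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.revenium.io/v2/otlp"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=hak_<tenant>_<secret>"
```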

If your framework uses separate endpoint vars for signals:
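For example (per the OTLP exporter spec, the signal-specific variables take the full URL and no path is appended):

```shell
# Signal-specific endpoints must include the full per-signal path.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.revenium.io/v2/otlp/v1/traces"
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://api.revenium.io/v2/otlp/v1/logs"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="https://api.revenium.io/v2/otlp/v1/metrics"
```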


2. Send a trace

Run your application normally. The OTel instrumentation in your app creates spans when AI calls are made, and the exporter sends them to Revenium.

3. See it in Revenium

Go to System & Transaction Logs or Trace Analytics. If the span contained recognized GenAI attributes (see below), you'll see token counts, model, provider, and cost data populated automatically.


Supported Frameworks and SDKs

Revenium auto-detects the source based on resource attributes and instrumentation scope names. No special configuration in Revenium is required; just point your exporter at the endpoint.

Python: opentelemetry-instrumentation-openai

Python: opentelemetry-instrumentation-anthropic

Python: LangChain via OpenInference or OpenLLMetry

Both OpenInference and OpenLLMetry emit GenAI semantic convention spans. Configure the OTLP exporter as shown above; Revenium picks them up automatically.

Node.js: @opentelemetry/instrumentation-openai

Go SDK (gRPC)

OpenTelemetry Collector

If you're already running an OTel Collector, add Revenium as an exporter:
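A minimal sketch, assuming the stock otlphttp exporter and an existing otlp receiver (the Collector appends the per-signal /v1/* paths to the base endpoint; the key is a placeholder):

```yaml
exporters:
  otlphttp/revenium:
    endpoint: https://api.revenium.io/v2/otlp
    headers:
      x-api-key: hak_<tenant>_<secret>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/revenium]
```

Add logs and metrics pipelines the same way if you export those signals.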


What Gets Captured Automatically

Revenium reads OpenTelemetry GenAI semantic conventions from your spans and log records. If your instrumentation library follows the spec, these fields populate automatically; no extra code required.

Token Counts

| GenAI Attribute | Revenium Field | Notes |
| --- | --- | --- |
| gen_ai.usage.input_tokens | Input tokens | Also accepts deprecated gen_ai.usage.prompt_tokens |
| gen_ai.usage.output_tokens | Output tokens | Also accepts deprecated gen_ai.usage.completion_tokens |
| gen_ai.usage.cache_read_input_tokens | Cache read tokens | Also accepts gen_ai.usage.cache_read_tokens |
| gen_ai.usage.cache_creation_input_tokens | Cache creation tokens | Also accepts gen_ai.usage.cache_creation_tokens |

Model and Provider

| GenAI Attribute | Revenium Field | Notes |
| --- | --- | --- |
| gen_ai.response.model | Model | Preferred; falls back to gen_ai.request.model |
| gen_ai.provider.name | Provider | Also accepts deprecated gen_ai.system |
| gen_ai.request.model | Model (request) | Used when response model is absent |

Operation Type and Finish Reason

| GenAI Attribute | Revenium Field | Values |
| --- | --- | --- |
| gen_ai.operation.name | Operation type | chat → Chat; embeddings → Embed; text_completion → Chat; generate_content → Chat |
| gen_ai.response.finish_reasons | Stop reason | Array; Revenium uses the first element (see mapping table below) |

Finish reason mapping:

| gen_ai.response.finish_reasons value | Revenium stop reason |
| --- | --- |
| stop, end_turn | End |
| max_tokens, length | Token Limit |
| stop_sequence | End Sequence |
| content_filter | Error |
| tool_calls, function_call | End |
| (any other value) | End |

Request Parameters

| GenAI Attribute | Revenium Field | Notes |
| --- | --- | --- |
| gen_ai.request.temperature | Temperature | Standard OTel attribute; captured automatically |
| gen_ai.response.id | System fingerprint | Response identifier from the model provider |

Timing and Errors

Span start/end nanosecond timestamps (startTimeUnixNano, endTimeUnixNano) are used to populate request time, response time, and duration. Error details are read from exception.message (primary) and error.type (fallback). HTTP status codes are read from http.response.status_code.

Infrastructure Context

| Standard OTel Attribute | Revenium Field |
| --- | --- |
| deployment.environment | Environment |
| cloud.region | Region |

Span Hierarchy

traceId, spanId, and parentSpanId from spans are mapped to Revenium's trace, transaction, and parent transaction fields respectively. This powers the Trace Analytics dependency tree and waterfall visualizations.


Revenium skips spans with gen_ai.operation.name of execute_tool, invoke_agent, or create_agent: these are orchestration spans, not LLM calls. Only spans that represent actual model invocations are metered.


Deep Attribution with revenium.* Attributes

Standard OTel tells you what happened: which model was called, how many tokens were used. revenium.* attributes tell Revenium why it happened and who it happened for. This is what enables per-customer, per-product, per-agent, and per-job cost attribution that goes beyond what the OTel spec captures.

Set these on spans or log records (record-level attributes take precedence over resource-level attributes, so you can set defaults at the resource level and override on individual spans).

System Fingerprint


revenium.system.fingerprint is one of Revenium's most powerful attribution tools. It lets you tag each AI call with an identifier that represents what configuration produced it: your system prompt version, your prompt template ID, your agent configuration hash, or any other identifier you use to distinguish one variant from another.

This is how you answer questions like: "Which version of my system prompt is more expensive?" or "Did my prompt optimization actually reduce costs?" Standard OTel has no equivalent.

| Attribute | Revenium Field | Notes |
| --- | --- | --- |
| gen_ai.response.id | System fingerprint | Populated automatically if your provider returns a response ID |
| revenium.system.fingerprint | System fingerprint | Use this to set your own fingerprint; overrides gen_ai.response.id when both are present |

Use any value that uniquely identifies the configuration: a version string, a git SHA, a prompt template ID, or a hash of your system prompt content.
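One approach, sketched here with a hypothetical prompt, is to hash the prompt content so the same text always maps to the same fingerprint:

```python
import hashlib

# Hypothetical system prompt; in practice this is your real prompt template.
SYSTEM_PROMPT = "You are a concise support assistant. Always cite ticket IDs."

# Short, stable fingerprint: identical prompt text yields an identical
# fingerprint, so costs group cleanly by prompt version in Revenium.
fingerprint = "prompt-" + hashlib.sha256(SYSTEM_PROMPT.encode("utf-8")).hexdigest()[:12]
```

Set the result as the revenium.system.fingerprint attribute on each span the prompt produces.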

Organization and Product

| Attribute | Revenium Field | Example |
| --- | --- | --- |
| revenium.organization.name | Organization | "acme-corp" |
| revenium.product.name | Product | "document-summarizer" |
| revenium.subscription.id | Subscription | "enterprise-plan-q1" |

Subscribers and Users

| Attribute | Revenium Field | Example |
| --- | --- | --- |
| revenium.subscriber.id | Subscriber ID | "user-12345" |
| revenium.subscriber.email | Subscriber email | |
| revenium.subscriber.credential.name | Credential name | "api-key-prod" |
| revenium.subscriber.credential.value | Credential value | "pk-abc123" |

Agents and Agentic Workflows

| Attribute | Revenium Field | Example |
| --- | --- | --- |
| revenium.agent.name | Agent | "support-agent-v2" |
| revenium.task.type | Task type | "summarize" |
| revenium.trace.type | Trace type | "rag-pipeline" |
| revenium.trace.name | Trace name | "support-ticket-resolution" |
| revenium.transaction.name | Transaction name | "retrieve-context" |

Squads (Multi-Agent Teams)

| Attribute | Revenium Field | Example |
| --- | --- | --- |
| revenium.squad.id | Squad ID | "squad-billing" |
| revenium.squad.name | Squad name | "Billing Support Squad" |
| revenium.squad.role | Role in squad | "orchestrator" |

Agentic Jobs

| Attribute | Revenium Field | Example |
| --- | --- | --- |
| revenium.job.id | Job ID | "job-20250312-001" |
| revenium.job.name | Job name | "nightly-report-gen" |
| revenium.job.type | Job type | "batch" |
| revenium.job.version | Job version | "2.1.0" |

Other Fields

| Attribute | Revenium Field | Notes |
| --- | --- | --- |
| revenium.operation.subtype | Operation subtype | Free-form sub-classification |
| revenium.retry.number | Retry number | Integer; useful for tracking retry cost |
| revenium.request.stream | Is streamed | Boolean |
| revenium.middleware.source | Middleware source | Identifies the SDK or integration layer |

Setting Attribution Attributes (Python Example)

You can also set defaults at the resource level so they apply to all spans from your service, and override on individual spans where needed:

Record-level (span) attributes always win over resource-level attributes when the same key is set at both levels.


Troubleshooting

Data not appearing in Revenium

Check that:

  1. Your API key starts with hak_ and is in the correct format (hak_<tenant>_<secret>).

  2. Your spans include at least one of the following so Revenium can route them to the correct mapper:

    • gen_ai.provider.name or gen_ai.system attribute (resource-level or span-level)

    • An instrumentation scope name starting with gen_ai, openai, anthropic, opentelemetry.instrumentation.openai, or opentelemetry.instrumentation.anthropic

  3. The exporter endpoint URL is correct and includes the full path (/v2/otlp/v1/traces, not /v2/otlp), unless your SDK sends all signals to a single base URL; in that case, use /v2/otlp.

Tokens showing as zero

Verify your instrumentation library emits gen_ai.usage.input_tokens and gen_ai.usage.output_tokens (or their deprecated equivalents gen_ai.usage.prompt_tokens / gen_ai.usage.completion_tokens) as numeric attributes on the span.

Tool call and agent orchestration spans are not appearing

This is expected behavior. Revenium filters out execute_tool, invoke_agent, and create_agent spans because they are not LLM calls. Only spans that represent actual model invocations produce metering records.

