# AI Outcomes

Most observability tools stop at technical execution. They tell you that an agent completed, how many tokens it used, and what it cost. They don't tell you whether the business goal was achieved.

AI Outcomes closes that gap. Every agent run ends with a reported outcome — CONVERTED, ESCALATED, DEFLECTED, or CUSTOM — posted alongside the execution cost. From that single data point, Revenium calculates ROI, deflection rates, cost per conversion, and the ratio of business value to spend. The ledger becomes legible.

The core insight is that **outcomes are financial events, not just metrics**. An agent that closes a $4,200 deal produces a financial record. An agent that deflects a support call that would have cost $50 to handle manually also produces a financial record. Aggregated across hundreds or thousands of runs, these records answer the question every AI product leader actually needs to answer: are these agents paying for themselves?
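A minimal sketch of that aggregation, assuming a simplified record shape (the field names here are illustrative, not the Revenium schema): each outcome carries a cost and a business value, and the ratio of total value to total spend answers the question directly.

```python
# Sketch: aggregate outcome records into a value ratio.
# Record fields are illustrative, not the Revenium export format.
records = [
    {"outcome": "CONVERTED",    "value": 4200.00, "cost": 3.10},  # deal closed
    {"outcome": "DEFLECTED",    "value": 50.00,   "cost": 0.45},  # support call avoided
    {"outcome": "UNSUCCESSFUL", "value": 0.00,    "cost": 0.90},  # ran cleanly, no result
]

total_value = sum(r["value"] for r in records)
total_cost = sum(r["cost"] for r in records)
value_ratio = total_value / total_cost  # dollars of value per dollar of spend
```

Note that the deflection contributes value even though no revenue changed hands: cost avoidance and revenue land in the same ledger.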

***

### <i class="fa-circle-exclamation">:circle-exclamation:</i> Why outcomes matter

A 100% technical success rate means nothing if the business goal fails.

Consider an AI sales agent that qualifies leads. Every LLM call succeeds. Every tool invocation returns data. The workflow completes cleanly. But only 8% of qualified leads convert to deals. Is the agent performing? The technical logs say yes. The economics say it depends entirely on what those conversions are worth against the total cost of running every qualification, converted or not.
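The break-even check is simple arithmetic. The numbers below are assumptions chosen for illustration, not Revenium data:

```python
# Illustrative break-even check for the lead-qualification example.
# Every number here is an assumption, not measured data.
runs = 1000                  # qualification runs metered
cost_per_run = 1.20          # average agent cost per run (LLM + tools)
conversion_rate = 0.08       # 8% of qualified leads close
value_per_conversion = 450.0 # average deal value attributed to the agent

total_cost = runs * cost_per_run                       # cost of ALL runs
total_value = runs * conversion_rate * value_per_conversion
pays_for_itself = total_value >= total_cost
```

At these numbers the agent earns its keep comfortably; halve the deal value and quarter the conversion rate and the answer flips. The point is that only the outcome side of the ledger lets you run the check at all.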

This is the calculation that Revenium's AI Outcomes is built around. Without outcome reporting, you have cost data with no value side. With it, the ROI dashboard can tell you not just what the agent spent, but whether it earned.

The same principle applies to support deflection, code review automation, document processing — any workflow where "did it run" is a different question from "did it deliver."

***

### <i class="fa-table">:table:</i> The outcome taxonomy

Four outcome types cover the range of business results that matter across agentic workflows.

| Outcome       | When to use                                                                                                                                                            | What it signals                                                                                                             |
| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| **CONVERTED** | The agent achieved the business goal and generated value — a deal closed, a support ticket resolved without escalation and with a measurable saving, code merged        | Positive economic return: revenue created or cost meaningfully displaced                                                    |
| **ESCALATED** | The agent handed off to a human to complete the job                                                                                                                    | Human involvement required; report full business value and meter human time separately as a tool cost so ROI stays accurate |
| **DEFLECTED** | The agent completed work that would otherwise have required a more expensive path — a support call handled self-serve, an automated review that replaced manual effort | Cost avoidance: the value is what the alternative would have cost                                                           |
| **CUSTOM**    | A business result your organization defines that doesn't map to the above — tiered qualification scores, partial completions, multi-stage deal progressions            | Domain-specific outcomes with tenant-defined naming and value                                                               |

**UNSUCCESSFUL** is a fifth `outcomeType` value, posted the same way as the four above — the job ran without technical error but produced no business outcome and was not escalated. Post `executionStatus: SUCCESS` with `outcomeType: UNSUCCESSFUL`.

**Pending** is a UI-only status — it appears when a job has not yet had an outcome reported. It is not a value you post through the API.

**Outcomes are immutable.** Once reported, the record cannot be changed — a 409 Conflict is returned on any second post to the same job. This is deliberate: business outcomes are historical facts, and the integrity of every ROI calculation built on them depends on that immutability.

**Execution status and outcome type are independent dimensions.** `executionStatus` captures whether the technical work completed (`SUCCESS`, `FAILED`, `CANCELLED`). `outcomeType` captures what the job produced in business terms. A job can be `SUCCESS` / `CONVERTED`, or `SUCCESS` / `UNSUCCESSFUL` (ran cleanly, business goal missed), or `FAILED` / `ESCALATED`. The separation is intentional — conflating them is how teams end up thinking an agent is working when it isn't delivering.
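A small validator makes the independence concrete. The allowed values come from this page; the function itself is illustrative, not part of the SDK:

```python
from typing import Optional

# Allowed values, per this page. Any status can pair with any outcome type.
EXECUTION_STATUSES = {"SUCCESS", "FAILED", "CANCELLED"}
OUTCOME_TYPES = {"CONVERTED", "ESCALATED", "DEFLECTED", "UNSUCCESSFUL", "CUSTOM"}

def validate_outcome(execution_status: str, outcome_type: Optional[str]) -> bool:
    """executionStatus is required; outcomeType is optional and independent."""
    if execution_status not in EXECUTION_STATUSES:
        return False
    return outcome_type is None or outcome_type in OUTCOME_TYPES

# Ran cleanly but the business goal was missed:
assert validate_outcome("SUCCESS", "UNSUCCESSFUL")
# Failed technically, human picked it up:
assert validate_outcome("FAILED", "ESCALATED")
```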

***

### <i class="fa-code">:code:</i> How to emit outcomes

Outcome reporting uses the Python SDK's `AgenticOutcomeClient`. With the SDK, the pattern is four calls in sequence: register the job, meter LLM completions as they happen, meter tool events as they happen, then report the terminal outcome when the job ends.

If you are using direct metering payloads instead of the SDK, you do not need a separate create-job call. Revenium creates the job entity the first time it sees a new `agenticJobId`; see [Analyze Decision Costs](/instrument-your-agents/analyze-decision-costs.md#reporting-outcomes-via-api) for the direct API flow.

**Install the SDK:**

```bash
pip install revenium-python-sdk
```

**The four-call pattern in outline:**

```python
from revenium_middleware.agentic_outcomes import AgenticOutcomeClient, AgenticOutcomeSettings

settings = AgenticOutcomeSettings(api_key="rev_sk_...")  # write-scope key required

with AgenticOutcomeClient(settings) as client:
    client.create_job(job_id, name=..., type=..., environment=...)   # 1. register the job
    client.emit_completion({...})                                     # 2. meter each LLM call
    client.emit_tool_event({...})                                     # 3. meter each tool call
    client.report_outcome(job_id, {"executionStatus": "SUCCESS",
                                   "outcomeType":     "CONVERTED",
                                   "outcomeValue":    4200.00})       # 4. close with outcome
```

**Key fields on the outcome payload (call 4):**

| Field             | Required | Allowed values                                                             |
| ----------------- | -------- | -------------------------------------------------------------------------- |
| `executionStatus` | Yes      | `SUCCESS`, `FAILED`, `CANCELLED`                                           |
| `outcomeType`     | Optional | `CONVERTED`, `ESCALATED`, `DEFLECTED`, `UNSUCCESSFUL`, `CUSTOM`            |
| `outcomeValue`    | Optional | Monetary value (number). USD by default; EUR, GBP, CAD, JPY supported      |
| `metadata`        | Optional | JSON string. Carry any additional context — customer ID, deal stage, notes |
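Assembling the payload for call 4 can be sketched as follows. The helper function is ours, not part of the SDK; the field names come from the table above. The one easy mistake is `metadata`, which the table specifies as a JSON string, not a nested object:

```python
import json
from typing import Optional

def build_outcome_payload(status: str,
                          outcome_type: Optional[str] = None,
                          value: Optional[float] = None,
                          metadata: Optional[dict] = None) -> dict:
    """Assemble an outcome payload; metadata is serialized to a JSON string."""
    payload = {"executionStatus": status}
    if outcome_type is not None:
        payload["outcomeType"] = outcome_type
    if value is not None:
        payload["outcomeValue"] = value
    if metadata is not None:
        payload["metadata"] = json.dumps(metadata)  # JSON string, per the table
    return payload

payload = build_outcome_payload(
    "SUCCESS", "CONVERTED", 4200.00,
    metadata={"customer_id": "cust_123", "deal_stage": "closed_won"},
)
```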

**Full payload reference:** the complete field list for `emit_completion`, `emit_tool_event`, and `report_outcome` lives in the [API reference](https://revenium.readme.io/reference/report_job_outcome) and in the runnable SDK examples (linked below). Treat those as canonical; this page is the conceptual walkthrough.

**Write-scope API key required.** Outcome reporting writes financial records. Use a key with write permissions (`rev_sk_...`), not a read-only or metering-only key. See [API Key Permissions](/integrations/api-key-permissions.md) for the key tier reference.

**Response codes:**

* `200 OK` — outcome recorded
* `409 Conflict` — outcome already reported for this job; do not retry
* `404 Not Found` — job ID not yet known; the SDK retries automatically (see retry timing below)
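If you are posting outcomes through the direct API rather than the SDK, your client owns this handling. A sketch of the decision (the function and labels are illustrative):

```python
def handle_outcome_response(status_code: int) -> str:
    """Map an outcome-post response code to a client action (illustrative)."""
    if status_code == 200:
        return "recorded"          # stored; nothing more to do
    if status_code == 409:
        return "already-reported"  # record is immutable; never retry
    if status_code == 404:
        return "retry-later"       # job not created yet; back off and retry
    return "error"                 # anything else: surface to the caller
```

The asymmetry is the point: `404` is transient (the job record is still being created) while `409` is permanent (the outcome already exists), so retrying them the same way would either drop outcomes or hammer an immutable record.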

***

### <i class="fa-clock-rotate-left">:clock-rotate-left:</i> Job Outcomes and Retry Timing

If you post a job outcome immediately after sending metering data, the outcome request can briefly return `404`. Metering ingestion creates the job record asynchronously, so the outcome lookup may run before the job exists.

The SDK handles this with exponential backoff:

* Maximum attempts: `10`
* Initial delay: `2.0s`
* Maximum delay: `90s` (sized to absorb backend `ErrorPatternRateLimitFilter` penalties up to 60s and honor server-sent `Retry-After`)
* Configuration: `AgenticOutcomeSettings.outcome_retry_*`

Do not add a second retry loop around the SDK call unless you have a specific reason. Make the single SDK call and let it handle the short creation race.
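The schedule above can be approximated as capped exponential doubling. This is a sketch of the timing shape only, assuming plain doubling with no jitter; the SDK's actual implementation and parameter names may differ:

```python
def backoff_delays(max_attempts: int = 10,
                   initial: float = 2.0,
                   cap: float = 90.0) -> list[float]:
    """Delay before each retry: doubling from `initial`, capped at `cap` seconds."""
    delays = []
    delay = initial
    for _ in range(max_attempts - 1):  # no delay follows the final attempt
        delays.append(min(delay, cap))
        delay *= 2
    return delays

# Nine waits for ten attempts: 2, 4, 8, 16, 32, 64, then capped at 90, 90, 90.
```

The cap dominates quickly: the worst case is roughly six and a half minutes of total waiting, which comfortably absorbs the 60-second rate-limit penalties noted above.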

***

### <i class="fa-gauge-high">:gauge-high:</i> What you'll see in the dashboard

Reported outcomes feed the **Intelligence > Costs & Revenue > ROI Dashboard** in real time.

The headline figure is the **Value Ratio** — dollars of business value produced per dollar of agent cost. This is visible at the aggregate level, per job type, and drillable to individual job executions. Two funnels break down where outcomes land:

* **Conversion Funnel** — Total Jobs → Successful → Converted. The gap between Successful and Converted is the population that ran cleanly but didn't deliver a business outcome. These are the jobs where optimization opportunity lives.
* **Cost Avoidance Funnel** — Total Jobs → Successful → Deflected. Measures how often successful technical execution produced cost savings rather than revenue.
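Both funnels reduce to simple counts over the reported outcomes. A sketch, assuming an illustrative record shape rather than the Revenium export format:

```python
# Sketch: compute both dashboard funnels from reported outcomes.
# Record shape is illustrative, not the Revenium export format.
jobs = [
    {"executionStatus": "SUCCESS", "outcomeType": "CONVERTED"},
    {"executionStatus": "SUCCESS", "outcomeType": "DEFLECTED"},
    {"executionStatus": "SUCCESS", "outcomeType": "UNSUCCESSFUL"},
    {"executionStatus": "FAILED",  "outcomeType": "ESCALATED"},
]

total = len(jobs)
successful = sum(j["executionStatus"] == "SUCCESS" for j in jobs)
converted = sum(j["outcomeType"] == "CONVERTED" for j in jobs)
deflected = sum(j["outcomeType"] == "DEFLECTED" for j in jobs)

conversion_funnel = (total, successful, converted)      # Total -> Successful -> Converted
cost_avoidance_funnel = (total, successful, deflected)  # Total -> Successful -> Deflected
```

The interesting number in each funnel is the middle-to-last drop: jobs that ran cleanly but neither converted nor deflected are the optimization backlog.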

For trace-level debugging — the individual LLM calls and tool events inside a job — see [Debug Logs & Traces](/optimize-performance/debug-logs-and-traces.md). The Analyze Decision Costs page covers the full ROI dashboard in depth, including how to handle escalated outcomes, how to value deflections, and how to read the Job Types by Value Ratio table.

***

### <i class="fa-rectangle-code">:rectangle-code:</i> Example scenarios

The Revenium Python SDK ships three reference implementations that demonstrate the full four-call pattern across different workflow types:

**AI Sales Agent** — a sales lead qualification workflow with three LLM steps (prospecting, qualification, close), two enrichment tool calls (ZoomInfo, Apollo), and a human SDR escalation path. Deterministic outcome mix: 12% conversion rate, full deal value reported on CONVERTED outcomes.

**AI Customer Support Agent** — a support ticket workflow across three scenario types (triage, resolution, escalation handling) with KB search and CRM tool calls. Outcome mix: 80% DEFLECTED (self-serve deflection value = avoided human agent cost), 8% ESCALATED, 12% CONVERTED (upsell during support session).

**AI Coding Workflow** — five scenario types (PR review, test generation, incident RCA, release gate, dependency risk analysis) with repo search, CI, and GitHub tool calls. Outcome mix: 72% CONVERTED (autonomous task completion), 10% ESCALATED (human engineering takeover), 18% CUSTOM (task canceled).

These are available in the [Revenium Python SDK examples](https://github.com/revenium/revenium-python-sdk/tree/main/examples/agentic_outcomes) (`sales.py`, `coding.py`, `support.py`). Each file is self-contained and can be adapted directly to a real workflow by swapping in actual LLM calls and tool events in place of the demo data generators.

***

### <i class="fa-link">:link:</i> Related

* [Analyze Decision Costs](/instrument-your-agents/analyze-decision-costs.md) — the ROI dashboard in depth: conversion funnels, Value Ratio, how to handle escalated outcomes correctly, and per-job-type analysis
* [Agent Instrumentation Guide](/instrument-your-agents/agent-instrumentation-guide.md) — the full instrumentation model: transactions, traces, jobs, squads, and what each level unlocks
* [Monitor Agent Tool Usage](/instrument-your-agents/monitor-agent-tool-usage.md) — registering external tools and metering tool events so tool costs appear alongside token costs in the ROI calculation
* [AI Insights](/optimize-performance/ai-insights.md) — anomaly detection and recommendations that surface after outcome data accumulates
* [Debug Logs & Traces](/optimize-performance/debug-logs-and-traces.md) — trace-level view of individual LLM calls and tool events within a job


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.revenium.io/instrument-your-agents/agent-outcomes.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
