Incline Trust
A reviewer workspace for an AI-driven corporate spend platform. The model flags expense and vendor exceptions. The interface makes the model legible enough that a controller can accept, adjust, or reject each call with an audit artifact that defends the decision.
The design challenge was not to make the AI look smarter. It was to prevent the two failure modes that destroy finance-ops value: reviewers rubber-stamping a drifting model, and reviewers ignoring the model and falling behind on close.
AI in finance operations has a specific failure pattern. The model ranks thousands of transactions and returns a handful for human review. The reviewer is a controller, an AP lead, or a spend compliance analyst. They have minutes per case during month-end close, a backlog that cannot slip, and a personal signature on the audit file.
If the reviewer trusts the model too much, drift goes undetected. A small change in vendor behavior, a shifted policy interpretation, or a seasonal spike that was not in the training data walks past the queue, because the score still looks fine. When the audit arrives, the reviewer cannot explain the call, because the call was made by the model and the reviewer was not actually the reviewer.
If the reviewer trusts the model too little, the backlog wins. Every case gets re-read against receipts, contracts, and prior months. Close slips. The AI investment gets written up as a productivity story that did not ship. The next review cycle reopens the same pool with the same failure mode.
The real failure is not the model. It is the interface around the model. A recommendation without visible mechanism asks the reviewer to choose between obedience and suspicion. Neither is review.
Incline Trust is built around a different assumption: the reviewer is accountable, so the interface must make the model accountable to the reviewer. Every score surfaces the signals that produced it. Every signal surfaces its weight. Every weight is tunable in-place. Every decision carries a confidence band forward into an immutable audit artifact, so the audit defends itself.
The product frame is an exception-review queue for a corporate spend platform: the kind of system that connects a company’s cards, bills, reimbursements, and vendor agreements, and uses an AI model to flag cases that deserve human attention. In that frame, each case is a transaction or a vendor pattern that the model believes is off. The controller decides what to do with it. The interface decides how much of the model the controller can actually see.
Five Visualizations, Five Decision Questions
The screen is not a dashboard. It is a single workspace that carries a case from filtered inbox to attested artifact in one session. Each visualization answers a distinct question the reviewer must answer to sign the record, and each one is designed so that the answer is readable in seconds, not minutes.
Inbox confidence bar with uncertainty band
Every queue row shows a filled confidence bar plus a sigma band behind it.
Which cases are confidently flagged, and which are flagged with wide variance that deserves another look?
A reviewer reading the tail, not the mean, chooses a different next case than a reviewer reading a flat ranking.
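As a minimal sketch, the bar-plus-band pattern is a few lines of D3: the band is drawn first at low opacity, the point estimate on top. This assumes d3 v7, a per-row group selection, and a case object carrying a 0-to-100 score and a sigma; the dimensions and function names are illustrative, not the prototype's.

```js
// Sketch only: one inbox row's confidence bar with its sigma band.
// BAR_W, BAR_H, and renderConfidenceBar are illustrative names.
const BAR_W = 120;
const BAR_H = 6;
const x = d3.scaleLinear().domain([0, 100]).range([0, BAR_W]);

function renderConfidenceBar(g, d) {
  const lo = Math.max(0, d.score - d.sigma);
  const hi = Math.min(100, d.score + d.sigma);
  // Sigma band first, so the filled bar reads on top of it.
  g.append("rect")
    .attr("x", x(lo))
    .attr("width", x(hi) - x(lo))
    .attr("height", BAR_H)
    .attr("fill", "currentColor")
    .attr("opacity", 0.25);
  // Filled bar: the point estimate.
  g.append("rect")
    .attr("width", x(d.score))
    .attr("height", BAR_H)
    .attr("fill", "currentColor");
}
```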
Trust score arc gauge with needle and sigma band
The selected case renders its score as an arc from 0 to 100, with the current score as a needle, a plus-minus band across the arc, and a color break at the review threshold.
Is this a 78 with plus-minus 2, or a 78 with plus-minus 12?
The two cases demand different levels of scrutiny even though the point estimate is identical.
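The arc geometry follows the same pattern, sketched below under the same assumptions; the 240-degree sweep and the radii are illustrative choices rather than the prototype's values.

```js
// Sketch only: score as a needle on a 240-degree arc, with the plus-minus
// band drawn as a second arc segment over the same ring.
const angle = d3.scaleLinear()
  .domain([0, 100])
  .range([-Math.PI * 2 / 3, Math.PI * 2 / 3]);

const ring = d3.arc().innerRadius(70).outerRadius(80);

function renderGauge(g, score, sigma) {
  // Full track, 0 to 100.
  g.append("path")
    .attr("d", ring({ startAngle: angle(0), endAngle: angle(100) }))
    .attr("fill", "currentColor").attr("opacity", 0.15);
  // Sigma band: the same ring, clipped to score ± sigma.
  g.append("path")
    .attr("d", ring({
      startAngle: angle(Math.max(0, score - sigma)),
      endAngle: angle(Math.min(100, score + sigma))
    }))
    .attr("fill", "currentColor").attr("opacity", 0.4);
  // Needle at the point estimate; SVG rotate() expects degrees.
  g.append("line")
    .attr("y1", -58).attr("y2", -84)
    .attr("stroke", "currentColor").attr("stroke-width", 2)
    .attr("transform", `rotate(${angle(score) * 180 / Math.PI})`);
}
```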
12-month timeline bar chart with adjusted baseline
The vendor or cost-center timeline shows 12 months of spend as bars, an adjusted baseline as a reference line, a sigma band for normal variance, and the flagged month annotated in place.
Where does this month sit against its own history?
A month that looks anomalous in isolation often looks routine against a 12-month baseline, and vice versa.
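A sketch of the timeline under assumed data shapes: each month carries a label, a spend value, and a flagged boolean, and the adjusted baseline and variance band arrive precomputed.

```js
// Sketch only: twelve months of spend as bars, the normal-variance band
// behind them, the adjusted baseline as a dashed reference line, and the
// flagged month emphasized in place.
function renderTimeline(g, months, baseline, band, width, height) {
  const x = d3.scaleBand()
    .domain(months.map(m => m.label))
    .range([0, width])
    .padding(0.2);
  const y = d3.scaleLinear()
    .domain([0, d3.max(months, m => m.spend)])
    .range([height, 0]);

  // Normal-variance band behind the bars: baseline ± band.
  g.append("rect")
    .attr("x", 0).attr("width", width)
    .attr("y", y(baseline + band))
    .attr("height", y(baseline - band) - y(baseline + band))
    .attr("fill", "currentColor").attr("opacity", 0.1);

  g.selectAll(".bar").data(months).join("rect")
    .attr("class", "bar")
    .attr("x", m => x(m.label))
    .attr("y", m => y(m.spend))
    .attr("width", x.bandwidth())
    .attr("height", m => height - y(m.spend))
    .attr("fill", "currentColor")
    .attr("opacity", m => (m.flagged ? 1 : 0.45)); // flagged month reads darker

  // Adjusted baseline across the full 12-month window.
  g.append("line")
    .attr("x1", 0).attr("x2", width)
    .attr("y1", y(baseline)).attr("y2", y(baseline))
    .attr("stroke", "currentColor")
    .attr("stroke-dasharray", "4 3");
}
```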
Weight sliders for the model signals
Four sliders expose the weights the model uses: amount deviation, recurrence match, vendor risk posture, and policy fit. Moving any slider updates the gauge live.
Is the score being driven by the signal I trust most, or by a signal I would weight lower if I were calling this myself?
The sliders turn a black-box score into a decision the reviewer can argue with.
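A sketch of the live recompute, modeled here as a renormalized weighted mean of the four signals; the signal values, the normalization, and the updateGauge hook are assumptions, since no production scoring is disclosed.

```js
// Sketch only: weights renormalized to sum to 1, score rescaled to the
// gauge's 0..100 domain. Signal values are placeholder numbers.
const signals = { amountDeviation: 0.82, recurrenceMatch: 0.31, vendorRisk: 0.55, policyFit: 0.74 };
const weights = { amountDeviation: 1.0,  recurrenceMatch: 1.0,  vendorRisk: 1.0,  policyFit: 1.0 };

function score() {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  const s = Object.keys(signals)
    .reduce((acc, k) => acc + signals[k] * (weights[k] / total), 0);
  return Math.round(s * 100); // 0..100, same domain as the gauge
}

// Each slider owns one weight; moving it re-renders the gauge in place.
document.querySelectorAll("input[type=range][data-signal]").forEach(input => {
  input.addEventListener("input", () => {
    weights[input.dataset.signal] = Number(input.value);
    updateGauge(score()); // assumed hook into the arc gauge
  });
});
```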
Peer cohort fit bars
A small set of horizontal bars compares the current case against a peer cohort of similar vendors, cost centers, and categories in the same period.
Is the case an outlier against its natural peer group, or is the whole peer group drifting, which would be a policy review rather than a case decision?
The same point estimate becomes a different call depending on whether the cohort is holding or moving around it.
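A sketch of how that distinction can be made operational, assuming current-period and prior-period cohort arrays are available; the thresholds here are illustrative, not the prototype's rules.

```js
// Sketch only: separating "the case is off its peers" from "the peers are
// moving". A large cohort-mean shift reads as drift; a large case z-score
// against a holding cohort reads as a case outlier.
function cohortRead(caseValue, cohortNow, cohortPrior) {
  const mean = d3.mean(cohortNow);
  const sd = d3.deviation(cohortNow) || 1;
  const caseZ = (caseValue - mean) / sd;      // outlier against current peers?
  const drift = mean - d3.mean(cohortPrior);  // peers moving against themselves?
  if (Math.abs(drift) >= sd) return "cohort drift: policy review, not a case decision";
  if (Math.abs(caseZ) > 2) return "case outlier against a holding cohort";
  return "within cohort";
}
```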
Attestation artifact with captured weights
The audit card starts dark. It lights up only when the reviewer signs the verdict, and it embeds the signals, weights, confidence band, peer cohort, and timestamp the decision was made under.
What did the reviewer see, what did they weight, and how sure was the model at the moment of the call?
The question the audit must answer later is already baked into the record the moment the signature lands.
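A sketch of the capture: the artifact is assembled and frozen at signature time, so nothing the reviewer does afterward can rewrite it. Field names mirror the prose; the reference format and the render hook are illustrative.

```js
// Sketch only: the record is frozen at signature, so later slider moves
// cannot alter what was attested.
function attest(verdict, caseData, currentWeights, sigma, cohortId) {
  const artifact = Object.freeze({
    verdict,                              // accept | adjust | reject
    signals: { ...caseData.signals },     // captured by value, not referenced
    weights: { ...currentWeights },       // the tuning goes on the record
    sigma,                                // confidence band at the moment of the call
    cohort: cohortId,
    signedAt: new Date().toISOString(),   // ISO timestamp
    ref: `IT-C-04-${caseData.id}`         // control reference ID (assumed format)
  });
  renderAuditCard(artifact);              // the dark card lights up only now
  return artifact;
}
```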
Each visualization is deliberate about what it does not do. The gauge does not hide its uncertainty. The timeline does not flatten seasonality into a single mean. The sliders do not let the reviewer fake a number the model never produced, because the weights propagate into the audit. The inbox does not rank by score alone, because rank without sigma is how reviewers learn to distrust the queue.
Every pixel is accountable to the signature at the bottom of the screen. If a visual cannot be defended in an audit, it does not belong on the page.
The Critical Path
The guided tour inside the prototype walks the reviewer through the same seven-step path that the interface is optimized for: open the filtered inbox, read the grounded explanation, tune the weights that deserve adjustment, read the score and its uncertainty band, cross-check the 12-month geometry, enter the verdict with attestation, and review the immutable audit artifact. Each step has one primary visualization and one primary control. Each step moves the case closer to a defensible decision, not to a faster one.
The order matters. Reading the explanation before touching the sliders prevents the reviewer from tuning their way to a preferred number. Reading the geometry before entering the verdict prevents a decision that ignores history. Signing the attestation last prevents an audit that cannot explain itself. The sequence is the accountability, not a suggestion.
Incline Trust · Exception review workspace · Simulated corporate spend data · No real vendor, cardholder, or transaction information
The guided tour inside the prototype is available from the top app bar. It spotlights each step of the critical path, names the design impact of that step, and advances the reviewer through the workspace without requiring prior context. Keyboard shortcuts (J and K to move through the queue, A to accept, D to decline, H to hold) are documented in the keyboard tooltip, and the motion they trigger respects reduced-motion preferences.
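A sketch of what the queue bindings can look like, using the documented keys; the guard against stealing keystrokes from form fields and the helper names are illustrative.

```js
// Sketch only: the queue keyboard path. The tour's own keydown handler
// (registered in the capture phase, per the technical notes below)
// preempts these bindings while the tour is open.
document.addEventListener("keydown", (e) => {
  if (e.target.closest("input, textarea, [contenteditable]")) return; // don't steal typing
  switch (e.key.toLowerCase()) {
    case "j": moveSelection(+1); break; // next case in the queue
    case "k": moveSelection(-1); break; // previous case
    case "a": verdict("accept");  break;
    case "d": verdict("decline"); break;
    case "h": verdict("hold");    break;
  }
});
```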
The inbox is ranked by a composite score, not by a raw confidence. Each row shows the model’s score, the sigma band behind it, the category, the vendor or cost center, the dollar impact, and the scenario chip. Filters across the top of the inbox are persistent, so scope does not reset when the reviewer opens a case.
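The composite itself is not disclosed, but a sketch shows the principle: the ordering accounts for the sigma band rather than sorting on raw confidence alone, so wide-band cases are not buried. The weighting constant is an assumption.

```js
// Sketch only: a sigma-aware inbox ordering. Higher scores rank first,
// and a wider band adds review priority.
function inboxOrder(cases, k = 0.5) {
  return [...cases].sort((a, b) =>
    (b.score + k * b.sigma) - (a.score + k * a.sigma));
}
```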
The explanation panel is grounded. It does not produce free-form prose. It lists the signals that moved the score (recurrence match, amount deviation, vendor risk posture, policy fit, peer cohort divergence) and cites a specific comparable pattern for each one. The explanation is argumentative rather than narrative.
The weights are tunable, but never hidden. The sliders update the score in real time, but the current weights are captured into the audit artifact at attestation. A reviewer who tunes aggressively leaves the tuning on the record. The design pressure is toward honest adjustment, not preferred outcomes.
The audit artifact is immutable and dark until signed. Before attestation the card shows a disabled state, because a pre-signature audit is a design lie. After attestation the card shows the verdict, the captured signals, the captured weights, the sigma band, the peer cohort, and a reference ID that links the record back to the platform-side ledger.
Technical Details
- Single-file HTML prototype, Material Design 3 dark, teal primary. No build tools, no framework, no external state
- Typographic scale tuned at a 1.250 major-third ratio across 13, 14, 16, 20, 25, 31, 39, and 49 px, with an 8 px spacing rhythm across the layout
- Five coordinated D3.js v7 visualizations: inbox confidence bar with sigma band, trust score arc gauge with needle and sigma band, 12-month timeline bar chart with adjusted baseline and flagged annotation, weight-tuning sliders, peer cohort fit bars
- Native HTML label-and-input accessibility pattern on the attestation checkbox, with for/id association, visible focus state, and no double-fire toggle logic
- Immutable audit artifact with captured signals, captured weights, captured sigma band, peer cohort, ISO timestamp, and control reference ID (IT-C-04 pattern)
- Guided tour overlay with adaptive popover placement, spotlight cutout, and a per-step design-impact note surfaced alongside the critical-path description
- Keyboard path: J and K through the queue, A to accept, D to decline, H to hold; tour keydown handler in capture phase to prevent collision with queue shortcuts when the tour is open
- Four built-in scenario presets (Month-end, Vendor burst, Policy drift, Travel surge) that reshape the inbox, the score distribution, and the peer cohort in-place without page navigation
- Reduced-motion aware throughout: animation and transition durations collapse under prefers-reduced-motion: reduce (sketched after this list)
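As a sketch of the reduced-motion collapse, assuming every transition reads one shared duration constant; the selector and constant names are illustrative.

```js
// Sketch only: query the preference once, collapse all durations to zero.
const reduceMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
const DURATION = reduceMotion ? 0 : 250; // ms

d3.select(".gauge-needle")
  .transition()
  .duration(DURATION); // an instant update when reduced motion is set
```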
Incline Trust is a portfolio demonstration. The product name, the vendor names, the cardholder names, the transaction amounts, and the peer cohorts are all simulated. The prototype does not connect to any real corporate spend platform, and no proprietary model, weight set, or production ruleset is disclosed. The interaction model and the accountability pattern are authentic to the design practice demonstrated across the rest of this portfolio.
What This Demonstrates
AI-assisted review is not a productivity problem. It is an accountability problem. The reviewer’s signature is what makes the process defensible, and an interface that hides the model’s mechanism is an interface that removes the reviewer from the process they are supposed to be responsible for. The fix is not to make the AI more confident. The fix is to make the mechanism more visible.
This showcase demonstrates a Trust Architecture approach to AI decision support: expose the signals that drove the score, let the reviewer adjust the weights in-place, keep uncertainty visible in every visualization, anchor each case against its own 12-month history, and bind the verdict to an immutable audit artifact the moment it is signed. None of these is a novel idea in isolation. What matters is that they are implemented as a single workspace that compresses the critical path from filtered inbox to defensible record without asking the reviewer to reconstruct context between steps.
The broader thesis, shared with the other showcases in this portfolio, is that probabilistic systems fail in the interface, not in the math. When the interface flattens uncertainty into a number, reviewers rubber-stamp drift. When the interface buries the mechanism, reviewers ignore the tool. When the interface keeps the mechanism visible and the uncertainty legible and the decision defensible, the AI stops being an opinion and starts being a colleague.
Know Who Knows
An enterprise talent mapping interface that makes the topology of expertise visible across three coordinated views.
View case study →
Uncertainty as Signal
A climate risk disclosure platform that turns probabilistic models into decision-ready financial views.
View case study →