What Gets Measured

Healthcare Analytics · Clinical Operations · Data Visualization

A clinical performance benchmarking system for a leading New York City cancer research institution, reconstructed 15 years later as an exception-led command center.

The design challenge was not to compare physicians on a single score. It was to surface exception patterns before they turn into care quality, equity, or workload risk.

Role

Product designer and sole UX lead. Brought in by a leading New York City cancer research institution to design a faculty performance benchmarking interface spanning clinical, financial, research, and educational dimensions simultaneously.

Approach

Exception-led command center: every visualization answers exactly 1 comparison question, so the patterns worth attention surface on their own instead of being missed in review.

Outcome

A working prototype that made multi-dimensional faculty performance comparable across 101 physicians and 4 years of data, rebuilt in 2026 with care quality, equity, and burnout signals.

3 earlier showcases in this portfolio (Uncertainty as Signal, North Bridge, and HR as DJ) deal with the same underlying problem: when dense information is misread, the resulting decision becomes expensive to reverse. Uncertainty as Signal keeps probabilistic climate risk legible for capital allocation. North Bridge keeps workflow signal tied to company context and follow-through. HR as DJ turns staffing uncertainty into actionable allocation moves. This showcase brings that same design problem into clinical operations, where missed exception patterns create care quality risk.

This showcase goes further back, and further forward at the same time.

Faculty performance at an academic medical center is not a single number. It is the contested intersection of 4 incommensurable domains: clinical productivity, research output, financial contribution, and educational impact. A high-volume surgeon fills OR hours and trains fellows. A principal investigator on 3 grants earns the institution funding and intellectual standing that no collection ratio can capture. A primary care physician who sees twice the patient volume of her colleagues but publishes nothing is, by different measures, both the most and least valuable person on the department’s roster.

I was brought in to design an interface that could show all of these dimensions simultaneously, compare any physician against any reference point, and support decisions about resource allocation, promotion, and strategic planning without pretending the underlying tensions had been resolved.

The design problem was not how to display data. It was how to make 4 incommensurable things comparable without flattening them into 1 number.

This was 2010 to 2012. D3.js did not yet exist when the work began. We built the Faculty Information Database on Protovis, the predecessor of D3, developed by Mike Bostock and Jeff Heer at Stanford. The framework was early AngularJS, pre-1.0. That constraint forced a kind of clarity that is harder to achieve today. When you cannot reach for a charting library that hands you a parallel coordinates implementation, you have to think from fundamentals about what each visual encoding is actually for. The answers end up in the design.

The Core Insight: Benchmark-Driven Comparison

The design principle that unified the entire interface was this: every visualization answers exactly 1 comparison question.

Bar chart · How does this year compare to prior years, for the same metric?
Treemap · How does this service compare to all others, by area?
Parallel coords · How does this physician compare to every other across 8 dimensions?
Spider diagram · How does this individual compare to the department average?
Bubble chart · How does financial efficiency compare to revenue capture, per service?

5 visualizations. 5 comparison questions. No chart that answered more than 1, because a chart that tries to answer 2 usually answers neither cleanly.

The most sophisticated interaction in the original work was the parallel coordinates brush filter, showing all 101 physicians as individual polylines crossing 8 axes simultaneously. The brush let users hold 1 dimension constant while immediately observing whether that dimension correlated with others or was entirely orthogonal. That is not filtering. That is analysis. The interface should support it at the precision of a continuous brush, not the coarseness of a dropdown.
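
The interaction translates directly to the rebuild's library. What follows is a minimal sketch of the idea in D3.js v7, not the original Protovis code or the prototype's source: one vertical brush per axis, polylines dimmed rather than removed when they fall outside the active extents. It renders with SVG for brevity, where the actual rebuild draws polylines to Canvas, and the data fields and styling are illustrative assumptions.

    // Minimal sketch, assuming `physicians` is an array of objects keyed by
    // dimension name; field names below are illustrative, not the real schema.
    const dims = ["orCases", "collectionRatio", "rvus", "grants", "teachingHours", "panelSize"];
    const width = 960, height = 400, margin = 40;

    const x = d3.scalePoint().domain(dims).range([margin, width - margin]);
    const y = {};
    for (const d of dims) {
      y[d] = d3.scaleLinear()
        .domain(d3.extent(physicians, p => p[d]))
        .range([height - margin, margin]);
    }

    const svg = d3.select("#chart").append("svg")
      .attr("width", width).attr("height", height);

    // One polyline per physician, crossing every axis.
    const line = d3.line();
    const paths = svg.append("g").selectAll("path")
      .data(physicians)
      .join("path")
      .attr("fill", "none")
      .attr("stroke", "steelblue")
      .attr("stroke-opacity", 0.5)
      .attr("d", p => line(dims.map(d => [x(d), y[d](p[d])])));

    // One vertical brush per axis: a line stays highlighted only while its value
    // falls inside every active brush, so you hold one dimension and watch the rest.
    const active = new Map();
    for (const dim of dims) {
      svg.append("g")
        .attr("transform", `translate(${x(dim)},0)`)
        .call(d3.axisLeft(y[dim]))
        .call(d3.brushY()
          .extent([[-10, margin], [10, height - margin]])
          .on("brush end", (event) => {
            if (event.selection) {
              active.set(dim, event.selection.map(y[dim].invert).sort(d3.ascending));
            } else {
              active.delete(dim);
            }
            paths.attr("stroke-opacity", p =>
              [...active].every(([d, [lo, hi]]) => p[d] >= lo && p[d] <= hi) ? 0.9 : 0.05);
          }));
    }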

Benchmark-driven comparison is not dashboards with benchmarks. It is the chart type and the comparison question being the same decision.

Interactive Prototype
Clinical Benchmark Command Center · Simulated data · No real patient or clinical information

The 2026 reconstruction carries the same design philosophy forward, not as a faithful recreation, but as an evolution that brings 15 years of additional practice to the same problem.

The domain expanded to care quality and equity. The original tracked billing amount, collection ratio, and OR cases. The reconstruction tracks Care Quality, Access Reliability, Equity Gap Closure, Workload Pressure, Value Realization, and Team Support. The metrics are different because the questions have shifted.

The interface became exception-led. The original required navigation: select a view, load a chart, apply filters. The command center surfaces exceptions immediately: a ranked alert feed, a selected exception panel, a burnout risk signal. Exploration follows the signal; it does not precede it.
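
As an illustration of that exception-first ordering, here is a hedged sketch of how a ranked alert feed could be assembled; the severity tiers, field names, and sample values are assumptions built on the simulated data, not the prototype's schema.

    // Illustrative sketch only: rank exceptions so the feed leads with the pattern
    // most worth attention. Severity tiers and field names are assumptions.
    const SEVERITY_RANK = { critical: 3, warning: 2, watch: 1 };

    function rankExceptions(exceptions) {
      // Severity first, then distance from benchmark as a tiebreaker.
      return [...exceptions].sort((a, b) =>
        (SEVERITY_RANK[b.severity] - SEVERITY_RANK[a.severity]) ||
        (Math.abs(b.deltaFromBenchmark) - Math.abs(a.deltaFromBenchmark)));
    }

    // The command center renders this list top-down before any filter is touched.
    const feed = rankExceptions([
      { service: "Surgical Oncology",  metric: "Workload Pressure",  severity: "warning",  deltaFromBenchmark: 0.18 },
      { service: "Radiation Oncology", metric: "Access Reliability", severity: "critical", deltaFromBenchmark: -0.27 },
      { service: "Medical Oncology",   metric: "Care Quality",       severity: "watch",    deltaFromBenchmark: -0.05 },
    ]);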

Burnout risk and patient equity entered the frame. 2 signals appear in the reconstruction that the original work could not have anticipated: clinician burnout as a systemic risk indicator, and access parity stratified by patient cohort, measuring whether care access patterns differ across racial and ethnic lines by service. These are 2026 questions.
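
To make the access parity signal concrete, here is one plausible reading of it as a computation, offered purely as an illustration and not as the prototype's metric definition; the encounter fields, access measure, and cohort labels are assumptions, and the simulated-data caveat in the disclosure below applies.

    // Illustrative sketch only, not the prototype's metric. One reading of
    // "access parity stratified by patient cohort": per service, compare an access
    // measure across cohorts and report the widest gap. Field names are assumptions.
    function accessParityGap(encounters, service) {
      const rateByCohort = d3.rollup(
        encounters.filter(e => e.service === service),
        rows => d3.mean(rows, e => (e.withinTargetWindow ? 1 : 0)),
        e => e.patientCohort);
      const rates = [...rateByCohort.values()];
      return d3.max(rates) - d3.min(rates); // 0 means parity; larger means a wider gap
    }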

The parallel coordinates survived and deepened. The Benchmark Comparison Engine is the same analytical operation (hold 1 axis and observe all others), rebuilt in D3.js v7 with 6 clinical dimensions, overlaid reference lines for the review target and the scoped service mean, and 4 scenario lenses: Quarterly Benchmark Review, Capacity Strain Watch, Access Parity Review, Quality Drift Watch.
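
A rough sketch of how a scenario lens and the two reference overlays might fit together follows; the dimension names come from this case study, but the predicates, thresholds, and function names are assumptions rather than the engine's actual scoping rules.

    // Illustrative sketch: a scenario lens scopes the population, and the two
    // reference lines (review target, scoped service mean) are derived per dimension.
    const DIMS = ["careQuality", "accessReliability", "equityGapClosure",
                  "workloadPressure", "valueRealization", "teamSupport"];

    const SCENARIOS = {
      "Quarterly Benchmark Review": () => true,
      "Capacity Strain Watch":      p => p.workloadPressure >= 0.7,  // assumed threshold
      "Access Parity Review":       p => p.equityGapClosure < 0.5,   // assumed threshold
      "Quality Drift Watch":        p => p.careQuality < 0.6,        // assumed threshold
    };

    function referenceLines(physicians, selectedService, scenarioName, reviewTarget) {
      const scoped = physicians
        .filter(p => p.service === selectedService)
        .filter(SCENARIOS[scenarioName] ?? (() => true));
      const serviceMean = Object.fromEntries(
        DIMS.map(d => [d, d3.mean(scoped, p => p[d])]));
      // Both overlays are drawn as polylines across the same six axes as the
      // individual physicians, so the comparison stays literal.
      return { serviceMean, reviewTarget };
    }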

The metrics chosen for any performance dashboard are not neutral. They are an institutional statement about what gets valued. What gets measured defines what gets managed. A well-designed interface makes that argument legible rather than hiding it in a formula.


Technical Details

  • Original build: AngularJS (pre-1.0) + Protovis, the library that preceded D3.js, by the same authors
  • 5 chart types in the original: grouped bar, treemap, parallel coordinates, spider or radar, bubble chart
  • 101 physicians, 8 performance axes, 4 years of monthly historical data
  • 2026 rebuild: single-file HTML, D3.js v7 via CDN, no build tools or framework dependencies
  • Parallel coordinates on Canvas with live brush filtering across 6 clinical dimensions (see the rendering sketch after this list)
  • 4 named review scenarios with service-level and patient cohort adjustments
  • Exception surfacing: alert feed ranked by severity, drill-down panel, burnout risk toast
  • CSS custom properties and @layer specificity management, with no !important in prototype code
  • 5 polished screenshots and working AngularJS source code preserved from the original build
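
The Canvas half of the rebuilt parallel coordinates, referenced in the list above, can be sketched roughly as follows; it pairs with the SVG brushes from the earlier sketch, and the function names and styling are assumptions rather than the prototype's code.

    // Illustrative sketch: polylines drawn to a 2D canvas context for speed, with
    // SVG axes and brushes layered on top. Scales and data follow the earlier sketch.
    function drawPolylines(ctx, physicians, dims, x, y, isHighlighted) {
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      for (const p of physicians) {
        ctx.beginPath();
        dims.forEach((d, i) => {
          const px = x(d), py = y[d](p[d]);
          if (i === 0) ctx.moveTo(px, py); else ctx.lineTo(px, py);
        });
        // Dim filtered-out lines instead of removing them, so the excluded
        // population stays visible as context behind the brushed selection.
        ctx.strokeStyle = isHighlighted(p) ? "rgba(70, 130, 180, 0.9)" : "rgba(70, 130, 180, 0.06)";
        ctx.stroke();
      }
    }
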
Disclosure

All data is simulated. No real patient or clinical information is disclosed or implied. The original prototype screenshots document genuine prior work. No confidential institutional information appears in the working demonstration.

What This Demonstrates

The choices of what to measure, how to visualize it, and how to make it interactive are all the same decision made at different levels of abstraction. A parallel coordinates brush is not a filter control. It is the claim that holding 1 dimension constant while observing all others is the right analytical operation for this kind of comparison, and that a decision-maker under time pressure deserves that capability at the precision of a continuous brush, not the coarseness of a dropdown.

15 years of practice in this domain, from Protovis to D3, from reporting tools to command centers, from financial metrics to equity signals, produced a point of view. This showcase is the evidence for it.

Continue through the showcases
Previous showcase
HR as DJ
A workforce planning interface that makes staffing uncertainty visible as a capacity distribution.
View case study →
Next showcase
Decision Logic Before Build
A cross-platform clinical mobile prototype — iOS and Android — that verifies decision logic, accessibility structure, and workflow coverage before native build begins.
View case study →

View all selected work →