HR as DJ
A workforce planning interface that turns staffing uncertainty into a visible capacity distribution for resource managers, delivery leads, and staffing reviews.
The design challenge was that misreading capacity leads to misallocated headcount. The solution was an allocation model that keeps uncertainty visible as a working planning condition rather than disguising it as certainty.
The core insight was that capacity is not just a number. It is a distribution problem. A firm might have 100 billable people, but if 80 of them are at the senior partner level and only 10 are at the senior consultant level, the portfolio is structurally constrained regardless of total headcount.
I designed a sunburst visualization that reveals how uncertainty permeates the capacity distribution, with opacity encoding confidence levels: practice on the outer ring, skill band in the middle, current utilization in the center. The allocation model estimates the likelihood that a practice area can deliver 5 senior consultants for a 6-month banking engagement, surfacing the confidence levels behind each match rather than pretending to know the answer.
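The likelihood estimate described above can be sketched as a Poisson-binomial computation: treat each consultant's availability as an independent probability and ask how likely it is that at least k of them are free. This is an illustrative model under an independence assumption, not the prototype's actual matching logic; the name `probAtLeast` is mine.

```javascript
// Sketch: probability that at least k consultants are available,
// given each consultant's individual availability probability.
// dist[i] holds the probability that exactly i consultants are free.
function probAtLeast(k, availabilities) {
  let dist = [1]; // before considering anyone, exactly 0 are free
  for (const p of availabilities) {
    const next = new Array(dist.length + 1).fill(0);
    for (let i = 0; i < dist.length; i++) {
      next[i] += dist[i] * (1 - p); // this consultant unavailable
      next[i + 1] += dist[i] * p;   // this consultant available
    }
    dist = next;
  }
  return dist.slice(k).reduce((a, b) => a + b, 0);
}
```

A practice with mixed availabilities then yields a single defensible number for "can we deliver 5 senior consultants?", alongside the full distribution it came from.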
The Chrome app extends this into a search interface. Search for a skill, a practice area, or a specific name, and immediately see availability ranges, bench depth with confidence intervals, and deployment history.
A resource manager accustomed to the precision of a staffing spreadsheet must now navigate probabilistic uncertainty, where 88 consultants represent a distribution of confidence levels, not a fixed count. The interface design challenge was specific and daunting: how do you present capacity uncertainty in a way that supports deterministic staffing decisions without pretending the uncertainty does not exist?
You do it not by hiding the distribution, but by making uncertainty visible as uncertainty. A manager who can see the full capacity landscape with its confidence intervals and understand the mechanism behind each match recommendation is far more equipped to make a defensible staffing decision than someone who sees a single “best fit” score and has no idea where it came from.
System Diagram
Pending asset: model-01.png
Before creating the first wireframe, I built a domain model visualizing how a consultant’s bench status cascades through practice areas, skill bands, and geographic locations before ultimately shaping a staffing decision.
This model ultimately became the precursor to a service blueprint mapping the user’s transition from bench awareness to capacity planning and finally to staffing action, with explicit decision points where confidence levels determine whether to proceed or seek more information.
Workflow Map
Pending asset: brainStormMaterials.pdf
Service blueprint: bench awareness to capacity planning to staffing action, with confidence thresholds at each transition.
The platform’s information architecture was structured as a spatial reasoning interface, not a static dashboard. The core architectural problem was that each consultant has three distinct analysis axes: practice area, skill band, and availability percentage. On top of that, the firm’s capacity distribution changes daily, with varying degrees of certainty.
The defining design decision for managing this cognitive load was progressive disclosure: high-level capacity scores with confidence intervals for initial portfolio screening; detailed individual profiles with availability ranges for specific staffing decisions; and a unified treemap that encodes uncertainty visually (block opacity reflects confidence in availability), so users can toggle between structural clarity and probabilistic honesty without losing their analytical context.
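As a sketch of the rollup behind that progressive disclosure, individual records aggregate into the practice-level scores shown first, with aggregate confidence mapped straight to block opacity. The field names (`practice`, `availability`, `confidence`) and the opacity floor are assumptions, not taken from the prototype:

```javascript
// Sketch: roll individual consultants up into practice-level summaries.
// Opacity encodes aggregate confidence; a floor of 0.25 keeps even
// low-confidence blocks faintly visible rather than invisible.
function summarizeByPractice(consultants) {
  const groups = new Map();
  for (const c of consultants) {
    if (!groups.has(c.practice)) groups.set(c.practice, []);
    groups.get(c.practice).push(c);
  }
  const mean = arr => arr.reduce((a, b) => a + b, 0) / arr.length;
  return [...groups].map(([practice, members]) => {
    const confidence = mean(members.map(m => m.confidence));
    return {
      practice,
      headcount: members.length,
      availability: mean(members.map(m => m.availability)),
      confidence,
      opacity: 0.25 + 0.75 * confidence, // confidence → visual weight
    };
  });
}
```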
Wireframe Sequence
Pending asset: draft.vsdx
Greyscale wireframes showing how 88 consultants are narrowed from a macro capacity view down to an individual staffing decision.
Interface Concept
Pending asset: bg-02.jpg
Practice, skill, and availability toggles in a single view. Decision-ready, not data-dumped.
A forward-looking capacity slider replaced the traditional series of static reports with a continuous view showing how bench availability migrates from today out to the next quarter, with confidence bands widening as the timeline extends. The goal at every level was to be decision-ready, not data-dumped, and to be honest about uncertainty, not decorative about it.
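A minimal sketch of that widening band, assuming uncertainty grows with the square root of the forecast horizon; the growth rate is an illustrative placeholder, not a calibrated value from the prototype:

```javascript
// Sketch: confidence band around today's availability, widening with
// the forecast horizon (square-root-of-time spread, clamped to [0, 1]).
function capacityBand(todayAvailability, weeksAhead, growthRate = 0.05) {
  const spread = growthRate * Math.sqrt(weeksAhead);
  return {
    low: Math.max(0, todayAvailability - spread),
    high: Math.min(1, todayAvailability + spread),
  };
}
```

Dragging the slider further out simply re-evaluates this band per consultant, so the interface shows the forecast getting honestly fuzzier rather than confidently wrong.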
Treemap Detail
Pending asset: model-02.png
Treemap showing capacity distribution by practice and skill band, with block opacity reflecting confidence in availability.
The entire approach crystallized in the treemap analysis view. The interface relies on a custom squarify algorithm showing precisely where capacity is concentrated, how deep the bench runs, and which practice areas may be able to deliver on demand. The critical design decision: a senior consultant at 60% availability appears as a translucent block, not a solid one.
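For reference, a squarified treemap layout in the spirit of Bruls, Huizing &amp; van Wijk fits in a few dozen lines. This is an illustrative reimplementation with confidence-weighted areas, not the prototype's exact algorithm; the `value` and `confidence` field names are assumed:

```javascript
// Sketch: squarified treemap with confidence-weighted cell areas.
// Each node's area is value * confidence, scaled to fill the rectangle.
function squarify(nodes, rect) {
  const total = nodes.reduce((s, n) => s + n.value * n.confidence, 0);
  const scale = (rect.w * rect.h) / total;
  const items = nodes
    .map(n => ({ node: n, area: n.value * n.confidence * scale }))
    .sort((a, b) => b.area - a.area); // squarify wants descending areas
  const placed = [];
  let { x, y, w, h } = rect;

  // worst aspect ratio in a row laid along a side of the given length
  const worst = (row, side) => {
    const sum = row.reduce((s, r) => s + r.area, 0);
    const max = Math.max(...row.map(r => r.area));
    const min = Math.min(...row.map(r => r.area));
    return Math.max((side * side * max) / (sum * sum),
                    (sum * sum) / (side * side * min));
  };

  function layoutRow(row) {
    const sum = row.reduce((s, r) => s + r.area, 0);
    if (w >= h) {
      const rowW = sum / h; // row becomes a vertical strip on the left
      let ry = y;
      for (const r of row) {
        const rh = r.area / rowW;
        placed.push({ node: r.node, x, y: ry, w: rowW, h: rh });
        ry += rh;
      }
      x += rowW; w -= rowW;
    } else {
      const rowH = sum / w; // row becomes a horizontal strip on top
      let rx = x;
      for (const r of row) {
        const rw = r.area / rowH;
        placed.push({ node: r.node, x: rx, y, w: rw, h: rowH });
        rx += rw;
      }
      y += rowH; h -= rowH;
    }
  }

  let row = [];
  for (const item of items) {
    const side = Math.min(w, h);
    // keep adding to the row only while it improves the worst aspect ratio
    if (row.length === 0 || worst(row.concat(item), side) <= worst(row, side)) {
      row.push(item);
    } else {
      layoutRow(row);
      row = [item];
    }
  }
  if (row.length) layoutRow(row);
  return placed;
}
```

Each placed cell would then be rendered as an SVG rect whose fill opacity comes from the node's confidence, which is what makes the 60%-available consultant read as translucent.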
The design decision that truly matters here is that this view lets a resource manager evaluating a banking engagement see whether the firm likely has 5 senior consultants available or merely 5 junior associates, along with the confidence level behind that estimate. That singular capacity distinction changes the staffing decision entirely. It changes the proposal. It gives a manager something tangible they can put in front of a delivery board and defend: not an abstract utilization score, but a physical mechanism of capacity with its uncertainty made visible rather than hidden.
Simulated data · Portfolio demonstration
Through this design, a delivery board can see the mechanism of capacity with its confidence levels rather than just a black-box utilization score. A resource manager can satisfy staffing reviews by drilling down into individual profiles, availability ranges, and skill distributions, without needing to be a data scientist. A compliance team can map platform outputs directly to staffing decisions and generate audit-ready allocation reports that explicitly state confidence levels rather than feigning certainty.
A capital allocator can ask how much a practice area’s bench depth may save their portfolio in delivery risk over the next quarter, and get a probabilistically honest answer from that very same interface. The platform does not just show capacity. It provides the framework for navigating it across non-linear time horizons, shifting demand requirements, and the probabilistic nature of workforce availability, with uncertainty visible as uncertainty, not disguised as certainty.
This is a front-end prototype designed to test visualization logic before backend investment. The simulated data was necessary to isolate the interface hypothesis from production data constraints.
Technical Details
- Custom squarify algorithm for treemap layout with confidence-weighted sizing
- Custom SVG sunburst with bench_bucket → practice hierarchy and opacity-encoded confidence
- Custom radar chart with interactive hover tooltips showing factor-level confidence intervals
- No backend, no database, no API dependencies
- Single-file HTML prototype with consolidated CSS (@layer structure)
- 15-person expanded dataset with capacity_weight encoding and confidence intervals
- Drag-and-drop matching with probability report generation including confidence bands
- Iterative design with resource managers to validate that confidence encoding improves decision quality
All data is simulated. No confidential information is disclosed. This prototype validates interface logic, matching behavior, and uncertainty presentation before backend investment.
What This Demonstrates
Capacity planning is not a dashboard problem. It is a representation problem. When staffing confidence is hidden, the organization overcommits with false precision. When confidence is visible, staffing decisions become more honest and more defensible.
This showcase demonstrates how uncertainty can be encoded directly into workforce planning interfaces so that range, confidence, and structural constraints remain visible at the point of assignment.
North Bridge
A reconstructed private-capital coverage workspace that turns dashboard signal, note context, and follow-up tasks into one connected workflow.
View case study →
What Gets Measured
Clinical benchmarking rebuilt as an exception-led command center for care quality, equity, and cross-service comparison.
View case study →