Featured case study

Designing and launching a UI-enabled control solution for a billion-dollar manufacturing operation.

How I took an ambiguous, high-stakes operating problem from discovery through adoption — aligning stakeholders, defining a product path, and following through until measurable value appeared.

Industry
Heavy manufacturing / Industrial operations
My role
Product and operations lead
Production scale
~$1B annual production line
Lifecycle
Discovery → MVP → Full launch

A leading business in a brutal market — constrained, cash-tight, and losing ground it couldn't afford to lose.

The organization was a large manufacturing subsidiary — $2B+ in annual revenue — competing in a market that punished inconsistency. Customers chose on cost, quality, and delivery lead time. Loyalty was low. If you couldn't perform, they moved on. The company held a leading market position, but that position was eroding. Customer sentiment was declining, the balance sheet was cash constrained, and the growth path was blocked: capacity constraints meant limited upside until the operation itself performed better.

The business operated under standards set by the American Iron and Steel Institute (AISI), which defined minimum performance thresholds for material composition and field application. That added another layer of non-negotiable constraint — you had to hit a floor just to stay in the game, and hitting only the floor wasn't enough to win.

The core strategic problem was simple to state and hard to solve: the organization couldn't consistently execute the strategy it needed to compete. Any product or operational improvement that enabled more consistent performance wasn't a nice-to-have — it was directly tied to market share, revenue, and survival.

My mandate
Embed in the business. Understand how people, process, and technology were interacting. Determine where the gaps were and what a better operating design would look like — and do it in a way that was analytical enough to be credible and empathetic enough to actually work on the floor.
Operating environment
Company size: $2B+ annual revenue
Industry: Heavy manufacturing
Standards: AISI material and performance standards
GTM approach: Cost · quality · delivery lead time
Pressure points
Growth mode: Capacity constrained
Competition: Highly competitive, low loyalty
Balance sheet: Cash constrained
Customer sentiment: Declining
The core challenge
Was there a way to make everyday life better for operators and middle management — while simultaneously driving the operational stability the business needed to compete and grow?

I led the product and operating path. The team built, operated, and validated it.

This project required close collaboration across operators, engineers, managers, and business leaders. Being clear about what I owned versus what I enabled is important for understanding what this work demonstrates about how I lead.

What I owned
  • Problem framing and operating context analysis
  • Stakeholder alignment strategy and execution
  • User research design and synthesis
  • Product requirements development and MVP scoping
  • Prioritization methodology (RICE) and tradeoff decisions
  • Launch sequencing strategy and risk mitigation design
  • Post-launch KPI monitoring and feedback integration
What the team contributed
  • Engineering design and technical implementation
  • Equipment integration and technical validation
  • Operational subject matter expertise
  • Production floor testing and calibration
  • Shift-level rollout coordination
Stakeholder RACI matrix showing 19 activities across 8 stakeholder groups
Stakeholder RACI matrix
Decision rights across 8 stakeholder groups and 19 activities. The PM held Accountable/Responsible on 14 of 19 activities — engineering execution was owned by the Process Automation Lead.

This product had to work inside a live, high-stakes operation — not around it.

The constraints on this project were not incidental — they were the most important inputs to product design. A solution that required production downtime, disrupted operator workflows, or introduced new failure modes was not a solution. Every product decision had to pass through these filters:

  • Zero tolerance for production disruption during rollout
  • Must work within legacy equipment architecture
  • Operator trust had to be earned, not assumed
  • No disruption to existing shift and team structures
  • Reliability requirements at or above existing controls
  • Business case required measurable ROI, not just capability
  • ~$1B annual production line exposure during deployment
Design implication

These constraints shaped the entire product path — from the phased launch strategy, to the shadow run methodology, to the UI design principles that prioritized clarity and trust over feature richness.

Thirty conversations. One finding that changed everything.

Before any product decisions were made, I conducted 30 structured 1:1 interviews with operators across experience levels. The interview guide was consistent — covering motivation, history with the product, how frequently they saw successful production, what their experience was like during the most challenging scenarios, how they expected their equipment to perform, and critically, what they would change if they could. Consistency in the questions made the synthesis meaningful: I could identify themes by volume rather than guessing at what was representative.

After the interviews, I synthesized my notes into a set of repeating VoC themes. One theme dominated — in both frequency and emotional intensity. Operators couldn't intervene quickly or accurately when pre-existing automation was ineffective and manual action was required. More quotes pointed to this than any other theme, by a wide margin. That's what became the product's north star.
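As a minimal sketch of how that quantification worked in principle (the theme labels and records below are hypothetical stand-ins, not the actual interview data):

```python
from collections import Counter

# Hypothetical interview records; each note is tagged with the VoC
# themes it touched. The real study had 30 structured interviews.
interview_notes = [
    {"operator": "op-01", "themes": ["manual_intervention", "alarm_noise"]},
    {"operator": "op-02", "themes": ["manual_intervention"]},
    {"operator": "op-03", "themes": ["training_gaps", "manual_intervention"]},
]

# Rank themes by how many interviews mention them, so the dominant
# theme emerges from volume rather than from the loudest anecdote.
theme_counts = Counter(
    theme for note in interview_notes for theme in note["themes"]
)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} of {len(interview_notes)} interviews")
```

Consistent questions are what make this tally meaningful: every interview had the chance to surface every theme, so counts are comparable across operators.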

Method

30 structured 1:1 interviews

Consistent question template across senior and junior operators — motivation, production history, equipment expectations, challenging scenario experience, and what they would change. Consistency enabled theme quantification, not just anecdote collection.

Method

VoC synthesis + affinity mapping

Interview notes synthesized into repeating themes ranked by comment volume. The largest theme — intervention difficulty — drove directly into RICE scoring as the highest-reach, highest-impact candidate feature.

Method

User journey mapping

Mapped the operator experience across the full production scenario lifecycle. Confirmed the VoC insight in a different format — and gave senior leaders who weren't on the floor daily a way to feel the problem, not just understand it intellectually.

Method

Operational data review

Analyzed production data, incident reports, and performance trends to validate and quantify what operators were describing qualitatively — grounding the emotional finding in measurable business impact.
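A sketch of what that quantification can look like, using invented incident fields and numbers; the actual analysis ran against production systems with SQL, Minitab, and BI tooling:

```python
import pandas as pd

# Invented incident-log extract; real column names and values differed.
incidents = pd.DataFrame({
    "shift":        ["A", "A", "B", "C", "B", "A"],
    "cause":        ["manual_intervention", "equipment", "manual_intervention",
                     "manual_intervention", "quality", "manual_intervention"],
    "yield_loss_t": [12.0, 3.5, 9.2, 15.1, 4.0, 7.8],
})

# Tie the qualitative finding (intervention difficulty) to a measurable
# share of the losses recorded in the incident data.
by_cause = incidents.groupby("cause")["yield_loss_t"].agg(["count", "sum"])
share = by_cause.loc["manual_intervention", "sum"] / incidents["yield_loss_t"].sum()
print(by_cause)
print(f"Intervention-related share of recorded yield loss: {share:.0%}")
```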

Senior Operator user persona showing goals, frustrations, personality, motivation, and change approaches
User persona — Senior Operator
The most critical and most resistant persona. High influence on peers, deeply proud of their expertise, and highly sensitized to anything that suggested their judgment was being replaced. Designing for this person — not around them — was what made adoption possible.
Voice of customer synthesis showing the core insight and representative operator interview quotes
VoC synthesis — dominant theme
The largest theme by comment volume: operators could not intervene accurately or quickly enough when pre-existing automation fell short. Three representative quotes shown — each expressing the same experience in different ways, across different experience levels.
Operator journey map across Inspect, Assess, Prepare, Execute, and Adapt phases
Operator journey map
Confirmed the VoC finding spatially — the sharpest drop in experience quality occurs precisely at the moment intervention is required. For senior leaders who weren't on the floor when difficult schedules ran, this made the emotional reality of the problem visible in a way that interview summaries couldn't.
What surprised me — and why it mattered
I expected to find operational frustration. I didn't expect to find anxiety at that scale. The entire facility felt it when difficult production schedules arrived — operators, managers, everyone whose metrics were at the mercy of the day's production mix. There was a palpable powerlessness. Coming to work on certain days was genuinely stressful for people who cared deeply about their craft. That emotional reality gave the product its staying power: once we demonstrated it could actually reduce that anxiety and give operators a real leg up in those moments, adoption wasn't something we had to push. People wanted it.

Every product decision in this phase was traceable back to something a user said or a number confirmed.

Product definition on this program moved through four deliberate steps: problem statements → constraint-free ideation → solution alignment → prioritized MVP scope. Each step was designed to do specific work — not just sequence activities, but prevent the team from solution-jumping before the problem was sufficiently understood, and prevent groupthink from narrowing the solution space too early.

1
Problem statements — translating VoC into structured scope

Using the personas as guides, I generated persona-centered problem statements that translated VoC themes into actionable scope. Each statement mapped a specific user need to a solution requirement, giving the team a concrete target rather than an abstract theme. Critically, I used an anonymous idea collection method during this phase to reduce bias — ensuring that the diversity of experience and seniority in the room didn't cause quieter voices to defer to louder ones. The feedback was documented digitally and synthesized into value statements that every stakeholder could see themselves in.

The goal wasn't just to document problems — it was to surface the maximum number of synergies a single product could address. The more jobs-to-be-done one solution could handle, the broader the adoption case and the stronger the business justification.

Problem statements table showing persona-centered user needs mapped to solution requirements
Problem statements
Persona-centered statements mapped to solution needs — from yield loss and safety exposure to manual intervention burden. Each row represents a traceable line from user insight to product requirement.
2
Constraint-free ideation — expand before you narrow

Before reintroducing any constraints, I facilitated ideation sessions designed to generate a large number of potential solutions without filtering. No idea was out of bounds — no technical constraints, no budgetary limits, no constraints from HR, production schedules, time-to-market, management optics, or workforce culture. This was intentional: constraints introduced too early cause teams to anchor on what's feasible rather than what's valuable.

I prompted participants with questions designed to surface latent needs: what they'd want their systems to do differently, what upstream operations could do better, what information they'd need to make better decisions, and what good would actually look like if nothing was holding them back. Having the problem statements visible during this session accelerated alignment — it's easier to imagine solutions when you're looking at a structured problem than when you're working from memory.

3
Solution alignment — reintroduce constraints, converge on direction

Once the ideation space was fully explored, I reintroduced constraints to move the group toward a definable MVP. The team collectively aligned on an in-house automation product that would integrate a new control system with a novel closed-loop feedback design and enhanced model-learning capabilities. This was not an off-the-shelf solution — it required building something that didn't exist, calibrated specifically to the facility's equipment, product mix, and operator reality.

To define what the MVP would actually need to be and do, I scheduled requirements elicitation sessions across stakeholder groups — using formal interviews, informal conversations, and independent data gathering in parallel. The sessions were structured around seven core questions: what the solution would need to look like, how it should work, what it should never do, how end-users would use it, why non-functional stakeholders should care, which users wanted design input, and critically — if it worked as intended, would the end-user actually use it, and why or why not?

Product requirements hierarchy showing Functional, Technical, Non-Functional, Reliability, Security, and Integration categories
Requirements hierarchy
Six requirement categories built from elicitation sessions: Functional, Technical, Non-Functional, Reliability, Security, and Integration. Each requirement traceable to a stakeholder input or a constraint identified during solutioning.
4
RICE prioritization — a fair, transparent framework for hard tradeoffs

With a full feature list assembled, the next challenge was prioritization — and in a room with strong opinions and competing stakeholder interests, intuition-based prioritization creates friction. I used the RICE matrix to score each candidate feature against reach, impact, confidence, and effort. The transparency of the method did as much work as the scores themselves: when senior stakeholders could see exactly why one feature outranked another, alignment came faster and held longer.
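The arithmetic itself is simple. A sketch with placeholder inputs (the real reach, impact, confidence, and effort values came from the program's own scales and estimates, not these numbers):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE formula: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Placeholder candidates and inputs, for illustration only.
candidates = {
    "Closed-loop real-time correction": (500, 3.0, 0.8, 4.0),
    "Predictive alerting":              (300, 2.0, 0.7, 3.0),
    "Reporting dashboard":              (400, 1.0, 0.9, 2.0),
}

# Transparent ranking: every stakeholder can see why one feature
# outranks another by inspecting the same four inputs.
for name, inputs in sorted(candidates.items(),
                           key=lambda kv: rice_score(*kv[1]),
                           reverse=True):
    print(f"{name}: {rice_score(*inputs):.0f}")
```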

RICE prioritization matrix comparing four feature candidates by reach, impact, confidence, and effort with final RICE scores
RICE prioritization matrix
Four feature candidates scored against reach, impact, confidence, and effort. Closed Loop Real-Time Correction scored 94 — nearly double the next candidate — and became the undisputed anchor of the MVP. The method gave stakeholders a shared, defensible basis for every scope decision that followed.

We didn't launch to a production floor. We earned the right to be there.

Given the stakes — a product that would influence real-time decisions on a ~$1B production line — rollout had to be designed with the same rigor as the product itself. The goal of the launch sequence was not just to deploy software. It was to build operator trust, validate technical behavior under real conditions, and give the business confidence before expanding exposure.

Phase 1

Shadow runs

Product ran in parallel with existing controls, disconnected from live equipment. Validated calculated outputs, interface behavior, and user comprehension without any production risk.

Phase 2

Test-group launch

Released to a small cohort of early adopters across select operators and shifts. Collected structured feedback, monitored usage patterns, and resolved functional gaps before expansion.

Phase 3

Limited production launch

Expanded to additional operators and shifts while continuing to monitor product performance, production behavior, and stakeholder confidence metrics.

Phase 4

Full production launch

Deployed broadly across the operation with active KPI monitoring, a structured feedback loop, and regular post-launch reviews with operations leadership.
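One way to picture the gating discipline behind these phases: a hypothetical sketch with invented pass criteria standing in for the program's real gate definitions.

```python
from dataclasses import dataclass

@dataclass
class PhaseGate:
    phase: str
    pass_criteria: list  # every criterion must hold before expanding exposure

# Illustrative criteria only; the actual gates were defined with
# operations leadership against the pre-agreed success criteria.
ROLLOUT = [
    PhaseGate("Shadow runs", [
        "calculated outputs match existing controls within tolerance",
        "operators interpret the interface correctly without coaching",
    ]),
    PhaseGate("Test-group launch", [
        "no unresolved functional gaps from early-adopter feedback",
        "usage patterns stable across monitored shifts",
    ]),
    PhaseGate("Limited production launch", [
        "product and production KPIs at or above baseline",
        "stakeholder confidence metrics trending upward",
    ]),
    PhaseGate("Full production launch", [
        "KPI monitoring and structured feedback loop operational",
    ]),
]

def may_advance(gate: PhaseGate, results: dict) -> bool:
    """Expansion is allowed only when every criterion for the phase holds."""
    return all(results.get(criterion, False) for criterion in gate.pass_criteria)
```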

Launch plan — phased rollout sequence
Four gated phases — Shadow, Test-group, Limited production, Full production — each with explicit pass criteria before expansion.
Launch design principle
Adoption was not treated as a communications task. It was designed into the product strategy, the rollout sequence, and the post-launch feedback loop from day one.

I didn't convince operators to use the product. I built it with the ones who wanted to win — and let the results do the rest.

Senior operators at this facility had decades of equipment knowledge that I had to earn the right to understand. They had seen tools come and go, promises made and not kept, and management initiatives that didn't account for what the job actually felt like from the floor. Winning their trust wasn't a communications challenge. It was a design challenge — and it started with the 30 conversations I had before a single requirement was written.

What those conversations revealed wasn't just frustration. It was something more specific and more useful: the most experienced operators, despite their outward skepticism, were deeply competitive. They wanted to be the best. They wanted their shift to outperform the others. They might not say it openly, and some of them looked like they didn't care — but care was exactly what was driving the resistance. They were protecting their identity as the people who knew this equipment better than anyone.

The adoption wedge
Once I understood that competitiveness was the underlying driver, I stopped trying to convert skeptics and started identifying the operators who would welcome any tool that gave them an edge. I deliberately prioritized their shifts for testing and rapid prototyping — developing the product with them, not for them.

That decision changed everything about how adoption unfolded. The early adopters weren't passive testers — they were collaborators who helped shape the product's behavior through real production conditions. I shared performance metrics with them directly and consistently. They could see what the product was doing, when it was helping, and when it needed refinement. Over time, they stopped seeing it as my product and started treating it as theirs. They raved about it. And in a facility where operators do verbal handoffs across shifts — that word traveled.

By the time the other shifts started hearing about it, the product was already mature. It had handled complex production runs. The improvements were visible to any casual observer on the floor. The lagging shifts didn't need a sales pitch — they needed evidence, and the evidence was already there in the metrics and in what their colleagues were saying. That organic credibility is what enabled the transition to the limited production launch milestone: the product had already earned its place before we formally expanded it.

  • Identify the competitive operators, not just the open ones. Openness to change and desire to win are different things — and the latter is a much more durable motivation for adoption.
  • Develop with users, not for them. Early adopters who shape the product become advocates who sell it — without being asked to.
  • Share metrics directly with the people doing the work. When operators could see performance data in their own language, they stopped evaluating the product and started improving it.
  • Let the product's results reach the laggards before you do. Organic peer credibility is more durable than any training session or launch communication.
The outcome
By the end of the 60-day hypercare period — with the PM and Process Automation team on shift support — the product had reached >95% utilization 24×7. That number wasn't the result of a mandate. It was the result of operators who trusted what they were using, because they had helped build it.
Stakeholder map showing full network of Operations, Maintenance, Quality, Automation, and Project Sponsor groups connected to the SOLUTION center node
Stakeholder map
Adoption wasn't a single-audience problem. Operators, supervisors, managers, and senior leaders each had different trust thresholds, different information needs, and different definitions of "working." The stakeholder map shaped how I sequenced communications and evidence-sharing across the entire rollout.

The product created measurable value across users, operations, and the business.

Post-launch KPI monitoring tracked outcomes across four dimensions: user confidence, operational performance, production efficiency, and financial impact. The results reflected what happens when product design, launch strategy, and adoption management are treated as a unified system.

  • 10% increase in sales capacity
  • 30% improvement in a key department productivity metric
  • $13M+ in savings from improved operating efficiency
  • $5M+ in revenue enabled through improved yield
  • ~7% reduction in per-unit cost vs. baseline
Success criteria showing Objective, Key Results, and Key Product Metrics defined upfront before development began
Success criteria
Objective, key results, and product metrics defined before development began — the pre-agreed contract governing every Go/No-Go decision.

A highly successful program — and a clear-eyed look at what I'd sharpen next time.

This project delivered strong outcomes across every dimension we set out to improve. But successful programs teach you just as much as difficult ones — if you're willing to examine them honestly. Here's what I believe made this work, and where I'd operate differently with the benefit of hindsight.

What worked well
Stakeholder engagement

Continuous input, not periodic check-ins

Stakeholder involvement wasn't a phase — it was a constant. Building feedback loops into every stage of the lifecycle meant decisions were grounded in real input, not assumptions. It also neutralized a lot of the political friction that transformation efforts typically run into, because every major decision had broad organizational ownership baked in before it was final.

Problem approach

First principles over inherited assumptions

Applying Lean Six Sigma methodology — solving from first principles, with rigor and transparency at every step — meant senior leaders could see the logic behind every decision. There was no guessing, no corner-cutting, no "trust me" moments. That credibility was especially important when navigating complex integration challenges mid-program, where momentum could have stalled without leadership confidence.

User empathy

VoC and personas shaped more than the product

The decision to invest heavily in user research — VoC synthesis, persona development, observational usability testing — paid dividends well beyond product design. The nuance it gave me about senior versus junior operators changed how I wrote communications, structured training, sequenced the test group, and framed the value of the product to different stakeholder audiences. Understanding people deeply is a design tool, not just a research exercise.

Decision-making

Data gave the work credibility and momentum

Using SQL, Minitab, and BI tools to surface insights, build the business case, and track outcomes created a shared language with leadership that pure intuition never could. When I said something was working or wasn't, I had evidence — and that made it significantly easier to get alignment, resources, and air cover when the program needed it.

Adoption strategy

>95% usage 24×7 — because trust was designed in, not bolted on

Reaching greater than 95% utilization around the clock at the end of hypercare didn't happen because of a training program. It happened because end users were incorporated into every significant design and decision throughout the lifecycle. By the time we launched, the product wasn't something being done to operators — it was something they had helped build. The test group approach was particularly effective: two early-adopter groups engaged voluntarily, ran tests during live shifts, and became internal advocates before broad rollout began.

Execution discipline

Agility when it mattered most

A 21-month program with complex system integration across five engineering workstreams will hit walls. What mattered was how fast we moved through them. When complications arose, I focused the team on the product vision — what we were ultimately building and why it mattered — and moved quickly to unblock whatever was in the way. Keeping momentum on a long-cycle program is its own discipline, distinct from execution velocity on any individual task.

What I'd sharpen
Documentation practice

Make informal processes transferable

Sprint retrospectives and usability observations generated real insights — but they lived in notebooks and verbal debriefs rather than structured documents. The learning was captured in real time, but not always in a form another practitioner could pick up and use. On a multi-year program with multiple workstreams, investing more deliberately in turning informal process learning into transferable artifacts would have strengthened team continuity and created a richer institutional record of how decisions were actually made.

Measurement setup

Design measurement instrumentation in parallel with requirements

Success criteria and MVP exit criteria were defined early — which was the right call and shaped every Go/No-Go decision. But the instrumentation needed to track the more granular product metrics (usage rates, intervention frequency, quality attribution) wasn't fully in place until later in the testing phase. Running measurement design in parallel with requirements definition — not sequentially after — would have created cleaner pre/post comparisons and a stronger post-launch evidence base from day one of production testing.

Pilot structure

Structure pilot feedback to surface insights faster

The test group approach was one of the best decisions on this program — voluntary engagement, live shift conditions, real production stakes. Feedback collection was largely observational and conversational, which captured a lot but at a slower pace than it could have. Adding a lightweight structured debrief after each test session — a consistent set of questions, answered the same way every time — would have surfaced patterns earlier and created a more systematic evidence base for each design iteration.

The pattern I carry forward

The outcomes on this program were strong. But the thing I'm most proud of isn't the $13M in savings or the 30% productivity improvement — it's that we earned them. Every significant decision was grounded in user understanding, backed by data, and built with the people who would actually use it. That's the only way transformation work sticks. The product didn't succeed because we built something technically impressive. It succeeded because operators trusted it enough to run it 24 hours a day, seven days a week — and trust like that doesn't come from a launch plan. It's built across every conversation, every design decision, and every moment you choose transparency over convenience.