Case 02 · Amazon Robotics

Autonomous fleet monitoring improvements.

Proteus — Amazon's first autonomous drive unit — is a collaborative robot that transports carts of customer packages from collection chutes to outbound truck loading. I led the operator-facing UX for Proteus: the on-robot indicators, the floor-monitor resolution workflows, and the HRI framework guiding the Gen2 hardware redesign.

A fleet of Proteus autonomous drive units staged inside a fulfillment center

Role

Senior UX Lead — HRI framework + Gen2 hardware UX

Team

Hardware, robotics SW, ops, program

Timeline

2024 – Present · Gen2 in Alpha

System misalignment at scale

Proteus was planned to scale to 14 additional sites throughout 2025, averaging 150 bots per site. At the same time, second-generation bot development kicked off with goals to improve safety sensor capability while reducing per-bot cost.

Carts staged for outbound loading in a Proteus-served area of a fulfillment center
Proteus operates collaboratively alongside humans — every issue, edge case, and exception has to be resolved by a human Floor Monitor in real time.
  • Traffic management. Floor Monitors were required to manually clear traffic jams and navigate bots out of deadlocks.
  • Issue response times. Resolving Proteus-related issues consumed 77% of a Floor Monitor's time during a shift, with resolution averaging 6 to 23 minutes per issue.
  • Competing program timelines. Short-term software improvements for Gen1 had to be balanced with long-term hardware updates for Gen2.
77%

of a Floor Monitor's shift was consumed by Proteus issue response — with single-issue resolution averaging 6 to 23 minutes.

Diagnosing system friction

I defined the research strategy for fleet management — scoping it to cover both the bot's behavior and the human's workflow — and led the four research streams that surfaced where the existing system was failing.

A Floor Monitor interacting with a Proteus bot — pressing the action button on the bot's control interface during a live issue
Shadowing Floor Monitors during live shifts — capturing the physical movements, button presses, and decisions a single issue actually demands of a human responder.

Findings showed that operators were navigating fragmented workflows across multiple touchpoints — with limited visibility into task progression, error conditions, or issue resolution status.

Visualizing touchpoint usage

Research findings were translated into a touchpoint analysis — a swim-lane view across user need, workflow, monitoring tool, radio, joystick, and every Proteus output (LEDs, eyes, LCD display, accessory buttons, sound, e-stop, spotlight) — so product and engineering could see the full surface area a single use case spans.

Touchpoint analysis diagram for Proteus — a swim-lane spanning user need, workflow, bot inputs and outputs across LEDs, eyes, LCD, accessory buttons, sound, e-stop and spotlight
Example touchpoint analysis — Use Case 1: manual intervention with the bot. Operators navigated fragmented workflows across all of these distinct touchpoints, with limited visibility into task progression, error conditions, or issue-resolution status.

Structural redesign

To increase resilience and clarity of system status, I developed an HRI framework focused on structural simplification — not purely aesthetic updates.

HRI framework matrix mapping control mode, health status, action, and the corresponding state indicator across LED, sound, and display surfaces
The HRI framework I structured for Proteus — mapping every combination of control mode (autonomous vs. manual), health status (healthy vs. unhealthy), and required action to a consistent set of state indicators across LEDs, eyes, sound, and accessory buttons.
Operator indicator design — pairing every operator need with a specific multi-modal signal across the bot's eyes, front/sides underglow, and rear underglow
Operator-indicator design derived from the framework — every operator need (no WAN, no Wi-Fi, paired joystick, ready to release, fault) gets a deliberate, multi-modal signal that reads from any angle on the floor.
Removing redundant joystick commands — a gamepad with redundant buttons highlighted in orange, marked for removal in Gen2 hardware
Applying the framework: I identified opportunities to remove excessive touchpoints, reduce resolution steps, and eliminate redundant joystick commands — favoring simplification and Gen2 cost reduction over more hardware indicators.
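The framework's core idea, that every combination of control mode and health status resolves to exactly one multi-modal signal, can be sketched as a simple lookup table. All names below (state values, LED patterns, eye animations, sound cues) are hypothetical placeholders for illustration, not the shipped mapping:

```python
# Illustrative sketch of the HRI framework's state-to-indicator mapping.
# Every (control mode, health) pair maps to exactly one multi-modal
# signal, so no bot state is ambiguous on the floor. All values here
# are hypothetical placeholders, not the production framework.
from dataclasses import dataclass
from enum import Enum


class ControlMode(Enum):
    AUTONOMOUS = "autonomous"
    MANUAL = "manual"      # e.g. joystick-paired


class Health(Enum):
    HEALTHY = "healthy"
    UNHEALTHY = "unhealthy"


@dataclass(frozen=True)
class Indicator:
    led: str    # underglow / LED pattern
    eyes: str   # eye animation
    sound: str  # audio cue


# One row per state combination — complete coverage, no overlaps.
STATE_INDICATORS = {
    (ControlMode.AUTONOMOUS, Health.HEALTHY):   Indicator("solid_green",  "neutral", "none"),
    (ControlMode.AUTONOMOUS, Health.UNHEALTHY): Indicator("pulsing_amber", "alert",  "chime"),
    (ControlMode.MANUAL,     Health.HEALTHY):   Indicator("solid_blue",   "paired",  "none"),
    (ControlMode.MANUAL,     Health.UNHEALTHY): Indicator("pulsing_red",  "fault",   "alarm"),
}


def indicator_for(mode: ControlMode, health: Health) -> Indicator:
    """Resolve a bot's current state to its single indicator set."""
    return STATE_INDICATORS[(mode, health)]
```

Structuring the mapping as an exhaustive table (rather than scattered per-feature rules) is what makes gaps and redundant signals visible at design time.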

Align autonomy with human oversight. When a fleet scales by an order of magnitude, the human-readable signal has to scale with it — or response times balloon.

Evaluating effectiveness

I used a combination of quantitative performance and qualitative satisfaction data to validate HRI framework concepts, software feature releases, and new Gen2 hardware components; the same evaluation grid gates both Gen1 software releases and Gen2 hardware decisions.

Every shipped Gen1 software release and every proposed Gen2 hardware component lands on this grid as a single point. Only changes that land in the green region ship; yellow iterates, red goes back to redesign.

Operational impact

While Gen2 bot designs are currently in Alpha, short-term software releases to live sites have already produced measurable results.

−50%

Reduced onsite staffing burden

Reduced number of support tickets

Reduced training time for new operators

−5%

Lower overall Gen2 bot cost from simplified design

