Overview
The company was at an early stage of scaling its AI-powered seafloor intelligence software. The product automated seafloor scanning and analysis, combining underwater imagery and sensor data to support environmental monitoring, infrastructure inspection, and related applications. As its AI capabilities scaled, a critical question emerged:
"Should this product exist in its current form?
And if so, how should humans and AI collaborate within it?"
Responsibility
My work focused on validating both the product’s relevance and its usability: first, assessing whether users genuinely needed and would adopt an AI-driven solution for seafloor examination; and second, understanding how users interpret, trust, and collaborate with AI-generated insights in real-world decision-making.
Process
- Mixed-methods research
- Think-aloud sessions
- Comparative task scenarios
- Thematic analysis
Problem
The company was investing in advanced AI capabilities, but two critical uncertainties remained. It was unclear whether users would consistently rely on software-based seafloor scanning rather than existing manual or semi-automated methods. At the same time, early users expressed uncertainty around interpreting AI outputs, raising concerns about trust and correct usage.
The challenge was ensuring the product met real market needs and supported confident user decisions.
Research Goal
The research aimed to validate whether AI-driven seafloor scanning addressed a real and recurring user need, and whether users would integrate the software into their existing workflows. In parallel, it examined how users understood and trusted AI-generated insights, and what UX patterns were required to support safe and effective human–AI collaboration.
Assumptions
- Users need faster and more scalable alternatives to manual seafloor inspection.
- AI-generated insights can reduce time-to-decision if they are interpretable and trustworthy.
- Adoption depends on users understanding when to rely on automation and when to apply expert judgment.
In short, the goal was to de-risk an early-stage AI product: validating whether users actually needed automated seafloor scanning, and understanding what it would take for them to trust and use AI outputs in real operational decisions.
UX Research
The discovery phase combined mixed-method research with workflow analysis to understand user needs, motivations, and constraints.
The research explored how users currently scan and analyze seafloor data, where existing approaches break down, and whether AI-driven software could realistically replace or augment these workflows. In parallel, it evaluated early software concepts and outputs to understand how users perceived AI recommendations and whether they felt confident acting on them.
This dual focus made it possible to validate problem–solution fit while surfacing usability risks at the same time.
Core Success Signals & Proxy Metrics
This framework was a design deliverable: a structured set of observable signals built to stand in for long-term adoption data at an early stage, giving the team a testable definition of what "working" would look like before live usage data existed.
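One way to make such a framework testable is to express each proxy signal as a named stand-in for a long-term outcome, paired with an agreed threshold. The sketch below is illustrative only: the signal names, outcomes, thresholds, and observed values are hypothetical, not data from this study.

```python
from dataclasses import dataclass

@dataclass
class ProxySignal:
    """One observable signal standing in for a long-term outcome."""
    name: str
    long_term_outcome: str  # the outcome this signal approximates (e.g. adoption)
    threshold: float        # the team's agreed definition of "working"
    observed: float         # value measured during research sessions

    def is_met(self) -> bool:
        # The signal passes when the observed value reaches the threshold.
        return self.observed >= self.threshold

# Hypothetical signals illustrating the structure of the framework.
signals = [
    ProxySignal("task_completion_without_help", "adoption", 0.80, 0.90),
    ProxySignal("agreement_with_ai_recommendation", "trust", 0.70, 0.60),
]

# Signals that fall short flag where the product definition is still at risk.
unmet = [s.name for s in signals if not s.is_met()]
```

Because each signal carries its own threshold, the team can review the `unmet` list after every research round and treat it as an explicit risk register rather than a vague sense that "trust seems low."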
Limitations & Next Steps
This research was conducted at an early discovery stage; proxy metrics were used to estimate adoption, trust, and efficiency before real-world usage data was available.
Future validation would include tracking live adoption rates, reliance on AI recommendations, and time-to-decision in operational settings to confirm long-term impact.
