AI-Powered Experiences Designed for Trust, Control, and Adoption
The breakthrough of AI into the mainstream marks a turning point in modern product design. The possibilities and expectations of the technology exploded in 2023. This project explores a clean and transparent approach to designing AI-powered features and AI-powered environments.
Lake Oswego, Oregon
2015
Automotive Tech
$9.2 million (2025)
100 Employees
Challenge
As AI capabilities became core differentiators in our product suite, we faced a common enterprise SaaS dilemma: how to introduce automated recommendations without eroding user trust or disrupting established workflows. The underlying technical models were strong, but early user feedback revealed that customers saw the AI as a black box, which limited adoption and, in some cases, generated resistance from power users.
In a SaaS environment where trust, predictability, and operational control directly impact user retention and expansion revenue, what may seem like a surface-level UX issue was a strategic business risk.
My responsibility was to define a design strategy that would:
Increase adoption of AI-driven recommendations
Preserve user confidence and control
Integrate AI into existing workflows without creating additional cognitive burden
I led this work as the principal designer responsible for interaction strategy, prototyping, and aligning cross-functional teams on a unified vision.
Design Strategy & Decisions
From the start, the goal was to “design for trust, transparency, and control in an AI-assisted SaaS workflow.” This became our north star.
1. Transparent rationale layer
Instead of showing recommendations only as output, we surfaced:
Key factors influencing each recommendation
Option for users to see and edit those factors
This shifted the experience from predictive guesswork to collaborative suggestion.
2. Adjustable control panel
Users needed the ability to fine-tune the behavior of the AI within their existing workflow:
Global controls for strategy preferences
Per-instance overrides
Undo/redo history
These were designed as lightweight panels that fit into existing dashboard structure rather than modal interruptions.
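The layered control model above can be sketched in code. This is a minimal, hypothetical TypeScript sketch, not the product's actual API: the names (AIStrategySettings, resolveSettings, SettingsHistory) and the settings fields are assumptions chosen to illustrate global preferences, partial per-instance overrides, and a simple undo/redo stack.

```typescript
// Hypothetical sketch of layered AI controls: global strategy preferences,
// per-instance overrides, and undo/redo history. Names are illustrative.

interface AIStrategySettings {
  aggressiveness: "conservative" | "balanced" | "aggressive";
  autoApply: boolean;
}

// Overrides are partial: any field left unset falls through to the globals.
type InstanceOverrides = Partial<AIStrategySettings>;

function resolveSettings(
  globals: AIStrategySettings,
  overrides: InstanceOverrides
): AIStrategySettings {
  // Object spread: later properties win, so overrides shadow globals.
  return { ...globals, ...overrides };
}

// A minimal undo/redo history over settings changes.
class SettingsHistory {
  private past: AIStrategySettings[] = [];
  private future: AIStrategySettings[] = [];
  constructor(private current: AIStrategySettings) {}

  apply(next: AIStrategySettings): void {
    this.past.push(this.current);
    this.current = next;
    this.future = []; // a new change invalidates the redo stack
  }

  undo(): AIStrategySettings {
    const prev = this.past.pop();
    if (prev !== undefined) {
      this.future.push(this.current);
      this.current = prev;
    }
    return this.current;
  }

  redo(): AIStrategySettings {
    const next = this.future.pop();
    if (next !== undefined) {
      this.past.push(this.current);
      this.current = next;
    }
    return this.current;
  }

  get value(): AIStrategySettings {
    return this.current;
  }
}
```

The design choice to keep overrides partial matters for the UI: a per-instance panel only needs to store what the user actually changed, so clearing an override cleanly reverts that instance to global behavior.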
3. Progressive disclosure
For new or less technical users, we kept the initial interface simple, with optional “advanced explanation” for teams who wanted deeper insight.
Throughout, I partnered closely with engineers and data scientists to ensure our solutions respected the underlying model constraints, as well as performance and security requirements.
Trust and data
Foureyes is a platform that allows users to take in and interact with data about their business and prospects. Through consistency and reliability, Foureyes has built trust with its users in this data. Every data platform knows that trust is hard earned and easily lost. It only takes a few noticed inconsistencies for users to distrust and abandon a platform. Maintaining this trust was of the utmost priority going into this work.
Sources vs Summaries
Because Foureyes is a data platform, its users were accustomed to seeing hard data in front of them: records of the goings-on of their business.
One of the first ways AI entered the Foureyes platform was in the form of AI-generated summaries of phone calls.
We wanted to be intentional about how we distinguished the parts of the app that were hard fact from those that were AI-generated, processed information. We also wanted to be transparent about how confident we were in that AI-generated information.
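One way to make that distinction enforceable in the UI is to carry provenance on every displayed record. This is a minimal sketch under assumed names (Provenance, DisplayRecord, provenanceLabel, and the 0.8 confidence cutoff are all illustrative, not the actual Foureyes data model):

```typescript
// Illustrative provenance tagging: hard facts vs. AI-generated output.
// A discriminated union forces the UI to handle both cases explicitly.

type Provenance =
  | { kind: "source" }                            // verbatim record, e.g. a call log
  | { kind: "ai-generated"; confidence: number }; // model output, confidence in [0, 1]

interface DisplayRecord {
  text: string;
  provenance: Provenance;
}

// Map provenance to the label a user would see next to the record.
// The 0.8 threshold is an assumed cutoff for flagging low confidence.
function provenanceLabel(p: Provenance): string {
  if (p.kind === "source") return "Record";
  return p.confidence >= 0.8 ? "AI summary" : "AI summary (low confidence)";
}
```

Typing provenance as a union rather than a boolean flag means confidence can only exist on AI-generated content, so a "source" record can never accidentally render with a confidence badge.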
Since then, over the last two years, AI has made a more profound impact on the functionality of the Foureyes platform, most notably with the launch of PE+, a new product that allows salespeople to leverage AI for outreach and prospect engagement.
Three-actor interactions
As this project took shape, it became clear that conversations within the Foureyes platform involved three actors: the customer, the car dealer, and the AI sales agent.
There were points in development where we considered hiding this fact from the customer, with the car dealer dropping seamlessly into conversations between the customer and the AI agent. But our approach of elevating transparency helped clarify the expectations each actor had of these conversations.
Although AI is getting closer and closer to mimicking what a conversation between two humans looks like, there is simply no replacing the connection people build with one another. Instead of taking on the impossible task of having AI replace that connection, we make it abundantly clear when someone is talking to a robot.
In this way AI supplements the process of outreach instead of degrading it.
Conclusion
The year brought many lessons in designing AI-powered features. New accessibility considerations, considerations of transparency and clarity, and defining the narrative of AI involvement and value are all lessons I carry forward in my process now.
The work materially affected key SaaS metrics:
Adoption of AI recommendations increased by 41% within 8 weeks of launch
User modifications of AI parameters increased by 59%, signaling confidence
Time-to-task completion decreased by 23%, directly affecting daily operational efficiency
Feature engagement heatmaps showed reductions in hesitation and exit points
Sales and support teams reported fewer objections rooted in mistrust, and product analytics showed higher retention among users who engaged with the transparent UI layer compared to those who didn’t.