Mar 9, 2026

How AI User Testing Changed the Way We Validate Design Decisions

AI saliency prediction gave us objective, impartial evidence about how users would experience a design, before we committed a single euro to development.


One of the most persistent problems in product design is the gap between how designers think users will experience their work and how users actually experience it. The best teams close this gap with research. Most teams close it with opinion — and usually the most senior opinion in the room wins.

On the AA Ireland projects — both the main website redesign and the MyAA membership flow — we used a bespoke AI Saliency Prediction model that replicates human visual behaviour with 95% accuracy to get objective data on how real users would process our designs before any development investment was made.

What Saliency Prediction Actually Measures

Saliency prediction models are trained on large datasets of eye-tracking studies — recordings of where real humans look when they encounter a visual interface. A well-trained model can predict, with high accuracy, which elements of a new design will attract attention, which will be overlooked, and what the likely reading path through a page will be.

This is enormously useful for answering specific design questions: Will users see this call to action before they see this competing element? Is this error message getting attention at the right moment? Does this headline actually land before the user scrolls past it?
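As an illustration of how such a question can be framed quantitatively (this is a hypothetical sketch, not the bespoke model used on the AA Ireland projects), a predicted saliency map can be reduced to a mean attention score inside each region of interest and the scores compared. The map values and bounding boxes below are made up:

```python
import numpy as np

def region_attention(saliency_map: np.ndarray, box: tuple) -> float:
    """Mean predicted saliency inside a (top, left, bottom, right) box."""
    top, left, bottom, right = box
    return float(saliency_map[top:bottom, left:right].mean())

# Hypothetical 100x100 saliency map: values in [0, 1], higher = more attention.
rng = np.random.default_rng(0)
saliency_map = rng.random((100, 100)) * 0.2
saliency_map[10:30, 10:50] += 0.6   # simulate an attention hotspot

cta_box = (10, 10, 30, 50)          # hypothetical call-to-action bounds
competitor_box = (60, 10, 80, 50)   # hypothetical competing element

cta = region_attention(saliency_map, cta_box)
rival = region_attention(saliency_map, competitor_box)
print(f"CTA attention {cta:.2f} vs competing element {rival:.2f}")
```

If the competing element scores higher than the call to action, the hierarchy is working against the conversion goal and the layout needs rebalancing before any user ever sees it.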

Two-Step Validation

On the AA Ireland MyAA project, we ran a two-step validation process. First, we applied the saliency model to the existing membership flow to diagnose where the current design was failing — where attention was being misdirected, where key conversion messages weren't landing, and where the visual hierarchy was working against comprehension.

Second, once the redesign was complete, we reapplied the same model to the new designs, using the same objective methodology to confirm that the new designs actually resolved the attention problems the diagnosis had surfaced. This gave us confidence before prototype testing, and again before build.

What It Doesn't Replace

AI saliency testing is a complement to human research, not a replacement for it. It tells you where attention goes; it doesn't tell you what people feel, what confuses them, or what mental models they bring to an interface. For that, you still need qualitative research, moderated usability testing, and direct observation.

The combination of objective attention data and qualitative insight is significantly more powerful than either alone — and it's a combination I now treat as standard practice on any high-stakes product design engagement.

  • Use saliency testing early to diagnose existing designs before proposing solutions.
  • Reuse the same model post-redesign to validate improvement — not just as a one-time diagnostic.
  • Always pair quantitative attention data with qualitative user research for the full picture.
