03 — UX Research · Travelers Insurance · 2025–2026

Click Test Research · Claude-Assisted Analysis

43 internal users. Nine findability tasks. Two workflows flagged as critical failures, including one with a 0% success rate — and a new AI-assisted synthesis method that got findings to leadership in a fraction of the usual time.

Travelers internal work — details available on request
AI-assisted synthesis — Claude used as research tool
43
Internal Participants Tested
9
Findability Tasks
71%
Overall Success Rate (95% CI: 65.8–75.8%)
/ The Problem

Internal users were experiencing friction locating critical information inside Agency 360 — a Salesforce platform supporting operational and decision-making tasks across business groups. Small findability failures were creating outsized risk: slow daily workflows, increased cognitive load, and a system that was quietly failing the people who depended on it most.

/ My Role

I served as primary owner of research synthesis and reporting. I led the translation of raw usability data into executive-ready findings — and pioneered the use of Claude as an AI synthesis aid to accelerate analysis, validate interpretations, and structure recommendations. All findings were reviewed and validated by the research team before delivery.

/ Research Findings

Not a training problem. An architecture problem.

Click tests were selected for this study because they allow precise, objective measurement of where users expect information to live. Success rates and perceived ease were analyzed together — distinguishing tasks that merely felt difficult from those that were objectively broken. Results were consistent across all business groups, confirming the issues stemmed from design and information architecture, not role-based knowledge gaps.
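The confidence intervals reported alongside the success rates (for example, the 95% CI on the overall rate) are the kind typically produced by an Adjusted Wald interval, a standard choice for small-sample usability task data. The study does not state which method was used, so the sketch below is illustrative only; the success and trial counts are assumptions for demonstration.

```python
import math

def adjusted_wald_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Adjusted Wald (Agresti-Coull) 95% interval for a task success rate.

    Adds z^2/2 pseudo-successes and z^2 pseudo-trials, then applies the
    usual normal-approximation interval -- better behaved than the plain
    Wald interval at the small sample sizes common in usability testing.
    """
    p = (successes + z * z / 2) / (trials + z * z)
    se = math.sqrt(p * (1 - p) / (trials + z * z))
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts: 43 participants x 9 tasks, ~71% overall success.
low, high = adjusted_wald_ci(successes=275, trials=387)
print(f"95% CI: {low:.1%} to {high:.1%}")
```

With a per-task n of 43 (or 7, for a single business group) the interval widens considerably, which is why the task-level CIs in the findings span tens of percentage points.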

/ Critical Issues Identified

Two tasks requiring immediate attention.

40% · Task 5 · 95% CI: 26.0–55.7%

Major usability problem affecting all users. More than half of participants failed this task, indicating a significant design flaw.

0% · Task 9 · Personal Insurance · n=7

Complete failure among Personal Insurance users — a showstopper that must be addressed before launch.

Both tasks met or exceeded usability benchmarks on perceived ease — users didn't know they were failing.

/ Outcomes
01

Critical Failures Surfaced

Two broken workflows, one with zero successful completions.

Two critical workflow failures, including one with a 0% success rate, were identified and escalated — triggering immediate prioritization discussions with product and leadership.

02

Architecture Indicted, Not Users

Same failure pattern across every business group.

Findings confirmed issues were consistent across all business groups, shifting the conversation from user training to information architecture redesign.

03

AI Synthesis Validated

Faster analysis. Same research rigor.

Claude-assisted analysis demonstrated that AI tools can meaningfully enhance research efficiency without sacrificing rigor — a methodology now available to the broader research team.

/ What I Learned
/ Selected Artifacts


01 · Critical Issues Identified — Tasks Requiring Immediate Attention
02 · Task-by-Task Success Rates — 95% Confidence Intervals
03 · Detailed Results — Per-Task Breakdown