Click Test — Claude Research Analysis
43 internal users. Nine findability tasks. Two workflows with a 0% success rate — and a new AI-assisted synthesis method that got findings to leadership in a fraction of the usual time.
Internal users were experiencing friction locating critical information inside Agency 360 — a Salesforce platform supporting operational and decision-making tasks across business groups. Small findability failures were creating outsized risk: slow daily workflows, increased cognitive load, and a system that was quietly failing the people who depended on it most.
I served as primary owner of research synthesis and reporting. I led the translation of raw usability data into executive-ready findings — and pioneered the use of Claude as an AI synthesis aid to accelerate analysis, validate interpretations, and structure recommendations. All findings were reviewed and validated by the research team before delivery.
Not a training problem. An architecture problem.
Click tests were selected for this study because they allow precise, objective measurement of where users expect information to live. Success rates and perceived ease were analyzed together — distinguishing tasks that merely felt difficult from those that were objectively broken. Results were consistent across all business groups, confirming that the issues stemmed from design and information architecture, not role-based knowledge gaps.
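The pairing of objective success rates with perceived ease can be sketched as a simple classification. This is an illustrative sketch only — the task names, counts, and thresholds below are invented, not the study's actual data or benchmarks:

```python
# Hypothetical sketch: pairing objective success rates with perceived ease
# to flag "hidden failures" -- tasks users fail while reporting them as easy.
# All task names, counts, and scale values here are invented for illustration.

TASKS = {
    "Locate account team":   {"successes": 18, "attempts": 43, "mean_ease": 5.9},
    "Find renewal status":   {"successes": 0,  "attempts": 43, "mean_ease": 5.6},
    "View policy documents": {"successes": 39, "attempts": 43, "mean_ease": 6.4},
}

EASE_BENCHMARK = 5.5   # assumed perceived-ease benchmark on a 7-point scale
SUCCESS_FLOOR = 0.50   # below this, treat the task as objectively broken

def classify(task):
    """Label a task by combining accuracy (success rate) with perception (ease)."""
    rate = task["successes"] / task["attempts"]
    feels_easy = task["mean_ease"] >= EASE_BENCHMARK
    if rate < SUCCESS_FLOOR and feels_easy:
        return "hidden failure"    # users fail but believe they succeeded
    if rate < SUCCESS_FLOOR:
        return "visible failure"   # users fail and know it
    return "ok"

for name, task in TASKS.items():
    print(f"{name}: {classify(task)}")
```

A perception-only survey would surface none of the "hidden failure" rows — which is exactly the class of problem the two 0%-success workflows fell into.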
Two tasks requiring immediate attention.
Major usability problem affecting all users. More than half of participants failed this task, indicating a significant design flaw.
Complete failure among Personal Insurance users — a showstopper that must be addressed before launch.
Both tasks met or exceeded usability benchmarks on perceived ease — users didn't know they were failing.
Critical Failures Surfaced
Two workflows. Zero successful completions.
Two workflows with 0% success rates were identified and escalated — triggering immediate prioritization discussions with product and leadership.
Architecture Indicted, Not Users
Same failure pattern across every business group.
Findings confirmed issues were consistent across all business groups, shifting the conversation from user training to information architecture redesign.
AI Synthesis Validated
Faster analysis. Same research rigor.
Claude-assisted analysis demonstrated that AI tools can meaningfully enhance research efficiency without sacrificing rigor — a methodology now available to the broader research team.
- Accuracy-based testing exposes what perception-based testing hides.
Users on two critical workflows reported feeling successful while failing completely. Without click testing, those failures would have been invisible in a standard satisfaction survey.
- AI-assisted synthesis changes what's possible for a small team.
Using Claude to accelerate analysis and structure findings let me focus on insight quality and decision-making rather than manual reporting. The output was sharper and reached leadership faster.
- When results are consistent across groups, the system is broken — not the users.
Uniform failure patterns across all business units are the clearest possible signal that the problem is structural. That finding gave leadership the confidence to act.