How UX personas made our AI training data more inclusive
Briefly

"My role was straightforward: write queries (prompts and tasks) that would train AI agents to engage meaningfully with users. But as a UXer, one question immediately stood out - who are these users? Without a clear understanding of who the agent is interacting with, it's nearly impossible to create realistic queries that reflect how people engage with an agent. That's when I discovered a glitch in the task flow. There were no defined user archetypes guiding the query creation process. Team members were essentially reverse-engineering the work: you think of a task, write a query to help the agent execute it, and cross your fingers that it aligns with the needs of a hypothetical "ideal" user - one who might not even exist."
"In UX design, we've long recognized the danger of designing for an "ideal user" - typically someone who looks like the design team, thinks like them, and has the same access to education and resources. The same risk exists when training AI agents. Only the stakes are even higher. Every query we write teaches the agent what "normal" human interaction looks like. When those queries are skewed in one direction, the agent's behavior becomes skewed in the same way."
"For my first task, I did what any UXer would do: I spoke to real AI users across different domains. One insight stood out: there's a significant difference in how people interact with AI. A UX designer working at a tech company might prompt: "Can you audit the GreenView App Design file in Figma and identify the three frames with the most comments from team members?" A business owner who's not fluent in English might prompt: "I need make list of things finishing in shop." A neurodivergent user struggling to articulate a complex task might type fragmented thoughts, or even need the agent to help structure their promp"
Introducing a UX persona reshaped how the team created AI training queries. The training role involved writing prompts and tasks for AI agents, but the process lacked clarity about who the users actually were. Without defined user archetypes, queries were reverse-engineered toward a hypothetical "ideal" user. Designing for that ideal user risks bias, because every query teaches the agent what "normal" interaction looks like. Conversations with real users revealed large differences in prompting style across roles, language fluency, and neurodivergence, underscoring the need for representative personas in prompt design.
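
To make the persona-driven approach concrete, here is a minimal sketch in Python of how such archetypes could be encoded alongside example queries. It is not from the article: the Persona fields, the coverage_report helper, and the third example query are hypothetical illustrations; the first two queries are the ones quoted above.

from dataclasses import dataclass, field

@dataclass
class Persona:
    """A user archetype that guides training-query creation."""
    name: str
    role: str
    language_fluency: str      # e.g. "fluent" or "non-fluent" English
    communication_style: str   # e.g. "precise", "terse", "fragmented"
    example_queries: list[str] = field(default_factory=list)

# Archetypes mirroring the prompting styles described in the article.
PERSONAS = [
    Persona(
        name="UX designer at a tech company",
        role="designer",
        language_fluency="fluent",
        communication_style="precise",
        example_queries=[
            "Can you audit the GreenView App Design file in Figma and "
            "identify the three frames with the most comments from team members?"
        ],
    ),
    Persona(
        name="Business owner, not fluent in English",
        role="small-business owner",
        language_fluency="non-fluent",
        communication_style="terse",
        example_queries=["I need make list of things finishing in shop."],
    ),
    Persona(
        name="Neurodivergent user",
        role="general user",
        language_fluency="fluent",
        communication_style="fragmented",
        # Hypothetical example of fragmented input the agent may need to
        # help structure, as the article describes.
        example_queries=["stock list?? shop. things running out - help me start"],
    ),
]

def coverage_report(personas: list[Persona]) -> None:
    # Print each archetype's queries so reviewers can confirm the
    # training set covers every persona, not just an "ideal user".
    for p in personas:
        print(f"{p.name} ({p.communication_style} style, {p.language_fluency} English)")
        for q in p.example_queries:
            print("  -", q)

if __name__ == "__main__":
    coverage_report(PERSONAS)

Running the report makes gaps visible: if an archetype has no queries for a given task area, the training set is skewed toward the personas that do.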
Read at Medium