
Designing for Doubt: Building User Trust in an Age of AI Hallucinations
In a rush and in a bind, the only ways any of us seem to be these days, I grabbed a research paper I should have gotten to last week and asked my ever-so-trustworthy AI bestie to summarize it.
As always, it delivered a beautifully written, coherent summary in seconds. It even quoted a key statistic that sounded perfect for my presentation.
That’s without me even asking it to. Weird, but good. I’ll take it!
The only problem? The statistic was completely fabricated. My partner-in-just-about-everything-I-do didn’t just misread the paper; it invented a plausible-sounding fact out of thin air. And it made that fact so convincing that I wouldn’t have bothered to check it at all, had the number not seemed absurd even at first glance.
This is an “AI hallucination,” and it represents one of the most profound UX challenges of our time. And it makes me want to break up with my best friend. Except, I know I won’t!
Why This is a Serious Problem
As designers, we’ve spent decades building user trust through signals of reliability: clean interfaces, fast performance, and polished copy.
But in the age of generative AI, these signals are no longer enough.
In fact, a seamless, confident UI presenting a hallucinated fact is even more dangerous than a poorly designed one. It weaponizes our design principles against the user’s critical thinking.
The core of the problem is a “confidence gap.”
Today’s AI models are engineered for fluency, not necessarily for truth. They will generate text that is grammatically perfect and tonally confident, regardless of the factual accuracy of the underlying data.
As designers working at this new frontier, our role has to evolve. We must shift from solely designing for trust to actively designing for doubt—creating experiences that encourage healthy skepticism and empower users to verify information for themselves.
To build a more resilient, honest, and ultimately more trustworthy relationship between humans and AI, let me walk through four practical UX strategies.
Show Your Work: Proactive Transparency and Source Citation
The single most effective way to ground a generative AI is to force it to show its work. An answer without a source is just an assertion. An answer with a clear, verifiable source is evidence. This is the difference between an oracle and a research assistant, and we must design for the latter.
In a Nutshell: Integrating verifiable citations directly into the AI’s output, making it immediately clear where the information comes from.
Key UX Patterns:
- Inline Citations: Just like in a well-researched article, key statements in the AI’s response should have small, clickable footnote numbers or links. Tapping a citation should instantly reveal the source document, URL, or even the specific highlighted passage from which the information was drawn.
- Source Panels: Accompanying the AI’s response with a dedicated panel listing all consulted sources gives users a “bibliography” to explore. This allows them to quickly assess the quality of the sources (e.g., a scientific journal vs. a random blog) and dig deeper if needed.
- Distinguishing Inferred vs. Sourced Content: When an AI synthesizes information to create a new thought, the UI should make that clear. The interface can visually differentiate between text that is directly extracted from a source and text that is a novel inference generated by the model.
Platforms like Perplexity AI are pioneering this approach, building their entire experience around sourced answers. This simple act transforms the AI from a “black box” into a transparent and auditable tool.
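To make this pattern concrete for the team that has to build it, here is a minimal TypeScript sketch of a response model that carries its citations with it and distinguishes sourced from inferred text. Every type, field, and function name here is hypothetical, a sketch of the idea rather than any product’s actual API.

```typescript
// Hypothetical response model: every segment of the answer declares whether
// it was extracted from a source or inferred by the model.
interface Source {
  id: string;
  title: string;
  url: string;
  excerpt?: string; // the highlighted passage revealed when a citation is tapped
}

interface AnswerSegment {
  text: string;
  kind: "sourced" | "inferred"; // lets the UI style the two differently
  sourceIds: string[];          // empty for purely inferred segments
}

interface GroundedAnswer {
  segments: AnswerSegment[];
  sources: Source[]; // the "bibliography" rendered in a source panel
}

// Render plain text with footnote-style markers like [1] after sourced segments.
function renderWithCitations(answer: GroundedAnswer): string {
  const index = new Map<string, number>();
  answer.sources.forEach((source, i) => index.set(source.id, i + 1));

  return answer.segments
    .map((seg) => {
      const marks = seg.sourceIds
        .map((id) => (index.has(id) ? `[${index.get(id)}]` : ""))
        .join("");
      return seg.kind === "sourced" ? seg.text + marks : seg.text;
    })
    .join(" ");
}
```

The rendering detail matters less than the data model: once every segment knows where it came from, inline citations, source panels, and sourced-versus-inferred styling all become straightforward UI decisions.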
Frame the Tool Correctly: Setting Clear Expectations Upfront
User trust begins with the very first interaction. How we frame the AI’s capabilities from the outset has a massive impact on how users will interact with it. We must be honest about what the tool is—and what it is not.
In a Nutshell: Using onboarding, microcopy, and UI framing to position the AI as a powerful but imperfect assistant, not an all-knowing expert.
Key UX Patterns:
- Strategic Microcopy: The language we use is critical. Instead of a prompt that says, “Ask me anything,” a more honest alternative would be, “I can help you brainstorm, summarize, and draft ideas. Please verify important information.” The disclaimer below ChatGPT’s input field (“…may produce inaccurate information…”) is a necessary, if blunt, example of this.
- Role-Based Naming: Call the AI a “co-pilot,” an “assistant,” or a “creative partner.” These labels inherently suggest a collaborative relationship rather than a transfer of authority.
- Informative Onboarding: Use the first-run experience to educate users. A simple, one-time screen that says, “I’m an AI and I can make mistakes. I’m best used for creativity and first drafts, not for critical fact-checking,” can set the relationship on the right foot. A sketch of how this kind of copy might be kept in one place follows this list.
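If it helps to picture how this shows up in a codebase, here is a tiny, purely illustrative sketch of framing copy centralized in one auditable place, so it is easy to spot anywhere the product starts over-promising. The names and strings below are examples, not any product’s real copy.

```typescript
// Illustrative framing copy, kept together so it can be audited in one pass.
export const framingCopy = {
  assistantLabel: "Research co-pilot", // role-based naming, not an all-knowing oracle
  inputPlaceholder:
    "I can help you brainstorm, summarize, and draft ideas. Please verify important information.",
  inputDisclaimer:
    "Responses can be inaccurate. Check important facts before relying on them.",
  onboardingNotice:
    "I'm an AI and I can make mistakes. I'm best used for creativity and first drafts, not for critical fact-checking.",
} as const;
```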
Encourage the Fact-Check: Designing for Verifiability
A trustworthy system isn’t one that demands blind faith; it’s one that invites scrutiny. Instead of designing seamless, frictionless paths to an answer, we should be designing interfaces that make it incredibly easy for users to question and verify that answer.
In a Nutshell: Building tools and pathways directly into the UI that help users validate the AI’s claims in real-time.
Key UX Patterns:
- The “Verify This” Button: Place a small button next to key facts or statements generated by the AI. Clicking it could trigger a traditional, targeted web search for that specific claim, giving the user immediate access to a broader set of sources (see the sketch after this list).
- Highlighting Ambiguity: If the AI draws from sources that disagree on a point, the UI shouldn’t hide this conflict. It should bring it to the surface. Imagine a sentence highlighted in yellow with a note: “Sources offer conflicting information on this topic.” This respects the user’s intelligence and exposes the nuance of the subject matter.
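As a rough sketch of how lightweight the verification path can be, here is what the handler behind a “Verify This” button might look like, assuming nothing fancier than handing the claim off to a standard web search in a new tab. The function names are hypothetical.

```typescript
// Hypothetical "Verify This" handler: hand the exact claim off to a
// traditional web search so the user can cross-check it in one click.
function buildVerificationUrl(claim: string): string {
  // Quoting the claim biases the search toward close matches of the wording.
  const query = encodeURIComponent(`"${claim}"`);
  return `https://www.google.com/search?q=${query}`;
}

function onVerifyClick(claim: string): void {
  // Open the search in a new tab so the AI conversation stays where it is.
  window.open(buildVerificationUrl(claim), "_blank", "noopener");
}
```

The point isn’t the search engine; it’s that questioning the AI costs one click instead of a context switch.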
The Power of “I Don’t Know”
Perhaps the most damaging behavior of current AI models is their tendency to invent an answer when they don’t have one. A core tenet of designing for doubt is to build systems that have the humility to admit when they are out of their depth.
In a Nutshell: Programming and designing the AI to gracefully and helpfully decline to answer questions when it lacks sufficient, high-quality data.
Key UX Patterns:
- Polite Refusals: Instead of generating a hallucination, the AI should be designed to respond with, “I don’t have enough reliable information to answer that question accurately.”
- Suggesting Alternative Paths: A refusal shouldn’t be a dead end. The AI can follow up with a helpful suggestion: “Would you like me to help you search for articles or research papers on this topic instead?” This maintains a sense of utility while being honest about its limitations, as in the sketch below.
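Here is a minimal sketch of that behavior, assuming the system exposes some retrieval-confidence signal it can check before answering. The threshold, types, and function names are all illustrative assumptions, not a reference implementation.

```typescript
// Hypothetical refusal logic: if retrieval can't ground the answer well enough,
// decline politely and offer an alternative path instead of guessing.
interface RetrievalResult {
  passages: string[];
  confidence: number; // 0..1: how well the retrieved evidence covers the question
}

const MIN_CONFIDENCE = 0.6; // illustrative threshold, tuned per product

function composeAnswer(question: string, retrieval: RetrievalResult): string {
  if (retrieval.passages.length === 0 || retrieval.confidence < MIN_CONFIDENCE) {
    return (
      "I don't have enough reliable information to answer that question accurately. " +
      "Would you like me to help you search for articles or research papers on this topic instead?"
    );
  }
  // Otherwise, generate the answer grounded in the retrieved passages.
  return generateGroundedAnswer(question, retrieval.passages);
}

// Placeholder for the actual model call, assumed to exist elsewhere in the system.
declare function generateGroundedAnswer(question: string, passages: string[]): string;
```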
Building a Relationship We Can Trust
Let’s dive into designing for doubt. Let’s part ways with the usual Silicon Valley pursuit of polished, “magical” experiences and take a step toward something more genuine.
Let’s aim for transparency and collaboration, not to override a user’s judgment with an AI’s bold certainty, but to empower their critical thinking.
In my own work, I’ve found myself forgiving a tool’s shortcomings when they’re communicated to me clearly. Trust doesn’t falter when my AI makes a mistake; it breaks when my AI delivers a confident falsehood.
In my playbook, the most reliable AI of tomorrow won’t claim to have every answer. It will be the one that’s upfront about its limits, respects the user’s intelligence, and equips them to navigate complexity with clarity and confidence.
Let’s create technology that doesn’t dazzle with illusions but partners with users, saying, “Here’s what I know and here’s what I don’t. But I’m here for you, and we can sort through this together.”
Now that’s a bestie I’d ride into battle with!