How to design interfaces that communicate limitations and uncertainty. Build trust without deception
In many products, artificial intelligence is becoming a co-participant in decision-making rather than a mechanism operating in the background.
It assesses credit risk, recommends content, supports diagnoses, and suggests responses in customer service. With this shift, the responsibility of designers increases. We are no longer designing only human–system interactions. We are designing the human-AI relationship.
This relationship differs from traditional digital interactions. AI does not operate deterministically. It works on probability, estimation, and incomplete data. By nature, it is burdened with uncertainty. If the interface does not communicate this uncertainty, an information asymmetry emerges: the user attributes to the system a stability and objectivity it does not actually possess. It is precisely in this space that both excessive trust and later disappointment arise.
Building trust without deception therefore means designing interfaces that present the system’s capabilities and limitations in a way that is understandable, appropriate, and honest.
Trust calibration as a starting point
In AI design, trust calibration is increasingly discussed. This approach assumes that the goal is not to maximize trust, but to align it with the system’s actual capabilities. Users should neither trust too much nor too little. They should trust appropriately.
In contexts such as finance, medicine, or recruitment, excessive trust can lead to unreflective acceptance of recommendations. Conversely, too little trust results in ignoring accurate guidance. The interface is where this balance is shaped. It is within the interface that users build their mental model of how AI works.
If the system never signals uncertainty, users may assume that every recommendation has the same level of accuracy. If messages about limitations are too technical or hidden, they cease to serve their purpose.
What can support trust calibration
Introducing visible confidence levels for predictions, allowing users to distinguish between more and less certain situations (see the sketch after this list).
Communicating the context in which a recommendation was generated, e.g., “based on historical data from the past 12 months.”
Designing moments of reflection in which users can consciously approve a decision.
Testing whether users truly understand when the system may be wrong.
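To make the first two items concrete, here is a minimal sketch in TypeScript of how a raw model score and a data window might be turned into a user-facing confidence message. The thresholds, field names, and wording are assumptions made for this example, not recommendations.

```typescript
// Minimal sketch, assuming a hypothetical recommendation feature: a raw model
// score and a data window are mapped to a user-facing confidence message.
// Thresholds and wording are placeholders and would need validation against
// the model's real calibration and against user testing.

type ConfidenceBand = "low" | "medium" | "high";

interface CalibratedMessage {
  band: ConfidenceBand;
  label: string;                // short, non-technical wording shown in the UI
  context: string;              // where the recommendation comes from
  suggestVerification: boolean; // nudge toward a reflection step when uncertain
}

function toCalibratedMessage(rawScore: number, dataWindowMonths: number): CalibratedMessage {
  const band: ConfidenceBand =
    rawScore >= 0.85 ? "high" : rawScore >= 0.6 ? "medium" : "low";

  const labels: Record<ConfidenceBand, string> = {
    high: "High confidence, based on strong signals in your data",
    medium: "Moderate confidence, worth a quick review",
    low: "Low confidence, please verify before acting",
  };

  return {
    band,
    label: labels[band],
    context: `Based on historical data from the past ${dataWindowMonths} months.`,
    suggestVerification: band !== "high",
  };
}

// A 0.58 score surfaces as "low" with an explicit verification prompt.
console.log(toCalibratedMessage(0.58, 12));
```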
Cognitive biases and dark patterns
Excessive trust often stems from automation bias: the tendency to regard system decisions as more accurate than one’s own. If the interface further reinforces authority through absolute language, visual emphasis, or the absence of uncertainty information, a subtle dark pattern emerges. The user is guided toward acceptance without real awareness of limitations.
Explainability as an element of responsibility
Explainability is sometimes treated as a technical add-on. In practice, however, it is an element of design responsibility. Without explanations, users lack the tools to assess the quality of system decisions. In such situations, AI errors are perceived as arbitrary and unfair.
Well-designed explanations do not involve presenting complex model parameters. They involve showing which factors influenced a decision and what its limitations may be. This enables users to maintain a critical stance rather than relying blindly.
How Explainability can be designed
Creating layered explanations: a short summary for most users, with the option to explore deeper details for experts (see the sketch after this list).
Using language tailored to the user’s level of knowledge.
Indicating not only the reasons behind a decision but also situations in which the recommendation may be less accurate.
Designing error messages as a normal part of system operation rather than as exceptions.
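As an illustration of layered explanations that also surface limitations, the sketch below shows one possible shape for such an object. The structure, field names, and example content are assumptions rather than an established explainability schema.

```typescript
// Minimal sketch of a layered explanation object for a hypothetical loan
// recommendation. Structure and field names are illustrative assumptions.

interface LayeredExplanation {
  summary: string;        // one sentence for most users
  keyFactors: string[];   // the main inputs that influenced the decision
  limitations: string[];  // situations in which the recommendation may be weaker
  expertDetails?: string; // optional deeper layer, shown only on request
}

const loanExplanation: LayeredExplanation = {
  summary: "This offer is based mainly on your repayment history and current income.",
  keyFactors: ["On-time repayments over the last 24 months", "Stable declared income"],
  limitations: [
    "Recent changes in employment are not yet reflected in the data.",
    "Estimates are less reliable for very short credit histories.",
  ],
  expertDetails: "Feature-level attributions are available in the audit view.",
};

// The UI shows `summary` by default and expands the remaining layers on demand,
// so experts can dig deeper without overwhelming everyone else.
console.log(loanExplanation.summary);
```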
Design risks
Pseudo-explanations that sound professional but provide no real information can create an illusion of transparency. This is a form of dark pattern in which users feel they understand, while in reality they do not. It is often accompanied by the illusion of explanatory depth: the belief that we understand something better than we actually do.
Predictability as a foundation of cognitive comfort
In the human-AI relationship, predictability proves to be just as important as effectiveness. Users can accept imperfection if system behavior is consistent. Variability they cannot understand is much harder to accept.
If AI communicates uncertainty in one context but not in a similar one, users lose their reference point. Cognitive load increases because they must constantly monitor the system.
What supports predictability
A consistent method of presenting confidence levels and limitations.
A uniform structure for error messages, as sketched after this list.
A clear distinction between autonomous actions and those requiring approval.
Analyzing the time and frequency of manual corrections as a signal of declining trust.
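The uniform message structure mentioned above could, for example, take the following shape; the field names and copy are assumptions for illustration only.

```typescript
// Minimal sketch of one shared shape for AI status and error messages, so that
// limitations are always communicated in the same way. Field names and copy
// are assumptions, not a required format.

type AIMessageKind = "result" | "low-confidence" | "error" | "needs-approval";

interface AIStatusMessage {
  kind: AIMessageKind;
  headline: string;   // consistent, plain-language first line
  detail: string;     // what happened and why, in the same tone every time
  userAction: string; // what the user can do next (retry, correct, approve)
}

// Because every message uses the same structure, users learn where to look for
// the limitation and for their next step, which lowers the cost of monitoring.
const timeoutMessage: AIStatusMessage = {
  kind: "error",
  headline: "We could not generate a recommendation this time",
  detail: "The analysis did not finish, so no partial result is shown to avoid guessing.",
  userAction: "Try again, or continue without the recommendation.",
};

console.log(timeoutMessage.headline);
```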
Potential risks
Hiding the system’s error history or changing communication style depending on business goals can lead to a loss of credibility. The recency effect and negativity bias mean that a single negative episode can strongly influence the overall evaluation of the system.
Human in the loop and a sense of agency
As AI systems become more autonomous, the need to protect user agency grows. A human-in-the-loop approach assumes that a human remains a participant in the decision-making process, even when AI performs a significant portion of the analysis.
In practice, this means designing interfaces that do not remove the possibility of correction or revision. A sense of control significantly influences system acceptance.
How to strengthen agency
Providing options to modify analysis parameters.
Clearly indicating moments when the decision belongs to the user.
Presenting alternative scenarios instead of a single recommendation.
Designing default settings in a transparent manner.
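Taken together, these points describe a decision step in which nothing is applied without an explicit user outcome. The sketch below shows one possible shape for such a step; the types, names, and example values are hypothetical.

```typescript
// Minimal sketch of a human-in-the-loop decision step. The types are
// hypothetical; the point is that nothing is applied until the user explicitly
// accepts, modifies, or rejects the recommendation.

interface Recommendation {
  id: string;
  summary: string;
  alternatives: string[]; // alternative scenarios, not a single answer
}

type DecisionOutcome =
  | { status: "accepted"; recommendationId: string }
  | { status: "modified"; recommendationId: string; changes: string }
  | { status: "rejected"; recommendationId: string; reason?: string };

function applyIfApproved(rec: Recommendation, outcome: DecisionOutcome): string {
  if (outcome.status === "accepted") {
    return `Applying recommendation ${rec.id} as proposed.`;
  }
  if (outcome.status === "modified") {
    return `Applying the user-adjusted version of ${rec.id}: ${outcome.changes}`;
  }
  // Rejection leaves things as they were: no automatic action is taken.
  return `Recommendation ${rec.id} discarded; nothing was changed.`;
}

const rec: Recommendation = {
  id: "rec-42",
  summary: "Reduce the campaign budget by 10%",
  alternatives: ["Keep the budget and extend the duration", "Pause the campaign for a week"],
};

console.log(applyIfApproved(rec, { status: "modified", recommendationId: rec.id, changes: "Reduce by 5% instead" }));
```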
Risk of manipulation
Default settings that lead to automatic acceptance, difficulty reversing decisions, or hiding options to reject recommendations are examples of dark patterns. In this area, the default effect and status quo bias are particularly visible.
Humanizing AI without crossing boundaries
Humanization can reduce distance and increase interaction comfort. At the same time, it activates mechanisms of anthropomorphization. Users begin attributing intentions, empathy, and deeper understanding to the system.
In sensitive contexts, this can lead to excessive trust. If the system simulates emotions while failing to communicate its limitations, the boundary between support and manipulation begins to blur.
What can support responsible humanization
Clearly communicating that the conversation is with an AI system.
Using a neutral, supportive tone instead of imitating a personal relationship.
Avoiding messages that induce guilt or pressure users to continue the conversation.
Designing pathways to redirect users to real human assistance in crisis situations.
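As a small illustration of the first and last items, the sketch below pairs an upfront disclosure with a simple hand-off rule. The trigger phrases and copy are placeholders; real escalation criteria require domain and safety expertise, not keyword matching alone.

```typescript
// Minimal sketch of an upfront AI disclosure and a simple hand-off rule for a
// support chat, assuming a hypothetical assistant. Phrases and copy are placeholders.

const aiDisclosure =
  "You are chatting with an AI assistant. You can ask for a human at any time.";

const escalationPhrases = ["talk to a human", "speak to a person", "i need help now"];

function shouldHandOffToHuman(userMessage: string): boolean {
  const text = userMessage.toLowerCase();
  return escalationPhrases.some((phrase) => text.includes(phrase));
}

console.log(aiDisclosure);
console.log(shouldHandOffToHuman("Please, I need help now")); // true
```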
Cognitive biases and dark patterns
Anthropomorphism bias and the overtrust effect may cause users to lower their vigilance. Emotional messages designed to keep users engaged in conversation are examples of manipulative design patterns.
AI patterns influencing trust
In AI interface design, characteristic patterns emerge that directly influence trust levels.
1. Model Selection
Choosing a model means choosing a priority, not “better intelligence”
Model selection is a pattern in which users can choose the AI’s operating mode or model. When designed responsibly, it does not suggest a hierarchy of “better–worse,” but instead presents different trade-offs: speed vs. accuracy, creativity vs. rigor, low cost vs. deep analysis.
If AI co-participates in decisions, different tasks require different levels of precision and caution. A brainstorming mode should not be communicated in the same way as a medical analysis mode.
Gemini Model Selection
When designed responsibly, it can:
Enable conscious alignment of AI capabilities with the task.
Reduce frustration resulting from “unexpected” answer quality.
Strengthen the sense of control and transparency.
Allow management of computational costs without hidden limitations.
Support different risk levels, from creative to regulated contexts.
ElevenLabs Model Selection
What is crucial in design
Mode names should describe trade-offs, not levels of “intelligence.”
Differences between models should be clearly explained.
A higher mode should not be communicated as a guarantee of correctness.
The system may suggest a more suitable mode for a task, but should not impose it.
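A model picker built around these principles might be configured roughly as in the sketch below; the option names and attributes are invented for illustration and do not describe any real provider’s models.

```typescript
// Minimal sketch of a model picker configuration described in terms of
// trade-offs rather than levels of "intelligence". Names and attributes are
// invented and do not refer to any real provider's models.

interface ModelOption {
  id: string;
  name: string;           // describes the trade-off, not a level of expertise
  optimizedFor: string;
  tradeOff: string;       // what the user gives up by choosing this mode
  suggestedFor: string[]; // the system may suggest a mode, but never impose it
}

const modelOptions: ModelOption[] = [
  {
    id: "fast",
    name: "Fast drafts",
    optimizedFor: "Speed and low cost",
    tradeOff: "Less thorough reasoning; answers may need more review",
    suggestedFor: ["brainstorming", "rough summaries"],
  },
  {
    id: "careful",
    name: "Careful analysis",
    optimizedFor: "Depth and caution",
    tradeOff: "Slower and more expensive; still not a guarantee of correctness",
    suggestedFor: ["financial reports", "regulated content"],
  },
];

// The system can propose a mode for a task, but the final choice stays with the user.
function suggestModel(task: string): ModelOption | undefined {
  return modelOptions.find((option) => option.suggestedFor.includes(task));
}

console.log(suggestModel("brainstorming")?.name); // "Fast drafts"
```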
Poorly designed model selection can reinforce authority bias and overtrust, especially when names suggest levels of expertise.
2. Confidence Indicator
Communicating not only the result but also its certainty
A confidence indicator is visible information about the certainty level of an answer or prediction. It may take the form of a label, scale, color, or short message.
Because AI operates on probability, the absence of uncertainty information encourages automation bias. Users may treat every answer as equally reliable.
ConsensusAI Confidence Indicator
When designed responsibly, it can:
Help align trust with the situation.
Reduce frustration with weaker answers.
Increase the sense of transparency.
Support different risk levels.
Trigger additional verification in high-stakes contexts.
What is crucial in design
The scale should be understandable, not overly technical.
Low confidence levels may be paired with suggestions for verification.
Color usage should not imply guaranteed safety.
The indicator should be genuine, not consistently “high.”
A confidence indicator is one of the most important tools for trust calibration. It demonstrates that the system does not hide its limitations but communicates uncertainty honestly.
Extend Confidence Indicator
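A minimal sketch of such display rules is shown below; the bands, copy, and tones are assumptions for illustration, not a finished component.

```typescript
// Minimal sketch of display rules for a confidence indicator. The key rules:
// weaker answers carry a verification hint, and no wording implies a guarantee.

type Band = "low" | "medium" | "high";

interface IndicatorView {
  text: string;
  tone: "neutral" | "cautionary"; // deliberately no "green means safe" styling
  verificationHint?: string;      // shown alongside weaker answers
}

function renderIndicator(band: Band): IndicatorView {
  if (band === "low") {
    return {
      text: "The system is not confident about this answer.",
      tone: "cautionary",
      verificationHint: "Check the cited sources or consult a colleague before acting.",
    };
  }
  if (band === "medium") {
    return {
      text: "Moderately confident; some inputs were incomplete.",
      tone: "neutral",
      verificationHint: "A quick review is recommended for important decisions.",
    };
  }
  // Even the highest band avoids wording that implies guaranteed correctness.
  return { text: "Confident, based on consistent signals.", tone: "neutral" };
}

console.log(renderIndicator("low").verificationHint);
```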
Legal and ethical transparency
In regulated contexts, transparency is not only a value but a requirement. Users should know what data is being processed and what limitations the system may have.
What can support transparency
Clear sections such as “How does this recommendation work?”
Information about data sources and their potential limitations.
Easy access to privacy policies written in understandable language.
Inclusion of diverse stakeholder groups in the design process.
Hiding key information in lengthy terms and conditions, using complex language, or obtaining consent to broad data processing by default are classic examples of dark patterns supported by information overload.
Trust as a process
Trust is not a visual effect or a single successful interaction. It is a process that develops over time, especially in moments of error.
A system that can acknowledge uncertainty, clearly communicate its limitations, and allow user correction builds a durable relationship. A system that hides its limitations may gain adoption faster, but will lose trust just as quickly.
BehaviorAI is a responsible AI design library.
The goals of the project are:
combining inspiration with an interpretive layer and responsible AI design,
organizing dispersed knowledge and use-case examples,
building greater trust in curated content compared with generic AI responses.
The project is aimed at product teams, designers, and product managers working on AI products in small and medium-sized technology companies that implement AI without mature responsible AI standards.
Project value: PLN 462,551.00
European funds contribution: PLN 322,911.34
The Startup Booster for Social Impact project of SWPS University is implemented with funds from the Polish Agency for Enterprise Development (PARP) under Priority II of the European Funds for a Modern Economy 2021–2027 (FENG) programme, in accordance with the grant agreement No. FENG.02.28-IP.02-0003/23-00.




