
Designing conversational AI from a UX perspective

As AI continues to change how business gets done, the link between design, language, and conversation grows even closer. What are the best ways to create conversational AI from a human-centred perspective?


Conversational AI is reshaping software by not only enhancing chatbot interactions, but also enabling fully automated services that operate without human intervention. We’re seeing these services being added to software products at a remarkable speed. And that raises the question: how do we ensure these human-to-AI conversational experiences are not only useful but trustworthy?

To craft conversational experiences responsibly and effectively, we have to recognise AI’s role in design and understand the nuance of language and conversation.

AI through the lens of design

When we think about AI from a design perspective, these are some of the foundational qualities we need to consider in order to shape it responsibly:

  • Prediction: a system’s ability to recognise patterns and predict an outcome based on data. The design challenge here is to understand users’ needs, business goals, and how these correlate to data needs.
  • Adaptivity: a system’s ability to adjust to the situation and the user. The design challenge here is to design interactions that support continuous data collection.
  • Initiative: a system’s ability to act on its own. The design challenge here is to address different types of initiatives – different situations where the system and user take control.
  • Agency: a system’s ability to make its own decision without a human (autonomy), learn from previous experiences (adaptivity), and perceive and interact with human and artificial agents (interactivity). The design challenge here is to design strategies around feedback and control in a way that builds trust and confidence.

Creating a system of trust

AI can be designed to assist or collaborate with humans to complete tasks, or even complete tasks on a human’s behalf. That’s why trust in the AI system is vital. 

The goal when we design AI from a human-centred perspective is to empower the user in their work with appropriate levels of AI agency. Different levels are suitable for different user needs and business goals, and balancing automation and human control is key to a successful system:

  1. An assistive service has low autonomy and adaptivity and is highly dependent on the user interacting with the system to complete tasks. Example: a customer service chatbot answering a question. 
  2. An agentive service has high autonomy, adaptivity, and interactivity, and combines them in a way that allows us to utilise AI’s capability to adapt and autonomously complete tasks while interacting with the user. Example: autocorrect suggesting words while you’re typing. 
  3. An automated service has high autonomy, low or high adaptivity, and low interactivity. In this case, the user has configured the service to complete a certain task autonomously without human intervention. Example: an automatic door opening by itself.
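The three levels above can be read as combinations of the agency qualities discussed earlier. A minimal sketch in Python, purely for illustration – the profile fields and the classification rules are assumptions drawn from the descriptions above, not from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    autonomy: str       # "low" or "high"
    adaptivity: str     # "low" or "high"
    interactivity: str  # "low" or "high"

def classify(profile: ServiceProfile) -> str:
    """Map a quality profile onto the assistive/agentive/automated levels."""
    if profile.autonomy == "low":
        return "assistive"   # the user drives every step
    if profile.interactivity == "high":
        return "agentive"    # acts autonomously, but interacts with the user
    return "automated"       # runs without human intervention

chatbot = ServiceProfile(autonomy="low", adaptivity="low", interactivity="high")
autocorrect = ServiceProfile(autonomy="high", adaptivity="high", interactivity="high")
automatic_door = ServiceProfile(autonomy="high", adaptivity="low", interactivity="low")
```

Framing the levels this way makes the design decision explicit: choosing a level means choosing how much autonomy, adaptivity, and interactivity the user actually needs.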

Each of these levels serves a different purpose. When implemented and executed correctly – meaning the user gets the level of autonomy, adaptivity, and interactivity they expect – it helps to build trust and confidence in the AI system.

Building conversational design around language

In this domain, words become the primary material we shape to craft interactions between users and systems. These conversations are a blend of intents (what users want to do), utterances (how they communicate their intents), and the resulting dialogue. Within this, context emerges as a vital component, offering the AI system a way to build meaning based on historical interactions and situational awareness. 
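To make these building blocks concrete, here is a deliberately naive sketch of intents and utterances in Python. The intent names and example utterances are invented, and the keyword-overlap matching stands in for the trained NLU models real systems use:

```python
# Each intent (what the user wants to do) is paired with example
# utterances (how users express it).
intents = {
    "check_invoice_status": {
        "utterances": ["where is my invoice", "invoice status", "has my invoice been paid"],
    },
    "reset_password": {
        "utterances": ["i forgot my password", "reset my password", "cannot log in"],
    },
}

def match_intent(utterance: str):
    """Pick the intent whose example utterances share the most words
    with the user's input. Real systems use statistical models and
    conversational context instead of raw word overlap."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for name, intent in intents.items():
        for example in intent["utterances"]:
            overlap = len(words & set(example.split()))
            if overlap > best_overlap:
                best, best_overlap = name, overlap
    return best
```

Even at this toy scale, the limits of pure word matching are visible – which is exactly where context and dialogue history earn their place as design material.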

Something that’s equally important is the personality of the AI: the character traits, tone, and behaviour that make interactions feel more natural and human. 

When infusing personality into AI, it can be beneficial to use a structured and intentional approach. To make sure that the personality of our conversational AI is appropriate for the context of use, we need to define interaction goals, level of personification, power dynamics, character traits, tone, and key behaviours. In Conversations with Things: UX Design for Chat and Voice (2021), Deibel and Evanhoe provide a useful framework for designing elements such as:

  • Interaction goals – the three or four most important success factors of the interaction.
  • Level of personification – ranges from low to medium to high, and concerns factors such as where it is suitable to use, how it builds trust, and whether it needs a name.
  • Power dynamics – the power that the human and AI has, how intimate the relationship needs to be, and if it will change over time.
  • Character traits – traits used to support the interaction goals, like straightforward, humorous, or empathetic. 
  • Tone – how formal or casual, expert or novice, warm or cold, and excited or calm the personality of the system is.
  • Key behaviours – key situations that demand consistent behaviour, such as being interrupted or not knowing the answer.
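In practice, a persona defined with this framework can live as a structured artefact that designers and developers share. A hypothetical sketch – the field names mirror the elements above, while the values (goals, traits, scripted responses) are invented for illustration:

```python
# Hypothetical persona definition; all values are illustrative.
persona = {
    "interaction_goals": ["resolve the question quickly", "reduce escalations", "build trust"],
    "personification": "medium",  # has a name, but is clearly not human
    "power_dynamics": "user leads; the assistant suggests, never insists",
    "character_traits": ["straightforward", "empathetic"],
    "tone": {"formality": "casual", "expertise": "expert", "warmth": "warm", "energy": "calm"},
    "key_behaviours": {
        "unknown_answer": "I'm not sure about that. Let me connect you with a colleague.",
        "interrupted": "No problem. What would you like to do instead?",
    },
}

def scripted_behaviour(situation: str) -> str:
    """Return the consistent, pre-agreed response for a key situation,
    falling back to the unknown-answer behaviour."""
    return persona["key_behaviours"].get(
        situation, persona["key_behaviours"]["unknown_answer"]
    )
```

Keeping key behaviours in one shared definition is one way to get the consistency the framework asks for: the system reacts the same way to being interrupted or stumped, no matter which part of the product the conversation happens in.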

Of course, the system’s entire personality should align with its purpose and the expectations of users to ensure a cohesive user experience.

A few of the biggest challenges

Trust in conversational technology is still a work in progress, shaped by users’ experiences with early, less-capable systems. The complexity of language, as well as humans’ ability to notice problems in conversations, adds another layer of difficulty – requiring designers to be deliberate in the design of these systems.

Users often have high expectations of AI’s understanding capabilities, so aligning those expectations with the system’s actual capabilities is crucial. This can be done by communicating how the system works through feedback and explanations.
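One common expectation-management technique is to let the system hedge or ask for clarification instead of always answering. A minimal sketch, assuming the underlying model exposes a confidence score; the thresholds and wording are illustrative:

```python
def respond(answer: str, confidence: float) -> str:
    """Shape the reply based on how confident the system is,
    so the user's expectations track the system's real capability."""
    if confidence >= 0.8:
        return answer
    if confidence >= 0.5:
        # Signal uncertainty so the user can correct course.
        return f"I think this is what you're looking for, but please double-check: {answer}"
    return "I'm not confident I understood that. Could you rephrase your question?"
```

Admitting uncertainty costs a little polish in the moment but protects trust over time: a system that confidently gives wrong answers erodes confidence far faster than one that occasionally asks for help.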

Making sure ethical design principles are non-negotiable

Conversational AI should be transparent about its capabilities and limitations, respect user privacy, and adhere to regulatory standards. It must also cater to a diverse range of users, considering various backgrounds, abilities, and needs. In terms of conversational logic, consistency and accuracy are key to maintaining user trust and ensuring the system can handle varying degrees of detailed information.

As with anything relating to AI, transparency and explainability are critical. Users should be able to understand the AI’s reasoning and behaviour, especially when it behaves unexpectedly. But this information must be conveyed without overwhelming them, striking a balance between clarity and simplicity. It’s a balancing act for sure, but it’s crucial to the overall user experience. The design challenge here includes explaining the data, how the algorithms use it, and what the user can do to modify the outcome of the system, without disrupting the workflow.
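One way to strike that balance is to layer the explanation: a one-line "why" up front, with the data used and the user's options available on demand. A hypothetical sketch – the structure and field names are invented, not from any specific product:

```python
def explain(action: str, data_used: list, user_control: str) -> dict:
    """Pair a system action with a short headline reason and
    deeper details the user can expand if they want them."""
    return {
        "action": action,
        "why": f"Based on {', '.join(data_used)}.",          # shown up front
        "details": {                                          # shown on demand
            "data_used": data_used,
            "how_to_change": user_control,
        },
    }

suggestion = explain(
    action="Suggested category: Travel expenses",
    data_used=["the supplier name", "your three previous bookings"],
    user_control="Pick a different category to teach the system your preference.",
)
```

The headline answers "why am I seeing this?" without breaking the workflow, while the details satisfy users who want to dig deeper or change the outcome.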

Final thoughts

Designing effective conversational AI is a multifaceted challenge that requires a blend of technical skill, a deep understanding of user experience, and a commitment to ethical principles. Additionally, while this article focuses on UX design, it ultimately requires the collaboration of a multidisciplinary team to create a great conversational AI service. 

By focusing on the different qualities of AI, understanding the complexities of language, and taking a human-centred approach, we can create conversational interfaces that are not only functional but also intuitive and trustworthy. This journey of design is one of continuous learning, collaboration, and iteration, mirroring the very nature of the AI systems we seek to create.
