
The Uncanny Valley: Why Consumers Distrust Lifelike AI

Despite the rise of voice assistants like Amazon Alexa, people are uncomfortable with lifelike AI.

For example, Google unveiled the “Duplex” feature for the Google Assistant last year. The human-sounding AI could make simple phone calls on behalf of users, mainly for booking restaurant reservations.

The AI sounded too lifelike. Call recipients reported feeling “creeped out” by the Duplex bot because it was almost indistinguishable from a human.

This is an example of “the uncanny valley” – the eerie feeling people get when human-like AI imitates a person yet falls short of seeming completely real. This gap in realism leads to feelings of revulsion and mistrust.

This article reviews the uncanny valley phenomenon, its consequences, and what AI needs in order to gain consumers’ trust going forward. Businesses considering AI should understand how this dynamic shapes consumer trust.

Inside the Uncanny Valley

AI’s increased realism is unnerving, but this negative emotional response is nothing new. Looking at lifelike dolls, corpses, and even prosthetic limbs can trigger the same effect.

This is because lifeless yet human-like objects remind us of our own mortality. Sci-fi and horror films exploit this phenomenon to great effect, conjuring images that are too close for comfort.

The following graphic depicts “the uncanny valley” as it relates to robots, dolls, and even zombies.

The uncanny valley

Source: TechTarget

Lifelike AI is also disturbing because humans are biologically incentivized to avoid those who look sick, unhealthy, or ‘off’. This evolved response, known as “pathogen avoidance,” protects us against dangerous diseases.

Lifelike AI seems almost human, but almost human is not enough.


Related read: Artificial Intelligence: a Pandora's Box or the Holy Grail?


People neither trust nor understand AI

Humans have evolved to control their environment. As a result, we hesitate to delegate tasks to algorithms that are neither fully understood nor failsafe.

So when AI fails to perform to human standards, as it often does, we take notice.

For example, Uber’s self-driving car has yet to operate safely on autopilot. According to research from UC Berkeley, one AI lending system charged minority borrowers higher interest rates on home loans. Even in the case of Google Duplex, users doubted whether the AI could correctly handle the simple details of a restaurant reservation.

AI is perceived as untrustworthy because its failures stick out, no matter how often it succeeds. Though convenience is appealing, consumers demand reliability, control, and comfort when using the technology.

Voice assistants like Amazon Alexa occupy a happy medium for users: the AI isn’t too lifelike, and users easily understand how to control it.

People only trust what they understand. But lifelike AI, like most AI, remains poorly understood.


Related read: What is a Chatbot and How to Use It for Your Business


Differentiation and understanding critical to trust

To gain trust, AI developers and businesses must create a more comfortable AI experience for users. Above all, this means the AI should look and sound less human.

People want technology such as Google Duplex to announce itself as AI; knowing they are speaking to a machine makes them more comfortable. Visually, AI can be designed to appear cute rather than anatomically accurate. If the AI is easily distinguishable from a human, people are more likely to adopt it.

Though many machine learning algorithms are too complex for most people to fully understand, transparency and explainability still engender trust.

To this end, sharing information about AI decision-making processes can shine a light into the “black box” of machine-learning algorithms. In one study, people were more likely to trust and use AI in the future if they were allowed to tweak the algorithm to their satisfaction.

This suggests that both a sense of control and familiarity are key to fostering acceptance for lifelike AI.

Finally, if consumers will not trust a business’s AI system, revert to the old-fashioned way and use humans to communicate with customers, seeking help from third-party sources like virtual assistants to ensure the task doesn’t become overwhelming.

Why consumers distrust lifelike AI

To open people up to lifelike AI, companies must avoid the uncanny valley. Familiarity, education, and visual distinction are needed to help people feel comfortable in the presence of humanoid technology.



This is a guest post by Riley Panko. Riley is Senior Content Developer for Clutch, a B2B research, ratings, and reviews agency in Washington, D.C. She is responsible for marketing Clutch’s business services segments, specifically voice services companies.