People hold different beliefs about AI: some advocate for a utopian future in which AI will solve grand challenges, while others hold less optimistic views, ranging from mild reluctance to serious concern.

Intelligent products and services have arrived

In recent years, a growing number of products and services have been marketed as intelligent. Intelligent vacuum cleaners such as Xiaomi's Roborock, intelligent personal assistants such as Amazon's Alexa or Google Home, and intelligent refrigerators such as Samsung's Smart Refrigerator have settled into hundreds of thousands of people's homes. Artificial intelligence (AI) has become a common denominator for a wide range of autonomous products and services. However, even though intelligent products and services have successfully engaged and charmed their users, people often demonstrate an aversion to AI.

Sometimes, we need a human

When people need healthcare advice, a funny joke, a good movie recommendation, or some emotional encouragement, they still tend to turn to fellow humans and disregard AI recommendations. This algorithmic aversion stems from various beliefs people hold about AI: that AI cannot take people's unique circumstances into account, that it is impossible to understand how AI arrives at a particular recommendation, or that AI lacks empathy.

AI is here to stay

Though people tend to favor human judgement over AI in many contexts, ignoring AI is not always an option. The “presence” of AI in people’s lives will only grow and become more tangible in the future. In many cases, AI can save resources or deliver better service to a larger number of people. For example, intelligent assistants, such as service robots working in hospitals or daycare centers, do not discriminate against any group of people when interacting with them, nor do they get tired of working long hours. This makes them efficient and highly qualified service providers. Put simply: AI is here to stay.

Algorithmic aversion might prevent people from benefitting from products and services marketed as AI. Overcoming this barrier to the adoption of AI requires cultivating trust in AI.

Trust: Overcoming fears of being hurt

From research on interpersonal trust we know that trust implies disclosure. Trusting someone means sharing information in situations where one has no formal guarantee that the other person will keep their part of the bargain. Disclosing one’s opinions or sharing views with another person can make people vulnerable to being hurt. Given that there is always a possibility of being put at risk or at a disadvantage, how do people choose whom to trust? Two important qualities determine whether a person will be trusted: competence and goodwill. When a person is considered competent to carry out the task in question (e.g. keep a secret), they are likely to be considered trustworthy. Another reason we are willing to allow ourselves to be vulnerable is that we believe the trustee has good intentions, or in other words, that the trustee deeply cares about carrying out the task they were entrusted with.

Can AI be trustworthy? Can AI instill perceptions of competence and goodwill?

AI and Trust

Competence manifests when one has the ability to perform a task both well and efficiently. Ensuring that AI does not put any group of people at a disadvantage is one of the important steps towards facilitating trust in AI. Many scholars and policymakers have advocated for the importance of using heterogeneous data to design better and more accurate algorithms.

Goodwill is another important precondition for developing trust attitudes. Research on brand management suggests that a “go-to” strategy for ensuring that products convey good intentions has been to make these products look like humans. Designing products and services with a human voice or a human appearance, or that make jokes and laugh, has indeed been shown to increase emotional attachment to such products. Interestingly, people are not always comfortable when AI looks like a human. Because people are aware that they are interacting with a non-human agent, be it a robot or software, they feel tricked when AI looks too human. Instead of focusing solely on developing and designing AI that conveys good intentions, organizations should project good intentions themselves. People increasingly demand explanations of how AI works, what it can do, and what it cannot do. Addressing these concerns by enabling public conversations about AI is one of the preconditions for goodwill.

Featured photo: Unsplash

Christian Fieseler
Professor in Media and Communication Management at BI Norwegian Business School

Christian Fieseler is project leader of the research project Algorithmic Accountability. Read more about Christian on AFINO's webpage.

Kateryna Maltseva

Kateryna is a Postdoctoral Fellow at BI and an Adjunct Associate Professor at Bjørknes University College. She works on the research project Algorithmic Accountability.