Are Faces That Are Merely Labelled as Artificial Trusted Less?

Abstract

Artificial intelligence increasingly plays a crucial role in daily life. At the same time, artificial intelligence is often met with reluctance and distrust. Previous research demonstrated that faces that are visibly artificial are considered less trustworthy and are remembered less accurately than natural faces. Current technology, however, enables the generation of artificial faces that are indistinguishable from natural faces. In five experiments (total N = 867), we tested whether natural faces that are merely labelled as artificial are also trusted less. A meta-analysis of all five experiments suggested that natural faces merely labelled as artificial were judged to be less trustworthy. This bias did not depend on the degree of trustworthiness and attractiveness of the faces (Experiments 1-3). It was not modulated by changing raters’ attitudes towards artificial intelligence (Experiments 2-3) or by the information communicated by the faces (Experiment 4). We also did not observe differences in recall performance between faces labelled as artificial and faces labelled as natural (Experiment 3). When participants judged only one type of face (i.e., labelled as either artificial or natural), the difference in trustworthiness judgments was eliminated (Experiment 5), suggesting that the contrast between the natural and artificial categories within the same task drove the labelling effect. We conclude that faces that are merely labelled as artificial are trusted less in situations that also include faces labelled as real. We propose that understanding and changing social evaluations of artificial intelligence requires more than eliminating physical differences between artificial and natural entities.

Publication
Collabra: Psychology