Artificial intelligence (AI) has reached such a level of sophistication that it can now fool people into believing they are interacting with another human being. In fact, recent research published in Psychological Science found that AI-generated images of white faces were judged as more "human" than photographs of actual human faces. The finding illustrates what the researchers call hyperrealism: AI-generated faces that look not merely realistic, but more human than the real thing. The study also suggests that people are generally poor at telling AI-generated faces from real ones.
The advent of AI, alongside other technologies of the fourth industrial revolution, has transformed the online landscape and, with it, the faces we encounter there. AI-generated faces have proven valuable in some contexts, such as aiding searches for missing children, but they also carry risks. Identity fraud, catfishing, and cyber warfare have all exposed the dangers of hyperrealistic AI identities. Worse, people's misplaced confidence in their ability to spot AI-generated faces leaves them vulnerable: cybercriminals can exploit this overconfidence, since individuals readily divulge sensitive information to identities they wrongly believe are human.
The Hidden Bias
A troubling aspect of AI hyperrealism lies in its racial bias. The study found that AI-generated faces of color, including Asian and Black faces, were easier to identify as AI-generated than white faces were. In other words, white AI-generated faces achieved a heightened level of realism, surpassing not only AI-generated faces of color but also real white human faces. This bias can be traced to how AI algorithms are trained: the underlying datasets consist predominantly of white faces. Such bias can have severe consequences, as shown by studies finding that pedestrian-detection systems in self-driving cars detect Black pedestrians less reliably, jeopardizing their safety.
As AI-generated content becomes increasingly realistic, it raises concerns about our ability to detect it and protect ourselves from deception. The study identified specific characteristics that contribute to the hyperrealistic appearance of white AI faces, such as familiar, well-proportioned features that raise no suspicion. Participants misread these characteristics as signs of "humanness," producing the hyperrealism effect. It is important to note, however, that AI technology is evolving rapidly, so these findings may not hold indefinitely, and other AI algorithms may differ from human faces in entirely different ways.
To avoid misidentifying AI-generated content as real, it is crucial to be aware of human limitations in distinguishing AI-generated faces from real ones. Recognizing our fallibility makes us less susceptible to AI-generated content online and more likely to take additional steps to verify information. Public policy also has a role to play. Requiring disclosure of AI usage is one option, though it may not always be effective and could instill a false sense of security. Alternatively, emphasis could be placed on authenticating trusted sources, much like "Made in Australia" labels or the European CE mark, helping users select reliable media whose sources have been verified through rigorous checks.
The rise of hyperrealistic AI-generated faces blurs the line between reality and deception. While the remarkable realism of AI offers genuine benefits, it also poses risks for those unaware of their own limitations in telling AI-generated faces from real ones. Acknowledging this challenge prompts us to reevaluate our trust in AI and to take the steps needed to protect ourselves in an increasingly AI-driven world.