Even individuals with exceptional facial recognition abilities, known as super-recognizers, struggle to distinguish faces generated by artificial intelligence from real ones. A new study published in the 'British Journal of Psychology' finds that their accuracy is only slightly higher than that of laypeople, underscoring how advanced and disturbingly realistic modern deepfakes have become.
Failure of Super-Recognizers
Individuals with extraordinary facial recognition abilities score only slightly better than ordinary people on deepfake identification tests.
Realism of AI Images
Images and videos generated by artificial intelligence are now so realistic that they effectively deceive even specialized experts.
Scientific Publication
The study documenting these findings was published in the peer-reviewed 'British Journal of Psychology'.
Generative artificial intelligence has reached a level where the human faces it creates are practically indistinguishable from real ones, even for specialized experts. The latest research, published in the 'British Journal of Psychology', shows that so-called super-recognizers perform this task only slightly better than individuals without such a talent. Their accuracy in tests proved surprisingly low, challenging previous assumptions about human perception and visual authenticity verification.

Experts emphasize that this gap in human recognition abilities poses a serious challenge to security, law, and media credibility. Deepfake technology is evolving faster than human detection capabilities, raising questions about the future of digital authenticity.

Problems with distinguishing truth from falsehood in media are as old as communication itself, but the scale and realism of contemporary manipulations are unprecedented. As early as the 1990s, simple graphics programs allowed photos to be retouched, but the revolution in machine learning and generative networks over the last decade has made it possible to create content in real time that is indistinguishable from authentic recordings.

The results suggest that relying solely on human perception to combat visual disinformation may already be insufficient, and that advanced technological tools will be needed to support or replace human judgment in this area. The lack of effective defenses against sophisticated deepfakes poses a growing threat to electoral processes, to the credibility of testimony and recorded evidence in courts, and to public trust in recordings that document events.