New research published in the 'British Journal of Psychology' finds that super-recognizers, individuals with exceptional facial identification abilities, perform only slightly better than average people at distinguishing photorealistic, AI-generated portraits from real photographs. Their success rate is about 66%, while individuals without special skills achieve about 59% accuracy. These results undermine the idea that human perception can serve as an effective line of defense against advanced deepfakes in disinformation campaigns and in judicial proceedings.
Low Effectiveness of Super-Recognizers
Super-recognizers correctly classified faces as real or AI-generated in only 66% of cases, while individuals without special abilities achieved 59% accuracy.
Threat to Security and Law
The difficulty in detecting advanced deepfakes undermines the credibility of visual evidence in courts and creates a risk of manipulation in politics and media.
Need for Technical Solutions
Experts indicate that, given the limits of human perception, algorithmic forgery-detection tools and appropriate legal frameworks become crucial.
Advanced generative artificial intelligence techniques now create synthetic human faces that are practically indistinguishable from real ones, even for specialized experts known as super-recognizers. The study published in the 'British Journal of Psychology' found that in tests requiring participants to distinguish a photorealistic, AI-generated portrait from a photograph of a real person, super-recognizers achieved an average accuracy of only about 66%. For comparison, individuals without particular skills in this area classified faces correctly in about 59% of cases.

"This shows that even those who are the best at face recognition are not immune to sophisticated deepfakes." — Matt Oxley, co-author of the study

This marginal advantage of specialists over laypeople is alarming and undermines the previous assumption that human perception could serve as a reliable line of defense against digital forgeries.

Photography and film have struggled with manipulation since their invention in the 19th century, from simple retouching to editing. A breakthrough came around 2014 with the development of generative adversarial networks (GANs), which made it possible to create completely synthetic yet photorealistic images not based on any existing source.

Experts point to the multidimensional threats arising from this gap in human recognition capabilities. In the legal dimension, deepfakes can undermine the authenticity of video evidence in court proceedings, rendering visual testimony unreliable. In the political sphere, fabricated recordings can be used to discredit public figures and destabilize electoral processes. In digital media, any material documenting an important event can be called into question, eroding public trust in visual messages.
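The reported accuracies matter mainly in relation to chance, which is 50% in a two-way real-versus-fake judgment. A minimal sketch, assuming a hypothetical 100 trials per group (the article does not state the actual trial count), of how far each score sits above pure guessing:

```python
# Hedged illustration, not the study's analysis: an exact binomial tail
# probability for scoring k or more out of n when guessing at random.
from math import comb

def binomial_tail(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more successes in n Bernoulli(p) trials."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n_trials = 100  # assumed trial count, for illustration only
print(f"super-recognizers (66/{n_trials}): p = {binomial_tail(66, n_trials):.4f}")
print(f"control group     (59/{n_trials}): p = {binomial_tail(59, n_trials):.4f}")
```

Both scores are above chance under this assumption, which matches the article's framing: the groups do detect something, just not reliably enough to serve as a line of defense.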
[Chart] Accuracy at distinguishing AI-generated faces from real ones in the study: super-recognizers 66%, individuals without special abilities 59%.

The technology for generating realistic AI faces is evolving faster than the human brain can adapt. According to the researchers, the only real response to this challenge is the development of advanced algorithmic tools for detecting forgeries, together with appropriate legal and ethical frameworks. It becomes crucial to build systems capable of identifying the subtle artifacts left by generative algorithms, invisible to the human eye. Without such solutions, societies will face a future in which 'seeing' will no longer be synonymous with 'believing'.
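For some generators, the "subtle artifacts" mentioned above show up as excess high-frequency energy introduced by upsampling layers, which detectors can measure in the frequency domain. A hedged sketch (not the study's method; the function name, cutoff, and synthetic data are illustrative) using a 2D Fourier transform:

```python
# Hedged illustration of spectral artifact detection, using synthetic data.
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency window."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return 1.0 - low / spectrum.sum()

# Synthetic stand-ins: smooth "photo-like" data versus the same data with a
# periodic, pixel-level pattern mimicking an upsampling artifact.
rng = np.random.default_rng(0)
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
checker = np.where(np.arange(64) % 2 == 0, 1.0, -1.0)  # alternating +/-1
artifact = smooth + 5.0 * checker  # adds energy near the Nyquist frequency

print(f"smooth:   {high_freq_energy_ratio(smooth):.4f}")
print(f"artifact: {high_freq_energy_ratio(artifact):.4f}")
```

Real detectors typically feed features like this, or raw pixels, into trained classifiers rather than using a fixed threshold, and newer generators actively suppress such traces, which is why the article stresses ongoing tool development rather than a one-off fix.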
Mentioned People
- Matt Oxley — co-author of the study published in the British Journal of Psychology