
Humans believe AI-generated faces are more trustworthy than real-life faces 

WION Web Team
NEW DELHI | Updated: Feb 17, 2022, 11:19 PM IST
A deepfake image of Tom Cruise. Photograph: (Others)

Story highlights

According to research from the University of Oxford, Brown University, and the Royal Society, most individuals are unable to determine if they are viewing a deepfake video, even when they are warned that the content they are watching could have been digitally manipulated. 

According to new research, fake faces generated by artificial intelligence appear more trustworthy to humans than the faces of real people.

Artificial intelligence and deep learning, an algorithmic process used to train computers, are utilised to create images of people that appear authentic, a technology known as a 'deepfake.'

Deepfakes can also be used to put words into people's mouths that they never actually said, such as an altered video of Richard Nixon delivering an address about Apollo 11, or a phoney Barack Obama attacking Donald Trump.

When TikTok videos appeared in 2021 that seemed to show "Tom Cruise" making a penny disappear and enjoying a lollipop, the account name was the only clear sign that this wasn't the real deal.

Watch | Gravitas: Tom Cruise's deepfake videos raise an alarm

On the social media platform, the creator of the "deeptomcruise" account was employing "deepfake" technology to present a machine-generated version of the famous actor performing magic tricks and having a solo dance-off. 

In the first experiment, participants were asked to classify faces created by the StyleGAN2 algorithm as authentic or artificial.

Participants' accuracy was 48 per cent, slightly worse than flipping a coin.
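The synthetic faces came from StyleGAN2, a generative adversarial network released by NVIDIA. As a rough illustration of how such faces are produced, the sketch below follows the sampling pattern documented in the stylegan2-ada-pytorch repository's README; the checkpoint file name here is a placeholder, and loading it assumes the repository's own modules are importable.

# Sketch: sample one synthetic face from a pretrained StyleGAN2 generator,
# following the stylegan2-ada-pytorch README. 'ffhq.pkl' is a placeholder
# for a downloaded checkpoint; unpickling it requires the repository's code
# (e.g. torch_utils) to be on the Python path.
import pickle
import torch

with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # exponential-moving-average generator

z = torch.randn([1, G.z_dim]).cuda()    # random latent vector
c = None                                # class labels (unused for face models)
img = G(z, c)                           # NCHW float32 image, values in [-1, +1]

Each fresh draw of the latent vector z yields a new photorealistic face of a person who does not exist.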

In a second experiment, participants were trained to spot deepfakes using the same data set, but their accuracy improved only to 59 per cent.
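Whether 48 or 59 per cent meaningfully differs from coin-flipping depends on how many judgements were collected. As a back-of-the-envelope sketch, assuming a hypothetical trial count (the study's exact sample sizes are not quoted here), a two-sided binomial test makes the comparison against the 50 per cent chance level explicit:

# Sketch: compare the reported accuracies against the 50% chance level.
# The trial count per condition is hypothetical, for illustration only.
from scipy.stats import binomtest

n_judgements = 1000                      # hypothetical trials per condition
reported = {'untrained': 0.48, 'trained': 0.59}

for condition, accuracy in reported.items():
    correct = round(accuracy * n_judgements)
    result = binomtest(correct, n_judgements, p=0.5)
    print(f'{condition}: {correct}/{n_judgements} correct, p = {result.pvalue:.4f}')

With these illustrative numbers, 48 per cent is statistically indistinguishable from chance, while 59 per cent after training is a real but modest improvement.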

According to research from the University of Oxford, Brown University, and the Royal Society, most individuals are unable to determine if they are viewing a deepfake video, even when they are warned that the content they are watching could have been digitally manipulated. 

(With inputs from agencies)