Building a Thesis and Testing Theories, One Deepfake Video at a Time

Richard Catrambone knew early in Zachary Tidler’s time at Georgia Tech that the student was going to ask some big questions about the Internet.

“The drum that he was pounding from day one was, ‘why do we take classes at all in school?’” recalls Catrambone, a professor in the School of Psychology and Tidler’s advisor. “With the interconnectedness of the Internet, why do I need to take Calculus One? Why couldn’t I just get on the Internet and find either a solution, or information that would help me solve it?”

Catrambone would always play devil’s advocate, debating with Tidler and getting him used to defending his theories in front of mentors and peers. In Tidler, Catrambone also recognized a student willing to doggedly pursue those theories at one of the country’s leading research institutions, in search of new understandings of how our minds work.

“I think that’s his enduring interest, pushing the idea of, ‘Why do we need to have classes? Maybe some things are worthwhile to learn in a classroom, maybe some things aren’t? Let’s question that.’”

Tidler is a graduate student working on his master’s degree in engineering psychology with Catrambone, and the questions and curiosity haven’t stopped. He’s widened his research interest to include other aspects of digital life in the 21st century, while Catrambone continues to illustrate how Georgia Tech’s faculty mentors and scientists encourage students and peers to keep asking complicated questions — and also how they help foster the academic and scientific rigor required to develop, test, and prove working theories.

“I think there’s a wealth of questions that I’m hoping will fuel the next few years of research for me,” Tidler says.

Developing a New Theory on Deepfakes

For Tidler, many of those questions now center on deepfakes: computer-enhanced videos that, thanks to machine learning and software advances, come so close to capturing a real person’s image and voice that the average person may not be able to tell the difference. Recent news reports show that deepfakes, a portmanteau of ‘deep learning’ and ‘fake,’ are indeed getting better and harder to differentiate from real videos, and some technology companies are working to help identify and flag them on social media and Internet sites. It’s a perfect example of Tidler’s focus on digital technology and its impact on everything from education to entertainment. 

Tidler explains that his interest in deepfakes began with Ellen DeGeneres. “She used to do bits where you would see Pope Francis, and he’s walking up to an altar, and he looks like he’ll be solemn — but what he does instead is he grabs the cloth on the altar and pulls it out, leaving everything on top still standing.” 

He adds that in hindsight, believing this version of the old magician’s tablecloth trick, perfectly performed by Pope Francis, now seems ridiculous. “I can’t believe I fell for that. But in the intervening six or seven years since, they (deepfakes) definitely seem to have gotten better.”

The computer science community is also focusing on the problem, “and they’re making much more headway than somebody like me,” Tidler surmises. “Microsoft has announced a tool that can give you a deepfake score, but it becomes something of an arms race because the (deepfake) networks and algorithms get a little better, and then the algorithms and neural networks trying to identify (deepfakes) get a little better, and so on.”

That’s how Tidler started wondering who is most susceptible to being duped by deepfake videos, and why. “I’m not seeing any research trying to understand the psychological traits that could predict who’s more or less adept at identifying these videos, and I’m certainly not seeing anybody use scientific rigor.”

Deepfakes, mind-blindness, and ‘mindreading’

Tidler’s exploration of those traits is the focus of his master’s thesis, which he recently successfully defended. He is now preparing to submit the work for publication in a scientific journal.

For his thesis research, Tidler started with a mix of 173 college students and freelance workers recruited through Amazon’s Mechanical Turk service. He also relied on the existing research of Simon Baron-Cohen, a University of Cambridge professor of developmental psychopathology. (Yes, Baron-Cohen’s cousin is comedian and actor Sacha Baron Cohen, who himself likes to fake people out under the personas of Borat and Bruno.)

Cambridge’s Baron-Cohen helped spotlight something called affect detection ability: how a person “reads” cues in another person’s eyes, face, or body language to determine how that person is feeling. It’s an offshoot of mind-blindness, another of Baron-Cohen’s theories, which describes how children with autism and Asperger's syndrome may perceive and attribute emotions and beliefs, in themselves and others, differently from peers who seem to ‘mindread’ these states.

Baron-Cohen detailed this work in his 1995 book “Mindblindness,” which MIT Press described as “a theory that draws on data from comparative psychology, from developmental psychology, and from neuropsychology,” wherein Baron-Cohen “argues that specific neurocognitive mechanisms have evolved that allow us to mindread, to make sense of actions, to interpret gazes as meaningful, and to decode ‘the language of the eyes.’”

Baron-Cohen and his Cambridge research team “came up with all these tasks that are meant to detect the theory of mind-blindness,” Tidler says. “They’re classic tasks called affect detection tasks. The premise is you watch a video or picture of someone’s face, and you’re presented with words and definitions — with the examinees being asked to pick or impute (assign) what that person is feeling in the video or picture.” Tidler had his subjects complete some of those same tasks in his exploration of how affect detection ability relates to deepfake dupes.

“I asked all my participants to look at people’s eyes and tell me what they’re feeling, with the ‘right’ and ‘wrong’ answer validated by Cambridge. They get their ‘score’ — and that’s your affect detection ability, or the degree of your mind-blindness.”
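
To make that scoring concrete, here is a minimal sketch of how such a task might be tallied, using made-up emotion words and answers rather than the actual Cambridge-validated materials:

    # Hypothetical scoring sketch (illustrative only; not Tidler's or Cambridge's materials).
    # Each participant picks one emotion word per face; the proportion of answers that
    # match the validated answer key becomes their affect detection score.
    validated_key = ["playful", "upset", "desire", "insisting"]            # hypothetical answer key
    participant_answers = ["playful", "terrified", "desire", "insisting"]  # hypothetical responses

    correct = sum(given == key for given, key in zip(participant_answers, validated_key))
    affect_detection_score = correct / len(validated_key)
    print(f"Affect detection score: {affect_detection_score:.2f}")        # prints 0.75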

Tidler’s research findings led him to conclude that “affect detection ability is highly correlated with deepfake detection ability.” “If a person isn’t so great at knowing how other people feel, they are likely to have a hard time recognizing that a video is a deepfake,” he says.

He realizes the work is, at the moment, more correlation than causation, but believes enough has been uncovered to keep digging. Catrambone agrees.

“Zack is demonstrating a phenomenon and suggesting some possible causal mechanisms for which we only have correlational data,” Catrambone says. “The experimental manipulations we do in the future will be the crucial extra step.” 
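
To illustrate what a correlational (rather than causal) finding looks like in practice, here is a minimal sketch in Python, assuming made-up per-participant scores and the SciPy library rather than the study’s actual data or analysis code:

    # Illustrative correlation sketch (made-up numbers, not the study's data).
    # Pearson's r measures how strongly two sets of scores move together; it says
    # nothing about whether one ability causes the other.
    from scipy.stats import pearsonr

    affect_scores   = [0.55, 0.60, 0.72, 0.80, 0.85, 0.90]   # hypothetical affect detection scores
    deepfake_scores = [0.50, 0.58, 0.65, 0.78, 0.82, 0.88]   # hypothetical deepfake detection scores

    r, p_value = pearsonr(affect_scores, deepfake_scores)
    print(f"r = {r:.2f}, p = {p_value:.4f}")   # a strong positive correlation in this toy example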

How to recognize deepfakes: A new media literacy class?

Tidler shares that he covered the costs of his master’s thesis research in engineering psychology on his own, including paying survey participants. He is hoping to secure outside funding for his Ph.D. research, which would help him continue investigating the characteristics of those who may be more susceptible to deepfakes.

Personality types or particularly strong political affiliation could be two of those traits, he adds. “We could look at all sorts of things. Attention control is another big construct — that’s the ‘individual differences’ pathway” that distinguishes unique characteristics like intelligence, personality traits, and values in individuals.

Catrambone shares that one possible application of Tidler’s research could be media training to help people avoid being taken in by deepfake videos. He explains, “Maybe we can teach people how to spot deepfakes through a 10-minute training intervention.” 

“We could try to come up with something that could help people,” Tidler says. “Maybe the reason some people are not very good at imputing emotion in others is simply that they’re not likely to attend to (look at) the eyes. Maybe simply asking people to attend to the eyes, or the curl in the lips — maybe if we intervene and suggest these things, maybe test scores go up” for those Cambridge ‘mindreading’ assessments — tests and training that could perhaps eventually lead to a downturn in deepfake dupes.

Related Media

  • An example of deepfake technology: in a scene from Man of Steel, actress Amy Adams in the original (left) is modified to have the face of actor Nicolas Cage (right). (Credit: Wikipedia)

  • Zachary Tidler

  • Richard Catrambone

For More Information Contact

Renay San Miguel
Communications Officer
College of Sciences
404-894-5209