Illustration by Sidra Dahhan

Breaking Down the Pope Francis Meme: Deep-Fake AI

Though its results can be amusing, AI's unregulated and rapid growth is raising serious questions about privacy, misinformation and political propaganda.

Apr 9, 2023

On March 24, 2023, a picture of Pope Francis sporting a stylish white puffer jacket and a bejeweled crucifix went viral. Millions of people viewed and commented on the photograph, lauding the Pope's fashionable choices. Not long after, it was revealed that the image was fake, generated with the AI program Midjourney. On one level, the incident was undoubtedly amusing, allowing millions to enjoy the idea of the Pope all dressed up in "papal athleisure". On another, it is a daunting indication of the power of deep-fake AI technologies, one that could damage our sense of truth and trust in various institutions if allowed to spread unrestrained.
In an interview with BuzzFeed, the image's creator, Pablo Xavier (who declined to share his last name), said that he generated the photo because he wanted to do something funny. Amused by the results of his work, he posted the images of the Pope to the AI Art Universe Facebook group and to Reddit, from where they were taken and reshared on Twitter.
“I didn’t want it to blow up like that,” Xavier told BuzzFeed, adding that it was scary that “people are running with it and thought it was real without questioning it.” Some Twitter users even co-opted the images to criticize the Catholic Church, which upset him further.
This aftermath, and Xavier’s reaction to it, underscore two of the issues with deep-fake AI technology. One: it blurs the line between what is real and what is not, starting conversations and spurring speculation that is not grounded in actual evidence or facts. Two: the nature of the Internet is such that images like these can be used in unscrupulous ways, without the creator’s permission, to support agendas and arguments far removed from their original intent.
While the Pope Francis images can still be taken as ‘all in good fun’, recent deep-fake photos of former U.S. President Donald Trump being arrested highlight just how damaging such technologies can be when they involve public figures. The pictures sparked a surge of online threats against President Biden’s government, demonstrating how false information can provoke very real (and aggressive) responses. Perhaps even more disturbingly, one doctored image of Trump posing for a mugshot has been turned to his political advantage, printed and distributed on t-shirts proclaiming his innocence as the former President was indicted and arraigned last week.
The use of deep-fake images in political campaigns is a dangerous prospect, one that could fuel misinformation and disinformation around politicians and their opponents. If such disinformation is allowed to spread unrestrained, maintaining democratic values becomes that much harder. People may even cast their votes based on inaccurate understandings of political figures whose messages and values have been distorted and reframed by technologies beyond their control.
Politicians are not the only victims of deep-fake images. AI technologies have recently been used to create pornographic representations of women by swapping their faces into videos, causing immense psychological damage to the individuals featured. Other tools have emerged that allow users to digitally strip clothes from female bodies in images. While many of these face swaps and images are obviously fake, with distortions showing through at certain angles, they are still passably believable to a casual observer, especially when much of the public may be unaware of such abuses of the technology. How, then, can we maintain our privacy and dignity in online spaces when our images can be distorted and used for these non-consensual purposes?
As AI technology advances, it is becoming increasingly pervasive in the media that surrounds us, from images of Pope Francis to CGI Carrie Fisher in Star Wars movies to depictions of Trump struggling with the police. While not all AI development is necessarily damaging, deep-fake images illustrate the dangers of allowing such technologies to grow unregulated, as they erode our sense of truth and trust.
Amal Surmawala is a Staff Writer. Email them at feedback@thegazelle.org.