Person Behind AI Deep Fake Of Pope Francis Calls For More Regulation

The power of artificial intelligence to generate deep fake images of public figures has been brought into sharp focus following the viral spread of an AI-generated image of Pope Francis. The photo showed the pontiff sporting a white Balenciaga puffer coat more fitting for a fashion runway than a papal visit. The image was created by Pablo Xavier, a Chicago construction worker who used the AI-powered tool Midjourney V5 to generate it.

Xavier expressed concern about the ethical implications of using AI to create images of public figures, calling for greater regulation of the technology. “Using it for public figures, that might be the limit,” he told BuzzFeed. Even so, he shared the image anyway.

The viral spread of the fake image sparked an online conversation about the potential misuse of AI and prompted concerns about AI-generated misinformation.


An AI deep fake is a type of synthetic media created using deep learning algorithms to manipulate or generate images, audio, or video of people or events that did not actually happen or exist.

Deep learning algorithms use artificial neural networks to analyze and learn patterns from large amounts of data, which can then be used to generate or modify new content.

AI deep fakes can be used for entertainment purposes, such as creating realistic special effects in movies or video games, or for malicious purposes, such as spreading fake news or creating hoaxes.
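To make the idea of "learning patterns from data" concrete, here is a vastly simplified sketch in Python. Real deep-fake systems such as GANs or diffusion models use millions of parameters across many layers; this single-weight "model" is a hypothetical toy that only illustrates the core training loop of adjusting a parameter to reduce error on example data.

```python
# Toy illustration of the learning loop underlying deep-learning models:
# repeatedly nudge a weight to reduce prediction error on training data.
# Real generative models do the same thing at vastly larger scale.

def train(examples, lr=0.1, epochs=200):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of (pred - y)**2 w.r.t. w
            w -= lr * grad             # step against the gradient
    return w

# The hidden "pattern" in the data is y = 3x; the model recovers it.
data = [(1, 3), (2, 6), (3, 9)]
weight = train(data)
print(round(weight, 2))  # close to 3.0
```

The same principle, scaled up to networks with billions of parameters trained on images rather than number pairs, is what lets tools like Midjourney synthesize photorealistic pictures of people.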


Ryan Broderick, a writer for the web culture newsletter Garbage Day, said that “Balenciaga’s Pope could be the first real case of AI misinformation on a mass level” in a tweet that was viewed more than 4 million times. Xavier, for his part, was more nonchalant, admitting that he “just thought it was funny to see the Pope in a funny jacket.”

The controversy highlights the ongoing debate about the role of AI in society and the need for regulation to prevent its misuse. As Pope Francis warned Silicon Valley in 2019, AI and other technological advances could “lead to an unfortunate regression to a form of barbarism.” It seems that now, more than ever, we need to be vigilant in monitoring the impact of AI and ensure that its power is used for good, not evil.

As the technology and its usability improve, more incidents like this will occur. How long before governments attempt to regulate AI deep fake production, with steep fines and sentences for creators and disseminators?
