Will AI or Jesus Save Us?

Artificial Intelligence (AI) is reshaping industries, societies, and even our perception of truth itself. The question isn’t just what science can achieve—it’s how we guide it.

AI deepfakes and the Pope: should we worry?

In 2023, images of Pope Francis wearing a Balenciaga puffer jacket circulated widely on the Internet. Quickly revealed as fakes, the pictures were a humorous reminder of a serious theme: the ability of AI to fabricate convincing imagery and to distribute it in ways that shape the beliefs and opinions of many.

Two years later, the Vatican warned that AI carries "the shadow of evil" due to its potential to spread misinformation and destabilize societies. At the same time, others genuinely believe that AI will solve many, if not most, of the world’s problems.

In his message to the World Economic Forum in 2025, Pope Francis expressed additional concern about AI's role in the "crisis of truth." He warned that AI-generated content, which can be indistinguishable from human-produced work, has the potential to erode trust and distort public discourse.

Despite his words of warning, Pope Francis has been a vocal advocate for the ethical development of technology. In a historic address at the G7 summit of 2024, he stressed the need for human oversight of AI systems, urging world leaders to ensure that technology serves humanity and never dominates it.

A fake image of Pope Francis generated by AI. Source: r/midjourney via Reddit.com, created using Midjourney v5.

Why should religious leaders get involved in AI ethics?

People often say that the problem with technology does not lie in its invention but rather in how it is used. This does not seem to be the case with AI, as ethicist Timnit Gebru has demonstrated. Before being forced out of her role as co-leader of Google’s ethical AI team, Gebru warned that the large language models (LLMs) used by Google and other companies were inherently predisposed to prejudice. In other words, AI replicates the biases of its creators.

Gebru’s co-written 2018 study, Gender Shades, revealed that AI-driven facial recognition products were trained on biased data. Gebru and her co-author, Joy Buolamwini, found that while facial recognition systems could identify light-skinned male subjects with nearly 100% accuracy, the error rate for dark-skinned female subjects ranged from 23% to 36%. In real life, such error rates mean false arrests and miscarriages of justice.

Another example concerns something that has been called ‘surveillance capitalism’. Without our knowledge, personal data is constantly being hoovered up from our tens of thousands of online clicks: what we type into Google, which images make us pause for a moment on Instagram, what songs we listen to. Even our mood at different times of day may be revealed. AI companies process this data to build a profile of our hopes, needs, and desires, so that advertisers can target us constantly.

In a similar way, social media companies use AI to draw users down ever more extreme paths of content, keeping them engaged and thus within the sights of advertisers. Surveillance capitalism shades into surveillance politics when voters are profiled and targeted in similarly deceitful ways.

Regardless of whether a particular AI system is intended for ethical use, Gebru and others have demonstrated a fundamental truth: when AI is developed unethically, it is likely to be used unethically. This is the “shadow of evil” the Vatican has warned against: discrimination and exploitation embedded in the systems that govern human life.

AI Research Scientist Timnit Gebru (Photo by Kimberly White/Getty Images for TechCrunch)

A collaborative path forward?

Religious leaders, often acting through non-governmental organizations, play an increasingly important role in discussions surrounding the global governance of technology. Their input is essential in shaping ethical standards for the use of emerging technologies, ensuring that the development of AI and other digital tools remains grounded in respect for human dignity and the promotion of justice. 

The Vatican's document Antiqua et Nova is one example, exploring the relationship between artificial and human intelligence, and advocating for a balanced approach that respects human dignity and ethical principles.

Initiatives such as the Council on Foreign Relations’ Religion and Foreign Policy Program demonstrate the value of interdisciplinary, interfaith dialogue, providing a platform where faith leaders, scholars, and policymakers engage on global issues. The program recently hosted a webinar on religion and AI as part of its social justice series.

Collaborations such as these work towards ensuring that technological advances and policy decisions are ethically grounded and serve the common good.

Pope Francis meets Facebook founder and CEO Mark Zuckerberg, Monday, Aug. 29, 2016. Source: Washington Post

What about the future?

We are barreling toward a future where AI governs hiring, automated drones make life-and-death decisions, and tech giants wield more power than presidents. The pace of innovation is exhilarating—but also dangerous. Who is calling for wisdom and restraint?

Enter modern faith leaders. Not relics of the past, but voices urging that just because we can doesn’t mean we should. When AI deepfakes manipulate elections and China’s social credit system assigns “trust scores” to citizens, the ethical stakes are real—and urgent.

Religious leaders must step into the tech debate, pushing back against surveillance capitalism and unchecked AI. Imagine Vatican-backed ethical AI policies influencing global regulations or interfaith coalitions forcing tech companies to address bias and discrimination.

The future is coded, but it’s not just a technical problem—it is a moral one. Will faith leaders step up, or will machine learning and market forces dictate our fate?

Questions for discussion

  • What social media and internet browsers do you use, and what data do you think they gather about you?
  • How can we protect and develop empathetic human relationships in the face of technology that is trying to move everything online?
  • What guidance or teaching could the church offer on the use of social media?

Further reading

Council on Foreign Relations, Religion and Foreign Policy Program
https://www.cfr.org/outreach/religion-and-foreign-policy-program

Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence
https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html

Robert M. Geraci, "Religion among Robots: An If/When of Future Machine Intelligence." Zygon: Journal of Religion and Science 59, no. 3 (2024). https://www.zygonjournal.org/article/id/10860/ [open access]

Ugochukwu Stophynus Anyanwu, "Towards a Human-Centered Innovation in Digital Technologies and Artificial Intelligence: The Contributions of the Pontificate of Pope Francis." Theology and Science 22, no. 3 (2024): 595-613. [behind a paywall]

Credits

Written and Produced by the Equipping Christian Leadership in an Age of Science project

To download a free text version of this article to use with your congregation, click below.