By Professor Robert Geraci, Manhattan College
– – –
The Narratives of Artificial Intelligence
My work is at the intersection of religion, science, and technology—and because of that, I’m particularly interested in the narratives surrounding artificial intelligence. There are secular narratives, explicitly religious narratives, and implicitly religious narratives, and I’ve spent significant time studying how they intersect with each other and how we can engage with them.
As religious communities, we recognize the significance of narratives—whether in text, tradition, or community action. Narratives frame our worlds; they structure how we interpret the facts around us. And as such, narratives about AI matter.
Public conversation around the future of AI is polarized. Some people raise Terminator-style narratives of disaster in which humans risk annihilation. We also see narratives that promise salvation and utopia, solving the problems of energy development, climate change, pollution, and more. Interestingly enough, though, our disaster narratives generally share a common theme, while our paradise narratives compete with one another. We agree on hell but differ on heaven!
Perhaps this says something important about how we should collaborate with one another. We can focus on what we’re striving to avoid so that we can work together instead of forcing “the plan for the future” upon one another. I suggest we familiarize ourselves with one another’s stories, find ways of thinking about how AI fits into them, and turn to our shared resources—building a global narrative for AI in our religious contexts.
Humanity’s Tendency to Draw Lines
In the study of religion, we distinguish between “insider perspectives” and “outsider perspectives.” There are pros and cons to both, because insiders and outsiders ask different questions and see different things. This positioning—drawing boundaries and borders—is something we see around us as human beings all the time.
However, while scholars try to balance these perspectives as best we can, human beings as a whole have a relatively terrible history of behavior toward our fellows—sometimes painting “outsiders” as not even human, which disposes us toward control and cruelty. This plays into how we think about AI and how we’re going to employ AI. Because both religion and culture carry this dark colonial history, we are in danger of repeating it.
But I wish to be hopeful. I remind everyone of our almost universally shared ethical injunction to welcome the outsider and turn strangers into guests. That power for transformation is in us.
Algorithmic Decision-Making and Bias
The G20 Interfaith Forum aspires, I think, to fix some of the tragedies of the past. As it engages AI technology, the Interfaith Forum must press members of the G20, especially the most powerful among us, to protect the most vulnerable—particularly in algorithmic decision-making.
I refer to AI-fueled decisions currently being made in banking, the judicial sphere, the medical world, and more. To a considerable extent, we’re ceding our decision-making power without first fixing our own moral failures. The data we’re using are prejudiced. We need to look for what we ought to be and define that, especially when we’re working with technologies like machine learning.
AI decision-making could be beneficial, but at present it is not. Because the data used for machine learning are our own data, they are already prejudiced. If we have a technology that screens applicants for the hiring process, we feed successful people’s data into that system—and it shows us the applicants who are most likely to succeed. But if you draw all those data from a culture where 90 percent of people in CEO positions are white men, then you’re looking for white men … or people from certain universities and even certain neighborhoods, which ends up being pretty much the same thing.
We allow these machine learning systems, through their algorithms, to continue our prejudices and make them more powerful—and the same thing happens in housing loans, judicial sentencing, and more. So far, many (most?) algorithms have been sexist and/or racist.
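The feedback loop described above can be made concrete with a small, purely illustrative simulation. The numbers, the group labels, and the “score applicants by resemblance to past hires” rule are assumptions invented for demonstration—this is a sketch of the general mechanism, not any real hiring system:

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical historical data: 90% of past "successful" hires
# come from group A (a dominant demographic in this thought experiment).
past_hires = ["A"] * 90 + ["B"] * 10

def score(applicant_group, history):
    """Naive screener: score an applicant by how often their
    group appears among past successful hires."""
    freq = Counter(history)
    return freq[applicant_group] / len(history)

# A perfectly balanced applicant pool: 50 from each group.
applicants = ["A"] * 50 + ["B"] * 50
random.shuffle(applicants)

# Rank applicants by the screener's score and select the top 10.
ranked = sorted(applicants, key=lambda g: score(g, past_hires), reverse=True)
selected = ranked[:10]

print(Counter(selected))  # prints Counter({'A': 10})
```

Even with a balanced applicant pool, the screener selects group A exclusively, because group A dominates the historical record it learned from—the past prejudice becomes the future selection rule.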
So, what must we do? To begin, we must become educated about the racist biases in our countries, which will differ; about how algorithms can be used in decision-making; and about the bias inherent in these algorithms. We have to think about what domains of life algorithms are appropriate in, and make wise decisions. If the data that come to us are wrong, we aren’t going to do anything productive.
On the religious side, we must accept a form of public reasoning. If my argument is only reasonable to me, we fight over it—so it has to be reasonable to you, too. We don’t always get everything we want, especially if it’s uniquely relevant to us.
And on the policy side, we have to institute anti-bias auditing. Government regulation, despite how little people like it, has to be involved. Religious communities can help generate these kinds of policy decisions. They can help direct governments toward particular kinds of outcomes. They can help define shared values, and press governments to get involved—using their shared pursuit of justice and their shared desire to protect those on the margins. The better side of humanity is there in our religious organizations, so we must think about how to make these shared goals ones that are policy-mandated and regulated.
The Dangers of Control and Efficiency
Our shared goal—to promote human wellbeing—brings us together in our consideration of AI. From the early days of AI, theorists have known that these technologies are very much technologies of control. “Cybernetics” is the study of control—in systems, people, and machines. We haven’t yet learned to relinquish our goal of controlling one another, and so, from military weapons to public policy, AI threatens us.
Efficiency is typically seen as a “win.” And AI offers efficiency. However, we must push back against the entire idea of efficiency as a central goal for corporate and government policy. This kind of blinded efficiency almost always results in deferred costs. As one example, industrial capitalism and its “efficiency” resulted in the deferred cost of our present environmental catastrophe. We don’t want a blinded efficiency to be the guiding principle for the future of humanity. The relentless pursuit of efficiency cannot but go wrong when it omits more humane goals.
Conclusion: The Role of Religion
What is the role of religion here?
People are typically reactive. Technologies get created, and then we try to decide what to do with them. We must be more mindful of what we create in the first place. We must avoid looking to efficient modes of profit-making and social control as the answers. Similarly, we must understand that global competition is not the answer for humanity.
In addition to understanding these principles, I invite you to look at the Technology and Innovation Working Group’s policy brief from 2021, which suggests concrete domains where we can get involved—from religious perspectives and from political perspectives.
The future is an open one, and we have a role to play in making it. The history of technology shows us clearly that human beings can and do intervene regularly. Yes, a world that’s open has mystery and even monsters in it, but it’s also a world where there’s hope. I hope that the G20 Nations and the G20 Interfaith Forum can be leaders in that shared task.
– – –
The initial draft of this post was composed by JoAnne Wadsworth based on Prof. Geraci’s notes and public lecture. The final draft resulted from their collaborative revisions.
Robert M Geraci is Professor of Religious Studies at Manhattan College. He is the author of Futures of Artificial Intelligence: Perspectives from India and the U.S. (Oxford 2022) and Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (Oxford 2010), along with other books and essays. He can be found at: https://robertgeraci.com
JoAnne Wadsworth is a Communications Consultant for the G20 Interfaith Association and acting editor of the “Viewpoints” blog.