Researchers Simulated Religious Groups With AI to Try to Understand Religious Violence
The researchers modeled how xenophobia and anxieties mutually escalate between religious groups using “psychologically realistic” AI.
Painting of the 1643 Battle of Rocroi by Augusto Ferrer-Dalmau. Image: Wikimedia Commons
Religious violence is as old as religion itself, but the dynamics that lead to clashes between religious groups are remarkably complex. To get a better grasp on this problem, an international team of researchers turned to simulations that use “psychologically realistic” artificial intelligences to model conflicts between religious groups.
As detailed in a study published Tuesday in the Journal of Artificial Societies and Social Simulation, the AI models demonstrated that people, while generally peaceful, will endorse violence when an outside group threatens the core principles of their religious identity.
“Our study uses something called multi-agent AI to create a psychologically realistic model of a human,” Justin Lane, a researcher at the University of Oxford’s Institute of Cognitive and Evolutionary Anthropology, said in a statement. “For example, why would someone identify as Christian, Jewish, or Muslim? Essentially, how do our personal beliefs align with how a group defines itself?”
Multi-agent AI models are systems in which AIs that are programmed to have certain traits interact with one another. The nature of this interaction depends on the traits of each AI agent, and how the other agents act. As a naive example, consider the scenario below, in which two different agents (red and green) have different goals. The goal of the red agents is to touch the green agents, and the goal of the green agents is to avoid the red agents and touch the blue dot.
The way these red and green agents make decisions can be thought of as their psychological profile.
An example of a multi-agent system. Image: OpenAI
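A minimal sketch of that kind of pursuit-and-evasion dynamic might look like the following, where each agent's "psychological profile" is just a hard-coded movement policy. The one-dimensional board, class names, and movement rules here are illustrative assumptions, not taken from OpenAI's or the study's actual code:

```python
class Agent:
    """A minimal agent that moves on a 1-D line according to its team's policy."""
    def __init__(self, team, position):
        self.team = team          # "red" chases, "green" flees toward a goal
        self.position = position

def step(agents, goal=100):
    """Advance every agent one move. Each team has a different simple policy."""
    greens = [a for a in agents if a.team == "green"]
    reds = [a for a in agents if a.team == "red"]
    for a in agents:
        if a.team == "red" and greens:
            # Red policy: step toward the nearest green agent.
            target = min(greens, key=lambda g: abs(g.position - a.position))
            a.position += 1 if target.position > a.position else -1
        elif a.team == "green":
            # Green policy: flee a nearby red, otherwise step toward the goal.
            nearest_red = min(reds, key=lambda r: abs(r.position - a.position), default=None)
            if nearest_red is not None and abs(nearest_red.position - a.position) < 3:
                a.position += 1 if a.position > nearest_red.position else -1
            else:
                a.position += 1 if a.position < goal else 0

agents = [Agent("red", 0), Agent("green", 10)]
for _ in range(5):
    step(agents)
```

The interesting behavior in systems like this emerges from the interaction of the policies, not from any single agent's rule, which is the same property the religion study relies on.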
In the new research on religious violence, the researchers had to explicitly describe how people make decisions based on their religious beliefs. This could then be used to create a simulation in which a number of AI agents make decisions about how to interact with one another based on their own religiosity, environmental constraints, and the actions of others in their own group and outside of it. The end result is a model of how religious tensions escalate between two groups in real life.
How to make an AI religious
Although the definition of religion can vary widely across cultures, the researchers used a working definition that defined it as engagement with “disembodied intentional forces that members of a group consider in some sense germane to their values and capable of influencing their future” as well as “participation in ritualized behaviors organized around such agents.”
In other words, this minimal definition of religion involves deities that can intervene in the world and the rituals centered around them. While this definition of religion might be contested by some, the researchers needed to operationalize this notoriously fuzzy concept in order to create a model. These two aspects—deities and rituals—are found in most major world religions, even if the particular ways they manifest are quite different.
The researchers’ model consisted of an arbitrary number of individuals, each with a different level of religious conviction. The strength of this conviction was measured along two dimensions: the degree to which the individual attributed events in the world to a deity, and the degree to which religious morals affected its decisions about how to act in society.
The simulations run by the researchers might be thought of as a game in which each player is an AI. Each AI player makes decisions based on the strength of their religious conviction as measured by the two previously mentioned traits. Each player has its own personal space where it interacts with its environment and other players.
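As a rough illustration of how such a player's state might be represented in code (all field names, value ranges, and the averaging of the two traits into a single conviction score are our own assumptions, not the study's actual parameters):

```python
from dataclasses import dataclass

@dataclass
class ReligiousAgent:
    """One simulated individual. Field names are illustrative, not from the study."""
    group: str                       # "majority" or "minority"
    supernatural_attribution: float  # 0..1: tendency to attribute events to a deity
    moral_salience: float            # 0..1: how much religious morals drive decisions
    hazard_threshold: float          # hazards weaker than this are not felt
    anxiety: float = 0.0             # rises with threats, falls in ritual clusters

    @property
    def conviction(self) -> float:
        """A simple composite of the two conviction traits (averaging is our choice)."""
        return (self.supernatural_attribution + self.moral_salience) / 2

agent = ReligiousAgent("minority", 0.8, 0.6, hazard_threshold=0.3)
```

Keeping the two conviction traits separate matters: an agent can strongly attribute events to a deity while its day-to-day decisions are only weakly shaped by religious morals, or vice versa.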
Each player belongs to one of two teams: a majority religious group or a minority religious group. The goal of each team is to decrease the sum of anxieties among its individual members. The goal of the researchers studying the game is to determine under what conditions the anxiety levels of both teams increase at the same time. It’s scenarios like these that the researchers believe underpin some of the most significant instances of religious violence between groups in recent history, such as the Troubles in Northern Ireland or the Gujarat riots in India.
Each game—or simulation—consists of 250 rounds, and the researchers played the game 20,000 times. A game begins by randomly distributing players around the board. Every round, each AI player would search its personal space on the board to appraise threats from its environment, threats posed by members of the opposite team, and for any support that might be sought from teammates.
The researchers set up the game so that every player could be faced with four possible environmental threats that affect everyone on the board. These include natural hazards such as earthquakes; predation hazards such as animal attacks; social hazards such as encountering a member of the opposite team who denies the beliefs of one’s own team; and contagion hazards such as encountering a player from the other team who carries an infectious disease.
Each of these threats increases the anxiety level of the player that experiences it. Importantly, each player has a different threshold at which they begin to feel these environmental hazards. If a hazard isn’t strong enough, some players won’t feel its anxiety-inducing effects at all.
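This per-player threshold can be sketched as a simple gating rule. The linear update and the `weight` parameter below are assumptions for illustration; the study defines its own functional form for how hazards translate into anxiety:

```python
def apply_hazard(agent_anxiety, hazard_strength, threshold, weight=1.0):
    """Increase anxiety only when a hazard exceeds the agent's personal threshold.

    Hazards below the threshold are not felt at all, so two agents facing the
    same event can come away with very different anxiety levels.
    """
    if hazard_strength < threshold:
        return agent_anxiety  # too weak for this agent to notice
    return agent_anxiety + weight * hazard_strength

# A mild earthquake (0.2) falls below this agent's threshold (0.3); a strong one doesn't.
print(apply_hazard(0.0, 0.2, 0.3))  # 0.0
print(apply_hazard(0.0, 0.5, 0.3))  # 0.5
```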
As the anxiety level of a player increases, they are incentivized to seek out their teammates and form groups, which lowers each member’s anxiety. The researchers called these groups “ritual clusters,” and they can only form among teammates who are close to one another on the board. This constraint is meant to be analogous to an individual’s social network in real life.
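One way to sketch this proximity-limited clustering, assuming a flat anxiety relief per gathering and a one-dimensional board (both are our assumptions, not the study's parameters):

```python
def form_ritual_cluster(agents, seeker, radius=2.0, relief=0.1):
    """Gather nearby teammates into a cluster and lower each member's anxiety.

    Only agents from the seeker's own group within `radius` can join, which is
    the proximity constraint standing in for a real-world social network.
    """
    cluster = [a for a in agents
               if a["group"] == seeker["group"]
               and abs(a["position"] - seeker["position"]) <= radius]
    for member in cluster:
        member["anxiety"] = max(0.0, member["anxiety"] - relief)
    return cluster

agents = [
    {"group": "minority", "position": 0.0, "anxiety": 0.5},  # the anxious seeker
    {"group": "minority", "position": 1.5, "anxiety": 0.4},  # close teammate: joins
    {"group": "minority", "position": 9.0, "anxiety": 0.4},  # too far away: excluded
    {"group": "majority", "position": 1.0, "anxiety": 0.2},  # wrong group: excluded
]
cluster = form_ritual_cluster(agents, agents[0])
# Only the seeker and the nearby teammate cluster together and feel relief.
```

Note that a minority player searching a radius full of majority players finds no one to cluster with, which is exactly the mechanism the researchers point to when explaining why minority-group anxiety escalates more often.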
After 20,000 iterations of the simulation, the researchers found that religious anxieties escalated in one or both of the groups in only a quarter of the rounds. When only one group experienced escalating anxiety, it tended to be the minority group. The researchers note that this is because the minority group has fewer members, so they are more likely to come into contact with majority-group members when trying to form connections, thereby increasing their anxiety.
According to the researchers, mutual escalation in anxiety levels tended to happen under three main conditions. The first was when the difference in size between the majority and minority groups wasn’t very large. The second and third were when social or contagion hazards, respectively, were strong enough to cross the thresholds of a critical number of agents.
“The combination of these circumstances creates an environment where agents in the majority and the minority groups regularly identify agents from the other group within a specified radius and perceive them as social and contagion threats,” the authors of the study write. “Mutually escalating intervals are produced because both groups are operating under circumstances where they are likely to experience hazards, which increases anxiety.”
This research is an interesting application of artificial intelligence to real human social problems, and the researchers hope that it will help mitigate outbreaks of religious violence by helping policy makers understand how individual religious sentiments shape group action.