Photo: Henry Be
An important part of morality is the ability to cooperate, in particular in situations where we can choose between benefiting ourselves at the expense of our community and sacrificing some of our own good for the sake of helping others. Much of this is handled not by deep ethical thinking that produces moral principles, but by individual strategies we acquire through a mixture of personality, upbringing, imitation and countless other influences.
Over millennia, philosophers and preachers have tried to get us to behave better. This is particularly important as our societies become huge and global: many of our intuitions and behaviors were shaped in a past when social groups were far smaller, and we now find ourselves limited in our ability to coordinate with millions of others. Worse, many challenges – climate, resource scarcity, risky new technologies – may require unprecedented cooperation for humanity to survive. This has led some philosophers to suggest that we ought to investigate biomedical moral enhancement: deliberate interventions to expand our circles of concern, our ability to empathize, and our tendency to be altruistic or cooperative. There are some early examples of drugs and hormones that point in this direction.
What our paper investigates is whether a society where people can change their social strategies at will remains stable. It is known that when playing a game with someone, we weigh our own payoff (how much money or happiness we gain) together with the other’s payoff into a subjective value. “Individualistic” players only care about themselves. “Prosocial” players care equally about themselves and the other (so they may want to maximize the sum of the outcomes). “Altruists” want the other to get as good a result as possible. “Competitors” care about winning, but also about the other losing. The rare “Martyrs”, “Sadists” and “Sado-Masochists” try to minimize rewards for themselves, for others, or for both. Real humans are distributed along a spectrum running from competitors through individualists and prosocials to altruists.
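This weighting can be sketched by mapping each orientation to an angle, a common way of modelling social value orientation. The names and angles below are illustrative, not taken from the paper:

```python
import math

# Illustrative angles: subjective value = cos(theta)*own + sin(theta)*other.
ORIENTATIONS = {
    "individualist":   0.0,              # cares only about own payoff
    "prosocial":       math.pi / 4,      # weighs own and other equally
    "altruist":        math.pi / 2,      # cares only about the other's payoff
    "competitor":     -math.pi / 4,      # wants to win and the other to lose
    "martyr":          math.pi,          # tries to minimize own payoff
    "sadist":         -math.pi / 2,      # tries to minimize the other's payoff
    "sadomasochist":  -3 * math.pi / 4,  # tries to minimize both
}

def subjective_value(theta, own, other):
    """How satisfied an agent with orientation angle theta is with an
    outcome that pays it `own` and pays its opponent `other`."""
    return math.cos(theta) * own + math.sin(theta) * other

# An individualist only sees its own payoff; a prosocial sums both.
subjective_value(ORIENTATIONS["individualist"], 3, 1)  # 3.0
subjective_value(ORIENTATIONS["prosocial"], 3, 1)      # (3 + 1) / sqrt(2)
```

Satisfaction, in this picture, is just the subjective value of the outcomes an agent actually experiences.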
We ran a computer simulation where virtual agents with different strategies randomly met and played simple games against each other. Altruists meeting competitors might forfeit their rewards (making both content), while individualists would each try to win as much as possible. We calculated each agent’s actual satisfaction with what was happening – not just how much it objectively won. After running the simulation for a while, we had the agents consider whether they wanted to change their outlook. One simple method is to look at other agents that are very satisfied and try to imitate them: if the prosocials are doing fine by their own measure, maybe one should become prosocial. We also tried more elaborate models with celebrities (agents more influential than others), and agents that considered whether the satisfied agents were satisfied for reasons they themselves would endorse. Some agents also changed outlook at random.
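The overall loop can be sketched as follows. This is a toy version of the imitation dynamic only – the payoffs here are random stand-ins for the actual games, and none of the parameter choices come from the paper:

```python
import math
import random

# Illustrative orientation angles (subjective value = cos*own + sin*other).
THETAS = {"individualist": 0.0, "prosocial": math.pi / 4,
          "altruist": math.pi / 2, "competitor": -math.pi / 4}

def subjective_value(theta, own, other):
    return math.cos(theta) * own + math.sin(theta) * other

def play_round(agents, rng):
    """Randomly pair agents and record how satisfied each is with the
    outcome by its own measure. Real payoffs would depend on the game
    and both strategies; random numbers stand in for them here."""
    rng.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        pa, pb = rng.random(), rng.random()
        a["satisfaction"] += subjective_value(THETAS[a["type"]], pa, pb)
        b["satisfaction"] += subjective_value(THETAS[b["type"]], pb, pa)

def imitate(agents, rng, mutation=0.01):
    """Agents copy the outlook of the most satisfied agent they see,
    with a small chance of switching to a random outlook instead."""
    best = max(agents, key=lambda ag: ag["satisfaction"])
    for ag in agents:
        if rng.random() < mutation:
            ag["type"] = rng.choice(list(THETAS))
        elif best["satisfaction"] > ag["satisfaction"]:
            ag["type"] = best["type"]

rng = random.Random(0)
agents = [{"type": rng.choice(list(THETAS)), "satisfaction": 0.0}
          for _ in range(100)]
for _ in range(50):
    for ag in agents:
        ag["satisfaction"] = 0.0
    play_round(agents, rng)
    imitate(agents, rng)
```

The celebrity and endorsement variants mentioned above would change only the `imitate` step: whose satisfaction an agent samples, and whether it discounts satisfaction it would not itself endorse.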
The main result was very simple: no matter what state we started in, there were two end-states. Either a bizarre sadomasochist world where everybody tried to make everything as bad as possible and was very satisfied with the mess, or a prosocial world where most agents tried to get good scores but also wanted their fellow players to get good scores. If we introduced the reasonable condition that agents that lost too many games disappeared (e.g. they might starve to death because they did not succeed in earning money, despite possibly being fine with losing), then the sadomasochist attractor disappeared and only the prosocial state remained. If we made the need to score well stronger, the outcome shifted towards a less altruistic, individualist state.
The basic reason for this is just self-consistency: competitors, individualists and altruists meeting each other want the games to go in opposite directions, so they struggle against one another. But prosocial agents want the same thing, and can hence achieve it far more easily.
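This is easy to check directly with the subjective-value picture: over the cells of a simple symmetric game, two prosocial players prefer the same outcome, while two individualists pull toward opposite corners (a toy illustration, not one of the paper’s games):

```python
import math

def subjective_value(theta, own, other):
    return math.cos(theta) * own + math.sin(theta) * other

# Joint outcomes, written as (payoff to player 1, payoff to player 2).
outcomes = [(3, 0), (2, 2), (0, 3), (1, 1)]

def preferred(theta, player):
    """The outcome an agent with orientation angle theta steers toward."""
    def value(o):
        own, other = (o[0], o[1]) if player == 1 else (o[1], o[0])
        return subjective_value(theta, own, other)
    return max(outcomes, key=value)

prosocial, individualist = math.pi / 4, 0.0
preferred(prosocial, 1)      # (2, 2) -- both prosocials agree
preferred(prosocial, 2)      # (2, 2)
preferred(individualist, 1)  # (3, 0) -- individualists pull apart
preferred(individualist, 2)  # (0, 3)
```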
The importance of this result is that it tells us we do not need to worry too much about “morality pills” leading to a society where we all become sociopaths out to make maximal money. But our simulations also showed that the behavior can be quite complex even in this simple game. Diversity and the speed of learning played a large role. Introducing elements such as group membership, punishment of rule-breakers, reputation or governance can lead to various forms of emergence and instability. Updating our moral strategies may not always have the intended effects: we need to simulate them carefully first.