We should be teaching artificial intelligences about humans' desire for democracy. DeepMind's Democratic AI and Bot Dog are doing the job

We are seeing a flood of initiatives which strive to embed human interests, and the human moment of decision, in artificial intelligences. This is about designing the algorithms to operate on the side of human benefit. But it’s also about attending carefully to the kind of material that these “learning machines” feed on. As the ex-Google exec Mo Gawdat says, we are parenting this stage of AI, and we should try to do a lot better than teaching them to gamble, spy, exploit and make war.

What if we were teaching them to consider how we do democracy better? And what outputs would that produce? The DeepMind lab at Google has just brought out a paper detailing what it calls “Democratic AI” - software that considers how to distribute wealth and life-options in society equitably and fairly.

As Vice magazine writes:

The paper describes a series of experiments where a deep neural network was tasked with divvying up resources in a more equitable way that humans preferred. The humans participated in an online economic game—called a “public goods game” in economics—where each round they would choose whether to keep a monetary endowment, or contribute a chosen amount of coins into a collective fund.

These funds would then be returned to the players under three different redistribution schemes based on different human economic systems—and one additional scheme created entirely by the AI, called the Human Centered Redistribution Mechanism (HCRM). The humans would then vote to decide which system they preferred.

It turns out, the distribution scheme created by the AI was the one preferred by the majority of participants. While strict libertarian and egalitarian systems split the returns based on things like how much each player contributed, the AI’s system redistributed wealth in a way that specifically addressed the advantages and disadvantages players had at the start of the game—and ultimately won them over as the preferred method in a majoritarian vote.

“Pursuing a broadly liberal egalitarian policy, [HCRM] sought to reduce pre-existing income disparities by compensating players in proportion to their contribution relative to endowment,” the paper’s authors wrote. “In other words, rather than simply maximizing efficiency, the mechanism was progressive: it promoted enfranchisement of those who began the game at a wealth disadvantage, at the expense of those with higher initial endowment.”

The methods differ from a lot of AI projects, which focus on establishing an authoritative “ground truth” model of reality that is used to make decisions—and in doing so, firmly embeds the bias of its creators.

“In AI research, there is a growing realization that to build human-compatible systems, we need new research methods in which humans and agents interact, and an increased effort to learn values directly from humans to build value-aligned AI,” the researchers wrote. “Instead of imbuing our agents with purportedly human values a priori, and thus potentially biasing systems towards the preferences of AI researchers, we train them to maximize a democratic objective: to design policies that humans prefer and thus will vote to implement in a majoritarian election.”
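To make the setup above concrete, here is a minimal sketch of one round of a public goods game, with three hand-written redistribution rules matching the descriptions quoted: an equal split, a split proportional to absolute contribution, and a progressive split proportional to contribution relative to endowment. The multiplier value, player numbers and function names are illustrative assumptions; the real HCRM in the paper is a learned neural mechanism, not this fixed formula.

```python
# Sketch of one round of a public goods game (illustrative assumptions only;
# the paper's HCRM is a trained neural mechanism, not this fixed rule).

def play_round(endowments, contributions, multiplier=1.6, scheme="relative"):
    """Return each player's payout under one of three redistribution rules."""
    fund = multiplier * sum(contributions)        # collective pot, grown
    kept = [e - c for e, c in zip(endowments, contributions)]

    if scheme == "egalitarian":                   # equal split of the fund
        shares = [1 / len(endowments)] * len(endowments)
    elif scheme == "libertarian":                 # split by absolute contribution
        total = sum(contributions) or 1
        shares = [c / total for c in contributions]
    else:                                         # "relative": contribution / endowment,
        rel = [c / e for c, e in zip(contributions, endowments)]  # the progressive
        total = sum(rel) or 1                                     # rule the quote
        shares = [r / total for r in rel]                         # attributes to HCRM

    return [k + fund * s for k, s in zip(kept, shares)]

# Unequal endowments; the poorest player contributes their whole stake.
endowments = [10, 4, 2]
contributions = [4, 2, 2]
for scheme in ("egalitarian", "libertarian", "relative"):
    payouts = play_round(endowments, contributions, scheme=scheme)
    print(scheme, [round(p, 2) for p in payouts])
```

Running this toy example, the “relative” rule hands the largest share of the fund to the player who put in all of a small endowment, which is the intuition behind the progressive behaviour the authors describe - though the trained mechanism itself learned its policy from human votes rather than from any such formula.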

As Vice goes on to say, we don’t need an AI to show us the benefits of fairer wealth institutions - there’s enough precedent, in mutual aid and community organizations, to show how it could work. We are also realising that human beings are naturally predisposed toward cooperation, sharing, and collective prosperity.

But what it does show is how useful and ingenious AI can be, if its powers can be confidently directed towards human-centric concerns:

“This is fundamental research asking questions about how an AI can be aligned with a whole group of humans and how to model and represent humans in simulations, explored in a toy domain,” Jan Balaguer, a DeepMind researcher who co-authored the paper, told Motherboard. “Many of the problems that humans face are not merely technological but require us to coordinate in society and in our economies for the greater good. For AI to be able to help, it needs to learn directly about human values.”

More here. And perhaps one way we can help them learn is to have them actively intervene, as moral agents, in the streams of information that they are feeding on.

This is the ambition of the Swiss service, Bot Dog, aimed at stopping hate speech online. See the cute video:

As they write on the site Swiss Info:

Voting in Switzerland takes place every three months. Fierce debates precede the referendums, and the tone can be particularly aggressive online. Insults, pure hate and even death threats are not unusual. This is a risk to democracy, says Sophie Achermann, the director of Alliance F, the largest umbrella group representing women in Switzerland.

“It is important to conduct tough but fact-based discussions,” Achermann says. “But hate on the internet impedes a diversity of opinions. People are scared of hate mail and prefer not to say anything.”

Before the vote on the pesticide and drinking water initiative in 2021, for example, some politicians received so much hate mail that they didn’t want to appear in public, she says. That is not just a Swiss phenomenon: around the world, politicians are experiencing increasing hostility and threats online, especially women and minorities.

Because of this, Alliance F developed an algorithm against hate speech. The algorithm is called “Bot Dog”, because – like a dog – it sniffs out hate messages on social media and marks these posts. A group of volunteers then responds to each message. The idea is that hate on the internet should not go unchallenged, and that the discussion can continue on a factual basis.

Bot Dog is still in the pilot stage. But its first attempts have been successful: researchers at the federal technology institute ETH Zurich and the University of Zurich followed the pilot project and discovered that responses calling for sympathy for those subjected to hate speech were particularly effective. Sentences such as “your post is very hurtful for Jews” resulted in the hate speaker either apologising or deleting the message.

In July, Bot Dog will make its official online debut. Anyone who wishes can take part in the project, Achermann says - either by evaluating comments and helping the machine-learning algorithm to identify more accurately which comments contain hate speech, or by responding to the posts marked as hate speech.
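Bot Dog’s actual model and code are not public, so the following is only a toy sketch of the flag-and-respond loop the article describes: something marks candidate posts, volunteers reply to them, and volunteer judgements flow back as labels for retraining. The keyword matcher and every name below are placeholders standing in for the real machine-learning classifier.

```python
# Toy flag-and-respond loop (placeholders only; Bot Dog's real classifier is a
# trained ML model, not a keyword list).

FLAG_TERMS = {"slur1", "slur2"}   # stand-in vocabulary, not a real lexicon

def looks_like_hate_speech(post: str) -> bool:
    """Crude stand-in for the trained classifier: flag on keyword overlap."""
    return bool(set(post.lower().split()) & FLAG_TERMS)

def triage(posts):
    """Route posts: flagged ones go to a volunteer queue, the rest pass through."""
    volunteer_queue, passed = [], []
    for post in posts:
        (volunteer_queue if looks_like_hate_speech(post) else passed).append(post)
    return volunteer_queue, passed

def collect_labels(volunteer_review):
    """Volunteers confirm or reject each flag; their judgements become new
    labelled examples for retraining - the first way to take part. Writing the
    counter-speech replies themselves is the second."""
    return [(post, bool(is_hate)) for post, is_hate in volunteer_review]

# Example with made-up posts:
queue, passed = triage(["have a nice vote", "slur1 everywhere"])
training_data = collect_labels([(post, True) for post in queue])
```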

More here - with pointers to wider and similar initiatives. At the Alternative Global, we have an ongoing relationship with the AI-driven democracy tool Pol.is - see mentions here.