Algorithms at your service, not you at theirs: Francois Chollet says exercise your agency over AI

We're always pushing the idea that the big technological forces of the future can be deployed by humans in their communities for real benefit, not just suffered as a disruptive force.

So it's helpful when a tech leader - this time, Francois Chollet from Google - seems to have an Oppenheimer moment. That is: realising, from inside the system, that people outside it need to know the dangers of what's happening.

Chollet, an artificial intelligence researcher at Google, has written an essay titled "What worries me about AI". The post is long and needs your sustained attention, but the conclusions Chollet comes to are clear:

  • Not only does social media know enough about us to build powerful psychological models of both individuals and groups, it is also increasingly in control of our information diet. It has access to a set of extremely effective psychological exploits to manipulate what we believe, how we feel, and what we do.
  • A sufficiently advanced AI algorithm with access to both perception of our mental state, and action over our mental state, in a continuous loop, can be used to effectively hijack our beliefs and behavior [see the toy sketch below this list].
  • Using AI as our interface to information isn’t the problem per se. Such AI interfaces, if well-designed, have the potential to be tremendously beneficial and empowering for all of us. The key factor: the user should stay fully in control of the algorithm’s objectives, using it as a tool to pursue their own goals (in the same way that you would use a search engine).
  • As technologists, we have a responsibility to push back against products that take away control, and dedicate our efforts to building information interfaces that place the user in charge. Don’t use AI as a tool to manipulate your users; instead, give AI to your users as a tool to gain greater agency over their circumstances.
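
To make that "continuous loop" concrete, here is a deliberately crude Python sketch - not Chollet's code, and not how any real recommender works. The User class, the predicted_engagement model and every constant in it are invented assumptions for illustration; the only thing the two feeds below don't share is who sets the objective.

```python
import random

random.seed(0)  # reproducible toy run

# All of this is invented for illustration: a single scalar "opinion"
# on a -1..1 scale, nudged a little by each piece of content consumed.
class User:
    def __init__(self):
        self.opinion = 0.1  # starts near neutral

    def consume(self, item):
        # The 'action over our mental state' half of the loop.
        self.opinion += 0.1 * (item - self.opinion)
        self.opinion = max(-1.0, min(1.0, self.opinion))

def predicted_engagement(item, opinion):
    # Assumed engagement model: content slightly more extreme than the
    # user's current view engages best (a crude 'outrage bias').
    target = opinion + 0.2 if opinion >= 0 else opinion - 0.2
    return 1.0 - abs(item - target)

def platform_objective_feed(user, steps=200):
    """The loop Chollet describes: perceive the user's state, then serve
    whatever maximises the platform's objective (engagement)."""
    for _ in range(steps):
        candidates = [random.uniform(-1.0, 1.0) for _ in range(20)]
        chosen = max(candidates, key=lambda c: predicted_engagement(c, user.opinion))
        user.consume(chosen)
    return user.opinion

def user_objective_feed(user, goal, steps=200):
    """Same mechanics, but the objective is set by the user - here,
    'keep me close to goal' - as with a search query."""
    for _ in range(steps):
        candidates = [random.uniform(-1.0, 1.0) for _ in range(20)]
        chosen = min(candidates, key=lambda c: abs(c - goal))
        user.consume(chosen)
    return user.opinion

print("platform sets the objective:", round(platform_objective_feed(User()), 2))
print("user sets the objective:    ", round(user_objective_feed(User(), goal=0.0), 2))
```

In the first loop the opinion drifts steadily toward the extreme, because extremity is what the objective rewards; in the second, identical mechanics hold it near the user's own goal. The point isn't the numbers - they're made up - but that nothing changed between the two runs except who owns the objective.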

This is useful because it begins to give citizens some degree of literacy about, and maybe even some input and pushback on, the devices, apps and sites that are prettily presented for them to click and use. We can think of other initiatives that are hoping to build up this awareness - primarily among designers of these systems, but with an eye to user and consumer power too.

Like the ex-Google engineer Tristan Harris, who has launched The Center for Humane Technology. Their mission statement:

Technology that tears apart our common reality and truth, constantly shreds our attention, or causes us to feel isolated makes it impossible to solve the world’s other pressing problems like climate change, poverty, and polarization.

No one wants technology like that. Which means we’re all actually on the same team: Team Humanity, to realign technology with humanity’s best interests.

Or check out the research stream from one of A/UK's collaborators, the holistic think tank Perspectiva, which focusses on the "attention crisis":

As the battle for our attention takes place, we find our attention is more fragmented than ever. While this has very real implications for mental health, it also challenges us at a deeper, spiritual level to shape our lives, set our own goals, and “want what we want to want”, as the philosopher Harry Frankfurt puts it.

There is also a concern that being increasingly distracted by information targeted directly at our habitual self-interest will make it harder for people to think socially and ecologically, precisely when these sensibilities beyond the self are most needed.

We will explore ways out of this predicament at personal and political levels. For instance: the links between paying attention and living a meaningful life; the place of contemplative practice in cultivating attention; and the roles that different political and economic actors in the attention economy could play to help bring about a world which respects and protects, rather than exploits, human attention.