What if AI sought not to simulate or automate humans, but to amplify and revel in our changeability? Two essays put humanity back in the technological saddle

Some of the more integrated and humanist pieces we’ve recently found on AI, excerpted below.

Evgeny Morozov has written in the FT about “the AI we could have had”, an essay drawing on his oral-history interviews with the US computer pioneers Avery Johnson and Warren Brodey (and their circle), who argued for a vision of AI as a “craftsman” working among humans rather than a substitute for human effort:

Johnson and Brodey believed these companies had overlooked a crucial philosophical question about the technology they were working on: were computers really destined to be mere slaves, condemned to an eternity of performing repetitive tasks?

Or could they be something more? Could they evolve into craftsmen? While slaves unerringly obey commands, craftsmen have the freedom to explore and even challenge directives.

The finest craftsmen do more than just fulfil orders; they educate and enlighten, expanding our horizons through their skill and creativity. Johnson and Brodey wanted to wrest control away from those eager to mass-produce an army of subservient machines.

…Their vision of computing was not about prediction or automation. The tech they were building was supposed to expand our horizons. Instead of trusting a computer to recommend a film based on our viewing history, they wanted us to discover and appreciate genres we might have avoided before.

Their tech would make us more sophisticated, discerning and complex, rather than passive consumers of generative AI-produced replicas of Mozart, Rembrandt or Shakespeare.

What I discovered was that the types of interactivity, smartness and intelligence that are baked into the gadgets we use every day are not the only kinds available. What we now consider inevitable and natural features of the digital landscape are in fact the result of fierce power struggles between opposing schools of thought.

With hindsight, we know that Silicon Valley ultimately embraced the more conservative path. The Homo technologicus it produced mirrors the Homo economicus of modern economics, valuing rationality and consistency, discouraging flexibility, fluidity and chance. Today’s personalised tech systems, once the tools of mavericks, are more likely to narrow our opportunities for creativity than expand them.

Consider the much-criticised ad for Apple’s latest tablet. In “Crush!” a colossal hydraulic press steadily obliterates a mountain of musical instruments, books, cameras and art supplies, to the strains of Sonny and Cher’s 1971 hit “All I Ever Need Is You”, leaving behind only an ultra-thin iPad. This one device, we are meant to understand, has within it all the capabilities of the demolished objects. We won’t be needing them any more.

Was there another way? Perhaps.

It turned out there was a way for technology and ecology to coexist after all. The secret was the concept of “responsiveness”. Early, more conservative strands of cybernetics had fixated on the simple model of the adaptive thermostat, marvelling at its ability to maintain a preset temperature in a room. Our modern-day smart-home systems, which intuit our preferences and automate everything, quietly adapting to our needs, are just fancier versions of this idea.

But the rebels at the lab thought this kind of automation was the antithesis of true responsiveness. They saw human relations, art and identity as open-ended, always-evolving ecologies that could not be reduced to the thermostat’s simplistic model of optimisation. Can one really pinpoint the “right” cinema, music or loved one in the same way as the right temperature of a room? Today’s TV, music and dating apps seem to think so. The Boston contrarians did not.

…My interest in the Boston lab was sparked by the hunch that its members were the early proponents of solutionist thought. But after speaking with the people who worked at Lewis Wharf, I realised they were in direct opposition to those kinds of smart technologies.

Unlike some critics of Big Tech today, they did not champion a return to vintage or “dumb” tech. Instead, they envisioned a kind of digital smartness that remains almost unimaginable to us today. They saw people as fickle and ever-changing, qualities they did not view as flaws.

In 2014, when I asked Brodey about the possibility that his responsive mattresses and chairs would be able to find an ideal position for each user, his response struck me: “That wasn’t our purpose,” he said. “There is no ideal anything, because we are constantly changing. We’re not like machines.”

He is right; machines we aren’t. But the wrong technologies can make us machine-like. And maybe they have. Perhaps this is the root of our discomfort about the direction of the digital revolution: that rather than making machines more human, it is making people more mechanical.

Speaking at a 1967 conference, Brodey minced no words: “man becomes captured, captured behind the grid of what can be programmed into the machine . . . We have been captured by automobiles, by houses, by architecture, simplified to the point of unresponsiveness.”

The maddening efficiency of our digital slaves has obscured the idea that human agency depends on constant course correction. As Brodey noted in 1970, “Choice is not intellectual. It’s made by doing, by exploring, by finding out what you like as you go along.”

Sparling told me that a key question driving the lab’s work was, “What can we discover that allows the person in the loop to learn and progress with whatever they are trying to do?” The common thread uniting projects such as the dancing suit and the restraint blanket, she said, was their celebration of improvised learning — jazz style — as the core value that should underpin interactive tech.

Despite the lab’s failure, Johnson and Brodey’s insights carry an important message. If we want technology that expands our choices, we must recognise that someone has to fund it, much as our governments fund public education or arts and culture. Achieving this on a massive scale would require an effort comparable with the one that initiated the welfare state.

Consider this. Dumping all the world’s classical music on to your Spotify playlist, no matter how refined its recommendations, won’t turn you into a connoisseur. Yet, isn’t there a way to harness the latest technologies to serve that mission?

Here is a radical idea that Silicon Valley will not admit: technology is not just about freezing, stratifying and monetising existing tastes. It can also deepen, sophisticate and democratise them.

This kind of post-solutionist approach seems more realistic than continuing to hope that legions of algorithmic slaves can solve all our problems.

Despite the hype, generative AI—even if made widely accessible for free—is unlikely to spark a revolutionary wave of creativity and might, in fact, hinder it by depriving practising artists and educators of stable incomes. Making tablets ever thinner and more powerful won’t get us there either.

More here.

This next piece is from Matthew Crawford, author of Shop Class as Soulcraft, a worthy successor to Pirsig’s Zen and the Art of Motorcycle Maintenance. Crawford ponders why we allow AIs to “erase” our sense of self. His key anecdote concerns a father who has a generative AI produce a toast for his daughter’s wedding, then abandons the generated text at the last moment:

What would it mean, then, to outsource a wedding toast [to ChatGPT]? To use Heidegger’s language, some entity has “leaped in” on my behalf and disburdened me of the task of being human. For Heidegger, this entity is “das Man,” an anonymized other that stands in for me, very much like Kierkegaard’s “the Public.” It is a generalized consciousness—think of it as the geist of large language models.

LLMs are built on enormous data sets—essentially, all language that is machine-scrapable from the Internet. They are tasked with answering the question, “given the previous string of words, what word is most likely to occur next?” They thus represent what the philosopher Talbot Brewer recently referred to as “the statistical center of gravity” of all language.

Or rather, all language that is on the Internet. This includes the great literature of the past, of course. But it includes a whole lot more of the present: marketing-speak, what passes for journalism, the blather produced by all who suffer from PowerPoint brain.

But put aside the impoverished quality of the language that these LLMs are being trained on. If we accept that the challenge of articulating life in the first person, as it unfolds, is central to human beings, then to allow an AI to do this on our behalf suggests self-erasure of the human.

In an as-yet-unpublished presentation in April at the University of Virginia’s Institute for Advanced Studies in Culture in Charlottesville, Brewer referred to “degenerative AI.” Because the new AIs are language machines, they are “aimed right at our essence”…

…In the normal course of human society, you are born into a culture that has prepared the way for you. It initiates you into its language and tells a story of where you came from.

It is saturated with meaning due to a chain of begettings that reaches back in time, each generation of which started and grew through acts of love: at conception, and in the ongoing work of teaching, transmission and care.

The world is welcoming, in other words. It was built by your ancestors, and they imagined you long before you arrived. They wondered what sort of work you might do, before you knew there is such a thing as work. Your parents may have recognized the echo of a sibling or a parent in your face as you sought the nipple. They smiled at you.

This sense of a world handed down in love is interrupted when the basic contours and possibilities of life appear to be ordered by impersonal forces…

…This mood of interchangeability is likely to deepen as AI saturates the world and we are tempted to let it stand in for our own subjectivity. But, like the father at his daughter’s wedding, we are still free to refuse it.
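Crawford’s “statistical center of gravity” point is easy to make concrete. At bottom, an LLM is trained to guess the most probable next word (strictly, next token) given the words that came before it. Here is a deliberately crude, word-level sketch of that idea in Python: a toy counter of which word follows which, nothing like a real neural model, offered only to show the shape of the objective.

```python
# Toy illustration of next-word prediction: count, for each word in a tiny
# corpus, which words are observed immediately after it, then "predict" by
# picking the most frequent continuation. Real LLMs use neural networks over
# subword tokens trained on web-scale text, but the objective has this shape.
from collections import Counter, defaultdict

corpus = (
    "the finest craftsmen do more than fulfil orders "
    "the finest craftsmen educate and enlighten"
).split()

# For every word, count the words seen immediately after it.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the corpus, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("finest"))     # prints "craftsmen"
print(most_likely_next("craftsmen"))  # prints "do" or "educate" (each follows once)
```

However much richer the real systems are, the output is still a statistically plausible continuation of what has come before, which is the disburdening “leaping in” Crawford worries about.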

More here. See our burgeoning archive on artificial intelligence.