Two takes on the current burst of power and capacity in AI - one is exultant, the other worried

“Both worried and exultant about AI, portrait”: prompt to Stable Diffusion

The range of semi-magical activities that the current wave of AI is responsible for - most of them encroaching on intelligences, expertises and creativities that have usually been the preserve of sapient humans - is raising a storm-cloud of commentary.

From a cosmolocal and community power perspective, we’re doing our best to sample from it… For the sake of being generative, here’s an exultant and a worried take - and we’ll leave you to navigate between them.

Balaji Srinivasan: the professionals will fight hard against AI

The first is from the crypto-visionary Balaji Srinivasan, featured in these pages before, who often sears across the Twitter skies with a provocative take on developments, rooted in his belief in decentralised computing and organisation (the ethical creed of most blockchainers).

This tweet - its thread content is below - grabbed the reins of the current “Generative AI” wave:

AI means a brilliant doctor on your phone. Who can diagnose you instantly, for free, privately, using only your locally stored medical records.

Do you think the doctors will be happy about that? Or the lawyers? The artists? The others that AI disrupts?

They’ll fight it. Hard.

AI directly threatens the income streams of doctors, lawyers, journalists, artists, professors, teachers.

That happens to be the [US] Democrat base!

So they’ll lash out. Hard. AI safety for them means job security. Everything is on the table, from lawsuits to laws.

The last decade showed us that the US establishment hates tech.

Think about how they lost it over a better way to call a taxi!

We’ll soon see whether a down-the-middle, by-the-rules approach works — or whether every company that tries it gets sued & regulated for their trouble.

Will a teacher’s pension fund consciously back companies that replace every human teacher with a far superior AI tutor?

How about a college endowment, when AI does that to professors?

Will the AMA [American Medical Association] accelerate AI diagnosis?

Will journalists cheer their AI replacements?

No.

Overnight, we’ll see many blue checks [this refers to those on Twitter who have secured their authenticity of identity on the platform, indicated by a “blue check” symbol - Ed] start acting like blue collars.

The US establishment will turn even more protectionist, once politicians realize that AI doesn’t vote but their constituents do.

It won’t stop every use case. But it’ll fight many of the important ones.

AI likely increases global equality. But it’s the elephant graph to the Nth power.

Billions of people get free AI tutors, doctors, accountants, and lawyers.

And the top end also wins, the power users & platform operators.

But the median rich country citizen? Maybe not so much.

While the US establishment will pointlessly fight it, and the Chinese establishment will totally control it, if we’re lucky we may see the rest of the world build free and decentralized AIs.

As training costs come down, we should build the tech to enable that…

10 reasons to worry about generative AI

This is a very useful piece from InfoWorld to think with - the writer aware that he’s batting for “Team Human” against the jackhammers… Here’s a précis of his ten reasons to worry:

1. Plagiarism

AIs like DALL-E and ChatGPT are “making new patterns from the millions of examples in their training set. The results are a cut-and-paste synthesis drawn from various sources—also known, when humans do it, as plagiarism.” The problem is, their blend or synthesis of these sources can be very convincing. But nothing “truly new” is being produced.

2. Copyright

"When AIs start producing work that looks good enough to put humans on the employment line, some of those humans will surely spend their new spare time filing lawsuits” [for plagiarism].

3. Uncompensated labour

“Should humans be compensated” when the creative labour they put into these AI services is further used to develop a company’s product? “Much of the success of the current generation of AIs stems from access to data. So, what happens when the people generating the data want a slice of the action? What is fair? What will be considered legal?”

4. Information is not knowledge

It’s a challenge to scholars and artists when an AI absorbs technique or scholarship in a few months, whereas it has taken a human expert many years to ingest material to the same extent. This is Balaji’s challenge above - imagine this level of expertise, and even creativity, being available to billions. Maybe “machines were made to decode the meaning of Mayan hieroglyphics”. Yet does it have the unpredictability of genuine human creativity and insight? “Isn’t something missing?”

5. Intellectual stagnation

“Speaking of intelligence, AIs are inherently mechanical and rule-based. Once an AI plows through a set of training data, it creates a model, and that model doesn't really change. Some engineers and data scientists imagine gradually retraining AI models over time, so that the machines can learn to adapt. But, for the most part, the idea is to create a complex set of neurons that encode certain knowledge in a fixed form.

“Constancy has its place and may work for certain industries. The danger with AI is that it will be forever stuck in the zeitgeist of its training data. What happens when we humans become so dependent on generative AI that we can no longer produce new material for training models?”

6. Privacy and security

So many forms of data are being poured into these AIs for training. Can we be sure personal info won’t leak from this? Won’t smart criminals be able to trick the AI into divulging it? “As an example, say the latitude and longitude of a particular asset are locked down. A clever attacker might ask for the exact moment the sun rises over several weeks at that location. A dutiful AI will try to answer. Teaching an AI to protect private data is something we don’t yet understand.”

7. Undetected bias

“The hardware at the core of generative AI might be as logic-driven as Spock, but the humans who build and train the machines are not. Prejudicial opinions and partisanship have been shown to find their way into AI models. Perhaps someone used biased data to create the model. Perhaps they added overrides to prevent the model from answering particular hot-button questions. Perhaps they put in hardwired answers, which then become challenging to detect. Humans have found many ways to ensure that AIs are excellent vehicles for our noxious beliefs.”

8. Machine stupidity

We’re beginning to learn what AI does well. But it doesn’t do basic arithmetic well, and it gets the number of fingers (or tentacles) wrong on humans (and octopuses). “Machine intelligence is different from human intelligence and that means machine stupidity will be different, too.”

9. Human gullibility

Humans are gullible to entities that present knowledge with calm confidence. AIs seek out patterns of information to answer our queries - and those patterns, when they’re filling gaps in the AI’s knowledge, can be plausible but entirely fictitious. “They can produce paragraphs of perfectly accurate data, then veer off into speculation, or even outright slander, without anyone knowing it's happened. Used car dealers or poker players tend to know when they are fudging, and most have a tell that exposes their calumny; AIs don't.”

10. Infinite abundance

Digitality has long challenged any idea of economics based on the management of scarcity. Generative AI just takes this challenge to the next level. What if its capacity to generate requested images and prose puts many artists and writers (and bureaucrats) out of work? What if we cease to give credibility to much human art, if machines are generating powerful artworks themselves? What if the whole internet ad model falls over, because no-one can tell whether it’s chatbots engaging with each other, or humans with money in their pockets?

From the Alternative Global perspective, this level of abundance is a massive opportunity for social, even civilisational transformation - it’s been a part of our futures thinking for years.

More here.