AI Politics: From Pausing to Regulating, It’s All About Winning the Hearts and Minds of People

Written by linked_do | Published 2023/05/03
Tech Story Tags: ai | artificial-intelligence | politics | news | analysis | ai-ethics | responsible-ai | hackernoon-top-story


“The Letter” was just the beginning. Welcome to the AI politics show. Grab some popcorn, or better yet, get in the ring.

I got a letter from the government the other day

I opened and read it, it said they were suckers

They wanted me for their army or whatever

Picture me giving a damn, I said never

Here is a land that never gave a damn

About a brother like me and myself because they never did

I wasn’t with it, but just that very minute it occurred to me

The suckers had authority

Lyrics from “Black Steel in the Hour of Chaos” by Public Enemy

The connection between Public Enemy and the state of AI today may not be immediately obvious. But if you swap “government” for “Future of Life Institute” and “authority” for “power”, those lyrics can be a pretty good metaphor for what’s happening in AI today.

“The Letter”, as it has come to be known on Twitter, is an Open Letter compiled by the Future of Life Institute (FLI) and signed by an ever-growing number of people. It calls for a pause on the training of AI models more powerful than GPT-4 in order to “develop and implement a set of shared safety protocols for advanced AI design and development”.

FLI’s letter mentions that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” A statement few would disagree with, including people who raised justified concerns about “The Letter”.

I signed “The Letter” too. When I did, it had fewer than 1,000 signatories. Today, it has about 50,000 according to the FLI’s FAQ. I did not sign because I fully agree with FLI or its framing – far from it. I also have reservations about the above statement, and I am extremely aware and critical of the so-called AI hype.

I signed “The Letter” expecting that it could raise attention and get a much-needed conversation going, and it did. The only other time I recall an AI backlash fueling such heated debate was in 2020, when Google fired researchers who had raised concerns about the practice of building ever-bigger large language models in a paper known as “Stochastic Parrots”.

Of course, 2.5 years is a lifetime in AI. That was pre-ChatGPT, before AI broke into the mainstream. But that does not necessarily mean the issues are widely understood today either, even if they are hotly debated.

The Future of Life Institute and TESCREAL

A first line of criticism against “The Letter” cites its origins and the agendas of the people who drafted and signed it – and rightly so. Indeed, the Future of Life Institute is an Effective Altruism, Longtermist organization.

In a nutshell, that means people who are more concerned about a hypothetical techno-utopian future than about the real issues the use of technology is causing today. Even though FLI’s FAQ tries to address present harms too, somehow Peter Thiel and Elon Musk types citing “concentration of economic power” as a concern does not sound very convincing.

Philosopher and historian Emile P. Torres, who was previously a Longtermism insider, has coined the acronym TESCREAL to describe Longtermism and its family of ideologies. Claiming that we need to go to Mars to save humanity from destroying Earth, or that we need super-advanced AI to solve our problems, speaks volumes about TESCREAL thinking.

These people do not have your best interest at heart, and I certainly did not see myself signing a letter drafted by FLI and co-signed by Elon Musk. That said, it’s also hard not to. The amount of funding, influence and publicity Elon Musk types garner is hard to ignore, even for their critics.

Funding and goals

Case in point: DAIR, the Distributed AI Research Institute, set up by AI Ethicist Timnit Gebru. Gebru was one of the people who were fired from Google in 2020. DAIR was founded in 2021 to enable the kind of work that Gebru wants to do.

DAIR is “rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial”. That sounds commendable.

DAIR employs a number of researchers to work on its mission and has raised $3.7 million from the Ford Foundation, the MacArthur Foundation, the Kapor Center, George Soros’ Open Society Foundation and the Rockefeller Foundation. Research has to be funded somehow. But perhaps it’s worth pondering the source of this funding too.

Gebru is aware of the conundrum and has spoken about “Big Tech billionaires who also are in big philanthropy now”. Presumably, DAIR founders believe that the use of these funds towards goals they find commendable may be more important than the origins of the funds. But should this line of thinking be reserved exclusively for DAIR?

DAIR published a “Statement from the listed authors of Stochastic Parrots on the ‘AI pause’ letter”. In this otherwise very thoughtful statement, its authors write that they are “dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received”.

Motives, harms and politics

While I know and have worked with some of the professionals who signed FLI’s letter, I can’t speak for anyone but myself. But I do think it would be fair to give them the benefit of the doubt.

Some like Gary Marcus have stated that while they do not fully endorse “The Letter”, they signed in order to achieve a specific goal they find very important. Sound familiar?

People have questioned the motives of the signatories, claiming that some may simply wish to stall the ones currently leading AI in order to catch up. Case in point: Elon Musk is setting up a new AI company called x.ai. And OpenAI now says that ever-larger AI models may not be the way to go.

But not everyone who signed is motivated by self-interest. And the harms resulting from the deployment of AI systems today are real.

Worker exploitation and massive data theft; reproduction of systems of oppression and the danger to our information ecosystem; the concentration of power. The harms that DAIR cites are all very real.

The powers that be are either actively promoting or mindlessly enabling these via AI. Building coalitions to raise issues, draw awareness and undermine Big Tech’s march is the pragmatic thing to do.

If that sounds like politics it’s because it is, as people have noted. That means it’s about “opinions, fears, values, attitudes, beliefs, perspectives, resources, incentives and straight-up weirdness” – plus money and power.

That’s what it’s always been about. Gebru is not a stranger to this game, having tried to change things from inside Google before setting out to play the influence game from the outside.

Media influence and research

FLI’s call for an AI moratorium was not the first one, but it was the one that got traction. Pragmatism called for signing, even if critically. That’s what Marcus did, despite having proposed a moratorium before FLI did. That earlier call attracted neither Elon Musk-type signatories nor the “positive media coverage” that DAIR noted FLI’s letter received.

It’s true that there has been some positive coverage. Some outlets are always eager to play the sensationalism card, and others are openly inspired by TESCREAL. But that does not mean all coverage was positive. In any case, media influence is part of the game.

But what about research? Both DAIR and AI leaders like Marcus, Andrew Ng and Yann LeCun mention research. Ng and LeCun believe that research is part of the solution to bring “new ideas that are gonna make [AI] systems much more controllable”.

This epitomizes a widely held belief, which seems to boil down to problems being mainly of technical nature. If you hold this belief, then it makes sense to also believe that what’s needed to overcome problems is more research to come up with solutions.

As the “Stochastic Parrots” incident shows, however, this never was about lack of solutions. It’s more about agendas, politics, money and power.

Monopolies and fixing AI

Marcus notes that what scares him the most about AI is people. He argues for bridge-building and not focusing on a single issue. In this spirit, it’s important not to focus solely on building better AI through research. Ensuring that people have access to it is crucial.

Ultimately, having access to a better browser than Internet Explorer was more important than the browser itself. If that was true for the 90s browser wars, it may also tell us something about AI today. That’s the gist of the argument Matt Stoller is making.

Stoller, a committed anti-monopolist, sees Google’s and Microsoft’s embrace of AI as two sides of the same coin. He thinks Google’s claim that AI poses a threat to its dominance in search is an effort to mislead the antitrust investigation against the company.

Stoller claims “it’s on us, as a democratic society, to tell our lawmakers that we don’t want this fantastic scientific knowledge controlled by the few”. He is right to call for vigilance against Big Tech.

Some AI researchers and entrepreneurs are working on creating datasets and models that are open for anyone to use. That is great, but we need to keep in mind the browser metaphor. Datasets and models enable developers to build things. But if those things don’t get shelf space because Big Tech shuts them out, they won’t do much good.

Being an AI expert helps. But knowing what you’re talking about does not necessarily mean you will be heard, as the case of Melanie Mitchell and her exchange with Senator Chris Murphy goes to show.

Mitchell did a great job of debunking AI hype. Building alliances with people who may be able to do something about AI? Not so much. Sometimes stepping in is the right thing to do, and to get there, alliances are needed.

A modest proposal

AI expert or not, there are a couple of very important things that each of us can do.

First, understand the nature of the monopolistic power that the ChatGPTs of the world are exerting. The more we use those Big Tech systems, the more we contribute to making them better and the more we feed Big Tech’s power. Let’s not fall for it this time. Let’s use alternatives, of which there are increasingly many.

Second, get into AI politics. Send a letter to a senator, collect signatures, march the streets or hold a Twitter Space to discuss an international agency for AI – whatever works. But do something, or at least, be aware of what others are doing. Pushing on with business as usual at AI’s breakneck speed is a recipe for disaster.

We are already in a situation where the scales of power are largely rigged in favor of Big Tech. Doing nothing means staying neutral in a situation of injustice, and that is choosing the side of the oppressor.

A version of this piece appears here.

