Calling For A Moratorium On Calling For A Moratorium On AI


The AI industry is heavily capitalized and reliant on smoke and mirrors. Every few months, calls for regulation and open letters asking for a pause in AI research from the leaders of AI companies make headlines. These are not circulating out of genuine fear; they are self-aggrandizing and, frankly, monopolistic.

In under a year, artificial intelligence has transformed from a geeky sci-fi concept into a source of widespread fear. No group benefits more from these fears than companies in the AI industry.

A staggering number of new AI startups have emerged as artificial intelligence has become more mainstream. Building and training AI models from scratch can be extremely costly, whereas the fees charged by OpenAI and other AI startups with hundreds of millions in funding are relatively low.

This is a great time to hop on the hype train and secure some VC money. Almost all of these new AI ventures, including grief tech and writing assistants, are powered by a small handful of large AI companies. All told, fewer than 20 companies are poised to serve as the backbone of most AI projects.

Hype Vs. Reality Of AI

It is important to note that existing AI projects are simply advanced statistical models. While the technology is impressive, it is not as remarkable as its reputation may suggest. A knife is dangerous, but even a very large knife is not the same as a nuclear bomb.

In March, an open letter signed by Elon Musk, Stability AI CEO Emad Mostaque, and 31,808 others read, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

That letter went on to call for a six-month pause in the development of AI more powerful than GPT-4. The letter went so far as to say that if the “pause cannot be enacted quickly, governments should step in and institute a moratorium.” But for that six-month pause, the letter proposed almost nothing concrete.


“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt,” reads the letter.

In corporate speak, the letter calls for a meeting to discuss setting an agenda for a future meeting. The stated objectives are ethereal, at best directional. But calling for a pause in the development of some vaguely defined technology in order to discuss impending doom creates a strong impression of the power of AI.

With nearly 40,000 signers, I’m sure not all were self-serving. But my marketing hat keeps screaming that it’s far easier to sell a technology that could end human existence than an advanced statistical model remixing existing intellectual property. The same goes for soliciting nonprofit donations or winning research grants.

Another Letter

In May, the Center for AI Safety published a single-sentence statement, signed by executives from OpenAI and DeepMind among others, warning that AI could potentially drive humanity to extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the letter in its entirety.

An observant reader may notice that while the open letter from March used many words to convey little meaning, the May open letter used even fewer words to say less. The statement doesn’t even specify which type of AI it refers to, beyond implying that it may lead to a doomsday scenario. Doomsday AI is a technology that, to my knowledge, does not yet exist.


Not everyone signing these letters is necessarily self-serving. But almost everyone, from the CEOs of companies building the technology to the academics researching AI, benefits from the perception of a possible AI Armageddon. Think about it: are you more likely to donate money to save humanity from extinction, or from a future of spammy websites and annoying automated decisions?

Calls For Self-Serving AI Regulation

When it comes to calls for regulation, I also raise an eyebrow. While monopolies can arise from a lack of regulation, regulation can also protect monopolies. When Sam Altman, the CEO of OpenAI, calls for regulation, he’s calling to protect his company.

According to Crunchbase, OpenAI has over $11 billion in total funding. It can afford to comply with regulation that would effectively leave smaller competitors dead in the water. Calls for regulation from the largest players in AI are just calls to pull the ladder up after them.

This is why I’m calling for a total and complete moratorium on calling for a moratorium on AI research, and on calls for regulation, from anyone who financially benefits from the success of an AI company.


Article by Mason Pelt of Push ROI. First published in MasonPelt.com on June 9, 2023. Photo: “Japan Apocalypse IMG_7345” by Abode of Chaos is licensed under CC BY 2.0.