Image description: A header illustration featuring a robotic arm hanging over a silhouetted, suited human with strings attached to their limbs like a marionette. The background is a solid peach-pink. End ID

Illustration by Prakrati Mamtani

Pause Giant AI Experiments: A Panicked, Precautionary Letter

Recently, an open letter was released calling for an immediate halt to giant AI experiments until proper precautions and regulations can be defined. This raises questions about how comprehensive the letter is and what its consequences for the AI industry might be.

May 8, 2023

The letter, signed by notable tech figures including Apple co-founder Steve Wozniak and Elon Musk, CEO of SpaceX, Tesla, and Twitter, discusses the risks that powerful AI systems pose to humanity. It warns that developing AI systems without understanding their consequences could become an existential threat, calling on “all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months” in a publicly verifiable way. Instead, the letter calls for AI research and development to be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. Given how AI companies have been racing against one another to develop and train ever more advanced systems, the urgency the letter conveys is somewhat apropos.
“As of writing, over 50,000 individuals have signed the letter (even if signature verification efforts are lagging), including over 1800 CEOs and over 1500 professors,” says the Future of Life Institute’s FAQ page. Notable amongst the signatories is the author Yuval Noah Harari, whose book Homo Deus: A Brief History of Tomorrow already predicts some of the terrible consequences of AI taking over the world. Powerful, large-scale, public AI models like ChatGPT have already affected our lives, both positively and negatively, depending on how we use them. On one hand, ChatGPT is useful, quick, and remarkably versatile. On the other hand, dependence on it is increasing by the day — many of us can’t write an essay now without asking ChatGPT for ideas, editing, research, or review.
In my opinion, the most important recommendation of the letter pertains to accountability for the harm caused by AI, particularly when it harms humans in any way: physical, psychological, financial, social, or otherwise. It is closely followed by calls for regulatory authorities, rigorous auditing and certification, and funding for AI safety. The world will change once AI becomes more public and accessible. For example, episode 6 of the Amazon Prime show ‘Guilty Minds’ depicts a trial over a driverless car that crashed into and killed another driver in order to save its own passenger. The episode explores how, because the car was driven by AI and not a human, the stakes and the placement of blame were completely different. Should we blame the programmers? The AI company? The eight-year-old passenger? Or the other driver, whose failure to follow proper driving protocol led the AI car to malfunction? It is simply not possible to judge AI instruments through a legal lens designed for humans. Even granting Sophia, the social humanoid developed by Hanson Robotics, citizenship of Saudi Arabia didn’t sit right with many people, because giving a robot human rights is, in fact, weird, just as judging an AI by human law is.
To be clear, I don’t think the letter is going to harm the AI industry in any way. It says so explicitly: “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” All of which is to say: now that we’ve started, there really is no going back, especially because a majority of the signatories are major stakeholders in the AI industry themselves, and they certainly won’t sign a petition to shut down their own businesses, interests, and livelihoods.
Its authors seem to realize that they can only curb, not cull, the AI industry, and so the letter reads as a precautionary measure against the financial and political disruptions that widespread AI use will cause. In particular, it worries about the unlicensed, large-scale distribution and training of powerful AI that is not designed specifically to protect its users. What the letter most wants to control is the damage done to users when such AI is made available to the public without proper checks from centrally mandated authorities.
At the same time, what exactly does the letter call for? Not letting AI systems develop until “we are confident that their effects will be positive and their risks will be manageable”? That’s simply not possible. No matter how many embargoes and child-locks are placed on AI systems, once they’re released to the public the way ChatGPT is, it is impossible to avoid unexpected risks and consequences. We give ourselves too much credit for being able to predict everything, when time and again, humanity as a whole has been positive something would or would not happen, and it has almost always been wrong. Just think of the editorial published by The New York Times in 1920, claiming a rocket could never function in the vacuum of space. Then on 17 July 1969, the day after Apollo 11’s launch, the paper published a correction: “Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere.”
“The Times regrets the error,” the correction said, and I think people like Wozniak and Musk will regret theirs too, when they find out that powerful AI they can’t control has been developed behind their backs. Negative consequences won’t matter to them once the power shifts to someone else’s hands. Right now, Wozniak and Musk are both at the forefront of new technology. However, Wozniak says his concern stems from “bad people out there” (https://fortune.com/2023/05/03/apple-cofounder-steve-wozniak-artificial-intelligence-challenges-agrees-with-microsoft-bill-gates/) using AI to steal data or harm users in some other way. Signing the letter not only puts such figures on the list of ‘well-wishers’ instead of leaving them to be viewed as pioneers of the threat itself, but also positions them among the regulatory authorities able to monitor the proliferation of AI.
I’m not against taking the time to build safe systems; I think that’s a great idea, provided everyone actually pursues it universally. I like the caution being exercised: it seems we’ve learnt a lesson in patience, and in controlling our ambitions, from the mistakes of the past. I only hope they don’t ban ChatGPT; I would miss it.
Tiesta Dangwal is Deputy Features Editor. Email them at feedback@thegazelle.org