An open letter calling for a six-month moratorium on the development of powerful AI systems is not enough, says an AI expert with more than 20 years of experience in AI safety research.
Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, wrote in a recent op-ed that the six-month “pause” on developing “AI systems more powerful than GPT-4,” called for by Tesla CEO Elon Musk and hundreds of other innovators and experts, understates the “seriousness of the situation.” He would go even further, implementing an “indefinite and worldwide” moratorium on new large AI training runs.
The letter, published by the Future of Life Institute and signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, argues that safety protocols need to be developed by independent overseers to guide the future of AI systems.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. Yudkowsky doesn’t think that’s enough.
“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” Yudkowsky wrote for Time.
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he claims. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”
OpenAI CEO Sam Altman speaks during the keynote address where Microsoft announced the integration of ChatGPT for Bing on February 7, 2023, in Redmond, Washington. GPT-4, OpenAI’s newest learning model, is the company’s most advanced AI system yet, able to generate, edit and iterate with users on creative and technical writing tasks.
The problem, for Yudkowsky, is that an AI smarter than humans might disobey its creators and would not care about human life. Don’t think “Terminator,” he suggests. “Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers, in a world of creatures that are, from its perspective, very stupid and very slow,” he wrote.
Yudkowsky cautions that there is currently no proposed plan for dealing with a superintelligence that decides the most optimal solution to whatever problem it is tasked with solving is annihilating all life on Earth. He also worries that AI researchers do not actually know whether learning models have become “self-aware,” and whether it is ethical to own them if they have.