Key figures in artificial intelligence want the training of powerful AI systems to be suspended amid fears of a threat to humanity.
According to more than 1,000 experts, “AI experiments” put human safety at risk and should be paused to prevent harm. They call on researchers to stop developing new AI systems for the next six months – and if they do not, governments must intervene.
This is the grave conclusion of an open letter signed by academics and technology leaders, including Apple co-founder Steve Wozniak and Elon Musk.
AI offers significant positive possibilities for humanity. The letter says that we can now enjoy an “AI summer” in which we adapt to what has already been created.
It is possible, however, that the world could face a much more difficult situation if scientists continue to train new models.
According to the authors of the letter, AI labs have engaged in an out-of-control race to develop and deploy ever more powerful digital minds that nobody – not even their creators – can comprehend, predict, or reliably control.
The letter calls on AI labs to pause work on any system more powerful than GPT-4, which was released earlier this month by OpenAI, for at least the next six months.
The authors recommend a public, verifiable pause that involves all key players. If a pause cannot be enacted quickly, governments should step in and institute a moratorium.
During those six months, AI labs and outside experts should work on creating new principles for the design of AI systems; any system built under those principles, they argue, should be safe beyond a reasonable doubt.
Rather than pausing AI work in general, the proposal would mean stopping the development of new models and capabilities. Instead, research should refocus on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In addition, the same pause might allow policymakers to create new governance systems for AI. This would involve setting up authorities that can track the development of AI systems and ensure they don’t pursue dangerous goals.
So far, the letter has been signed by founders and chief executives of Pinterest, Skype, Apple, and Tesla, as well as experts in the field from Berkeley and Princeton universities.
It has also been signed by researchers from companies working on their own AI systems, such as DeepMind, which is owned by Google’s parent company, Alphabet.
As a founding member of OpenAI, Musk contributed funding when it launched at the end of 2015. But he has recently become more critical of its work, arguing that it has become fixated on developing new systems and is pursuing them for profit at the expense of safety.