Artificial Intelligence Developers and Experts Seek 6-Month Pause in Development

(BrightPress.org) – Leaders in the technology industry are calling for a six-month pause in the development of powerful new AI systems, warning that they may pose a significant risk to civilization. Steve Wozniak, Elon Musk, and over a thousand others have signed an open letter published by the Future of Life Institute calling for an immediate pause in the research and development of any AI system as advanced as, or more advanced than, OpenAI's GPT-4.

OpenAI's chatbot ChatGPT made a splash late last year when it burst onto the scene, making short work of writing assignments, crafting short scenes in the style of Shakespeare, and serving as an advanced search engine. The newest model, GPT-4, can even interpret images and has more advanced language and reasoning capabilities, making it even more human-like than its predecessor.

The letter's signatories are asking for independent overseers to develop careful protocols ensuring that these AI systems are used only for the good of humanity. Development of powerful systems should resume only once those protocols are in place and any risks have been identified and managed in advance, the letter states.

It also warned against the use of AI systems to generate lies on a massive scale. A single system could power thousands of fake accounts and flood social media with falsehoods; if misinformation was a problem before, it would be even more troubling with unsupervised AI in the picture. There is also a concern that AI could displace human workers, leaving droves of people unemployed, an outcome the letter says should be avoided.

The pause in development would allow different labs and independent experts to come together and chart a safety plan so that their experiments don’t run amok. The letter also encourages those experts and developers to communicate with legislators so that public policy can be brought in line with current technologies.

Crucially, the letter is not asking for a general pause in AI development, only a pause on the most powerful systems, those with the potential to exhibit emergent behaviors or properties that their developers did not anticipate. Will these AI systems be managed for the benefit of all, or will they contribute to a future catastrophe?

Copyright 2023, BrightPress.org