At a US Senate hearing, OpenAI CEO Sam Altman proposed the creation of an agency to oversee AI models that perform “above a certain level of capability.” The agency would issue licenses for cutting-edge artificial intelligence developments and revoke them if companies violate established rules.
Altman made the proposal in response to senators’ concerns about the technology industry’s excessive enthusiasm for AI: rushing development could lead to uncontrollable consequences. He agreed with the senators’ suggestion that the agency could act like the Nuclear Regulatory Commission, which licenses nuclear power plants and strictly oversees their operation.
Altman confirmed that AI can indeed get out of control:
“I think if something goes wrong with this technology, it can go completely wrong. We want to speak out loud about this and work with the government to make sure this doesn’t happen.”
OpenAI’s language model, which underpins the ChatGPT chatbot, has become one of the most successful of its kind and in many ways sparked the boom in artificial intelligence technologies at the end of last year. For many in the community, the initial excitement has since given way to misgivings. Thousands of industry leaders and public figures signed an open letter calling for a temporary pause on AI development until rules governing its operation are created.