OpenAI has released the “GPT-4o System Card,” a report detailing the safety measures and risk assessments it conducted before launching its latest AI model. According to OpenAI’s own assessment, GPT-4o is rated “medium risk.” That rating is derived from the highest score across four risk categories: cybersecurity, biological threats, persuasion, and model autonomy.
All categories were rated low risk except persuasion, where some GPT-4o writing samples proved more effective at swaying readers than comparable human-written text.
The GPT-4o scorecard
The publication of this risk assessment comes at a delicate moment for OpenAI. The company has faced ongoing criticism of its safety standards, both from its own employees and from U.S. senators. Launching such a powerful multimodal model so close to the U.S. presidential election raises concerns about the accidental spread of misinformation and about deliberate abuse by bad actors. By publishing these tests, OpenAI hopes to demonstrate its commitment to preventing misuse.
There have been calls for OpenAI to be more transparent, not just about its models’ training data but also about its safety testing. In California, where OpenAI is based, state senator Scott Wiener is working on a bill to regulate large language models, introducing provisions that would make companies legally liable if their AI is used maliciously.
The release of the GPT-4o System Card is a move to demonstrate OpenAI’s commitment to transparency, but it also raises questions about whether the company can objectively assess the risks of its own models. Growing pressure for AI regulation could lead to significant changes in how companies like OpenAI develop and release their models.
What do you think? Should we be paying closer attention to so-called artificial intelligence? Tell us your opinion in the comments below.