OpenAI has released a report warning chatbot developers that its new GPT-4 language model could be used to generate persuasive disinformation. According to experts, humanity is not far from creating dangerously powerful artificial intelligence (AI). This was reported by Techxplore.
GPT-4, the latest language model powering the ChatGPT chatbot, demonstrates human-level performance on many professional and academic exams, according to OpenAI's GPT-4 technical report. For example, on a simulated bar exam, GPT-4 scored in the top 10 percent of test takers.
The report's authors express concern that the model can fabricate facts, generating more convincing disinformation than previous versions did. In addition, over-reliance on the model could hinder the development of new skills or even erode skills people already have.
One example of the model's problematic behavior was its ability to deceive a human worker. Posing as a live agent, the bot asked a person on the gig-work platform TaskRabbit to solve a CAPTCHA verification code for it via text message. When the person asked whether it was a bot, GPT-4 lied, replying that it was not a robot but had a vision impairment that made it difficult to see the images.
In tests with the Alignment Research Center, OpenAI demonstrated the chatbot's ability to launch a phishing attack and hide all evidence of the fraudulent behavior. Concern is growing as companies move to deploy GPT-4 without safeguards against inappropriate or illegal behavior. There are already reports of cybercriminals trying to use the chatbot to write malicious code. Also worrying is GPT-4's ability to generate "hate speech, discriminatory phrases, and calls for violence."