OpenAI sparked a generative AI boom last year when it released ChatGPT, a chatbot that reached 100 million monthly users in two months, setting the record for the fastest-growing app. Since then, AI systems misinforming users has become an acute problem, and the company has proposed its own method of combating it. Some independent experts have expressed skepticism about how effective the proposed method will be.
AI misinformation, or “hallucinations,” occurs when models like ChatGPT or Google Bard fabricate information outright while presenting it as fact. “Even the most advanced AI models are prone to producing falsehoods. They exhibit a tendency to invent facts in moments of uncertainty,” the OpenAI researchers write in their report. “These hallucinations cause many problems in areas that require multi-step reasoning, since a single logical error is enough to derail a much larger solution.”
OpenAI’s new anti-hallucination strategy is to train AI models to be rewarded for every individual correct step of reasoning on the way to an answer, instead of being rewarded only for a correct final conclusion. The researchers call this approach “process supervision.” In their view, it can lead to more logical AI, since the strategy encourages the model to follow a human-like “chain of thought.”
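To make the distinction concrete, here is a minimal, illustrative sketch of outcome supervision (a single reward for the final answer) versus process supervision (a reward for each reasoning step). The per-step checker and the toy arithmetic example are hypothetical stand-ins, not OpenAI's actual reward model or training data.

```python
# Toy sketch: outcome supervision vs. process supervision.
# Assumptions: the step checker below is a hypothetical stand-in for a learned
# per-step reward model; real systems would score steps with a trained model.

from typing import Callable, List


def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: one reward based only on the final answer."""
    return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0


def process_rewards(steps: List[str], step_is_valid: Callable[[str], bool]) -> List[float]:
    """Process supervision: one reward per reasoning step."""
    return [1.0 if step_is_valid(step) else 0.0 for step in steps]


if __name__ == "__main__":
    # Toy chain-of-thought solution to "What is 12 * 3 + 4?"
    solution_steps = [
        "12 * 3 = 36",   # correct step
        "36 + 4 = 41",   # arithmetic slip: should be 40
    ]
    final = "41"

    def toy_checker(step: str) -> bool:
        # Hypothetical per-step check: verify the arithmetic in "expr = value".
        expression, claimed = step.split("=")
        return abs(eval(expression) - float(claimed)) < 1e-9  # eval is fine for this toy example

    print("Outcome reward:", outcome_reward(final, "40"))            # 0.0
    print("Per-step rewards:", process_rewards(solution_steps, toy_checker))  # [1.0, 0.0]
```

Run as a script, the outcome reward only signals that the final answer is wrong, while the per-step rewards pinpoint which step introduced the error, which is the intuition behind rewarding the reasoning process rather than just the result.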
“Detecting and mitigating a model’s logical errors, or hallucinations, is a critical step toward building aligned AGI [artificial general intelligence],” said Karl Cobbe, a researcher at OpenAI, noting that while the company did not invent the process supervision approach, it is helping to push it forward. According to Cobbe, OpenAI has released an accompanying dataset of 800,000 labels that it used to train the model described in the research paper.
Ben Winters, senior counsel at the Electronic Privacy Information Center and leader of its AI and Human Rights Project, expressed reservations about the study, saying he would like to examine the full dataset and accompanying examples. “I just don’t think the study alone does much to mitigate concerns about misinformation and misleading results when AI is actually being used in real life,” Winters said.
AI Now Institute managing director Sarah Myers West said OpenAI did not provide key details about the data used to train and test GPT-4. “So there is still a huge lack of transparency that hinders any meaningful effort at AI accountability, even as these systems are already directly affecting people,” she said.