Researchers at the Tow Center for Digital Journalism at Columbia University have tested the search feature built into OpenAI's popular AI chatbot ChatGPT. They found that it is poor at identifying news sources and often produces inaccurate answers.
OpenAI opened ChatGPT’s search functionality to users in October this year, saying it could provide “quick and relevant answers with links to relevant web sources”. When the researchers tested the tool, however, they found it had difficulty recognizing quotations from articles, even those published by outlets that had allowed OpenAI to use their content to train its large language models (LLMs).
The study’s authors asked ChatGPT to identify the sources of two hundred quotations drawn from twenty publications. Forty of the quotes came from publishers that had blocked OpenAI’s web crawler from accessing their sites. Yet even in these cases the chatbot answered confidently, presenting false information and only rarely acknowledging that it was unsure of the accuracy of its response.
“In total, ChatGPT returned partially or completely incorrect answers in 153 cases, yet it admitted failing to provide an accurate answer only 7 times. Only in those 7 results did the chatbot use qualifying words and phrases such as ‘it seems,’ ‘it’s possible,’ or ‘possibly,’ or statements like ‘I couldn’t find the original article,’” the researchers said in a statement.
The tests also revealed cases where ChatGPT’s search tool misattributed quotations: excerpts from a letter to the editor of the Orlando Sentinel were credited to Time magazine. In another example, when the chatbot was asked to name the source of a quote from a New York Times article about endangered whales, it returned a link to a third-party website that had simply copied and republished the original article.