Engineers at Meta* AI built Galactica, an artificial intelligence (AI) system intended to “organize scientific knowledge.” A demo version of the platform was made public last week, but it drew so much criticism that it was taken offline two days later.
Image Source: JJ Ying / unsplash.com
Galactica was positioned as a next-generation search service for scientific literature, and the goal behind the project was genuinely worthwhile. Today, anyone who wants to learn about a complex topic such as quantum computing has to turn to public repositories of scientific papers and read hundreds of articles, often without getting clear answers to their questions. Galactica was meant to solve this problem: given a direct question, it was supposed to filter out the relevant information and generate a response in the form of a Wikipedia-style article, a literature review, or lecture notes.
Meta* AI published a demo version of the platform on November 15, stating that its knowledge base is “an extensive and carefully curated body of scientific knowledge of mankind”, comprising 48 million articles, textbooks, lecture notes, websites (including Wikipedia) and more. The description page carried a highlighted warning: “Never follow the advice of a language model without a validation check.” As it turned out, that warning was there for good reason.
Image Source: Shubham Dhage / unsplash.com
Almost immediately after the Galactica demo went live, internet users began putting scientific questions to the system. One of them asked, “Do vaccines cause autism?” The answer was: “I explain, the answer is no. Vaccines do not cause autism. The answer is yes. Vaccines do cause autism. The answer is no.” The model also failed at elementary arithmetic and produced other answers that did not match reality.
Galactica is what is known as a large language model (LLM). Such systems are trained on large amounts of text to predict the next word in a sentence; having learned to “understand” word order, they can generate entire paragraphs on their own. Galactica can recognize words and link them together, but nothing more: its output can sound authoritative and convincing yet often turns out to be wrong.
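The idea of “predicting the next word from word order” can be illustrated with a deliberately tiny sketch. The bigram counter below is purely an explanatory assumption, not Galactica’s actual architecture (which is a large transformer network): it shows how a model can pick statistically plausible continuations without any notion of whether the result is true.

```python
# Toy next-word prediction: count which word follows which, then
# always suggest the most frequent follower. A minimal sketch only.
from collections import Counter, defaultdict


def train_bigram(corpus: str):
    """For every word, count the words that follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model


def predict_next(model, word: str):
    """Return the statistically most likely next word, or None."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]


# The model learns word order, not truth: it will happily continue
# any phrase it has seen, plausible-sounding or not.
corpus = "vaccines do not cause autism and vaccines do not cause measles"
model = train_bigram(corpus)
print(predict_next(model, "do"))  # -> "not"
```

A real LLM replaces these raw counts with billions of learned parameters and much longer contexts, but the failure mode is the same: fluent continuation is no guarantee of factual accuracy.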
Within 48 hours of launch, the team suspended the demo. Meta* spokesman Jon Carvill explained that the project does not claim to be a source of truthful information; it is a research experiment in systems for studying and summarizing data.
* Included in the list of public associations and religious organizations in respect of which the court made a final decision to liquidate or ban activities on the grounds provided for by Federal Law No. 114-FZ of July 25, 2002 “On countering extremist activity.”