Social media lit up last week after the launch of ChatGPT, a text-generating bot that mimics human-written prose with striking accuracy. Users asked the AI to invent cocktail recipes, write song lyrics and draft scripts.
ChatGPT can also give convincing answers to follow-up questions that build logically on earlier ones, which has left users wondering how to tell a bot's text apart from a real person's. On Monday, Stack Overflow, the question-and-answer site for programmers, temporarily banned ChatGPT-generated answers. Moderators said the site was being flooded with bot-written responses that "substantially harm" it.
The problem is that the bot's output reads so fluently that it is nearly impossible to distinguish from human writing. OpenAI may have to find a way to label such content as "generated by software."
Arvind Narayanan, a professor of computer science at Princeton University, quizzed the chatbot on basic information security questions on the day of its release.
"I have not seen any evidence that ChatGPT can convince experts. The problem, of course, is when it is used by people who are not well versed in a topic and take the bot to be authoritative and reliable," the professor commented.
People are excited about using ChatGPT for learning. It’s often very good. But the danger is that you can’t tell when it’s wrong unless you already know the answer. I tried some basic information security questions. In most cases the answers sounded plausible but were in fact BS. pic.twitter.com/hXDhg65utG
— Arvind Narayanan @[email protected] (@random_walker) December 1, 2022
Narayanan noted that the bot is good at reciting facts, but far less effective at analysis and critical thinking.
ChatGPT is the latest text-generating neural network from OpenAI, an AI research lab founded in 2015 by backers including Elon Musk; current CEO and entrepreneur Sam Altman; and chief scientist Ilya Sutskever. Musk ended his involvement in 2019, and OpenAI is now heavily funded by Microsoft. The company has focused on successive versions of GPT, a so-called large language model that scans massive amounts of text from the web and learns from it how to generate new text. ChatGPT is an iteration that has been "taught" to answer questions.
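To illustrate the basic mechanic of that "generate text, then answer questions" design, here is a minimal sketch using GPT-2, a much smaller, publicly available model from the same GPT family, via the Hugging Face transformers library. This is only an assumption-laden stand-in: ChatGPT itself is far larger, is not publicly downloadable, and is further tuned to follow instructions, so the sketch shows next-token generation rather than OpenAI's actual system.

```python
# A minimal sketch of next-token text generation with a GPT-family model.
# Assumes the Hugging Face `transformers` library and the small public GPT-2
# checkpoint; this is an illustration, not ChatGPT's actual pipeline.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A question-style prompt; the model simply continues the text.
prompt = "Q: What does a gin and tonic taste like?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation one token at a time, conditioned on everything so far.
output_ids = model.generate(
    inputs["input_ids"],
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The continuation will usually sound plausible, which is exactly the property that makes the output hard to tell apart from human writing.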
Using the AI tool to write news stories shows off both its strengths and its potential weaknesses. Asked by Bloomberg journalists to write an article about Microsoft's quarterly earnings, the bot produced a convincing piece that read like a report on the company's 2021 results, describing growth in Microsoft's revenue and profit driven by cloud-computing software and game sales. ChatGPT made no obvious errors that would mark the text as bot-written, though the figures it cited were only approximate, not exact.
The bot also inserted a fabricated quote from Microsoft CEO Satya Nadella, and here lies the more serious problem: the comment was so believable that even a reporter covering Microsoft had to work to confirm it was invented.
“News” about Microsoft created by GPT. Source: Bloomberg
As Microsoft vice president of AI ethics Sarah Bird explained in an interview earlier this year, language models like ChatGPT have learned that people often back up claims with quotations, so the software mimics that behavior.
ChatGPT contrasts markedly with another recent demo, Meta's Galactica language model, which was trained on a huge corpus of scientific papers and textbooks and was supposed to use that "knowledge" to deliver scientific truth. Users noticed that the bot mixed scientific-sounding jargon with inaccuracies and bias, and Meta pulled the demo.
“I don’t know how anyone could take this as a good idea. In science, accuracy is everything,” Narayanan said.
OpenAI itself states plainly, in a disclaimer posted on the service, that its chatbot is not capable of producing "human speech":
"Language models such as ChatGPT are designed to mimic human speech patterns and generate responses similar to those a human might provide, but they do not have the ability to produce human speech."
ChatGPT was also designed to steer clear of certain sensitive topics. Ask it about the US midterm elections, for example, and the software acknowledges its limitations:
"Sorry, but I am a large language model trained by OpenAI and have no information about current events or the results of the most recent election. My training data only goes up to 2021, and I have no way to browse the internet or access any updated information. Is there anything else I can help you with?"
ChatGPT also refuses to help with bullying or to produce violent content, and it declined to answer a question about the January 6, 2021 riot at the US Capitol.
It is also worrying that some tech executives and users see the technology as a way to replace web search, especially since ChatGPT does not show how it arrives at an answer or cite sources. The software could also be used for "astroturfing" campaigns, which manufacture the appearance of grassroots public opinion supposedly coming from a large number of users.
As AI systems get better and better at mimicking humans, questions will arise about how to recognize them. In 2018, Google unveiled Duplex, an AI system that simulated human speech to make phone calls; after complaints that the company was deceiving people, Google had to make the system disclose that the calls came from a bot.
OpenAI says it has explored the idea: its DALL-E system, which generates images from text prompts, places a signature on the images indicating that they were created by artificial intelligence, and similar methods could be applied to GPT. OpenAI's policy also states that users who share such content must clearly disclose that it was machine-generated.
“In general, when there is a tool that can be abused, but also has many positive uses, we put the responsibility on the user. But AI tools are very powerful, and the companies that make them have a lot of resources. So perhaps they should bear some of the moral responsibility,” says Professor Narayanan.
Source: Bloomberg