Stephen Totilo, a journalist at Axios, recently conducted an interesting experiment with ChatGPT, the AI-powered text generator. Totilo used it to create a series of interview questions for game developers, with a twist: to avoid overly generic questions, he forced ChatGPT to include a specific word in each question, and he let the interviewees choose that word.
At first, ChatGPT produced sensible, straightforward questions and Totilo got his answers. In one case, for example, Mads Vadsholt, who worked on The Forest Quartet, chose the word “jazz”, and the AI came up with a good question about the jazz influences in the game’s music.
However, things got complicated when the chosen words made little sense for the subject. The authors of Immortality, a cinematic narrative adventure, asked for the phrase “first person shooter”. ChatGPT then confidently stated that Immortality departed from the creators’ previous FPS games and asked what inspired the shift. In fact, the creators’ previous games are all similar to Immortality, so no such change ever happened.
In contrast, the AI managed to work the word “banana” more naturally into a question about the sound design of God of War Ragnarok, asking whether any unusual objects, like a banana, were used to record sounds. That is not as far-fetched as it sounds: fruits and vegetables are commonly used to create sound effects for films and video games.
At a certain point, though, the errors began to pile up. For Obsidian and their game Pentiment, the requested word was “manuscript” (fitting for the game), and ChatGPT simply invented a piece of the game’s plot, claiming the adventure revolves around a mysterious manuscript (it does not). It also stated that John Romero is the creator of DOOM (he is a co-creator) and that this year’s GDC Awards host had hosted the event before (it was his first time). All of this information comes from a long thread of Totilo’s tweets, the beginning of which you can see below.
In the end, Totilo came to three conclusions: AIs make mistakes easily; the interviewees were all very gracious and tried to give interesting answers even to flawed questions; and his job is safe, as he won’t be replaced by an AI anytime soon.
Let’s hope the same can be said of ChaosGPT, a ChatGPT-based artificial intelligence that wants to destroy mankind.