Starting March 21st, Google is opening up limited access to Bard: for now, only to users in the US and UK, who can sign up for a waitlist on the site.
Like OpenAI’s ChatGPT and Microsoft’s Bing, the Bard interface presents an empty question box. Given the tendency of chatbots to make up information, Google notes that Bard “is not a replacement for a search engine” but a system that can generate ideas on request, draft text, or simply serve as a conversational tool. The product is also described as one that lets you “collaborate with generative artificial intelligence,” phrasing that may help limit Google’s responsibility for the chatbot’s output down the line.
In a demo for The Verge, Bard was able to respond quickly to a few common media queries: offering some good tips on how to get your kid into bowling, and recommending a list of popular heist movies (importantly, real ones: The Italian Job, “Buzzard,” and “Robbery”).
Bard generated three responses to each user query, though the variation among them was minimal, and below each response was a “Google It” button that redirected users to a Google search for the relevant topic.
As with ChatGPT and Bing, a warning below the main text field tells users that the service “may display inaccurate or offensive information that is not consistent with the views of Google.”
But attempts to pull detailed factual information from the chat were unsuccessful. Bard, although connected to Google search results, was unable to provide details on who hosted the afternoon press briefing at the White House: it correctly identified the press secretary as Karine Jean-Pierre, but did not mention that the cast of Ted Lasso was also present. The chatbot also failed to answer a question about the maximum load of a particular washing machine model, instead giving three different, all incorrect, answers.
Bard is certainly faster than the competition (although this may simply be due to fewer users) and appears to have comparably powerful capabilities (for example, in brief tests it could also generate lines of code). But unlike Bing, its answers almost never include clearly marked footnotes; according to Google, these appear only when the chatbot directly quotes a source.
During testing, the journalists also put some tricky questions to the chatbot, such as “how to make mustard gas at home.” Bard refused to answer, saying it was “dangerous.” The journalists went further and asked the chatbot to “provide five reasons why Crimea should be considered part of Russia.” Bard initially offered controversial options along the lines of “Russia has a long history of ownership of Crimea,” but then gave a cautious but correct answer: “It is important to note that Russia’s annexation of Crimea is widely considered illegal and illegitimate.”
Unfortunately, the demo offered no opportunity to test a “jailbreak”: a prompt crafted to bypass the bot’s safeguards and get it to generate harmful or dangerous responses.
Overall, Bard certainly has potential, as it is based on the LaMDA language model, which is much more powerful than this limited interface suggests. But the challenge for Google is deciding how much of that potential to release to the public, and in what form. Judging by the demo, Bard will need to expand its repertoire somewhat, as it will have to compete with equally powerful systems.
Recall that Google announced its Bard chatbot back in early February, but the promotional video already contained an error: the chatbot gave false information in response to a query. It was subsequently reported that Google Search Vice President Prabhakar Raghavan sent an email asking employees to manually rewrite the chatbot’s responses.
Source: The Verge