In its latest round of major layoffs, which eliminated 10,000 jobs, Microsoft dismissed the specialists responsible for “reducing the social damage” that the company’s AI products can cause.
Former employees who spoke with Platformer said they were responsible for mitigating the risks of using OpenAI technology in Microsoft products. The team had developed a “responsible innovation toolkit” to help the company’s engineers anticipate and mitigate the harm AI can cause.
News of the layoffs broke immediately after the launch of GPT-4, the new generation of OpenAI’s language model, which, as it turned out, had already been powering the Bing search engine for some time.
Microsoft said it remains “committed to the safe and responsible development of AI products and tools” and called the AI ethics team’s work “groundbreaking.” Over the past six years, the company has been actively investing in and expanding its own Office of Responsible AI.
Emily Bender, an expert on computational linguistics and ethics in natural language processing at the University of Washington, joined other critics in condemning Microsoft’s decision to disband the AI ethics team, calling it “short-sighted”:
“Given how hard this job is, any significant reductions in people here are dangerous.”
Microsoft began building up its responsible-AI team in 2017, and by 2020 it numbered 30 specialists. But starting in October 2022, when the AI race with Google kicked off, Microsoft began reassigning most of those specialists to teams responsible for specific products, leaving only seven people in the ethics group.
Some laid-off employees said Microsoft didn’t always act on their recommendations, such as measures that could have kept Bing Image Creator from copying artists’ styles.
Last fall, John Montgomery, Microsoft’s corporate vice president of AI, said there was pressure to “get the latest OpenAI models into the hands of customers as soon as possible.” Employees warned Montgomery of the potential negative consequences of this strategy.
Even as the ethics team dwindled, Microsoft executives insisted it would not be eliminated entirely. But on March 6, during a Zoom meeting, they told the remaining employees that dissolving the team completely was “critical to the business.”
Emily Bender said the decision was particularly disappointing because Microsoft had “managed to put together some really great people working on AI, ethics, and the impact of technology on society.” The company’s actions, she said, demonstrate that Microsoft perceives the team’s recommendations “as contrary to making money.”
“Worst of all, we put businesses and people at risk,” one former employee told Platformer.
After Microsoft updated Bing, users found that the chatbot (which now has over 100 million monthly active users) often misinforms and sometimes even insults people. Bender is pushing for regulators to get involved in overseeing AI.
“I think every user who encounters AI should have a clear idea of what they are working with. And I don’t see any company doing that well,” says Bender.
Until proper rules are in place and transparent information about potential harms is available, Bender recommends that users “not rely on AI for medical, legal, or psychological advice.”
The Office of Responsible AI will now oversee the consequences of AI work at Microsoft, company officials said.
Source: Ars Technica