The social network Twitter has announced the results of an open competition designed to uncover the biases built into the artificial intelligence that generates previews of user-uploaded images. The company turned the automatic cropping feature off in March in response to complaints that the AI was biased against Black people, and then announced a competition to find the flaws behind that behavior.
The competition confirmed the earlier findings. The winner showed that the algorithm favors thin, young faces with light or warm skin tones, smooth skin texture, and stereotypically feminine features. Second and third places went to researchers who demonstrated that the system is biased against people with white or gray hair (age discrimination) and that it “prefers” English to Arabic in images.
Presenting the results at DEF CON 29, Rumman Chowdhury, head of Twitter’s machine learning ethics, transparency and accountability team, praised the participants’ work for demonstrating the real-life impact of biased algorithms. According to her, this is not merely of academic interest: it reflects patterns at work in society itself, since the creators of such filters build them on their own beliefs and ideas about beauty.
The first prize of $3,500 went to Bogdan Kulynych of EPFL in Switzerland. He used the AI model StyleGAN2 to generate a large number of realistic faces that varied in skin tone, feminine or masculine facial features, and body weight, and then fed the images to Twitter’s cropping algorithm. Kulynych concluded that the algorithm’s biases amplify biases already present in society, literally “cropping out” of the picture those who differ from the “norm” in weight, age, or skin color.
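To make the experimental idea concrete, here is a minimal, hypothetical sketch. It is not Twitter’s actual model: it scores pixels with a toy “saliency” proxy (brightness weighted by local contrast) and picks the crop window with the highest total score. Probing it with controlled inputs, as Kulynych probed the real cropper with generated faces, shows how an innocuous-looking scoring rule can systematically favor lighter regions of an image.

```python
# Toy saliency-based cropper (an illustrative assumption, NOT Twitter's algorithm).

def saliency(image):
    """Toy saliency map: brightness times local 3x3 contrast (0-255 grayscale)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [image[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))]
            # Brightness-weighted contrast: the seemingly neutral choice
            # that ends up favoring lighter pixels.
            out[y][x] = image[y][x] * (max(neigh) - min(neigh))
    return out

def best_crop_row(image, crop_h):
    """Return the top row of the crop_h-tall window with maximal total saliency."""
    sal = saliency(image)
    scores = [sum(sum(row) for row in sal[top:top + crop_h])
              for top in range(len(image) - crop_h + 1)]
    return scores.index(max(scores))

# Controlled probe: two identically shaped blobs on a uniform background,
# a darker one near the top and a lighter one near the bottom.
img = [[10] * 8 for _ in range(12)]
for y in (1, 2):          # darker blob (value 80), rows 1-2
    for x in (3, 4):
        img[y][x] = 80
for y in (9, 10):         # lighter blob (value 220), rows 9-10
    for x in (3, 4):
        img[y][x] = 220

print(best_crop_row(img, 4))  # the chosen window gravitates toward the lighter blob
```

Running the probe many times over systematically varied inputs, rather than inspecting the model’s internals, is the essence of this kind of black-box bias audit.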
Such biases are more common than one might think. Another participant found that the algorithm favors lighter-skinned emoji. Another notable discovery: Twitter’s algorithm would rather crop out a part of an image containing Arabic text than one containing English text.
While the results of the experiments are discouraging for rights advocates, they also demonstrate how the public can help technology companies. Twitter’s openness contrasts with the behavior of some tech giants: after a team at MIT discovered similar biases in Amazon’s algorithms, the company dismissed the research as “misleading” and “false,” backing down only later under the weight of the arguments and public pressure online.
According to competition judge Patrick Hall, similar biases exist in all AI systems, and companies need to work proactively to identify them. “If you’re not looking for your bugs, and bug hunters aren’t looking for your flaws, then who will find your bugs? Because you have bugs for sure,” he declared.