The social media giant Twitter said it would investigate its image-cropping system after users complained that it favored white faces over Black ones.
The image-preview function of Twitter’s mobile app automatically crops images that are too large to fit on the screen, choosing which parts of the image to display and which to cut.
Prompted by a graduate student who found that his face had been cropped out of a photo with a Black colleague, a San Francisco-based programmer found that Twitter’s system would crop out images of President Barack Obama when they were posted alongside Republican Senate leader Mitch McConnell.
The programmer, Tony Arcieri, wrote on Twitter: “Twitter is just one example of racism manifesting in machine learning algorithms.”
Twitter is one of the most popular social networks in the world, with around 200 million daily users.
Other users shared similar experiences online, saying that Twitter’s cropping system favored white faces.
Twitter acknowledged that the company still had work to do.
“Our team tested the model for bias before shipping it and found no evidence of racial or gender bias. But it is clear from these examples that we have more analysis to do,” a Twitter spokesperson said. “We will continue to share what we learn, what actions we take, and will open-source our analysis so that others can review and replicate it.”
In a 2018 blog post, Twitter stated that the cropping system relies on a “neural network”, a form of artificial intelligence, to predict which part of a photo a user will find interesting and to crop out the rest.
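The mechanics described above — score every region of the image for predicted interest, then crop around the highest-scoring point — can be sketched roughly as follows. This is a minimal illustration, not Twitter’s actual code: the real saliency model is a trained neural network whose internals are not public, so the `saliency` array here is a hypothetical stand-in for its output.

```python
import numpy as np

def crop_to_salient(image, saliency, crop_h, crop_w):
    """Crop `image` to a crop_h x crop_w window centered on the most
    salient point. `image` is an (H, W, 3) array; `saliency` is an
    (H, W) array of predicted "interestingness" scores, standing in
    for the output of a saliency-prediction neural network."""
    h, w = saliency.shape
    # Find the pixel the model considers most interesting.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on that point, clamped to image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 100x100 image whose saliency peaks at (20, 70).
img = np.zeros((100, 100, 3))
sal = np.zeros((100, 100))
sal[20, 70] = 1.0
preview = crop_to_salient(img, sal, 50, 50)
print(preview.shape)  # (50, 50, 3)
```

The bias complaints concern the scoring step: if the model systematically assigns higher saliency to white faces, the crop window will systematically center on them.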
A Twitter representative also pointed to an experiment by a Carnegie Mellon University scientist who analyzed 92 images and found that the algorithm favored Black faces 52 times.
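An experiment like the one described is, at bottom, a tally: feed the cropper pairs of portraits and count which face it keeps. The sketch below shows that counting logic only; `pick_face` is a hypothetical stand-in for running the real cropper, here replaced by a random chooser so the example is self-contained.

```python
import random

def run_bias_tally(pairs, pick_face):
    """Count which face a cropping function keeps across image pairs.
    `pairs` is a list of (white_face, black_face) items; `pick_face`
    stands in for the cropper and returns the item it keeps."""
    counts = {"white": 0, "black": 0}
    for white_img, black_img in pairs:
        kept = pick_face(white_img, black_img)
        counts["white" if kept is white_img else "black"] += 1
    return counts

# Toy run with 92 synthetic "images" (mirroring the sample size
# mentioned above) and an unbiased random chooser.
random.seed(0)
pairs = [(object(), object()) for _ in range(92)]
tally = run_bias_tally(pairs, lambda a, b: random.choice((a, b)))
print(tally)
```

With an unbiased cropper the two counts should be close to even over many pairs; a consistent, large skew toward one group is what such audits flag as evidence of bias.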
But Meredith Whittaker, co-founder of the AI Now Institute, which studies the social effects of artificial intelligence, said she was not satisfied with Twitter’s response.
“This is another in a long and tired list of examples of automated systems encoding racism, abuse and discrimination,” she said.
Many studies have found evidence of racial bias in facial-recognition software, which identifies white faces more accurately than Black ones.