The Netflix documentary Coded Bias (2020), directed by Shalini Kantayya, examines the racist and sexist bias of the artificial intelligence (AI) behind facial recognition algorithms. The plot could belong to the dystopian series Black Mirror, but it is the subject of heated moral debate in the real world.
Coded Bias shows how Buolamwini discovered the problem with facial recognition: in a project at the MIT Media Lab, she placed her face in front of a screen running facial-recognition software, but the system did not recognize her.
When she put on a white mask, however, the system detected a face. The researcher then realized that AI programs are trained to identify patterns from a skewed dataset (mostly white men) and therefore fail to properly identify female or Black faces.
“The recognition technology was built from a sample with few Black faces and few women, which prevented a high hit rate: it was skewed. However, it is possible to feed the algorithm new data to reduce bias and improve classification,” says Simone Diniz Junqueira Barbosa, an expert in human-computer interaction and professor at PUC-Rio.
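The effect Barbosa describes can be illustrated with a toy simulation (entirely hypothetical data, not the documentary's or any real system's): a minimal threshold classifier trained on a dataset dominated by one group learns a decision boundary that works poorly for the under-represented group, and rebalancing the training data shrinks that error gap.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical synthetic "match scores": group B's positive and
    # negative classes sit closer together, mimicking data the model
    # has had little chance to learn to separate.
    center = {("A", 1): 2.0, ("A", 0): 0.0,
              ("B", 1): 1.0, ("B", 0): -1.0}[(group, label)]
    return [(random.gauss(center, 0.5), group, label) for _ in range(n)]

def train_threshold(data):
    # Minimal classifier: midpoint between the mean positive
    # and the mean negative score in the training set.
    pos = [x for x, _, y in data if y == 1]
    neg = [x for x, _, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def error_rate(threshold, data, group):
    pts = [(x, y) for x, g, y in data if g == group]
    wrong = sum(1 for x, y in pts if (x >= threshold) != (y == 1))
    return wrong / len(pts)

# Balanced test set: 500 of each (group, label) combination.
test = (sample("A", 1, 500) + sample("A", 0, 500) +
        sample("B", 1, 500) + sample("B", 0, 500))

# Skewed training set: 95% group A, 5% group B.
skewed = (sample("A", 1, 950) + sample("A", 0, 950) +
          sample("B", 1, 50) + sample("B", 0, 50))
th_skewed = train_threshold(skewed)

# Balanced training set: equal parts group A and group B.
balanced = (sample("A", 1, 500) + sample("A", 0, 500) +
            sample("B", 1, 500) + sample("B", 0, 500))
th_balanced = train_threshold(balanced)

print(f"group B error, skewed training:   {error_rate(th_skewed, test, 'B'):.2f}")
print(f"group B error, balanced training: {error_rate(th_balanced, test, 'B'):.2f}")
```

With the skewed set, the learned threshold sits near group A's midpoint and misclassifies a large share of group B; rebalancing the training data pulls the threshold toward a boundary that serves both groups, cutting group B's error substantially.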
In addition to Buolamwini’s study, the documentary features the work of other researchers and activists who fight against the irregular use of recognition technology. One of them is Silkie Carlo, director of Big Brother Watch, an initiative that monitors the use of facial recognition by UK police. The worldwide concern is that recognition technologies used for public safety will accuse and arrest suspects based on inaccurate analysis.
AI is already used to determine whether a person qualifies for bank credit, whether a suspect should be arrested, and whether a patient should be given priority in hospital care.
“Big companies use recognition to evaluate employees. Law enforcement officers use it to find fugitive criminals. Doctors can identify diseases in medical images. With a more balanced mass of data, we can reduce many biases. Obtaining that data is a major challenge in avoiding bias in algorithms,” says Barbosa.
At PUC-Rio, the research group on Ethics and Algorithmic Mediation of Social Processes (EMAPS) is devoted to the subject.
“Multidisciplinary teams are involved in the research. We leave digital footprints all the time, which makes ethical considerations more urgent and important. Data is valuable because it can be used by companies or governments to manipulate our behavior, and we should avoid such manipulation.”
The movie Coded Bias gives examples. In Hong Kong, protesters wear masks to defeat the facial recognition technology the Chinese government uses to arrest suspects, and groups organize to spray-paint security cameras. In the United States, a condominium in Brooklyn, home to a largely Black and Latino population, uses facial recognition without the consent of residents.
According to data from the documentary, more than 117 million people in the United States have their faces in police facial-recognition networks. On June 25, 2020, influenced by Buolamwini’s research, which analyzed skewed data from several technology companies, US lawmakers introduced a bill to ban federal use of facial recognition.
Brazil does not yet have specific regulation for the technology, but its data protection law addresses the topic and demands greater transparency about the practices adopted by companies.
To learn more about the research presented in Coded Bias:
Book: Artificial Unintelligence: How Computers Misunderstand the World, by journalist Meredith Broussard
Book: Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Umoja Noble, professor at the University of California, Los Angeles