Facial recognition systems biased toward white men

Today, facial recognition software is being deployed by companies in various ways, including helping to target product pitches based on social media profile pictures.

At least that's according to research conducted at the Massachusetts Institute of Technology and Stanford University, where a team of boffins examined three AI-powered facial recognition systems and broke the accuracy of the results down by gender and skin type. With two of the systems, the error rates for the darkest-skinned women in the data set climbed to 46.5 per cent and 46.8 per cent.

Facial recognition technologies show bias along the lines of gender and skin colour, and appear to favour white men, the scientists found. In a test to identify the sex of people from their faces, the software was able to do so with more than 99% accuracy for light-skinned men.

But when the photo was of a darker-skinned woman, the error rate was almost 35%.

"What's really important here is the method and how that method applies to other applications", Buolamwini said.

Each face was then assigned a rating for skin type based on the six-point Fitzpatrick rating system, which dermatologists use as "the gold standard" for classifying different shades of skin, the paper notes.
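To make that method concrete, here is a minimal sketch in Python of the kind of disaggregated evaluation the paper describes: bin each test image by Fitzpatrick type and gender, then compute an error rate per bin rather than a single aggregate figure. The sample records and field layout below are hypothetical, not taken from the study's code or data.

```python
from collections import defaultdict

# Hypothetical records: (Fitzpatrick type 1-6, true gender, predicted gender).
samples = [
    (1, "male", "male"),
    (6, "female", "male"),
    (5, "female", "female"),
    # ... one tuple per benchmark image
]

tallies = defaultdict(lambda: [0, 0])  # (shade, gender) -> [errors, total]
for skin_type, gender, predicted in samples:
    # The paper bins Fitzpatrick types I-III as "lighter" and IV-VI as "darker".
    shade = "lighter" if skin_type <= 3 else "darker"
    tallies[(shade, gender)][0] += predicted != gender
    tallies[(shade, gender)][1] += 1

for (shade, gender), (errors, total) in sorted(tallies.items()):
    print(f"{shade} {gender}: {errors / total:.1%} error rate")
```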

Yet it can be hard to tell when a data set is biased, especially when these systems are built by homogeneous teams consisting mostly of white men. On darker-skinned women, Face++ and IBM had classification error rates of 34.5 and 34.7 per cent respectively.

According to the researchers' paper, one "major U.S. technology company" had a data set that was more than 77 per cent male and 83 per cent white, which naturally made its system better at picking out lighter-skinned men than darker-skinned women, or, put another way, more biased. "The same data-centric techniques that can be used to try to determine somebody's gender are also used to identify a person when you're looking for a criminal suspect or to unlock your phone", said Buolamwini.
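That kind of skew is straightforward to measure. As an illustration only (the records and attribute names below are invented, not the company's actual data), auditing a training set's composition can take just a few lines of Python:

```python
# Invented example records; a real audit would read the dataset's metadata.
records = [
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "female", "skin": "darker"},
    # ... one record per training image
]

n = len(records)
male_share = sum(r["gender"] == "male" for r in records) / n
lighter_share = sum(r["skin"] == "lighter" for r in records) / n
print(f"male: {male_share:.0%}, lighter-skinned: {lighter_share:.0%}")
```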

The algorithms aren't intentionally biased, but a growing body of research supports the notion that much more work needs to be done to limit these biases. "I'm really hopeful that this will spur more work into looking at [other] disparities", she said.

In a statement, Microsoft said the company has "already taken steps to improve the accuracy of our facial recognition technology", while IBM responded saying it has "several ongoing projects to address dataset bias in facial analysis - including not only gender and skin type, but also bias related to age groups, different ethnicities, and factors such as pose, illumination, resolution, expression and decoration".

Buolamwini had first encountered the problem as an undergraduate, when facial-analysis software failed to detect her face. A few years later, after joining the MIT Media Lab, she ran into the missing-face problem again.

Buolamwini is joined on the paper by Timnit Gebru, who was a graduate student at Stanford when the work was done and is now a postdoc at Microsoft Research. The systems under test treated gender classification as a binary choice between male and female, which makes the task statistically easy.
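The statistical point is simple to verify: with only two classes, the floor for accuracy is not zero but the majority-class share. On a benchmark as skewed as the 77-per-cent-male data set cited above, a degenerate model that answers "male" for every face already looks respectable. A toy illustration (the 77/23 split is the only figure taken from the article; everything else is invented):

```python
# Hypothetical 77/23 male/female benchmark, matching the skew cited above.
labels = ["male"] * 77 + ["female"] * 23
preds = ["male"] * len(labels)  # degenerate model: always answers "male"

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"always-'male' baseline: {accuracy:.0%}")  # 77% without using the image at all
```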

Buolamwini has now started to advocate for "algorithmic accountability", an effort to make automated decisions more transparent, explainable and fair, the Times said.

"This is an area where the data sets have a large influence on what happens to the model", says Ruchir Puri, chief architect of IBM's Watson artificial-intelligence system.
