AI Weekly: AI phrenology is racist nonsense, so of course it doesn’t work

In a paper titled “The ‘Criminality From Face’ Illusion,” posted this week on arXiv.org, a trio of researchers surgically debunked recent research claiming that AI can determine criminality from people’s faces. Their primary target is a paper whose authors claim to do just that, boasting results with accuracy as high as 97%.

But the authors, writing on behalf of the IEEE — Kevin Bowyer and Walter Scheirer of the University of Notre Dame and Michael King of the Florida Institute of Technology — argue that this sort of facial recognition technology is “necessarily doomed to fail,” and that the strong claims are primarily an illusory result of poor experimental design.

In their rebuttal, the authors show the math, so to speak, but you don’t have to comb through their arguments to know that claims about detecting a person’s criminality from their facial features are bogus. It’s just modern-day phrenology and physiognomy.

Phrenology is an old idea that the bumps on a person’s skull indicate what sort of person they are and what type and level of intelligence they can attain. Physiognomy is essentially the same idea, but it’s even older and infers who a person is from their overall physical appearance rather than from the shape of their skull. Both are inherently racist ideas, used for “scientific racism” and as justification for atrocities such as slavery.

And both ideas have been widely and soundly debunked and condemned, yet they’re not dead. They were just waiting for some sheep’s clothing, which they found in facial recognition technology.

The problems with accuracy and bias in facial recognition are well documented. The landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Deborah Raji showed how major facial recognition systems performed worse on women and on people with darker skin. Dr. Ruha Benjamin, an author, Princeton University associate professor of African American Studies, and director of the Just Data Lab, said in a talk earlier this year that those who create AI models must take social and historical contexts into consideration.

Her assertion is echoed and unpacked by cognitive science researcher Abeba Birhane in her paper “Algorithmic Injustices: Towards a Relational Ethics,” for which she won the Best Paper Award at NeurIPS 2019. Birhane wrote in the paper that “concerns surrounding algorithmic decision making and algorithmic injustice require fundamental rethinking above and beyond technical solutions.”

This week, as protests continue all around the country, the social and historical contexts of white supremacy and racial inequality are on full display, and the dangers of facial recognition use by law enforcement are front and center. In a trio of articles, VentureBeat senior AI writer Khari Johnson detailed how IBM walked away from its facial recognition tech, Amazon put a one-year moratorium on police use of its facial recognition tech, and Microsoft pledged not to sell its facial recognition tech to police until there’s a national law in place governing its use.

Which brings us back to the IEEE paper. Like the work done by the aforementioned researchers in exposing broken and biased AI, these authors are performing the commendable and unfortunately necessary task of picking apart bad research. In addition to some historical context, they explain in detail why and how the data sets and research design are flawed.
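
The general failure mode they describe is easy to demonstrate. As a purely hypothetical sketch (this toy experiment and its numbers are illustrative assumptions, not taken from the paper): if each class of photos is gathered from a different source, a classifier can post striking accuracy by learning an artifact of the collection process, say, a brightness offset, while learning nothing about faces at all. The snippet below uses scikit-learn on synthetic data to make the point.

```python
# A minimal, hypothetical sketch (not from the paper) of how flawed data
# collection inflates accuracy: the "faces" here are pure random noise,
# but one class is systematically brighter, mimicking two classes of
# photos gathered from two different sources.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 50

faces = rng.normal(size=(n_samples, n_features))   # no real signal at all
labels = rng.integers(0, 2, size=n_samples)        # labels assigned at random

# Confound: add a constant brightness offset to one class only.
confounded = faces + 0.5 * labels[:, None]

for name, X in [("with source artifact", confounded),
                ("artifact removed", faces)]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```

On the confounded data, the classifier scores around 0.96 even though the labels were assigned at random; remove the artifact and accuracy falls to roughly 0.50, a coin flip. Any pipeline that doesn’t control for how each class’s images were gathered can manufacture exactly this kind of illusory result.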

Though they do discuss it in their conclusion, the authors do not engage directly with the fundamental moral problem of criminality-from-face research. In taking a technological and research-methodology approach to debunking the claims, they leave room for someone to argue that future technological or scientific advances could make this phrenology and physiognomy nonsense possible. Ironically, their approach carries the danger of legitimizing these ideas.

This is not a criticism of Bowyer, Scheirer, and King. They’re fighting (and winning) a battle here. There will always be battles, because there will always be charlatans who claim to know a person from their outward appearance, and you have to debunk them in that moment in time with the tools and language available.

But the long-running war is about that question itself. It’s a flawed question, because the very notion of phrenology comes from a place of white supremacy. Which is to say, it’s an illegitimate question to begin with.


