IBM is walking away completely from the facial recognition business, CEO Arvind Krishna announced yesterday in a letter urging Congress to act against police misconduct and regulate the way technology can be used by law enforcement.
“IBM no longer offers general purpose IBM facial recognition or analysis software,” Krishna wrote bluntly in the letter. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”
IBM is admittedly already in the middle of a massive restructuring, shedding business units and conducting layoffs, so it is possible that facial recognition was a segment of the company already headed for the chopping block for unrelated reasons. Even if that is true, however, it doesn’t make Krishna’s reasoning wrong.
Facial recognition systems historically have a bias problem. Generally speaking, the darker someone’s skin, the more likely a system is to generate a false-positive match. In 2018, for example, the ACLU performed a test on Amazon’s facial recognition system and found it incorrectly matched 28 members of Congress with mugshots, including six members of the Congressional Black Caucus. A year later, the ACLU repeated the test and this time found the tech falsely matched 27 professional athletes to images in a mugshot database. This algorithmic bias built into the tech can then amplify the disparities that already exist in law enforcement.
As recently as 2019, IBM was trying to reduce bias in its facial recognition platform by drastically increasing the diversity of the data set it used for training its machine learning models. “Facial recognition systems in commercial use performed better for lighter individuals and males and worse for darker females,” the company noted at the time. “While IBM Research is committed to continuing study and investigation of fairer facial recognition systems, we don’t believe we can do it alone… We urge others to contribute to the growing body of research and advance this important scientific agenda.”
Krishna re-emphasized that point in his letter to Congress this week, saying that, while AI “is a powerful tool” that can assist law enforcement, “vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
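The bias testing Krishna describes typically means measuring error rates separately for each demographic group and comparing them. Here is a minimal sketch of that idea in Python, using entirely synthetic audit data and made-up group labels (nothing here reflects any real vendor's system):

```python
# Sketch of a per-group bias audit: compute the false-positive match rate
# for each demographic group and compare. All data below is synthetic and
# the group labels are illustrative placeholders.

def false_positive_rate(results):
    """results: list of (predicted_match, actually_same_person) booleans.

    The false-positive rate is the share of true non-matches that the
    system nonetheless flagged as a match.
    """
    # Keep only the cases where the pair was NOT actually the same person.
    negative_preds = [pred for pred, actual in results if not actual]
    if not negative_preds:
        return 0.0
    return sum(negative_preds) / len(negative_preds)

# Hypothetical audit results, one list per group:
# each tuple is (system said "match", ground truth "same person").
audits = {
    "group_a": [(True, True), (False, False), (False, False), (False, False)],
    "group_b": [(True, True), (True, False), (True, False), (False, False)],
}

rates = {group: false_positive_rate(results) for group, results in audits.items()}
print(rates)  # a large gap between groups is exactly the bias signal auditors look for
```

In this toy data, group_b's false-positive rate is far higher than group_a's, which is the disparity the ACLU tests surfaced at a much larger scale.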
The bigger picture
A recent report from OneZero found more than 45 companies actively advertising facial recognition for real-time surveillance to both private and governmental clients. Concerns about police use of such software have increased amid the current mass protest movement.
The proposed bill (PDF) that IBM wrote to Congress in support of, the Justice in Policing Act, includes limits on the use of facial recognition by body cameras as well as other cameras and recording devices. The body camera section reads:
Body cameras shall not be used to gather intelligence information based on First Amendment protected speech, associations, or religion, or to record activity that is unrelated to a response to a call for service or a law enforcement or investigative encounter between a law enforcement officer and a member of the public, and shall not be equipped with or subjected to any real time facial recognition technologies.
The bill also calls for a study on “issues relating to the constitutional rights of individuals on whom facial recognition technology is used” and “issues relating to limitations on the use of facial recognition technology.”