Reputable Companies Rejecting Facial Recognition Due to Racial Bias

Facial recognition is the technology that allows software to match a scan of a person’s face against a stored photo of that same face for identification purposes. Although the practical applications for such technology are vast, it still has significant problems, at least as far as many large companies are concerned.
As it exists at present, facial recognition technology has a racial bias problem. In an MIT test of three commercial facial recognition programs, the software misidentified light-skinned men at a rate of just 0.8%. For dark-skinned women, the error rate climbed to between 20% and 34%.
Additionally, Amazon’s facial recognition program reportedly misidentified 28 members of Congress as people who had been arrested for crimes. People of color make up only about 20% of Congress, yet they accounted for 39% of those false matches.
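Using only the figures reported above, a quick back-of-the-envelope calculation shows just how lopsided those errors were:

```python
# Back-of-the-envelope check of the reported figures:
# 28 false matches overall; people of color are ~20% of Congress
# but accounted for ~39% of the false matches.

total_false_matches = 28
share_of_congress = 0.20   # people of color as a share of Congress
share_of_errors = 0.39     # their share of the false matches

observed = total_false_matches * share_of_errors    # ~11 false matches
expected = total_false_matches * share_of_congress  # ~5.6 if errors were proportional

print(f"Observed false matches for people of color: {observed:.0f}")
print(f"Expected if errors were proportional:       {expected:.1f}")
print(f"Disparity ratio: {share_of_errors / share_of_congress:.2f}x")
```

In other words, a member of color was roughly twice as likely to be falsely matched as proportional errors would predict.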
Brian Brackeen, founder of facial recognition tech company Kairos, offers some potential explanations for why these tests disproportionately misidentify people of color. He notes that the algorithms behind American facial recognition programs are trained on curated datasets of images. Those images often reflect the creators of the programs, who in this case happen to be overwhelmingly white and predominantly male. Brackeen points out that similar software created in Asian countries tends to have lower error rates when identifying Asian people.
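Disparities like these are straightforward to measure once results are broken out by demographic group. Here is a minimal sketch of such an audit, using toy data that mirrors the error rates the MIT study reported; it is illustrative only, not any vendor’s actual evaluation pipeline:

```python
from collections import defaultdict

def per_group_error_rates(predictions):
    """Compute identification error rates broken out by demographic group.

    `predictions` is an iterable of (group, correct) pairs, where `group`
    is a demographic label and `correct` is whether the match was right.
    Purely illustrative; real audits (e.g., NIST's) are far more involved.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in predictions:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy results mirroring the disparity the MIT study reported:
results = (
    [("light-skinned men", True)] * 992 + [("light-skinned men", False)] * 8 +
    [("dark-skinned women", True)] * 700 + [("dark-skinned women", False)] * 300
)
print(per_group_error_rates(results))
# {'light-skinned men': 0.008, 'dark-skinned women': 0.3}
```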
While fixes to the technology may come down the road, many top tech companies are either extensively retooling their systems or abandoning facial recognition offerings altogether.
Most notably, companies like Amazon, IBM, and Microsoft are withdrawing their programs from government and law enforcement clients for fear of the consequences of such racially biased technology.
Facial Recognition Bias and Law Enforcement
It is no coincidence that these companies are withdrawing facial recognition programs, especially in the context of law enforcement, right now. The United States is in the midst of a pivotal moment in police relations, particularly with minority citizens. Although it is still uncertain what American policing will look like when the dust settles, deploying criminal identification technology that is known to carry racial biases would only exacerbate an already precarious situation.
The evidence of facial recognition bias is far from incidental. In a government-funded test, federal researchers found racial bias in nearly 200 facial recognition algorithms. Worse, several of these programs were intended for use by law enforcement.
The National Institute of Standards and Technology (NIST) pointed out that people of Asian, African, and Native American descent were the most likely to be misidentified.
It is difficult to pin down exactly how many law enforcement agencies still use facial recognition despite its glaring flaws, or how they incorporate it into their investigations. According to a 2016 study by the Georgetown Law Center on Privacy and Technology, there is no notable regulation of law enforcement’s use of facial recognition. That is an alarming reality, given a flaw as substantial as racial bias.
Clare Garvie, who worked on the aforementioned study, noted, “We ourselves were surprised by some of the findings we had, mostly in terms of the sheer scope of use of facial recognition at the state and local level across the country.”
She continued, “Also, [we were surprised] at the complete absence of any legally imposed regulations or even policies implemented by law enforcement to constrain its use.”
Facial Recognition Bias and Customer Identification
Some companies have tried to use facial recognition software for customer identification, which has become extremely important in this age of highly sophisticated hackers. However, relying on technology with clear racial biases is almost certainly a misstep.
“We do not use facial recognition” has become a mantra for various automated customer identification providers. One such company noted: “[r]eports of racial bias in facial recognition technologies have been percolating for years. We saw this coming, and avoided these technologies altogether.” The same company added that “[t]here are much better and more reliable ways of achieving automated customer verification.”
One of these methods is “selfie ID” photo and video verification. Instead of running facial recognition, selfie ID photos and videos are checked for recency and authenticity. This provides an easy way to automate ID verification, not dissimilar to presenting ID in person to purchase age-restricted products.
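The vendors quoted above don’t publish the details of their checks, so as a rough illustration only, here is one way a recency check might look: comparing a photo’s embedded capture timestamp against a freshness window. The ten-minute window, the EXIF-based approach, and the fail-closed behavior are all assumptions made for this sketch; a real system would layer on liveness and tamper detection, since metadata alone is easy to forge.

```python
from datetime import datetime, timedelta

from PIL import Image  # Pillow; one common way to read EXIF metadata

MAX_AGE = timedelta(minutes=10)  # hypothetical freshness window
EXIF_DATETIME_TAG = 306          # standard EXIF "DateTime" tag

def selfie_is_recent(path: str, now: datetime | None = None) -> bool:
    """Return True if the image's EXIF capture time falls within MAX_AGE.

    Illustrative sketch only: EXIF data is trivially editable, so a real
    verification pipeline cannot rely on this check by itself.
    """
    now = now or datetime.now()
    stamp = Image.open(path).getexif().get(EXIF_DATETIME_TAG)
    if stamp is None:
        return False  # no capture time recorded; fail closed
    taken = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    # Reject photos from the future as well as stale ones.
    return timedelta(0) <= now - taken <= MAX_AGE
```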
Conclusion
Creators of facial recognition software are understandably pulling their products from law enforcement agencies and other sensitive contexts. Given the racial biases present in current versions of the technology, not doing so could have disastrous consequences.
These tech giants have acknowledged that a problem exists. If law enforcement agencies or other businesses use facial recognition in its current form, they are taking on unnecessary risk of their own accord.
When it comes to customer verification, there are better choices available that don’t carry a downside as incendiary as racial bias. Moreover, racial bias is only one of the problems inherent in facial recognition. Facebook, for instance, paid upwards of $550 million to settle claims over the privacy concerns raised by its use of the technology.
Of course, beyond the ethical concerns, if facial recognition simply doesn’t work the way it is supposed to, what real use is it to anyone? Federal studies have shown that the software misidentifies people at troubling rates. False positives and false negatives could harm customers and damage a company’s reputation at a time when race relations are under a bright spotlight.