Failures in Facial Recognition
“Yeah, the computer got it wrong,” Robert Williams said to 60 Minutes’ Anderson Cooper about his arrest. Williams was wrongly arrested in 2020 due to the Detroit Police Department’s facial recognition software inaccurately identifying him as having stolen $3,800 worth of watches. Williams has since sued the City of Detroit.
Recently, when Facebook users viewed a video featuring Black men, Facebook’s AI prompted viewers with a message asking if they would like to “keep seeing videos about Primates.”
In 2015, Google Photos’ AI facial-recognition feature mistakenly tagged Black people as “Gorillas.”
Over the last fifteen years, facial recognition technology has exploded in popularity with governments and businesses alike, but with significant failures.
Racial Bias in Facial Recognition
A 2019 US federal study reported widespread racial bias in nearly 200 facial recognition algorithms. The study found that minorities were misidentified at a significantly higher rate, signifying a lack of impartiality. One of the study’s researchers, Patrick Grother, noted that racial biases were found in “the majority of the face recognition algorithms we studied.” In fact, some of the algorithms tested were up to 100 times more likely to misidentify minorities than white Americans.
The study, published by the National Institute of Standards and Technology (NIST), found that people of Asian, African, and Native American descent were the most prone to misidentification.
The facial recognition technology was tested for practical application. One such test was identification through a large database of mugshots, in this case an FBI-curated database. In this test, Black women were found to be the most likely to be falsely identified.
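To make the disparity NIST reported concrete, the sketch below shows how a per-group false match rate is computed and compared. The counts are made up for illustration; they are not NIST’s actual data.

```python
# Illustrative only: comparing false match rates (FMR) across two
# demographic groups. All numbers below are hypothetical, NOT NIST data.

def false_match_rate(false_matches, impostor_comparisons):
    """Fraction of impostor comparisons the algorithm wrongly declares a match."""
    return false_matches / impostor_comparisons

# Hypothetical results for one algorithm on two demographic groups.
fmr_group_a = false_match_rate(false_matches=10, impostor_comparisons=1_000_000)
fmr_group_b = false_match_rate(false_matches=1_000, impostor_comparisons=1_000_000)

# The ratio of the two rates is how such "X times more likely" claims are
# derived; a ratio of 100 corresponds to the worst-case gap NIST observed.
ratio = fmr_group_b / fmr_group_a
print(f"FMR A: {fmr_group_a:.6f}, FMR B: {fmr_group_b:.6f}, ratio: {ratio:.0f}x")
```

A system tuned so that group A almost never triggers a false match can still misidentify group B routinely, which is why aggregate accuracy figures can mask the bias.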
Furthermore, how the technology is used can itself be problematic. Dr. Timnit Gebru, prominent computer scientist and former co-leader of Google’s ethical AI team, observes that even a perfectly accurate facial recognition system raises issues when used inappropriately. She noted that during the 2015 Freddie Gray protests in Baltimore, police used facial recognition technology to identify protesters by matching images to social media profiles. Gebru stated, “the combination of overreliance, misuse and lack of transparency…is dangerous.”
When it comes to crime and punishment, the use of inaccurate technology has the potential to be devastating at both the individual and societal level. The fact that many businesses and government institutions continue to advocate for widespread adoption of the demonstrably flawed technology is alarming.
The Legal Quagmire of Facial Recognition
The United States lacks an overarching federal law and consolidated best practices governing the use of facial recognition. As such, state and local governments have been left to decide for themselves what uses of facial recognition to allow. Some choose to remain uninvolved, while others embrace facial recognition wholeheartedly, including its use for surveillance and law enforcement.
For instance, the United States Court of Appeals for the Ninth Circuit ruled in 2019 that Illinois Facebook users have the right to sue the company over its use of face recognition technology. This decision is in accordance with the Illinois Biometric Information Privacy Act (BIPA), which requires businesses to obtain express consent from their customers before collecting biometric data, including fingerprints and facial recognition information.
Both Texas and Washington have similar biometric laws. Under Texas Business and Commerce Code, Section 503.001, the state strictly prohibits facial recognition for identification purposes, with certain exceptions, and allows civil penalties of up to $25,000 per violation. In Washington, Senate Bill 6280 outlines significant provisions for using facial recognition technology, such as testing to prevent discriminatory effects and human review of decisions based on the technology.
Portland, Oregon, is considering a strict ban on the technology’s use not only by private businesses but also by the government. California, New Hampshire, and Oregon all prohibit police body cameras from using facial recognition. On the other side of the coin, some police forces in northern Texas do use facial recognition.
Federally, the Department of Homeland Security had considered requiring facial recognition checkpoints for US citizens and Green Card holders entering and exiting the country. Due to public outcry, the plan was eventually abandoned. Yet a recent US Government Accountability Office report asserts that the federal government plans to expand its use of facial recognition.
While the current trend can be considered a victory for local democracy (as those directly affected by the regulations are the ones who make the rules), it is a huge headache for big business. At the moment, companies such as Facebook and Google have the option of either skirting more stringent facial recognition regulations (at a significant financial cost) or maintaining blanket policies and addressing legal issues as they arise (also at great financial cost).
Social Media’s Facial Recognition Troubles
For some time now, major technology companies like Facebook and Google, which employ facial recognition to sort and categorize images, have been defending themselves against litigation made possible by the passage of BIPA. Facebook, for its part, has sought to amend the law (and other comparable measures) at the legislative level. They have been unsuccessful thus far.
Nathan Freed Wessler of the ACLU’s Speech, Privacy, and Technology Project lauded the Facebook BIPA ruling as “a strong recognition of the dangers of unfettered use of face surveillance technology.” Wessler continued, “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale.”
In the case of Facebook, the issue stems from the company’s “tag suggestions” tool. The tool analyzes photographs and classifies facial traits, producing and storing facial templates that enable Facebook to recognize people in other users’ photos. Because photo uploads to Facebook are frequently geo-tagged, this generates a record of when a particular person was in a particular location.
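The template pipeline described above can be sketched in miniature. This is an illustrative toy, not Facebook’s actual system: a real system would use a trained model to map each face photo to an embedding vector, whereas here the embeddings, names, and threshold are all made-up values.

```python
# Toy sketch of a facial-template pipeline: store one embedding per known
# person, then identify a new face by its closest stored template.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Stored "facial templates": one hypothetical embedding per known user.
templates = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}

def identify(face_embedding, threshold=0.95):
    """Return the best-matching user if similarity clears the threshold, else None."""
    name, score = max(
        ((n, cosine_similarity(face_embedding, t)) for n, t in templates.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

# An embedding close to Alice's template is identified as "alice"; pairing
# that match with the upload's geo-tag yields a time-and-place record.
print(identify([0.88, 0.12, 0.31]))
```

The privacy concern follows directly from the design: once templates are stored, every new geo-tagged photo anyone uploads becomes a potential location record for the people matched in it.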
Facebook argued that its customers did not suffer any concrete harm as a result of the company’s use of face recognition. However, the Ninth Circuit panel held that an injury can be concrete even when it is intangible. Additionally, the panel underlined the heightened risk of personal privacy breaches brought about by evolving technology, reflecting similar worries raised by the Supreme Court.
The panel’s judgment makes it abundantly clear that Facebook’s use of face recognition technology “invades an individual’s private affairs and concrete interests.”
There is considerable uncertainty surrounding the implementation of facial recognition software. Even a business deploying it innocuously, for example for customer verification, is putting itself in a potentially precarious position.
Given the recent federal study confirming that many facial recognition applications suffer from racial bias and inaccuracies, the potential for devastating lawsuits, regulatory issues, and reputational harm is immense.
However, the “selfie ID photo” verification method used by Konfirmi does not rely on facial recognition. Rather, it offers a simple way to automate ID verification while protecting your business from the lawsuits, regulatory issues, and reputational damage associated with facial recognition.
Jay Stanley, senior policy analyst at the ACLU, warned about facial recognition: “One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests, or worse.”