From its uses in law enforcement to biometric security, facial recognition (FR) technology has a history spanning more than 60 years. It can be used on an individual level and on a much larger scale by private and government agencies. In June 2019, a study estimated that the global FR market would be worth US$8.5 billion by 2025.
In the last few years, many have protested the lack of governmental regulation surrounding this tool, especially in light of several projects conducted by federal bodies and tech companies without the consent of online users. Racial profiling, inadequate privacy laws and abuse by law enforcement are among the major issues raised in these protests. As such, it has become increasingly important to interrogate the use of this technology in daily life and the consequences it can have on vulnerable populations.
History of Facial Recognition
In the 1960s, researcher Woodrow Wilson Bledsoe conducted several CIA-funded experiments. He programmed computers to recognize different photos of the same face by storing the coordinates of key facial features in a database and comparing them. While this work was limited by the processing power available at the time and the variability of facial features across photos, it was an important first step in the development of FR technology.
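Bledsoe's exact programs are not described here, but the underlying idea of matching faces by stored feature coordinates can be sketched in a few lines. The landmark names, coordinate format and distance threshold in this sketch are assumptions for illustration, not details of the original experiments.

```python
import math

# Each face is represented by the (x, y) coordinates of a few hand-marked
# landmarks. The landmark names and threshold below are illustrative
# assumptions, not details from Bledsoe's experiments.
LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_centre"]

def to_vector(face):
    """Flatten a {landmark: (x, y)} dict into one coordinate vector."""
    return [coord for name in LANDMARKS for coord in face[name]]

def distance(face_a, face_b):
    """Euclidean distance between two faces' landmark coordinates."""
    va, vb = to_vector(face_a), to_vector(face_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(va, vb)))

def identify(query_face, database, threshold=0.1):
    """Return the name of the closest stored face, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, stored_face in database.items():
        d = distance(query_face, stored_face)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

Even this toy version shows why photo variability was such a limitation: small changes in pose or scale shift every stored coordinate and inflate the distance between photos of the same person.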
From the 1970s to the 1990s, further breakthroughs followed as algebraic manipulation was used to recognize faces in real-time environments (i.e., detecting faces within a photograph taken in a natural setting). These advances generated interest in the commercial use of FR for law enforcement and security. This interest was partially supported by projects such as the Face Recognition Technology program, along with the Face Recognition Vendor Tests and the Face Recognition Grand Challenge in the 2000s, all funded by the United States Department of Defense. This ultimately led to the adoption of FR tools by many local police departments to aid criminal investigations.
Since the 2000s, facial recognition technology has progressed into everyday use. In 2010, Facebook started to scan user photos to suggest tags for each person detected; Google launched similar tagging programs in 2015. Apple started using FR as a layer of biometric security for iPhone users in 2017, while other software companies such as Microsoft and IBM have also made their way into the field and produced their own FR programs.
Today, FR is used widely for international border and police checks, CCTV systems, issuing identity documents such as passports, and identifying missing people. It is also used in health systems to help diagnose medical conditions such as DiGeorge syndrome, a genetic disorder that affects children physically and mentally but can be hard to diagnose because it presents differently from individual to individual. Facial algorithms can be trained on many images of people with the condition to learn the facial patterns associated with it and support diagnosis.
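As a very rough illustration of what "learning the pattern" could look like, the sketch below trains a generic classifier on flattened face images. The data is random placeholder data, and the image size, labels and choice of model are assumptions for illustration only; clinical FR tools use curated images and far more sophisticated pipelines.

```python
# A minimal, illustrative sketch only: real diagnostic systems use curated
# clinical images, face alignment and far more sophisticated models.
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: one row per face image, flattened to a fixed-length pixel vector.
# y: 1 if the person has the condition, 0 otherwise (placeholder labels).
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))      # stand-in for 200 aligned 64x64 face images
y = rng.integers(0, 2, size=200)    # stand-in diagnostic labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# Given new face images, the trained model flags which ones resemble the
# learned pattern of the condition.
new_faces = rng.random((5, 64 * 64))
print(model.predict(new_faces))
```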
FR is also used for identity confirmation in banking and retail, and for large-scale projects such as security screening at the 2021 Tokyo Olympics, a first for the sporting event. India's Aadhaar project, the world's largest biometric database, also involved FR technology.
The Problem with Facial Recognition
Much of the debate centres on how FR disregards the privacy of the general public, often enabled by inadequate protection under federal laws. Canada's own privacy laws have been criticized for lacking sufficient protections and consent guidelines for facial data. This lack of regulation often allows groups such as the RCMP to conduct FR projects without public knowledge or consent. As well, major cities in China have used FR to identify and fine jaywalkers, among other petty offences, despite flawed systems that have misidentified people. The 2019 protests in Hong Kong also saw allegations of public surveillance and a lack of transparency from the government.
Besides federal and municipal bodies, social media companies have also come under fire for unchecked facial data collection. In February 2021, Facebook settled a privacy lawsuit over its photo-tagging feature, while Google, Amazon and Microsoft have faced similar claims.
Other issues concern the algorithms themselves. There is evidence to suggest that bias and inaccuracy in this technology can exacerbate issues faced by marginalized communities. Studies comparing multiple FR algorithms across different demographic groups found that women of colour, particularly darker-skinned women, were misidentified far more often than other groups. A 2018 study found that all three FR algorithms examined were least accurate at identifying darker-skinned women, with error rates of up to 34.7%, compared to 0.8% for lighter-skinned men. These results were corroborated by a study from the US National Institute of Standards and Technology in 2019 and another study at the Massachusetts Institute of Technology in 2020.
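The kind of disaggregated evaluation behind these findings can be sketched briefly: instead of reporting one aggregate accuracy number, error rates are computed per demographic group. The group labels and records below are made-up toy data for illustration, not figures from the cited studies.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Misclassification rate per demographic group.

    `records` holds (group, true_label, predicted_label) tuples; the groups
    and data used below are toy values, not data from the cited studies.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, true_label, predicted_label in records:
        totals[group] += 1
        if predicted_label != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy example: a single aggregate accuracy figure (here 75%) hides the fact
# that all of the errors fall on one group.
records = [
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]
print(error_rates_by_group(records))
# {'darker-skinned women': 0.5, 'lighter-skinned men': 0.0}
```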
These examples of poor accuracy, and FR's mere existence as a tool used by law enforcement, only serve to highlight its devastating impacts on communities that already suffer from a history of abuse and discrimination. Across the U.S. and Canada, Black and Indigenous people already face disproportionate policing and surveillance, a practice that shapes the mugshot databases used by FR algorithms. This increases the likelihood of wrongful convictions and of further surveillance tactics, such as disproportionately installing security cameras in majority-Black neighbourhoods. One such project is Detroit's Project Green Light, which was criticized for diverting funding into FR systems and away from housing, public benefits and employment opportunities.
American software company Clearview AI has also been under investigation over the last two years for building a facial image database for police departments without user consent. At the height of the Black Lives Matter protests in 2020, the Minneapolis police department was accused of using this database to surveil protestors, which the American Civil Liberties Union says "amplifies the overall concerns with law enforcement having this technology to begin with."
Better Regulated Software
Building more representative datasets and enhancing image quality, along with regular auditing for bias, can help to reduce the chances of inaccurate detection. However, fully addressing concerns surrounding FR technology requires examining not only the algorithms themselves but also their high potential for abuse enabled by the justice system.
A 2013 report from the Office of the Privacy Commissioner of Canada suggested that institutions assess the need, retention, consent and accuracy of FR projects before conducting them. An article from Lawfare provides starting points for better use and distribution of FR, including transparency requirements (when and how surveillance data is collected), standards and certification mechanisms, and outright bans on certain aspects of FR technology. Other recommendations include limiting storage time and data sharing, and reducing collateral information collection from sources such as body cameras.
Major U.S. jurisdictions are starting to take note of this criticism, though the resulting legislation varies. Several Californian cities have banned the use of FR by government agencies and city officials (including on police body cams), while states such as Oregon, Maine and Massachusetts are considering extending the ban to private businesses.
The European Union has separate laws governing institutions and private businesses, although these have still allowed real-time surveillance by law enforcement. June 2020 also saw several tech companies, including IBM, Amazon and Microsoft, halt FR software sales to varying degrees.
Comprehensive national laws are essential to regulate FR software and prevent its abuse. While FR may seem like an exciting gateway to the future, it is important to scrutinize its impact on current social issues to ensure a more responsible path forward.