Facial Recognition: A Powerful Tool for Authoritarian Surveillance
Companies are selling facial-recognition technology to governments and unabashedly marketing their products as tools of indiscriminate surveillance. Given facial recognition’s dangerous potential, we all need to be taking a much closer look.
Facial-recognition technology refers to the automated identification of a person based on their face. Unlike other biometric technologies (such as fingerprinting), facial recognition can identify multiple people at a time, at a distance, and without their knowledge. As a result, if deployed without limits, facial recognition would enable the unprecedented and secret surveillance of people going about their daily, private lives.
This unprecedented secret surveillance is, unfortunately, for sale. Browse some websites of facial-recognition vendors; you’ll see faces being identified in public squares, airports, and communities where people live and work. These companies often offer an interface for developers to use cloud-based facial recognition, allowing federal, state, and local law enforcement to easily apply facial recognition to existing databases (like mugshots or archival body-camera footage).
As if that’s not enough, the software doesn’t stop at recognizing a person and storing their face. Companies are advertising the ability to categorize people according to a variety of characteristics and behaviors, including (they claim) race and gender, but also emotional state and the direction of a person’s gaze.
Other facial-recognition companies are selling technology that has tremendous potential for the abuse of civil rights. One such company claims to be able to determine whether a person is a potential terrorist based only on an image of their face.
Gathering this level of data—even if it were accurate, which it often isn’t—would allow law enforcement to reconstruct a comprehensive record of people’s movements, interests, and associations. And governments could use that information to subject people to coercive state interventions based on personal choices, location histories, or political beliefs.
Imagine, for example, a person recorded speaking out against police violence at a city-council meeting. Using facial recognition and the police department’s video feeds from across the city, officers could look up that person’s previous activities and whereabouts, even if they weren’t previously identified by name in the system. Maybe the person attended a march in support of immigrants’ rights. Maybe they frequently visited a community mosque with their young child and spoke with friends after the service. Government agencies have no justification for collecting these details about people’s lives within a persistent, searchable database. But facial-recognition software allows governments to do just that.
The spread of facial-recognition technology represents a serious threat to civil liberties and civil rights. And as with other surveillance technologies, communities of color, immigrants, and other minority groups will be disproportionately affected. Transparency, accountability, and oversight for facial recognition are critical to preventing government misuse. Companies and communities should not remain silent as governments seek to adopt this technology. Is your company working on facial-recognition technology for government customers? Is an agency in your community considering its own deployment?
Companies developing facial-recognition software need to consider how their products enable dragnet surveillance, discriminatory enforcement, and abuse. Then those companies should take action to protect civil rights. Communities should be passing local laws to make sure that discriminatory surveillance systems are not secretly deployed in their neighborhoods. And people should press both companies and governments to adopt these reforms.