Why police using facial recognition technology is wrong

We often forget that surveillance technologies are used by many institutions beyond the private sector. Take modern police departments across the United States, for instance. In New York City, the NYPD has modernized by using artificial intelligence technologies to collect digital information. One major development has been its reliance on facial recognition software to run background checks on individuals.

The NYPD's rationale for using facial recognition is to speed up the process of identifying, and when appropriate, arresting criminals. This rationale is controversial today, and for understandable reasons. Three main ones come to mind.

First, there is racial discrimination built into face recognition algorithms. Studies have repeatedly shown that, among the dominant biometrics we use (fingerprints, voice, and face), face recognition is the least accurate and raises the fastest-growing privacy concerns. What’s more, the photos are usually gathered by police without consent. And of course, there is a lack of legislative oversight; worse still, facial recognition technology empowers police officers to widen pre-existing inequalities, so these biases are simply being exposed, and acted on, at a higher rate with the use of the technology.
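
To make the accuracy concern concrete, here is a minimal illustrative sketch of the kind of audit researchers run: it compares false match rates across demographic groups. Everything in it (the group labels, scores, and threshold) is invented for the example, not data from any real system.

```python
# Illustrative audit sketch: compare false match rates across demographic groups.
# All records below are invented; a real audit would use the system's actual
# match scores on a labeled evaluation set.
from collections import defaultdict

# Each record: (demographic_group, is_same_person, match_score reported by the system)
records = [
    ("group_a", False, 0.41), ("group_a", False, 0.55), ("group_a", True, 0.91),
    ("group_b", False, 0.72), ("group_b", False, 0.66), ("group_b", True, 0.88),
]

THRESHOLD = 0.60  # assumed decision threshold: scores at or above this count as a "match"

false_matches = defaultdict(int)   # different people wrongly flagged as the same person
impostor_pairs = defaultdict(int)  # all different-person comparisons, per group

for group, is_same_person, score in records:
    if not is_same_person:
        impostor_pairs[group] += 1
        if score >= THRESHOLD:
            false_matches[group] += 1

for group in sorted(impostor_pairs):
    rate = false_matches[group] / impostor_pairs[group]
    print(f"{group}: false match rate = {rate:.0%}")
```

If the printed rates differ sharply between groups, people in the higher-rate group bear more of the risk of being wrongly matched to someone else, which is exactly the disparity the studies describe.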

Second, police departments are known to have a racism problem, and this type of technology is likely to reinforce it. Molly Griffard, an attorney with the Legal Aid Society’s Cop Accountability Project who represents plaintiffs in these types of cases, said, “The NYPD’s own data confirms that the vast majority of people who they stop each year are Black and Latinx.” It is routine for NYPD officers to stop Black and Latino people without suspicion, take their IDs, and run a background check against their database. That is troubling.

Third, there is a lack of oversight of data collection and protection, which may leave citizens more vulnerable than they were to begin with. It is still unclear how police departments gather data and for what purpose, and it is even more worrisome that we have no full understanding of how that data is protected. My skepticism here is that without a strong digital infrastructure (e.g., secure data storage), these police departments will ultimately face a data security problem.

So, what are police departments across the United States doing to address these issues?

It seems there is little hope of stopping law enforcement from taking advantage of technological advances in fields such as facial recognition. What can be done is to ensure that these advances don’t continue to perpetuate everything that is wrong with society. As a first step, leaders from every county must come together to encourage the development and implementation of facial recognition algorithms built from diverse and representative datasets. As a second step, investment in higher-quality cameras will lower the chances of falsely identifying a person.
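
As a rough illustration of what “diverse and representative” could mean in practice, the sketch below compares a training set’s demographic makeup against a reference population and flags under-represented groups. The group names and percentages are invented for the example; real targets would come from census data or policy guidance.

```python
# Illustrative check of training-set composition against a reference population.
# Group labels and shares are made up for the example.
from collections import Counter

training_labels = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100
reference_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}  # assumed targets

counts = Counter(training_labels)
total = sum(counts.values())

for group, target in reference_share.items():
    actual = counts.get(group, 0) / total
    status = "under-represented" if actual < target else "ok"
    print(f"{group}: {actual:.0%} of training data vs {target:.0%} target -> {status}")
```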

In the end, these issues indicate just how badly we need to reimagine policing and public safety in this country. And all of us, at the local, regional, and national levels, need to work together to confront these historic inequalities.

Reference: https://www.cityandstateny.com/articles/policy/criminal-justice/nypd-has-surveillance-problem.html


Student comments on Why police using facial recognition technology is wrong

  1. Alfred, this is a great topic. With systems like facial recognition, both false positives and false negatives raise concerns that affect real human beings. An abundance of care must be shown in creating such systems, training the models, and, as you described so well, using them. It is true that the use of such systems is too alluring to police and law enforcement, and it is disappointing that meaningful action is not being put in place to address these legitimate concerns. I sometimes wonder if the path Massachusetts took (https://www.nytimes.com/2021/02/27/technology/Massachusetts-facial-recognition-rules.html) is the only viable solution so far. The article notes that other states and localities had to resort to an “all or nothing” approach and ban the technology altogether. Unless we see a tangible effort on the part of local police authorities to address these concerns, many more states may have to resort to policymaking to deter abuse of facial recognition systems.
