EFF to NJ court: Provide defendants with information about police use of facial recognition technology


We’ve all read the news: studies show that facial recognition algorithms aren’t always reliable, and that error rates rise dramatically when the faces being analyzed belong to people of color, especially Black women, as well as trans and nonbinary people. Yet this technology is widely used by law enforcement to identify suspects in criminal investigations. By refusing to release details of that process, law enforcement effectively prevents defendants from challenging the reliability of the technology that led to their arrest.

This week, EFF, along with EPIC and NACDL, filed an amicus brief in State of New Jersey v. Francisco Arteaga, urging a New Jersey appellate court to allow robust discovery regarding law enforcement’s use of facial recognition technology. In this case, a facial recognition search conducted by the NYPD at the request of New Jersey police was used to determine that Francisco Arteaga was a “match” for the armed robber. Despite the match’s centrality to the case, nothing was disclosed to the defense about the algorithm that generated it, not even the name of the software used. Mr. Arteaga requested detailed information about the search process, with an expert testifying to the need for this material, but the trial court denied those requests.

Full discovery regarding law enforcement’s facial recognition searches is crucial because, far from being an infallible tool, the process involves many steps, each of which poses a substantial risk of error. These steps include selecting the “probe” photo of the person the police are seeking, editing that probe photo, choosing the photo databases against which the edited probe photo is compared, the specifics of the algorithm that performs the search, and human review of the algorithm’s results.
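To make those stages concrete, the sketch below is purely illustrative: the record type and field names are hypothetical placeholders, not any agency’s actual tooling or paperwork. It simply lists, one field per stage, the kinds of information a full discovery request would need to cover.

```python
# Illustrative only: a hypothetical record of one facial recognition search,
# with one field per stage of the process described above. Each field is a
# point where error can enter and where discovery would allow scrutiny.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class FaceSearchRecord:
    probe_source: str               # e.g., a surveillance still or cell phone frame
    probe_edits: list[str]          # e.g., blur, clone tool, 3D modeling, substituted features
    databases_searched: list[str]   # e.g., mugshots, DMV photos
    algorithm_name: str             # proprietary, closed-source software
    candidate_list: list[str]       # candidates returned by the algorithm
    analyst_conclusion: str | None  # the human reviewer's claimed "match", if any


# Without discovery, the defense cannot examine any of these fields, even
# though a misidentification could originate at any one of them.
```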

Police analysts often select a probe photo from a surveillance video still or a cell phone image, both of which are likely to be of poor quality. The characteristics of the chosen image, including its resolution, clarity, face angle, and lighting, all affect the accuracy of the subsequent algorithmic search. Surprisingly, analysts can also significantly modify the probe photo, using Photoshop-like tools to remove facial expressions or paste in substitute eyes, combining photographs of two different people’s faces even though only one of them is the suspect, using blur effects to add pixels to a low-quality image, or using clone tools or 3D modeling to add parts of a subject’s face not visible in the original photo. In one outrageous case, when the algorithm returned no potential matches for the original probe photo, an analyst in the NYPD’s Facial Identification Section, who thought the subject looked like actor Woody Harrelson, ran another search using the celebrity’s photo instead. Needless to say, these alterations greatly increase the risk of misidentification.

The photo database against which the probe photo is compared, which may include mugshots, DMV photos, or other sources, can also affect the accuracy of the results depending on the populations that make up those databases. Mugshot databases will often include more photos of people from over-policed communities, and the resulting search errors are more likely to fall on members of those groups.

The algorithms used by law enforcement are usually developed by private companies and are “black box” technology: it is impossible to know exactly how an algorithm arrives at its conclusions without examining its source code. Each algorithm is developed by different designers and trained on different data sets. The algorithms create “templates,” also called “facial vectors,” from the probe photograph and the database photographs, but different algorithms focus on different points of a face when creating these templates. Not surprisingly, even when comparing the same probe photo against the same databases, different algorithms will produce different results.
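As a rough illustration of what a template-based comparison involves (a hypothetical sketch, not any vendor’s actual algorithm; `extract_template` stands in for a proprietary, closed-source model), a search essentially reduces each photo to a vector and ranks database entries by their similarity to the probe:

```python
# Minimal sketch of template-based face search. The "template" extractor here
# is a toy stand-in: real systems use proprietary neural networks whose inner
# workings cannot be inspected without the source code.
import numpy as np


def extract_template(photo_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical black-box model: maps a face image to a fixed-length,
    normalized vector (the "template" or "facial vector")."""
    vec = photo_pixels.astype(float).flatten()[:128]  # toy: assumes >=128 pixels
    return vec / (np.linalg.norm(vec) + 1e-9)


def rank_candidates(probe: np.ndarray, database: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Compare the probe template to every database template by cosine
    similarity and return candidates sorted from most to least similar."""
    probe_t = extract_template(probe)
    scores = {name: float(np.dot(probe_t, extract_template(img)))
              for name, img in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Two algorithms that weight different facial features will compute different
# templates, and therefore return different candidate lists, for the same
# probe photo and the same database.
```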

Although human analysts review the probe photo and the algorithm-generated candidate list in order to select a match, numerous studies have shown that humans are prone to misidentifying unfamiliar faces and are subject to the same biases present in facial recognition systems. Human review is also influenced by many other factors, including the analyst’s innate ability to analyze faces, motivation to find a match, fatigue from performing a repetitive task, time constraints, and cognitive and contextual biases.

Despite these serious risks of error, law enforcement agencies remain secretive about their facial recognition systems. By filing this brief, EFF continues its advocacy for transparency in law enforcement technology.
