Rite Aid has been forbidden from using facial recognition software for five years after the Federal Trade Commission (FTC) found that the US pharmacy giant’s “reckless use of facial tracking systems” left customers humiliated and put their “sensitive information” at risk.
The FTC settlement, which is subject to US Bankruptcy Court approval after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition rollout, as well as any products created from those images. The company must also implement a robust data security program to protect any personal data it collects.
A Reuters report from 2020 described how the drugstore chain had secretly introduced facial recognition systems to about 200 US stores over an eight-year period starting in 2012, with “lower-income, non-white neighborhoods” serving as the technology’s testbed.
As the FTC has sharpened its focus on the misuse of biometric surveillance, Rite Aid became a target of the agency. Among its allegations is that Rite Aid, in cooperation with two contracting companies, created a "watch list database" containing images of customers the company said had engaged in criminal activity at one of its stores. These images, which were often of poor quality, were captured by CCTV or employees' mobile phone cameras.
When a customer who supposedly matched an existing image in the database walked into a store, employees received an automatic alert instructing them to take action. More often than not, that instruction was to "approach and identify," meaning employees should verify the customer's identity and ask them to leave. Often, these "matches" were false positives that led employees to wrongly accuse customers of misconduct, creating "embarrassment, harassment, and other harm," according to the FTC.
"Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint states.
Additionally, the FTC said Rite Aid failed to inform customers that facial recognition technology was in use and instructed employees not to disclose this information to customers.
Face-off
Facial recognition software has emerged as one of the most controversial aspects of the AI surveillance era. In recent years, cities have enacted sweeping bans on the technology while lawmakers have scrambled to regulate how police use it. Companies like Clearview AI, meanwhile, have been hit with lawsuits and fines around the world for major data privacy breaches related to facial recognition technology.
The FTC's findings about Rite Aid also shed light on the inherent biases in AI systems. For example, the FTC says Rite Aid failed to mitigate risks to certain consumers because of their race — its technology was "more likely to generate false positives in stores located in plurality-Black and Asian communities than in plurality-White communities," the complaint notes.
Additionally, the FTC said Rite Aid failed to test or measure the accuracy of its facial recognition system before or after deployment.
In a press release, Rite Aid said it was "pleased to reach a settlement with the FTC" but that it disagreed with the substance of the allegations.
“The allegations relate to a pilot program of facial recognition technology that the Company deployed in a limited number of stores,” Rite Aid said in its statement. “Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation into the Company’s use of the technology began.”