Clearview fine: The unacceptable face of modern surveillance
The UK’s Information Commissioner’s Office (ICO) has issued its third-largest fine ever: £7.5m, imposed on Clearview AI, the controversial facial recognition company that has already been on the wrong end of similar decisions from regulators in Italy, France and Australia. Clearview scraped more than 20 billion images of people’s faces from Facebook and other social media platforms, then sold access to that database to private companies and to institutions such as police forces around the world.
The ICO found Clearview broke UK data protection law in several ways, including failing to have a lawful reason for collecting information on UK residents, failing to use it in a fair and transparent way, and failing to have a process in place to stop data being stored indefinitely. The ICO ordered the company to delete any UK residents’ images and refrain from collecting more in the future.
John Edwards, the UK information commissioner, said about his ruling: “The company not only enables identification of these people but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.” The ICO also revealed Clearview’s technology had been offered on a “free trial basis” to UK law enforcement agencies, although that has since stopped.
Smile for the camera
Facial recognition software is becoming a fact of life in the UK. The proliferation of Ring and other video doorbells, for example, has provided police with a new way to fight crime by asking residents for their footage and audio (many of these devices have microphones capable of picking up conversations from passers-by). In 2019, the UK’s Police National Database held images of around 20 million faces, many of them belonging to people who had never been charged with or convicted of an offence.
There are issues with authorities using this technology to fight crime. For one, the risk of a false positive is high. When South Wales Police tested their facial recognition system over 55 hours, for example, it flagged 2,900 potential matches, 95% of which were false positives. There are stories, particularly in the US, of wrongful arrests and even convictions based on facial recognition software. Perhaps due to the technology sector’s lack of diversity, these systems are worse at recognizing women and people from ethnic minority backgrounds, which serves to compound existing discrimination and bias.
The European Commission has expressed its intent to ban aspects of facial recognition technology in the future. But it’s not just your face: surveillance technology is expanding at such a pace that it’s now possible to analyze the way you walk, your heartbeat, your breathing pattern and, controversially, your emotions.
Surveillance technology has been normalized by the pandemic
Covid-19 propelled the growth of surveillance technology. In France, facial recognition technology was used on public transport to monitor whether passengers were wearing masks, and Australia trialed similar software to check people were at home during quarantine. Billions of people around the world had their movements logged by various Covid-19 test and trace apps.
There has been some public support for these sorts of measures. Three in five Brits (61%) say they’ve been happy to share their health status data during the pandemic, and 54% were happy to sacrifice some of their data privacy to shorten the length of lockdown.
But surveillance has slipped into other areas of our lives too. Workplace surveillance technology – from monitoring of emails and web browsing to video tracking and key logging – has become commonplace with the rise of remote working. Almost a third of workers are now being monitored in their jobs, up from 24% earlier in the year. Microsoft has even patented emotion detection software that assesses employee wellbeing by weighing biometric factors such as voice and heart rate. Unions and MPs are calling for a new set of data rights to protect workers in these situations.
In some instances, regulators are taking action where practices breach the GDPR. H&M in Germany, for example, was handed a €35.2 million fine in 2020 for excessive surveillance of employees, and in the UK, Barclays is under investigation for its use of software to track staff computer activity.
Privacy is a fundamental human right
We all have a right to privacy: at home, at work and when we’re out in public. Organizations such as Clearview AI that take and store our images without our knowledge or consent must be fined and prevented from doing so again. But that is only the tip of the iceberg.
Facial recognition technology, and biometric identification more broadly, is a slippery slope that threatens our fundamental human right to privacy. Technology will always hold its temptations. But organizations need to play their part in developing a culture of continuous privacy compliance to ensure that privacy is considered every step of the way. The way we operate now will have real consequences for the future we build – for ourselves, our children, and their children.