Clearview fined £7.5m by Data Protection Watchdog

The Information Commissioner’s Office (ICO) has issued an enforcement notice ordering Clearview AI to stop collecting and using the personal data of UK residents, and to delete any such data it holds, on the grounds that ‘People were not informed that their images were being collected or used in this way.’

Clearview AI has gathered more than 20 billion images of people’s faces, together with other data, from the internet and social media sites. The UK’s information commissioner said Clearview’s methods of image collection, which enable identification of the people in the photos as well as monitoring of their behaviour, were “unacceptable”. The enforcement action follows a joint investigation with the Office of the Australian Information Commissioner (OAIC), which found that Clearview had violated several UK data privacy rules; back in November, the OAIC had already ordered Clearview to delete data after concluding that the company had violated Australian data protection laws.

The ICO highlighted the following breaches:

  • failing to use the personal information of people living in the UK in a manner that is fair and transparent;
  • failing to have a legitimate reason for collecting people’s information;
  • having no process in place to prevent data from being retained indefinitely;
  • failing to meet the more stringent data protection requirements that are necessary for biometric data;
  • asking for additional personal information, including photographs, from members of the general public who asked whether or not they were included in Clearview’s database.

The ICO has in fact reduced the £17m fine it had originally proposed (as we reported in December 2021), saying that the reduction took into account a number of factors, including representations from Clearview itself.

Clearview’s AI tool enables customers to run facial recognition searches and identify persons of interest. Customers submit a person’s picture, and the system uses facial recognition to try to locate that person in the database. If successful, it returns details such as the individual’s name and social media handles.
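At a high level, this kind of lookup works by comparing face embeddings: numeric vectors derived from a photo, matched against precomputed vectors in a database. The sketch below is a deliberately simplified, hypothetical illustration of that matching step only; the embeddings, names, and handles are made up, and real systems compute embeddings with a neural network rather than by hand.

```python
import math

# Hypothetical "database" mapping precomputed face embeddings to identity
# records. The vectors here are invented for illustration only.
DATABASE = [
    ([0.9, 0.1, 0.3], {"name": "Alice Example", "handle": "@alice"}),
    ([0.2, 0.8, 0.5], {"name": "Bob Example", "handle": "@bob"}),
]

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(query_embedding, threshold=0.95):
    """Return the record whose stored embedding best matches the query,
    or None if no candidate clears the similarity threshold."""
    best_score, best_record = max(
        ((cosine_similarity(query_embedding, emb), rec) for emb, rec in DATABASE),
        key=lambda pair: pair[0],
    )
    return best_record if best_score >= threshold else None
```

A query embedding close to a stored one (e.g. `search([0.88, 0.12, 0.31])`) returns the matching record, while a dissimilar query falls below the threshold and returns nothing. Production systems replace the linear scan with approximate nearest-neighbour indexes to search billions of entries.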

Jim Hendler on the History of the Semantic Web

WST Trustee Jim Hendler speaks about the history of the Semantic Web in this recent podcast.

In this episode of The Index, host Alex Kehaya is joined by James Hendler, one of the originators of the Semantic Web, Director of the Institute for Data Exploration and Applications, and Tetherless World Professor of Computer, Web and Cognitive Science at RPI.

Hendler has authored more than 450 books, technical papers, and articles in the areas of open data, the Semantic Web, artificial intelligence, and data policy and governance.

WST White Paper on Privacy

The law has granted individuals some rights over the use of data about them, but data protection rights have not redressed the balance between the individual and the tech giants. A number of approaches aim to augment personal rights to allow individuals to police their own information space, facilitating informational self-determination. This report reviews this approach to privacy protection, explaining how controls have generally been conceived either as the use of technology to aid individuals in this policing task, or as the creation of further legal instruments to augment their powers.

The report focuses on two recent attempts to secure or support data protection rights, one using technology and the other the law. The former is Solid, a decentralised platform for linked data, while the latter is a novel application of trust law to develop data trusts, in which individuals’ data is managed by a trustee with the individuals as beneficiaries. The report argues that structural impediments make it hard for thriving, diverse ecosystems of Solid apps or data trusts to achieve critical mass – a problem that has traditionally haunted this empowering approach.