Clearview AI has gathered more than 20 billion images of people's faces, together with other data, from the internet and social media sites. The UK's Information Commissioner's Office (ICO) said Clearview's methods of image collection, which enable both identification of the people in the photos and monitoring of their behaviour, were "unacceptable." The findings come from a joint investigation with the Office of the Australian Information Commissioner (OAIC); back in November, the OAIC concluded that Clearview had violated Australian data protection laws and ordered the company to delete the data. The ICO found that Clearview had breached several UK data privacy rules by:
- failing to use the personal information of people living in the UK in a manner that is fair and transparent;
- failing to have a legitimate reason for collecting people’s information;
- not having a process in place to prevent the data from being kept for an endless period of time;
- failing to meet the more stringent data protection requirements that are necessary for biometric data;
- asking for additional personal information, including photographs, from members of the general public who asked whether or not they were included in Clearview’s database.
The ICO has in fact reduced the £17m fine it originally proposed (we reported on this in December 2021), saying that it took a number of factors into consideration, including input from Clearview itself.
Clearview's AI tool enables customers to run facial recognition searches and identify persons of interest. A customer submits a person's picture, and the system tries to locate that person in its database. If successful, it returns details such as the individual's name, social media handles and so on.
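Clearview's internals are not public, but a search service of this kind typically works by converting each face into a numeric embedding and comparing the query embedding against stored ones. The sketch below is purely illustrative under that assumption; the record fields, similarity threshold and tiny made-up "embeddings" are all hypothetical, not Clearview's actual data or API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, database, threshold=0.8):
    """Return profile records whose stored face embedding is close to the query.

    Hypothetical sketch: real systems use high-dimensional embeddings from a
    trained face-recognition model, not hand-written vectors like these.
    """
    matches = []
    for record in database:
        score = cosine_similarity(query_embedding, record["embedding"])
        if score >= threshold:
            matches.append({"name": record["name"],
                            "social": record["social"],
                            "score": score})
    # Best match first
    return sorted(matches, key=lambda m: m["score"], reverse=True)

# Toy database with made-up 4-dimensional "embeddings"
db = [
    {"name": "Alice", "social": "@alice", "embedding": [1.0, 0.0, 0.0, 0.0]},
    {"name": "Bob",   "social": "@bob",   "embedding": [0.0, 1.0, 0.0, 0.0]},
]

results = search([0.9, 0.1, 0.0, 0.0], db)
print(results)  # only Alice clears the similarity threshold
```

The point of the sketch is the shape of the workflow the article describes: a submitted image becomes a query vector, the database is scanned for near matches, and identifying details are returned for anything above a similarity cutoff.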