Clearview overturns £7.5m ICO fine for storing facial images

A UK tribunal has ruled in favour of Clearview AI, the controversial US facial recognition company, overturning a £7.5 million fine imposed by the UK’s privacy watchdog, the Information Commissioner’s Office (ICO), for unlawfully storing facial images of UK citizens.

The ruling turned on jurisdiction and on the use of British citizens’ data by foreign law enforcement. The tribunal found for Clearview because its technology is not used by UK police, and UK data protection law contains an exemption for processing by foreign law enforcement agencies. On that basis, it held that the ICO lacked the authority to issue the fine, even though Clearview processed data belonging to people in the UK. The tribunal noted that Clearview’s clients were primarily in the US, Brazil, Mexico, Panama and the Dominican Republic, focusing on cross-border investigations.

Clearview’s technology relies on a controversial database of some 30 billion facial images gathered from the internet; its facial recognition app is used by US law enforcement agencies to search for matches to specific faces. The ICO said it would review the judgment, stressing that the case concerned a specific exemption for foreign law enforcement agencies and that the ruling does not affect its ability to take action against international companies that process the data of people in the UK, particularly those engaged in data scraping.

New WSTNet Lab announced: University of Texas at Austin

We are pleased to announce a new WSTNet Lab as we welcome the University of Texas at Austin under the leadership of Dhiraj Murthy and his Computational Media Lab to the network.

We will catch up with Dhiraj and his team over the coming months for an interview. We are delighted that the network continues to grow, showing wide and increasing support for Web Science research and principles.

UK regulator AI warnings

The UK’s Competition and Markets Authority (CMA) has issued a warning about the potential risks of artificial intelligence (AI) foundation models. These systems, trained on massive, unlabelled datasets, underpin large language models and can be adapted to a wide range of tasks.

The CMA has proposed principles to guide the development and use of foundation models: accountability, access, diversity, choice, flexibility, fair dealing and transparency. Its report warns that poorly developed AI models could cause societal harm, such as exposure to false and misleading information and AI-enabled fraud. It also warns that market dominance by a few firms raises competition concerns, with established players potentially using foundation models to entrench their position and deliver overpriced or poor-quality products and services.

The CMA will provide an update on its thinking in early 2024. The UK government has tasked the CMA with weighing in on the country’s AI policy, but has opted to give responsibility for AI governance to sectoral regulators.