We are pleased to announce a new WSTNet Lab as we welcome the University of Texas at Austin under the leadership of Dhiraj Murthy and his Computational Media Lab to the network.
We will catch up with Dhiraj and his team over the coming months for an interview. We are delighted that the network continues to grow, showing wide and growing support for Web Science research and principles.
The UK’s Competition and Markets Authority (CMA) has issued a warning about the potential risks of artificial intelligence (AI) foundation models. These AI systems, trained on massive unlabelled data sets, underpin large language models and can be applied to a wide variety of tasks. The CMA has proposed principles to guide the development and use of foundation models, including accountability, access, diversity, choice, flexibility, fair dealing, and transparency.

The report warns that poorly developed AI models could cause societal harm, such as exposure to false and misleading information and AI-enabled fraud. The CMA also cautions that dominance by a small number of firms could raise anticompetitive concerns, with established players using foundation models to entrench their position and deliver overpriced or poor-quality products and services.

The CMA will provide an update on its thinking in early 2024. The UK government has tasked the CMA with weighing in on the country’s AI policy, but has opted to give responsibility for AI governance to sectoral regulators.