The UK’s Competition and Markets Authority (CMA) has issued a warning about the potential risks of artificial intelligence (AI) foundation models. These AI systems, trained on massive, unlabelled data sets, underpin large language models and can be adapted to a wide range of tasks. The CMA has proposed principles to guide the development and use of foundation models: accountability, access, diversity, choice, flexibility, fair dealing, and transparency.

The report warns that poorly developed AI models could cause societal harm, such as exposure to false and misleading information and AI-enabled fraud. The CMA also cautions that market dominance by a small number of firms could raise competition concerns, with established players using foundation models to entrench their position and deliver overpriced or poor-quality products and services.

The CMA will provide an update on its thinking in early 2024. The UK government has tasked the CMA with weighing in on the country’s AI policy, but has opted to give responsibility for AI governance to sectoral regulators rather than create a single dedicated AI regulator.