WST CEO named to UN AI Advisory Role

WST CEO Prof. Dame Wendy Hall has been appointed to the United Nations high-level advisory body on artificial intelligence. Dame Wendy, Regius Professor of Computer Science at the University of Southampton and Director of its Web Science Institute, was selected from more than 1,800 nominees across 128 countries.

Having previously been named the UK's first artificial intelligence skills champion in 2018, she joins 31 experts from around the world to undertake analysis and advance recommendations for the international governance of AI.

In her role on the UN advisory body, Dame Wendy will work with experts from government, the private sector and civil society, drawing on the breadth of her research at Southampton. This includes the £31 million Responsible AI UK programme, known as RAI UK, which aims to fuel Britain's ambitions to be a science and technology superpower.

The new UN advisory body will convene this month and work together for a year before submitting its recommendations in late 2024.

I feel very privileged to have been appointed by the United Nations to be on this new AI advisory body. As new AI technologies and capabilities emerge, it is so important that we harness them for good, while ensuring they don’t evolve in ways that would be harmful to society. It is very exciting to be part of the global discussions on the best way to manage this.

Professor Dame Wendy Hall

CEO, Web Science Trust

Read more about Dame Wendy's work

UK Government working towards AI Summit

Government officials, including Rishi Sunak's advisers, are in discussions with global leaders to formulate a formal statement on the risks associated with artificial intelligence (AI) ahead of the AI Safety Summit on 1 and 2 November. They aim to emphasise the UK's leadership in AI safety, with a proposed domestic AI taskforce taking on a global role.

A draft agenda mentions the potential establishment of an “AI Safety Institute” focused on enabling national security agencies to evaluate advanced AI models effectively. However, the government had previously downplayed this idea. The Summit is not intended to create a new international institution but will concentrate on international collaboration.

The Summit’s objectives include updating safety guidelines previously released by the White House, exploring global cooperation on AI-related risks, and concluding with discussions among like-minded countries on how to scrutinize AI from a national security perspective.

The event will have a limited audience of around 100 attendees, including cabinet ministers, CEOs of AI companies, academics, and representatives from international civil society. Prominent companies such as OpenAI, Google, and Microsoft are expected to participate and to report on their adherence to the AI safety commitments they made with the Biden administration in July, which include pre-release testing of AI models and ongoing scrutiny of their operation to ensure safety.