Bletchley Declaration released at the AI Safety Summit

Source: The following is reproduced in full from the UK Government website, 6 November 2023.

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.

AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.

Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.

All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together. Noting the importance of inclusive AI and bridging the digital divide, we reaffirm that international collaboration should endeavour to engage and involve a broad range of partners as appropriate, and welcome development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap.

We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.

In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024.

The countries represented were:

Australia
Brazil
Canada
Chile
China
European Union
France
Germany
India
Indonesia
Ireland
Israel
Italy
Japan
Kenya
Kingdom of Saudi Arabia
Netherlands
Nigeria
The Philippines
Republic of Korea
Republic of Rwanda
Singapore
Spain
Switzerland
Türkiye
Ukraine
United Arab Emirates
United Kingdom of Great Britain and Northern Ireland
United States of America
References to ‘governments’ and ‘countries’ include international organisations acting in accordance with their legislative or executive competences.

ACM WebSci’24 deadline approaching

Call for Papers ACM WebSci’24
16th ACM Web Science Conference
May 21 – May 24, 2024 ● Stuttgart, Germany
Reflecting on the Web, AI, and Society

Important Dates
Thu, November 30, 2023: Paper submission deadline
Wed, January 31, 2024: Notification
Thu, February 29, 2024: Camera-ready versions due
Tue–Fri, May 21 – May 24, 2024: Conference dates
All deadlines are 23:59 Anywhere on Earth (AoE).

About the Web Science Conference
Web Science is an interdisciplinary field dedicated to understanding the complex and multiple impacts of the Web on society and vice versa. The discipline is well situated to address pressing issues of our time by incorporating various scientific approaches. We welcome quantitative, qualitative, and mixed-methods research, including social science and computer science techniques. In addition, we are interested in work exploring Web-based data collection and research ethics. We also encourage studies that combine analyses of Web data with other types of data (e.g., from surveys or interviews) to help better understand user behavior online and offline.
Possible topics across methodological approaches and digital contexts include but are not limited to:

Understanding the Web
- Automation and AI in all its manifestations relevant to the Web
- Trends in globalization, fragmentation, and polarization of the Web
- The architecture and philosophy of the Web
- Critical analyses of the Web and Web technologies

Making the Web Inclusive
- Issues of discrimination and fairness
- Intersectionality and design justice in questions of marginalization and inequality
- Ethical challenges of technologies, data, algorithms, platforms, and people on the Web
- Safeguarding and governance of the Web, including anonymity, security, and trust
- Inclusion, literacy and the digital divide

The Web and Society
- Social machines, crowd computing and collective intelligence
- Web economics, social entrepreneurship, and innovation
- Legal issues, including rights and accountability for AI actors
- Humanities, arts, and culture on the Web
- Politics and social activism on the Web
- Online education and remote learning
- Health and well-being online
- The role of the Web in the future of (augmented) work
- The Web as a source of news and information, and misinformation

Doing Web Science
- Data curation, Web archives and stewardship in Web Science
- Temporal and spatial dimensions of the Web as a repository of information
- Analysis and modeling of human vs. automatic behavior (e.g., bots)
- Analysis of online social and information networks
- Detecting, preventing and predicting anomalies in Web data (e.g., fake content, spam)

2024 Emphasis: Reflecting on the Web, AI, and Society
In addition to the topics at the heart of Web Science, we also welcome submissions addressing the interplay between the Web, AI and society. New advances in AI are revolutionizing the way in which people use the Web and interact through it. As these technologies develop, it is crucial to examine their effect on society and the socio-technical environment in which we find ourselves.
We are nearing a crossroads at which content on the Web will increasingly be automatically generated, blended with content created by humans. This creates new potential, yet it brings new challenges and exacerbates existing ones relating to data quality and misinformation. Additionally, we need to consider the role of the Web as a source of data for AI, including privacy and copyright concerns, as well as the bias and representativeness of the resulting systems. The potential impact of new AI tools on the nature of work may transform some careers while creating whole new ones. This year’s conference especially encourages contributions documenting different uses of AI in relation to how people use the Web, and the ways the Web affects the creation and deployment of AI tools.

Format of the submissions
Please upload your submissions via EasyChair. There are two submission formats. Full papers should be between 6 and 10 pages (including references, appendices, etc.) and typically report on mature, completed projects. Short papers should be up to 5 pages (including references, appendices, etc.) and will primarily report on high-quality ongoing work not yet mature enough for a full-length publication. All accepted submissions will be assigned an oral presentation (of two different lengths).

All papers should adopt the current ACM SIG Conference proceedings template (acmart.cls) and be submitted as PDF files, prepared either with the Microsoft Word template (available under “Word Authors”) or with the ACM LaTeX template on the Overleaf platform. In particular, please ensure that you are using the two-column version of the appropriate template.

All contributions will be judged by the Program Committee to rigorous peer-review standards for quality and fit for the conference, with at least three referees per paper. Additionally, each paper will be assigned to a Senior Program Committee member to ensure review quality.
WebSci-2024 review is double-blind. Please anonymize your submission: do not put the author names or affiliations at the start of the paper, and do not include funding or other acknowledgements in papers submitted for review. References to the authors’ own prior relevant work should be included but should not be identified as the authors’ own. How much further to modify the body of the paper to preserve anonymity is at the authors’ discretion. The requirement for anonymity does not extend outside the review process; for example, the authors can decide how widely to distribute their papers over the Internet. Even where an author’s identity is known to a reviewer, the double-blind process serves as a symbolic reminder of the importance of evaluating submitted work on its own merits, without regard to the authors’ reputation.

Authors who wish to opt out of the published proceedings will be offered this option upon acceptance. This is intended to encourage the participation of researchers from the social sciences who prefer to publish their work as journal articles. All authors of accepted papers (including those who opt out of the proceedings) are expected to present their work at the conference.

ACM Policies
“By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.”

“Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper.
ACM has been involved in ORCID from the start, and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.”

Program Committee Chairs
Oshani Seneviratne (Rensselaer Polytechnic Institute)
Luca Maria Aiello (IT University of Copenhagen)
Yelena Mejova (ISI Foundation)

For any questions and queries regarding the paper submission, please contact the chairs.