Law enforcement and intelligence agencies from 18 countries, including the US and members of the EU, have signed an international agreement on AI safety to ensure new AI technologies are “secure by design.” This follows the EU’s AI Act, which bans certain AI technologies (such as predictive policing and biometric surveillance) and imposes strict requirements on systems it classifies as high risk. Notably absent from the agreement is China, a major player in AI development.

The agreement emphasizes the need for secure and responsible AI development and operation, with security as a core requirement throughout the AI life cycle. A particular concern is adversarial machine learning, in which attackers exploit vulnerabilities in machine learning components to disrupt or deceive AI systems. The agreement is nonbinding and offers general recommendations without addressing complex issues such as appropriate uses of AI or data collection methods.
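
To make that threat concrete, the sketch below shows the fast gradient sign method (FGSM), a textbook adversarial attack: it nudges an input in the direction that increases the model’s loss, so a nearly identical input can be misclassified. The tiny linear model, random input, and epsilon value are illustrative assumptions for this example, not details taken from the agreement.

```python
# Minimal FGSM sketch: craft a small input perturbation that raises the
# model's loss and can flip its prediction. All components here (model,
# data, epsilon) are stand-ins chosen for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: a tiny linear model over 10-dimensional inputs.
model = nn.Linear(10, 3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # benign input
y = torch.tensor([0])                       # its assumed true label

# Forward pass, then gradient of the loss with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: move epsilon in the sign of the input gradient.
epsilon = 0.5
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

With a large enough epsilon the two predictions often differ even though the inputs are nearly identical, which is exactly the kind of failure mode the agreement’s “secure by design” guidance asks developers to anticipate.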

In the US, there are ongoing legal battles over how AI models ingest training data and whether those practices comply with copyright law. Authors are suing OpenAI and Microsoft for copyright infringement, raising concerns about AI’s impact on traditional creative and journalistic industries. The legal future of AI litigation is uncertain, with courts approaching these early cases cautiously.