Have OpenAI made an AGI breakthrough?

Before OpenAI CEO Sam Altman was ousted, researchers at OpenAI reportedly sent a letter to the board warning of a significant AI breakthrough with “potential risks to humanity”. It has been claimed that this letter, and the new AI algorithm it described, referred to as Q* (Q-star), may have been key factors in Altman’s removal.

Some at OpenAI believe Q* could be a breakthrough in the quest for artificial general intelligence (AGI), showing promising mathematical problem-solving capabilities, an area where ChatGPT is not particularly strong. The researchers reportedly emphasized concerns about AI’s power and potential dangers in their letter, without listing specific safety issues, and the actual capabilities of Q* remain unclear at this point. AI researchers regard mastering mathematics as a crucial step towards systems with human-like reasoning abilities.

By contrast, MIT Technology Review’s newsletter, The Algorithm, cites numerous AI researchers who characterise the reaction as “hype” rather than evidence of a new and dangerous breakthrough.

Whilst we can only speculate at this point, Altman’s firing may have followed board-level concerns about commercialising advances before their full implications and consequences were understood, whether or not these risks prove to be substantial.

An open letter in which more than 700 OpenAI employees threatened to leave and join Altman at Microsoft led to Altman’s reinstatement and the departure of several OpenAI board members.

Multiple countries sign AI accord with notable exceptions

Law enforcement and intelligence agencies from 18 countries, including the US and EU member states, have signed an international agreement on AI safety intended to ensure new AI technologies are “secure by design.” This follows the EU’s AI Act, which bans certain uses of AI (such as predictive policing and biometric surveillance) and classifies other AI systems by risk. Notably absent from the agreement is China, a major player in AI development.

The agreement emphasizes the need for secure and responsible AI development and operation, with security as a core requirement throughout the AI life cycle. A particular ongoing concern is adversarial machine learning, which involves exploiting vulnerabilities in machine learning components to disrupt or deceive AI systems. The agreement is nonbinding and offers general recommendations without addressing thornier issues such as the appropriate uses of AI or how training data is collected.
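As an illustration of the kind of attack at issue, the sketch below shows one classic evasion technique, the Fast Gradient Sign Method (FGSM), applied to a deliberately toy linear classifier. Everything in it (the model, the weights, the epsilon budget) is a hypothetical example for exposition, not anything specified by the agreement.

```python
# Minimal sketch of an adversarial-ML evasion attack (FGSM) against
# a toy linear classifier. All names and values here are illustrative
# assumptions, not drawn from the accord or any real system.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": score = w . x + b, predict class 1 if score > 0
w = rng.normal(size=8)
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

# A benign input the model classifies as class 1
x = rng.normal(size=8)
if predict(x) == 0:
    x = -x  # flip so the starting prediction is class 1

# For a linear model with logistic loss and true label 1, the gradient
# of the loss w.r.t. the input points along -w (up to a positive factor).
# FGSM steps the input by epsilon in the sign of that gradient,
# raising the loss while changing each feature by at most epsilon.
epsilon = 0.5
grad_wrt_input = -w
x_adv = x + epsilon * np.sign(grad_wrt_input)

print("clean prediction:", predict(x))            # 1
print("adversarial prediction:", predict(x_adv))  # often flips to 0
print("max per-feature change:", np.max(np.abs(x_adv - x)))  # == epsilon
```

The same idea scales to deep networks: an attacker who can compute or approximate a model’s gradients can craft inputs that look almost unchanged to a human but flip the model’s output, which is one reason the agreement treats the machine learning components themselves as part of the attack surface across the AI life cycle.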

In the US, there are ongoing legal battles over AI models’ data ingestion practices and their compliance with copyright law. Authors are suing OpenAI and Microsoft for copyright infringement, raising concerns about AI’s impact on traditional creative and journalistic industries. The future of AI litigation is uncertain, with courts approaching the early cases cautiously.