Before OpenAI CEO Sam Altman was ousted, researchers at OpenAI reportedly sent a letter to the board warning of a significant AI breakthrough with “potential risks to humanity”. It has been claimed that the letter, together with a new AI algorithm referred to as Q* (Q-star), may have been a key factor in Altman’s removal.
Some at OpenAI believe Q* could be a breakthrough in the quest for artificial general intelligence (AGI), showing promising mathematical problem-solving capabilities, something ChatGPT is not particularly good at. In their letter, the researchers reportedly emphasised concerns about AI’s power and potential dangers without listing specific safety issues, and the capabilities of Q* they referred to remain unclear at this point. AI researchers see mathematical reasoning as a crucial step towards AI with human-like reasoning abilities.
By contrast, MIT Technology Review’s newsletter The Algorithm cites numerous AI researchers who characterise this reaction as “hype” rather than evidence of a new and dangerous breakthrough.
Whilst we can only speculate at this point, Altman’s firing may have followed board-level concerns about commercialising advances before their full implications and consequences were understood, whether or not those risks prove to be substantial.
An open letter in which more than 700 OpenAI employees threatened to leave and join Altman at Microsoft led to Altman’s reinstatement and the departure of several OpenAI board members.