Have OpenAI made an AGI breakthrough?

Before OpenAI CEO Sam Altman was ousted, researchers at OpenAI reportedly sent a letter to the board warning of a significant AI breakthrough with “potential risks to humanity”. The letter and a new AI algorithm, referred to as Q* (Q-star), it has been claimed, may have been key factors in Altman’s removal.

Some at OpenAI believe Q* could be a breakthrough in the quest for artificial general intelligence (AGI), showcasing promising mathematics and math-problem solving capabilities, something ChatGPT is not particularly good at. The researchers reportedly emphasized concerns about AI’s power and potential dangers in their letter, without listing specific safety issues. The capabilities of Q* mentioned by the researchers are unclear at this point. Researchers in AI see implementing mathematics as a crucial step towards AI with human-like reasoning abilities.

MIT Technology Review newsletter (The Algorithm) by contrast cites numerous researchers in the AI field who are currently characterising this reaction as “hype” rather than a new and dangerous breakthrough.  

Whilst we can only speculate at this point, Altman’s firing may have followed concerns at board level about commercialising advances before understanding the full implications and consequences whether or not these risks prove to be substantial.

An open letter with the threat by more than 700 OpenAI employees to leave and join Altman at Microsoft led to Altman’s reinstatement and the departure of several of the OpenAI board. 

Multiple countries sign AI accord with notable exceptions

Law enforcement and intelligence agencies from 18 countries, including the EU, the US, and others, have signed an international agreement on AI safety to ensure new AI technologies are “secure by design.” This follows the EU’s AI Act, which bans certain AI technologies (such as predictive policing and biometric surveillance) and classifies high-risk AI systems. Notably absent from the agreement is China, a major player in AI development.

The agreement emphasizes the need for secure and responsible AI development and operation, with security as a core requirement throughout the AI life cycle. A particular concern continues to be around Adversarial machine learning, a concern in AI security, involves exploiting vulnerabilities in machine learning components to disrupt or deceive AI systems. The agreement is nonbinding and offers general recommendations without addressing complex issues such as proper AI applications or data collection methods.

In the US, there are ongoing legal battles over AI models’ data ingestion practices and their compliance with copyright law. Authors are suing OpenAI and Microsoft for copyright infringement, raising concerns about AI’s impact on traditional creative and journalistic industries. The legal future of AI litigation is uncertain, with courts cautiously approaching early cases.

WebSci’24 upcoming dates

ACM WebSci’24: Call for Submissions
Conference Dates: May 21-24, 2024
websci24.org/

Hosted by the Interchange Forum for Reflecting on Intelligent Systems (IRIS) | Organized by the University of Stuttgart | Partners ACM • Cyber Valley • Web Science Trust • SigWeb

  • Papers [LINK] Submission Deadline: Nov. 30, 2023
  • Workshops/Tutorials [LINK] Submission Deadline: Dec 2, 2023
  • Posters [LINK] Submission Deadline: Feb. 15, 2024
  • PhD Symposium [LINK] Submission Deadline: Feb. 26, 2024

Apollo Research demonstrates the Elephant in the room with AI ethics

Apollo research recently presented some disturbing findings at the UK AI safety summit. Their goal had been to co-erce (rather than instruct) an AI (in this case ChatGPT) to engage in some deceitful / illegal activity on the premise of this being helpful to humans (a “greater good” challenge).

In the experiment the AI was told to create a stock trading app in addition to which it was given insider information about one of the companies being traded helpful to making a favourable trade. The AI knew insider trading was illegal but was told that the AI’s host company and its founders were near to financial collapse.

The AI proceeded to (simulate) illegal insider trading and when asked about the results lied about its actions (both actions presumably to protect the human owners)

“This is a demonstration of an AI model deceiving its users, on its own, without being instructed to do so,” Apollo Research claimed.

Much has been said about Isaac Asimov’s three [four] laws of Robotics which in the modern context might read:

0. An AI may not harm humanity or, through inaction, allow humanity to come to harm [added later]

1. An AI may not injure a human being or, through inaction, allow a human to come to harm

2. An AI must obey humans except where this conflicts with the first law

3. An AI must protect its own existence except where this conflicts with the first or second laws

Having first published these over 80 years ago (in 1942) it would seem that Asimov was surprisingly prescient about the rules we would need to create though unfortunately we still seem to be struggling with a more fundamental problem than imparting a meaningful distinction of harm – we are struggling with Asimovs assumption that these robots would be accessing the truth (objective facts) about their environment in order to operate safely. In this demonstration all bets are off when a human claims acting (or not acting) is required to promote/ensure human safety. 

Without a reliable source of the truth (in an era regularly thought of as post-truth) it would seem that Asimovs laws may provide much less protection than we might have imagined.

 

EU Lawmakers tackle Election interference

European lawmakers have reached an agreement on measures to safeguard elections and prevent policy fragmentation within the EU. The rules aim to protect EU elections from foreign interference, banning political ads targeting specific ethnic and religious groups and blocking ads funded by foreign lobbyists leading up to elections.

The regulations, part of the EU’s broader efforts to govern big tech firms, address the modernization of outdated rules for political ads in the online era. The rules prohibit the distribution of ads based on personal characteristics like sexuality, religion, and race, offering special protection to such data under European data protection law. Additionally, non-EU entities are barred from sponsoring political ads three months before an election or referendum. Facebook owner Meta has voluntarily announced additional controls over political ads using AI-generated images, videos, and audio on its platform.

The EU’s Digital Agenda, which includes strict regulations on global tech firms, reflects its commitment to creating a digital single market. Critics argue that lobbying by big tech firms could undermine meaningful electoral debates online, while the EU seeks to prevent a fragmentation of its politics by establishing unified digital political ad rules.

The regulations will also grant people the right to know the funding source and expenditure for political ads, and political advertisers will be restricted from buying voter lists from standard marketing databases. Meta’s rules on AI-generated ads will require clear labeling for media depicting fake people or events.

OpenAI release new features for DIY GPT’s

ChatGPT developer Open AI have introduced a slew of new features to their platform including faster (and cheaper!) versions of ChatGPT and DALL-E as well as a platform for building DIY AI Assistants and DIY versions of LLM apps known as GPT’s. Open AI is clearly looking to the financial success of the Apple Appstore and is setting up allowing custom GPT’s to be monetised / resold on the OpenAI platform.

These releases have been greeted with enthusiasm by some developers whilst others could see their current business offerings wiped out at a stroke by this news.

What doesn’t seem to be changing in this space is the dizzying speed of change and Open AI’s CEO Sam Altman playfully quipped that however impressed we may all be by this release it will “seem quaint” when compared to what they have in the works planned for next year.

A sober reminder reminder that those entering this space early may invest heavily in building services and features that Open AI could be giving away for free next year (next month!).

To borrow from the latin phrase Caveat Emptor (buyer beware) the businesses springing up daily around OpenAI may wish to CAVEAT MERCATOR (Merchant beware).