Apollo Research demonstrates the Elephant in the room with AI ethics

Apollo research recently presented some disturbing findings at the UK AI safety summit. Their goal had been to co-erce (rather than instruct) an AI (in this case ChatGPT) to engage in some deceitful / illegal activity on the premise of this being helpful to humans (a “greater good” challenge).

In the experiment the AI was told to create a stock trading app in addition to which it was given insider information about one of the companies being traded helpful to making a favourable trade. The AI knew insider trading was illegal but was told that the AI’s host company and its founders were near to financial collapse.

The AI proceeded to (simulate) illegal insider trading and when asked about the results lied about its actions (both actions presumably to protect the human owners)

“This is a demonstration of an AI model deceiving its users, on its own, without being instructed to do so,” Apollo Research claimed.

Much has been said about Isaac Asimov’s three [four] laws of Robotics which in the modern context might read:

0. An AI may not harm humanity or, through inaction, allow humanity to come to harm [added later]

1. An AI may not injure a human being or, through inaction, allow a human to come to harm

2. An AI must obey humans except where this conflicts with the first law

3. An AI must protect its own existence except where this conflicts with the first or second laws

Having first published these over 80 years ago (in 1942) it would seem that Asimov was surprisingly prescient about the rules we would need to create though unfortunately we still seem to be struggling with a more fundamental problem than imparting a meaningful distinction of harm – we are struggling with Asimovs assumption that these robots would be accessing the truth (objective facts) about their environment in order to operate safely. In this demonstration all bets are off when a human claims acting (or not acting) is required to promote/ensure human safety. 

Without a reliable source of the truth (in an era regularly thought of as post-truth) it would seem that Asimovs laws may provide much less protection than we might have imagined.

 

EU Lawmakers tackle Election interference

European lawmakers have reached an agreement on measures to safeguard elections and prevent policy fragmentation within the EU. The rules aim to protect EU elections from foreign interference, banning political ads targeting specific ethnic and religious groups and blocking ads funded by foreign lobbyists leading up to elections.

The regulations, part of the EU’s broader efforts to govern big tech firms, address the modernization of outdated rules for political ads in the online era. The rules prohibit the distribution of ads based on personal characteristics like sexuality, religion, and race, offering special protection to such data under European data protection law. Additionally, non-EU entities are barred from sponsoring political ads three months before an election or referendum. Facebook owner Meta has voluntarily announced additional controls over political ads using AI-generated images, videos, and audio on its platform.

The EU’s Digital Agenda, which includes strict regulations on global tech firms, reflects its commitment to creating a digital single market. Critics argue that lobbying by big tech firms could undermine meaningful electoral debates online, while the EU seeks to prevent a fragmentation of its politics by establishing unified digital political ad rules.

The regulations will also grant people the right to know the funding source and expenditure for political ads, and political advertisers will be restricted from buying voter lists from standard marketing databases. Meta’s rules on AI-generated ads will require clear labeling for media depicting fake people or events.