Ransomware gang makes an official complaint to SEC

In a mildly comedic response, a ransomware gang decided to file an official SEC complaint when a victim ignored their ransom demand.

 

The AlphV/BlackCat ransomware group has filed a complaint with the US Securities and Exchange Commission (SEC) against MeridianLink, a software company, for not disclosing a cyberattack within a supposed four-day deadline. The ransomware gang had threatened to leak stolen data unless a ransom was paid within 24 hours. MeridianLink, a publicly traded company providing digital solutions for financial organizations, allegedly suffered a breach on November 7th. The gang claims that rather than encrypting files, they had copied/removed them, and MeridianLink was aware of the attack on the same day.

Due to MeridianLink’s lack of response to ransom demands, the ransomware gang filed an official complaint with the SEC, stating that the incident had a material impact on customer data and operational information. However, the gang misunderstood the SEC’s cybersecurity disclosure rules, which are not set to take effect until December 15, 2023, and there is no four-day deadline yet in place.

MeridianLink responded by saying it acted immediately to contain the threat, engaged third-party experts for investigation, and found no evidence of unauthorized access to production platforms. The company is still assessing if any consumer personal information has been compromised. This incident is notable as it may be the first publicly confirmed case of a ransomware gang contacting regulators over a victim’s failure to disclose a cyberattack.

Have OpenAI made an AGI breakthrough?

Before OpenAI CEO Sam Altman was ousted, researchers at OpenAI reportedly sent a letter to the board warning of a significant AI breakthrough with “potential risks to humanity”. The letter and a new AI algorithm, referred to as Q* (Q-star), it has been claimed, may have been key factors in Altman’s removal.

Some at OpenAI believe Q* could be a breakthrough in the quest for artificial general intelligence (AGI), showcasing promising mathematics and math-problem solving capabilities, something ChatGPT is not particularly good at. The researchers reportedly emphasized concerns about AI’s power and potential dangers in their letter, without listing specific safety issues. The capabilities of Q* mentioned by the researchers are unclear at this point. Researchers in AI see implementing mathematics as a crucial step towards AI with human-like reasoning abilities.

MIT Technology Review newsletter (The Algorithm) by contrast cites numerous researchers in the AI field who are currently characterising this reaction as “hype” rather than a new and dangerous breakthrough.  

Whilst we can only speculate at this point, Altman’s firing may have followed concerns at board level about commercialising advances before understanding the full implications and consequences whether or not these risks prove to be substantial.

An open letter with the threat by more than 700 OpenAI employees to leave and join Altman at Microsoft led to Altman’s reinstatement and the departure of several of the OpenAI board. 

Multiple countries sign AI accord with notable exceptions

Law enforcement and intelligence agencies from 18 countries, including the EU, the US, and others, have signed an international agreement on AI safety to ensure new AI technologies are “secure by design.” This follows the EU’s AI Act, which bans certain AI technologies (such as predictive policing and biometric surveillance) and classifies high-risk AI systems. Notably absent from the agreement is China, a major player in AI development.

The agreement emphasizes the need for secure and responsible AI development and operation, with security as a core requirement throughout the AI life cycle. A particular concern continues to be around Adversarial machine learning, a concern in AI security, involves exploiting vulnerabilities in machine learning components to disrupt or deceive AI systems. The agreement is nonbinding and offers general recommendations without addressing complex issues such as proper AI applications or data collection methods.

In the US, there are ongoing legal battles over AI models’ data ingestion practices and their compliance with copyright law. Authors are suing OpenAI and Microsoft for copyright infringement, raising concerns about AI’s impact on traditional creative and journalistic industries. The legal future of AI litigation is uncertain, with courts cautiously approaching early cases.

WebSci’24 upcoming dates

ACM WebSci’24: Call for Submissions
Conference Dates: May 21-24, 2024
websci24.org/

Hosted by the Interchange Forum for Reflecting on Intelligent Systems (IRIS) | Organized by the University of Stuttgart | Partners ACM • Cyber Valley • Web Science Trust • SigWeb

  • Papers [LINK] Submission Deadline: Nov. 30, 2023
  • Workshops/Tutorials [LINK] Submission Deadline: Dec 2, 2023
  • Posters [LINK] Submission Deadline: Feb. 15, 2024
  • PhD Symposium [LINK] Submission Deadline: Feb. 26, 2024

Apollo Research demonstrates the Elephant in the room with AI ethics

Apollo research recently presented some disturbing findings at the UK AI safety summit. Their goal had been to co-erce (rather than instruct) an AI (in this case ChatGPT) to engage in some deceitful / illegal activity on the premise of this being helpful to humans (a “greater good” challenge).

In the experiment the AI was told to create a stock trading app in addition to which it was given insider information about one of the companies being traded helpful to making a favourable trade. The AI knew insider trading was illegal but was told that the AI’s host company and its founders were near to financial collapse.

The AI proceeded to (simulate) illegal insider trading and when asked about the results lied about its actions (both actions presumably to protect the human owners)

“This is a demonstration of an AI model deceiving its users, on its own, without being instructed to do so,” Apollo Research claimed.

Much has been said about Isaac Asimov’s three [four] laws of Robotics which in the modern context might read:

0. An AI may not harm humanity or, through inaction, allow humanity to come to harm [added later]

1. An AI may not injure a human being or, through inaction, allow a human to come to harm

2. An AI must obey humans except where this conflicts with the first law

3. An AI must protect its own existence except where this conflicts with the first or second laws

Having first published these over 80 years ago (in 1942) it would seem that Asimov was surprisingly prescient about the rules we would need to create though unfortunately we still seem to be struggling with a more fundamental problem than imparting a meaningful distinction of harm – we are struggling with Asimovs assumption that these robots would be accessing the truth (objective facts) about their environment in order to operate safely. In this demonstration all bets are off when a human claims acting (or not acting) is required to promote/ensure human safety. 

Without a reliable source of the truth (in an era regularly thought of as post-truth) it would seem that Asimovs laws may provide much less protection than we might have imagined.