New ACM Fellows include three Web Science colleagues

We are delighted to announce that the recently published list of 68 ACM Fellows for 2023 includes not one but three of our Web Science colleagues. In alphabetical order:

  • Prof. Sir Tim Berners-Lee – co-founder, WST patron and former trustee on the Web Science Trust board
  • Prof. Deborah McGuinness – Web Science Lab Director at RPI
  • Prof. Steffen Staab – current trustee of the Web Science Trust board and Web Science Lab Director at the University of Stuttgart

The ACM press release follows:

ACM, the Association for Computing Machinery, has named 68 Fellows for transformative contributions to computing science and technology. All the 2023 inductees are longstanding ACM Members who were selected by their peers for groundbreaking innovations that have improved how we live, work, and play.

“The announcement each year that a new class of ACM Fellows has been selected is met with great excitement,” said ACM President Yannis Ioannidis. “ACM is proud to include nearly 110,000 computing professionals in our ranks, and ACM Fellows represent just 1% of our entire global membership. This year’s inductees include the inventor of the World Wide Web, the ‘godfathers of AI,’ and other colleagues whose contributions have all been important building blocks in forming the digital society that shapes our modern world.”

In keeping with ACM’s global reach, the 2023 Fellows represent universities, corporations, and research centers in Canada, China, Germany, India, Israel, Norway, Singapore, the United Kingdom, and the United States. The contributions of the 2023 Fellows run the gamut of the computing field―including algorithm design, computer graphics, cybersecurity, energy-efficient computing, mobile computing, software analytics, and web search, to name a few.

Additional information about the 2023 ACM Fellows, as well as previously named ACM Fellows, is available through the ACM Fellows website.

Tim Berners-Lee
WWW Consortium


For inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale

Deborah McGuinness
Rensselaer Polytechnic Institute

For contributions to knowledge technologies including ontologies and knowledge graphs

Steffen Staab
University of Stuttgart, University of Southampton

For contributions to semantic technologies and web science, and distinguished service to the ACM community

New UK tax rules on online transactions

To combat tax evasion and increase revenue, the UK’s HM Revenue and Customs (HMRC) has introduced new tax rules starting January 1, 2024, aimed at small sellers on platforms like Etsy, Depop, Airbnb, and Vinted.

These rules require platforms to record and report sellers’ income directly to HMRC, following guidelines from the Organisation for Economic Co-operation and Development (OECD). While HMRC already has power over UK platforms, the OECD rules will facilitate quick access to data from platforms outside the UK. This affects around two to five million businesses on digital platforms, including taxi services, food delivery, freelancers, and short-term rentals.

Platforms will gather sellers’ information such as name, address, earnings, and fees, as well as property details for landlords. Sellers who already meet their tax obligations won’t be much affected, but those who neglect them may face demands from HMRC.

Under the Trading Allowance, individuals can earn up to £1,000 of extra income each year tax-free; earnings above that threshold must be reported. Those renting out rooms through platforms like Airbnb can instead use the rent-a-room scheme, which allows tax-free earnings of up to £7,500 a year.
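To make the two thresholds concrete, here is a minimal illustrative sketch in Python. The £1,000 and £7,500 figures are those quoted above; the record fields and function names are hypothetical, and real tax liability of course depends on far more than this toy check.

```python
from dataclasses import dataclass

TRADING_ALLOWANCE = 1_000  # GBP per tax year (figure quoted above)
RENT_A_ROOM_LIMIT = 7_500  # GBP per tax year (figure quoted above)

@dataclass
class SellerRecord:
    """Hypothetical shape of the data a platform might report to HMRC."""
    name: str
    address: str
    gross_earnings: float            # GBP earned through the platform
    platform_fees: float             # GBP charged by the platform
    rents_via_rent_a_room: bool = False

def may_need_to_report(record: SellerRecord) -> bool:
    """Rough check: do earnings exceed the relevant tax-free threshold?"""
    limit = RENT_A_ROOM_LIMIT if record.rents_via_rent_a_room else TRADING_ALLOWANCE
    return record.gross_earnings > limit

# Example: £1,250 of side income exceeds the £1,000 Trading Allowance.
seller = SellerRecord("A. Seller", "1 Example Street", 1_250.0, 90.0)
print(may_need_to_report(seller))  # True
```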

HMRC plans to send reminders to those unaware of their tax duties regarding online earnings. They’ve allocated £36.69 million and 24 full-time staff to enforce these rules.

The first reporting deadline for platforms is January 31, 2025, one year after implementation.

OpenAI argues AI tools are “impossible” without access to copyright material

OpenAI has stated that developing AI tools like ChatGPT would be “impossible” without access to copyrighted material.  Several AI firms are currently facing lawsuits, including one from The New York Times (NYT) accusing OpenAI and Microsoft of “unlawful use” of its content in creating AI products.

OpenAI defended its practices, emphasizing the necessity of using copyrighted materials for training large language models. The organization argued that limiting data to out-of-copyright works would hinder AI systems’ ability to meet contemporary society’s needs.

OpenAI and other AI companies often rely on the legal doctrine of “fair use” to justify using copyrighted content without permission. The NYT lawsuit is one of several legal challenges, and cloud giants such as Amazon, Microsoft, and Google have been warned that their AI services could leave business customers exposed to copyright risks. Although the protection on offer is limited, legal experts believe that winning copyright claims against AI companies may prove difficult. Businesses are advised to review terms of service and indemnity clauses carefully before using AI tools.

New AI tool improves bug detection rates

A team of computer researchers at the University of Massachusetts Amherst has introduced an advanced method named Baldur to significantly reduce software bugs and improve code verification. Combining large language models (LLMs) like ChatGPT with the Thor tool, Baldur achieved an unprecedented efficacy rate of nearly 66%.

Traditional manual methods of code verification are error-prone and impractical for complex systems, while machine checking, though more rigorous, is laborious and time-consuming. To address these limitations, the researchers developed Baldur, which uses an LLM called Minerva, trained on mathematical and scientific papers and further refined on the Isabelle/HOL proof language. Baldur works with a theorem prover in a feedback loop to identify and rectify errors, achieving 65.7% accuracy in automatically generating proofs when integrated with Thor.
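For readers curious what such a propose-and-check loop looks like, the following minimal Python sketch captures the idea. It is not Baldur’s actual implementation: the two placeholder functions stand in for a Minerva-style LLM and an Isabelle/HOL proof checker, and the attempt limit is arbitrary.

```python
from typing import Optional, Tuple

def llm_propose_proof(theorem: str, feedback: Optional[str] = None) -> str:
    """Placeholder for an LLM call that drafts a proof, optionally
    conditioned on the checker's last error message."""
    raise NotImplementedError  # assumption: some LLM API would live here

def check_proof(theorem: str, proof: str) -> Tuple[bool, str]:
    """Placeholder for a proof checker (e.g. Isabelle/HOL) that either
    accepts the proof or returns an error message."""
    raise NotImplementedError  # assumption: a prover binding would live here

def prove_with_feedback(theorem: str, max_attempts: int = 5) -> Optional[str]:
    """Draft a proof, machine-check it, and feed any errors back to the
    LLM, repeating until the proof passes or attempts run out."""
    feedback = None
    for _ in range(max_attempts):
        candidate = llm_propose_proof(theorem, feedback)
        ok, feedback = check_proof(theorem, candidate)
        if ok:
            return candidate  # a machine-verified proof
    return None  # unproved; hand off to other tools or a human
```

The design point is the loop itself: because every candidate proof is machine-checked before acceptance, the LLM can be wrong often, so long as it is sometimes right and can improve on the checker’s error messages.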

The researchers claim Baldur represents the most effective means for software verification, earning them a Distinguished Paper award at a recent conference. The project received support from the US Defense Advanced Research Projects Agency and the US National Science Foundation.


Commentary

Whilst the UMass Amherst team are to be congratulated, and this does appear to be a significant improvement over current methods, the “elephant in the room” might be what the remaining failure rate of roughly 34% says about the robustness (or lack of it) of software in the 21st century, about relying on such software without manual checks, and about using AI-generated software systems with little or no human input.

We may, of course, also wonder whether AI and LLMs are being developed to find exploits in existing systems…

Technology is neither good nor bad; nor is it neutral.

Melvin Kranzberg