New ACM Fellows include three Web Science colleagues

We are delighted to announce that the recently published list of 68 ACM Fellows for 2023 includes not one but three of our Web Science colleagues. In alphabetical order:

  • Prof. Sir Tim Berners-Lee – co-founder, WST patron and former trustee on the Web Science Trust board
  • Prof. Deborah McGuinness – Web Science Lab Director at RPI
  • Prof. Steffen Staab – current trustee of the Web Science Trust board and Web Science Lab Director at the University of Stuttgart

The ACM press release follows:

ACM, the Association for Computing Machinery, has named 68 Fellows for transformative contributions to computing science and technology. All the 2023 inductees are longstanding ACM Members who were selected by their peers for groundbreaking innovations that have improved how we live, work, and play.

“The announcement each year that a new class of ACM Fellows has been selected is met with great excitement,” said ACM President Yannis Ioannidis. “ACM is proud to include nearly 110,000 computing professionals in our ranks and ACM Fellows represent just 1% of our entire global membership. This year’s inductees include the inventor of the World Wide Web, the ‘godfathers of AI’, and other colleagues whose contributions have all been important building blocks in forming the digital society that shapes our modern world.”

In keeping with ACM’s global reach, the 2023 Fellows represent universities, corporations, and research centers in Canada, China, Germany, India, Israel, Norway, Singapore, the United Kingdom, and the United States. The contributions of the 2023 Fellows run the gamut of the computing field―including algorithm design, computer graphics, cybersecurity, energy-efficient computing, mobile computing, software analytics, and web search, to name a few.

Additional information about the 2023 ACM Fellows, as well as previously named ACM Fellows, is available through the ACM Fellows website.

Tim Berners-Lee
WWW Consortium


For inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale

Deborah McGuinness
Rensselaer Polytechnic Institute

For contributions to knowledge technologies including ontologies and knowledge graphs

Steffen Staab
University of Stuttgart, University of Southampton

For contributions to semantic technologies and web science, and distinguished service to the ACM community

OpenAI argues AI tools are “impossible” without access to copyright material

OpenAI has stated that developing AI tools like ChatGPT would be “impossible” without access to copyrighted material.  Several AI firms are currently facing lawsuits, including one from The New York Times (NYT) accusing OpenAI and Microsoft of “unlawful use” of its content in creating AI products.

OpenAI defended its practices, emphasizing the necessity of using copyrighted materials for training large language models. The organization argued that limiting data to out-of-copyright works would hinder AI systems’ ability to meet contemporary society’s needs.

OpenAI and other AI companies often rely on the legal doctrine of “fair use” to justify using copyrighted content without permission. The NYT lawsuit is one of several legal challenges, and cloud giants such as Amazon, Microsoft, and Google have been warned that their terms may leave business customers exposed to copyright risks. Despite this limited protection, legal experts believe that winning copyright claims against AI companies may prove difficult. Businesses are advised to carefully review terms of service and indemnity clauses before using AI tools.

New AI tool improves bug detection rates

A team of computer science researchers at the University of Massachusetts Amherst has introduced an advanced method named Baldur to significantly reduce software bugs and improve code verification. By combining large language models (LLMs) of the kind behind ChatGPT with the existing Thor tool, Baldur achieved an unprecedented proof-generation success rate of nearly 66%.

Traditional manual methods for code verification are error-prone and impractical for complex systems, while machine-checking, a more rigorous approach, is laborious and time-consuming. To address these limitations, the researchers developed Baldur, which uses an LLM called Minerva, trained on mathematical scientific papers and further refined on the Isabelle/HOL proof language. Baldur works with a theorem prover in a feedback loop to identify and rectify errors, achieving 65.7% accuracy in automatically generating proofs when integrated with Thor.
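To make that feedback loop concrete, here is a minimal, purely illustrative sketch of the general idea. The function names, prompt wording, and retry logic are assumptions for illustration only, not Baldur's actual interface:

```python
# Minimal illustrative sketch of an LLM + proof-checker repair loop,
# in the spirit of the approach described above. All names are hypothetical.

def attempt_proof(theorem, llm, prover, max_attempts=3):
    """Ask an LLM for a candidate proof, have the proof checker verify it,
    and feed any error messages back to the LLM until success or give-up."""
    prompt = f"Write an Isabelle/HOL proof for:\n{theorem}"
    for _ in range(max_attempts):
        candidate = llm.generate(prompt)              # hypothetical LLM call
        ok, error = prover.check(theorem, candidate)  # hypothetical checker call
        if ok:
            return candidate                          # machine-checked proof found
        # Repair step: show the failed attempt and the prover's error message.
        prompt += (
            f"\n\nThis attempt failed:\n{candidate}\n"
            f"Prover error:\n{error}\nPlease provide a corrected proof."
        )
    return None                                       # no proof within the attempt budget
```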

The researchers claim Baldur represents the most effective means for software verification, earning them a Distinguished Paper award at a recent conference. The project received support from the US Defense Advanced Research Projects Agency and the US National Science Foundation.


Commentary

Whilst the UMass Amherst team are to be congratulated, and this appears to be a significant improvement over current methods, the “elephant in the room” might be what the remaining roughly 34% failure rate says about the robustness (or lack of it) of software in the 21st century, about relying on such software without manual checks, or about using AI-generated software systems with little or no human input.

We may, of course, also wonder whether AI and LLMs are being developed to find exploits in existing systems…

Technology is neither good nor bad, nor is it neutral.

Melvin Kranzberg

New paper on AI public policy

The Web Science Institute at the University of Southampton is pleased to share its latest Position Paper, authored by Ben Hawes and Prof. Dame Wendy Hall.

Abstract

The UK’s international Artificial Intelligence Safety Summit has answered some questions and sparked new ones. This is a good moment to reflect on what it delivered, what it didn’t cover, and how to influence development of AI in the future, in the interests of societies globally.

First, it’s great to be able to report that the Summit was in many ways a success, indeed more of a success than many people thought it could be. It was arranged and delivered fast. It had to manage difficult questions about the scope and the invitee list. There were good reasons to fear that it might not be more than a superficial, passing event. It is greatly to the credit of the organisers that it became more than that.

The Summit could also easily have been submerged among other recent developments, because there have been enough of those. The last month has been busy for AI and AI policy, even in the context of a packed year so far.

Immediately before the summit, the United Nations announced a new high-level advisory council on AI, and I’m proud to say that they invited me to be a member.

And then two days before the Summit, the White House issued President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

The Executive Order sets out expansive, complex and diverse ambitions for AI in the USA, including on equity, civil rights and impacts on workers. It is a major step forward. The EU AI Act has been the subject of very heated debate within and between EU institutions. It has now passed, though the nature of recent debates shows how difficult it is for legislation to keep up with technology developments. The US had previously made much less progress in comparison on proposals for government action and legislation on AI. That has now changed, and in the UK we will need to track how those ambitions are taken forward in practice, and how potential conflicts between economic and social aspirations are managed.


After the Summit

Progress in public policy on AI

IBM & Meta form new AI alliance

IBM and Meta have launched the AI Alliance, a global community of technology developers, researchers, and adopters collaborating to advance open, safe, and responsible AI. The alliance includes over 50 founding members and collaborators, including:
  • AMD,
  • CERN,
  • Cornell University,
  • Dell Technologies,
  • Hugging Face,
  • Intel,
  • NASA,
  • Oracle, and many others.

The goal is to foster open innovation and science in AI, prioritizing safety, diversity, and economic benefits. The AI Alliance plans to develop benchmarks, foundation models, hardware accelerators, and educational resources to support responsible AI innovation. Working groups and partnerships with existing initiatives will drive the alliance’s efforts. For more information, visit https://thealliance.ai.

GenAI tools highlight potential flaws in Grant Applications

ChatGPT is continuing to show the cracks in established documentation processes and “gate-keeping” systems…

Historically, the written word has been used as part of the application process to judge the quality of the applicant(ion) and to dissuade casual applicants by requiring a certain level of effort. Along with the enormous problems around GenAI-authored student essays (which most academics would consider cheating), it appears that academics are now getting in on the act by experimenting with GenAI to write proposals.

Is this cheating, or simply using the most modern tool for the job?

In a recent article in Nature, the author (J. M. Parrilla) expresses a dislike for writing grant applications due to the extensive amount of work involved. Grant applications, he explains, often require various documents, such as a case for support, a lay summary, an abstract, multiple CVs, impact statements, public engagement plans, project management details, letters of support, data handling plans, risk analysis, and more. Despite this extensive (and expensive) effort, there is a very high chance of rejection (90–95%).

The author suggests that the system is flawed, time-consuming, and cumbersome. The focus during the review process, he argues, is often on whether the proposal ticks a number of boxes, including whether it aligns with the call brief (including the format), whether the science is good and novel, and whether the candidates are experts in the field.

The author decided to use ChatGPT to assist in writing grant proposals, which, he claims, reduced the workload significantly. He therefore questions the value of asking scientists to write documents that AI can easily create, suggesting it might be time for funding bodies to reconsider their application processes.
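As a purely illustrative aside (this is not the workflow described in the Nature article), drafting one such document with a general-purpose LLM API might look something like the sketch below; the prompt wording, project description, and model name are placeholders:

```python
# Illustrative sketch only: drafting a lay summary with the OpenAI Python client.
# The project description, prompt wording, and model choice are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

project_description = "A three-year study of automated bug-finding with large language models."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You help scientists draft grant application documents."},
        {"role": "user", "content": f"Write a 150-word lay summary for this project:\n{project_description}"},
    ],
)
print(response.choices[0].message.content)
```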

He notes that a recent Nature survey indicates a significant number of researchers (more than 25%) are already using AI to aid in writing manuscripts, and more than 15% admit to using it for grant proposals. Whilst the article acknowledges that some may view using ChatGPT as “cheating”, it argues that this underscores a larger issue in the current grant application system.

It concludes that the fact that artificial intelligence can do much of the work makes a mockery of the process and argues that it’s time to make it easier for scientists to ask for research funding.