WebSci21 – Video Vault No 2 – Panel on Algorithmic Bias


In 2016, an MIT graduate student gave a TEDx talk in New York. Joy Buolamwini’s How I am fighting Algorithmic Bias has been viewed by nearly 1.5 million people and later led to a journey of deeper discovery with other researchers and filmmaker Shalini Kantayya. Their resulting film, Coded Bias, premiered at the Sundance Festival in 2020 and was released on Netflix in April 2021. The film reveals, in an accessible way, how facial recognition software and automated decision-making have unprecedented power to reproduce bias at scale. As companies and governments increasingly outsource their services to entities which employ more machines and more machine learning, we can now see that algorithms are being used to decide what information we see, who gets hired, who gets health care, and who gets undue police scrutiny. Human rights lawyers and ethicists can see that this affects vulnerable communities the hardest.

Our distinguished panellists will bring Joy’s insights right up to date and will chart how fast the field is moving whilst also digging deep into its origins. What steps can and should be taken by companies, individuals and researchers to change the way in which ordinary human bias and ignorance are encoded into our digitally driven world? How will we help machines avoid the same mistakes we have historically made? Do we understand the concept of ethics in private and public companies well enough, let alone AI ethics? Do private companies have the same responsibilities as public and government institutions when it comes to transparency and accountability? Environmental, Social and Governance (ESG) criteria are all the rage, but are they becoming a branch of marketing?


In this panel discussion chaired by Lucy Hooberman, the panellists Ricardo Baeza-Yates, Rumman Chowdhury and Margaret Mitchell will discuss how bias within algorithmic processes (whether rooted in the source data, in data processing accuracy, or in the focus of the algorithm itself) can profoundly affect how we are recognised and identified, what we are permitted to see and do, and ultimately our access to a variety of opportunities, services and benefits in modern societies that increasingly outsource this recognition and allocation to machine learning systems.

About the Video Vault Series

In partnership with the ACM we are pleased to be able to release a series of videos from the most recent Web Science Conference (ACM WebSci’21) that were previously only available to attendees of the conference.

The series will be released fortnightly and will include a selection of Keynote talks and Spotlight panel discussions.

Copyright / Links

This video is (c) 2021 provided under license from the ACM.


ICO Fines Cabinet Office for data security breach

The UK Information Commissioner’s Office (ICO) has issued the Cabinet Office with a £500,000 fine over a data breach that disclosed the personal details of more than 1,000 people listed for 2020’s New Year Honours. The ICO said its investigation revealed that the Cabinet Office had failed to put proper technical and organisational measures in place to prevent disclosure of personal information, in breach of the UK’s data protection law.

Insurers cut cover amid growth of Ransomware incidents

Computing reports that insurance firms are worried about profits as ransomware gangs become more sophisticated.

Whilst insurance companies previously cooperated with customers (and with cybercriminals) to cover losses, cyber attacks have risen in number and sophistication, forcing insurers to cut the amount of cover they provide to customers. Insurers have increased premiums, cut policy coverage and may even adopt an adversarial rather than a co-operative response to ransomware claims.

“Insurers are changing their appetites, limits, coverage and pricing,” Caspar Stops, head of cyber at insurance firm Optio, told Reuters. “Limits [the upper amount paid in a claim] have halved – where people were offering £10 million ($13.5 million), nearly everyone has reduced to five.”

American cyber insurance firm CNA Financial allegedly paid hackers $40 million (£30 million) to decrypt its data and restore systems, following a ransomware attack in March.
In June, meat processing giant JBS confirmed it paid $11 million (£8.2 million) to the REvil ransomware gang, which locked its systems at the end of May.
Insurers say some attackers may specifically check whether potential victims have policies that would make them more likely to pay a ransom.

One industry insider said a tech firm that previously paid £250,000 for £130 million of professional indemnity and cyber cover is now paying £500,000 for cover of £55 million.

The main advice from the FBI in the US is not to pay, and instead report the incident as early as possible. The agency also warned that paying ransoms only funds criminals’ efforts.


The post Insurers cut cover amid growth of Ransomware incidents appeared first on Web Science Trust.

New Book: Perspectives on Digital Humanism

This new book collects a series of pieces by leading authors in the field of Digital Humanism, including our own WST Trustee George Metakides as well as WSTNet Lab Directors Hans Akkermans and Manfred Hauswirth.

The book: 

Aims to set an agenda for research and action in the emerging field of Digital Humanism

Contains short essays by selected thinkers from computer science, law, humanities and social sciences

Covers the complex interplay of technology and humankind to ensure the full respect of universal human rights

It is available to purchase in hardback or paperback, or as a free download in PDF or EPUB format.

Edited by

Hannes Werthner

Erich Prem

Edward A. Lee

Carlo Ghezzi

and published by Springer

The post New Book: Perspectives on Digital Humanism appeared first on Web Science Trust.

Potential £17m Fine for Facial Recognition Firm

The BBC reports that Clearview AI, a US firm selling access to a database of more than 10 billion facial images, is facing a potential £17m fine in the UK over its handling of UK personal data.

The Information Commissioner’s Office said it had “significant concerns about Clearview AI”, whose facial recognition software is used by police forces, and Clearview has been instructed to stop processing UK personal data and delete any it holds. Clearview disputes the UK regulator’s claims, describing them as “factually and legally incorrect”, and is considering further action in light of the UK allegations, though it has already lost, and is appealing, a similar case in Australia. The UK decision is provisional, and the ICO said any representations by Clearview AI will be carefully considered before a final ruling is made in the middle of next year.

Whilst Clearview’s service to police is described as resembling a “Google search for faces”, the UK’s Information Commissioner said that Clearview’s database was likely to include “a substantial number of people from the UK” whose data may have been gathered without their knowledge.

The firm’s services are understood to have been trialled by a number of UK law enforcement agencies, but those trials have been discontinued and Clearview AI does not have any UK customers.

Earlier this year, Facebook announced that it would no longer use facial recognition software to identify faces in photographs and videos, marking a more cautious approach by some social media companies, whilst others continue to gather facial recognition data (for now).