WSTNet Lab Profile: Cardiff HateLab

Cardiff University is home to a WSTNet lab with two related but distinct groups: Pete Burnap’s Social Data Lab, which uses COSMOS to make social media data visualisation and analysis far more accessible to non-coding academics, and Matt Williams’ HateLab, which uses a COSMOS-based dashboard to identify and analyse hate speech structures and trends across a range of social media sources, covering modern forms of online hate including racial, political, gender and religious intolerance.

Williams (who holds a chair in Criminology at Cardiff) has been researching the clues left in social media since 2011, but was frustrated that the available tools were accessible only to the most skilled coders. He worked with Prof. Pete Burnap to develop a more user-friendly toolset, COSMOS, which allows researchers to focus on the meanings and interpretations of social media data rather than the underlying technologies.

With the new tools and possibilities delivered by COSMOS, new research questions began to surface, and the “Hate Speech and Social Media” project was launched in 2013. This led to the founding of HateLab, where Williams has been Director since 2017 and where his group has attracted more than £3m in funding. He has published a series of papers, and in 2021 he summarised more than 20 years of research in his book The Science of Hate.

 

HateLab could be seen as something of a poster child for Web Science, having been featured widely in the press and media. HateLab research has been covered in the LA Times, New York Post, The Guardian (also here), The Times (also here and here), the Financial Times, The Independent, The Telegraph (also here), Tortoise, New Scientist, Politico, BBC News, The Register, ComputerWeekly, Verdict, Sky News, TechWorld and Police Professional. On TV, their research underpinned an episode of BBC One’s Panorama, an episode of ITV’s Exposure and an ITV News special report. HateLab has also been used as part of the National Online Hate Crime Hub announced by the UK Home Secretary in 2017.

HateLab collects data from several platforms including Twitter (which has highlighted the lab as a featured developer partner), 4chan, Telegram and Reddit. The tools use AI techniques to look for trends and patterns, linking the appearance and timing of hate speech to the timing, causality and impacts of physical acts of violence. Williams has found characteristic patterns and timings in his work (he calls this the “half-life” of hate speech), and this may be critical in understanding how to manage and calm responses in online communities, since strong reactions (especially physical reactions to online hate speech) are seen to fade quickly and be much more temporary in nature than other forms of crime.
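The “half-life” idea lends itself to a simple back-of-the-envelope illustration. The sketch below uses invented hourly counts (not HateLab data or methods) and assumes post volume decays exponentially after a trigger event; the half-life then falls out of a log-linear least-squares fit:

```python
import math

# Hypothetical hourly counts of hate-speech posts after a "trigger" event.
# These numbers are illustrative only.
hours = [0, 1, 2, 3, 4, 5, 6]
counts = [800, 560, 390, 275, 190, 135, 95]

# Assume exponential decay: count(t) = A * exp(-lam * t).
# Taking logs gives a straight line, so fit log(count) against t by least squares.
n = len(hours)
log_counts = [math.log(c) for c in counts]
mean_t = sum(hours) / n
mean_y = sum(log_counts) / n
lam = -sum((t - mean_t) * (y - mean_y) for t, y in zip(hours, log_counts)) / \
      sum((t - mean_t) ** 2 for t in hours)

# The half-life is the time taken for the volume to halve.
half_life = math.log(2) / lam
print(f"decay rate = {lam:.3f} per hour, half-life = {half_life:.1f} hours")
```

On these toy numbers the fitted decay rate of roughly 0.35 per hour gives a half-life of about two hours; real reaction curves are unlikely to be cleanly exponential, but the same fit yields a comparable summary statistic for how quickly a wave of reaction fades.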

Whilst it is perhaps clear that real-world “trigger” events (such as Covid, Brexit, Trump speeches, or the London Bridge attacks) can and do give rise to waves of online reactions (with hate being the least desirable of these), it is perhaps less obvious (and more interesting) that a certain level and timing of hate speech might be associated with, and contribute to, higher levels of physical violence. HateLab is looking at the possibility of developing predictive models which would not only allow non-academic groups to gauge and better manage different types of hate speech and volatile online communities, but might also help to prevent online hate spilling over into physical violence.

The recent case of ex-President Trump and his online incitement to march on the Capitol building is a chilling example of the need for this sort of model.

We asked Matt for his take on the new owner at Twitter and how Musk’s views on free speech might affect his research and his overall objective of reducing hate speech…

“Twitter have been really busy since 2015 trying to manage the whole online harm issue and frankly they’ve done a pretty good job – they’ve employed huge numbers of moderators who have ensured that a lot of the more unpleasant material that is ON the platform (and that we have access to via the API for research purposes) is not VISIBLE on the platform, where ordinary users could be harmed by it. There is obviously a trade-off between the notion of online harm and freedom of speech, and we’ll have to wait and see what effect Elon’s new policies have on the resurgence of what is thought to be harmful content. Certainly we’ve seen a reduction in the amount of hate speech across the Twitter API over recent months and years, but it’s unclear whether users have migrated to more tolerant platforms or whether Twitter’s filtering is now being reflected in the API output. Overall we’ve had a very positive relationship with Twitter and we’d obviously like to continue to work with them.”

DISCLOSURE:

I have to admit to being just a tiny bit disappointed that Matt is not also the brains behind HateLab: the London-based cyberpunk band which I stumbled on when googling more about his work 😉

Government agencies are tapping a facial recognition company to prove you’re you – here’s why that raises concerns about privacy, accuracy and fairness

 

 Beginning this summer, you might need to upload a selfie and a photo ID to a private company, ID.me, if you want to file your taxes online.

Oscar Wong/Moment via Getty Images

James Hendler, Rensselaer Polytechnic Institute

The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.

The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs that were administered as part of the American Relief Plan has been a major concern to the government.

The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.


Here’s what greets you when you click the link to sign into your IRS account. If current plans remain in place, the blue button will go away in the summer of 2022.
Screenshot, IRS sign-in webpage

As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring some of the issues with government use of facial recognition technology, both its applications and its potential flaws. There have been a great number of concerns raised over the general use of this technology in policing and other government functions, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well.

ID dot who?

ID.me is a private company that formed as TroopSwap, a site that offered retail discounts to members of the armed forces. As part of that effort, the company created an ID service so that military staff who qualified for discounts at various companies could prove they were, indeed, service members. In 2013, the company renamed itself ID.me and started to market its ID service more broadly. The U.S. Department of Veterans Affairs began using the technology in 2016, the company’s first government use.

To use ID.me, a user loads a mobile phone app and takes a selfie – a photo of their own face. ID.me then compares that image to various IDs that it obtains either through open records or through information that applicants provide through the app. If it finds a match, it creates an account and uses image recognition for ID. If it cannot perform a match, users can contact a “trusted referee” and have a video call to fix the problem.

A number of companies and states have been using ID.me for several years. News reports have documented problems people have had with ID.me failing to authenticate them, and with the company’s customer support in resolving those problems. Also, the system’s technology requirements could widen the digital divide, making it harder for many of the people who need government services the most to access them.

But much of the concern about the IRS and other federal agencies using ID.me revolves around its use of facial recognition technology and collection of biometric data.

Accuracy and bias

To start with, there are a number of general concerns about the accuracy of facial recognition technologies and whether there are discriminatory biases in their accuracy. These have led the Association for Computing Machinery, among other organizations, to call for a moratorium on government use of facial recognition technology.

A study of commercial and academic facial recognition algorithms by the National Institute of Standards and Technology found that U.S. facial-matching algorithms generally have higher false positive rates for Asian and Black faces than for white faces, although recent results have improved. ID.me claims that there is no racial bias in its face-matching verification process.

There are many other conditions that can also cause inaccuracy – physical changes caused by illness or an accident, hair loss due to chemotherapy, color change due to aging, gender conversions and others. How any company, including ID.me, handles such situations is unclear, and this is one issue that has raised concerns. Imagine having a disfiguring accident and not being able to log into your medical insurance company’s website because of damage to your face.


Facial recognition technology is spreading fast. Is the technology – and society – ready?

Data privacy

There are other issues that go beyond the question of just how well the algorithm works. As part of its process, ID.me collects a very large amount of personal information. It has a very long and difficult-to-read privacy policy, but essentially while ID.me doesn’t share most of the personal information, it does share various information about internet use and website visits with other partners. The nature of these exchanges is not immediately apparent.

So one question that arises is what level of information the company shares with the government, and whether the information can be used in tracking U.S. citizens between regulated boundaries that apply to government agencies. Privacy advocates on both the left and right have long opposed any form of a mandatory uniform government identification card. Does handing off the identification to a private company allow the government to essentially achieve this through subterfuge? It’s not difficult to imagine that some states – and maybe eventually the federal government – could insist on an identification from ID.me or one of its competitors to access government services, get medical coverage and even to vote.

As Joy Buolamwini, an MIT AI researcher and founder of the Algorithmic Justice League, argued, beyond accuracy and bias issues is the question of the right not to use biometric technology. “Government pressure on citizens to share their biometric data with the government affects all of us — no matter your race, gender, or political affiliations,” she wrote.

Too many unknowns for comfort

Another issue is who audits ID.me for the security of its applications? While no one is accusing ID.me of bad practices, security researchers are worried about how the company may protect the incredible level of personal information it will end up with. Imagine a security breach that released the IRS information for millions of taxpayers. In the fast-changing world of cybersecurity, with threats ranging from individual hacking to international criminal activities, experts would like assurance that a company provided with so much personal information is using state-of-the-art security and keeping it up to date.


Much of the questioning of the IRS decision comes because these are early days for government use of private companies to provide biometric security, and some of the details are still not fully explained. Even if you grant that the IRS use of the technology is appropriately limited, this is potentially the start of what could quickly snowball to many government agencies using commercial facial recognition companies to get around regulations that were put in place specifically to rein in government powers.

The U.S. stands at the edge of a slippery slope, and while that doesn’t mean facial recognition technology shouldn’t be used at all, I believe it does mean that the government should put a lot more care and due diligence into exploring the terrain ahead before taking those critical first steps.

James Hendler, Professor of Computer, Web and Cognitive Sciences, Rensselaer Polytechnic Institute

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Noshir Contractor elected president of ICA

WST Trustee and Web Science researcher Prof. Noshir Contractor has been elected as the next president of the prestigious ICA (International Communication Association).

Click here to see the details of the election.

About Noshir

Noshir S. Contractor is the Jane S. & William J. White Professor of Behavioral Sciences in the School of Engineering, School of Communication and the Kellogg School of Management at Northwestern University, USA. He is the director of the SONIC Lab and a Trustee of the Web Science Trust.

About the ICA

(from the current president’s introduction)

ICA started 70 years ago as a small organization of U.S.-based researchers. It has expanded to boast more than 6000 members in over 80 countries. Since 2003, we have been officially associated with the United Nations as a nongovernmental organization (NGO).

We publish five internationally renowned, peer-reviewed journals: Communication, Culture, and Critique (CCC), Communication Theory (CT), Human Communication Research (HCR), Journal of Communication (JoC), and the Journal of Computer-Mediated Communication (JCMC). Journal of Communication is the world’s top-ranked communication journal on SCImago, and Communication Theory is ranked #5.

WSTNet Lab Profile: RPI

RPI (Rensselaer Polytechnic Institute) is based in Troy, New York, and comprises 30 research centres and over 750 PhD students. Its research teams are engaged in projects worth over $100 million.

RPI’s Center for Computational Innovations (CCI) is home to AiMOS (Artificial Intelligence Multiprocessing Optimized System, named in honor of Rensselaer co-founder Amos Eaton), an eight-petaflop machine that was among the most powerful supercomputers on the November 2019 Top 500 ranking.

RPI is making AiMOS available (in partnership with IBM, academic institutions, and national labs), along with access to the expertise of world-class faculty in data, artificial intelligence, networking, therapeutic interventions, materials, public health, and other areas necessary to understand and address the threat of COVID-19.

RPI hosts the Tetherless World Constellation (TWC) which is an active WSTNet laboratory associated with the Web Science community.

 

Deborah McGuinness is a professor in the computer science and cognitive science departments and the director of the Web Science Research Center at Rensselaer. She is a leading expert on knowledge representation and reasoning languages, ontology creation and evolution environments, and provenance.  

 

She is a long-time friend and supporter of Web Science and is best known for her research on the Semantic Web and for bridging artificial intelligence and eScience. An extension of the World Wide Web, the Semantic Web allows computers and other electronics and robotics to communicate and interact without requiring human intervention; it uses information encoded in Web ontology languages to allow computers to “talk” to and understand one another. She is a professor and lab director at the Tetherless World Constellation.

McGuinness’ work on ontology languages and semantic environments opens the Semantic Web to a broader user base and enables semantic applications to proliferate. McGuinness is one of the founders of an emerging area of semantic eScience—introducing encoded meaning or semantics to virtual science environments. Within this intersection of artificial intelligence and eScience, McGuinness is engaged in using semantic technologies in a range of health and environmental applications.

She has published more than 200 papers on semantic eScience, data science, knowledge-based systems, ontology environments, configuration, search technology, and intelligent applications, and holds five patents. She recently won the Robert Engelmore Memorial Association for the Advancement of Artificial Intelligence Award for leadership in Semantic Web research and in bridging artificial intelligence and eScience, as well as significant contributions to deployed artificial intelligence applications.

McGuinness earned a bachelor’s degree in computer science and in mathematics from Duke University, a master’s degree in computer science and electrical engineering from the University of California at Berkeley, and a doctoral degree in knowledge representation from Rutgers University.

Click here to see TWC’s WSTNet page
Click here to visit the RPI website

 

New Book by Phil Howard OII WSTNet Lab Director

Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives

Philip N. Howard is director of the Oxford Internet Institute and the author of nine books, including Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up, which was praised in the Financial Times as “timely and important.” He is a frequent commentator on the impact of technology on political life, contributing to the New York Times, Financial Times, and other media outlets.

Format: Hardback
Publication date: 23 Jun 2020
ISBN: 9780300250206
Imprint: Yale University Press
Dimensions: 240 pages; 216 x 140 x 22mm

Read more at Yale Books

Dame Wendy Hall appointed to Ada Lovelace Institute

We are delighted to announce the appointment of Professor Dame Wendy Hall as Chair of the Ada Lovelace Institute.

Dame Wendy Hall DBE, FRS, FREng is one of the world’s foremost computer scientists and plays a leading role in shaping science and engineering policy and education in the UK and internationally. She is the UK’s first AI Skills Champion and Regius Professor of Computer Science at the University of Southampton, where she is also Executive Director of the Web Science Institute.

Dame Wendy was appointed by the Nuffield Foundation – the independent funder of the Ada Lovelace Institute – following an open recruitment process. Her three-year term as Chair will begin on 1 June 2020, succeeding Sir Alan Wilson, who retired as Executive Chair in February having led the Institute’s development phase.

Dame Wendy co-Chaired the UK government’s AI Review, published in 2017, and is a member of the AI Council, an independent expert committee providing advice to government and high-level leadership of the AI ecosystem in the UK. She is also Executive Director of the Web Science Trust, which has a global mission to support the development of research, education and thought leadership in Web Science.

During her distinguished career, Dame Wendy has been President of the Association for Computing Machinery (ACM) and the British Computer Society, Senior Vice President of the Royal Academy of Engineering, and a member of the UK Prime Minister’s Council for Science and Technology. She was a founding member of the European Research Council and Chaired the European Commission’s IST Advisory Group from 2010-2012. Her previous international roles include membership of the Global Commission on Internet Governance and the World Economic Forum’s Global Futures Council on the Digital Economy.

Sir Keith Burnett, Chair of the Nuffield Foundation said: ‘Dame Wendy Hall is one of the most influential scientists in the UK and the Nuffield Foundation is delighted to appoint her Chair of the Ada Lovelace Institute. Dame Wendy’s research has been a driving force in the development of her discipline, and through her senior leadership and advisory roles she has shaped science and technology policy both in the UK and internationally.

‘The Ada Lovelace Institute, although a relatively new organisation, is already providing a much-needed independent, evidence-led voice in the public debate on how data and AI should be used in the interests of people and society – most recently in relation to the use of technology in the public health response to the COVID-19 crisis. With Dame Wendy as Chair, I have every confidence the Institute will continue to make progress towards its goal of ensuring the benefits of data and AI are justly and equitably distributed.’

Dame Wendy Hall said: ‘I am very excited to be offered the opportunity to become Chair of the Ada Lovelace Institute. I have been very impressed with what the Institute has achieved since its inception and the commitment of the Nuffield Foundation to its development. This is a wonderful opportunity for me to work with Carly Kind and her team to help ensure the Institute continues to make a significant impact in the world of AI and data ethics by taking an evidence-led approach to the development of policy and practice in this area, which is something I am passionate about.’

Carly Kind, Director of the Ada Lovelace Institute said: ‘Dame Wendy brings to the Ada Lovelace Institute not only her expertise in computer science, but also her pioneering insights into the sociotechnical nature of AI and data-driven systems – a perspective that is critical to the Ada Lovelace Institute’s approach to policy and practice. We are honoured that Dame Wendy will lead our already august Board, deepening the Institute’s expertise in data science and building connections across academia, government and industry.’

About the Ada Lovelace Institute
The Ada Lovelace Institute is a research institute and deliberative body dedicated to ensuring that data and AI work for people and society. In addition to its ongoing work programmes, the Ada Lovelace Institute is currently undertaking research projects to help inform understanding of the COVID-19 pandemic and its effects on data and AI. Last month it published a rapid evidence review, Exit through the App Store?, to inform how the Government and the NHS adopt technical solutions to aid in the transition from the COVID-19 crisis.