WSTNet Lab Profile: Cardiff HateLab

Cardiff University is home to a WSTNet lab with two related but distinct groups: Pete Burnap’s Social Data Lab, whose COSMOS platform for data visualisation and analysis makes social media analysis far more accessible to non-coding academics, and Matt Williams’s HateLab, which uses a COSMOS-based dashboard to identify and analyse hate speech structures and trends across a range of social media sources, covering modern forms of online hate including racial, political, gender and religious intolerance.

Williams, who holds a chair in Criminology at Cardiff, has been researching the clues left in social media since 2011, but was frustrated by the lack of tools accessible to any but the most skilled coders. He worked with Prof. Pete Burnap to develop a more user-friendly toolset called COSMOS, which allows researchers to focus on the meanings and interpretations of social media data rather than on the underlying technologies.
With the new tools and possibilities delivered by COSMOS, new research questions began to surface, and the “Hate Speech and Social Media” project was launched in 2013. This led to the founding of HateLab, where Matt has been Director since 2017 and where his group has attracted more than £3m in funding. He has published a series of papers, and in 2021 he summarised more than 20 years of research in his book The Science of Hate.
HateLab could be seen as something of a poster child for Web Science, having been featured widely in the press and media, with HateLab research covered in: LA Times, New York Post, The Guardian (also here), The Times (also here and here), The Financial Times, The Independent, Telegraph (also here), Tortoise, New Scientist, Politico, BBC News, The Register, ComputerWeekly, Verdict, Sky News, TechWorld and Police Professional. On TV, their research underpinned an episode of BBC One’s Panorama, an episode of ITV’s Exposure and an ITV NEWS special report. HateLab has also been used as part of the National Online Hate Crime Hub announced by the UK Home Secretary in 2017.
HateLab collects data from several platforms including Twitter (which has highlighted the lab as a featured developer partner), 4chan, Telegram and Reddit. Its tools look for trends and patterns, using AI techniques to link the timing and appearance of online hate speech with physical acts of violence. Williams has found characteristic patterns and timings in his work (he calls this the “half-life” of hate speech), and this may be critical in understanding how to manage, calm or delay responses in online communities, since strong reactions (especially physical reactions to online hate speech) appear to fade quickly and to be much more temporary in nature than other forms of crime.
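To make the “half-life” idea concrete, here is a minimal sketch of fitting an exponential decay curve to the volume of hateful posts following a trigger event. The data, the 48-hour window and the decay model are illustrative assumptions, not HateLab’s actual method or code.

```python
# Minimal sketch: estimate the "half-life" of a hate-speech spike by
# fitting an exponential decay to hourly post counts after a trigger
# event. All data here is synthetic and purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, rate):
    """Exponential decay: n0 posts/hour at t=0, decaying at `rate` per hour."""
    return n0 * np.exp(-rate * t)

# Hypothetical hourly counts of hateful posts over the 48 hours
# following a trigger event (true half-life ~4.6 hours), plus noise.
hours = np.arange(48, dtype=float)
counts = decay(hours, 500, 0.15) + np.random.default_rng(0).normal(0, 10, 48)

# Fit the model and convert the decay rate into a half-life.
(n0_est, rate_est), _ = curve_fit(decay, hours, counts, p0=(counts[0], 0.1))
print(f"Estimated half-life: {np.log(2) / rate_est:.1f} hours")
```

If the half-life really is short, rapid moderation in the first few hours after a trigger event would matter far more than any later intervention.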
Whilst it is perhaps clear that real-world “trigger” events (such as Covid, Brexit, Trump speeches, the London Bridge attacks etc.) can and do give rise to waves of online reactions (with hate being the least desirable of these), it is perhaps less obvious, and more interesting, to consider that a certain level and timing of hate speech might be associated with, and contribute to, higher levels of physical violence. HateLab is exploring predictive models which would not only allow non-academic groups to gauge and better manage different types of hate speech and volatile communities online, but might also help to prevent online hate spilling over into physical violence.
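As a toy illustration of the kind of association such models look for, the sketch below regresses a synthetic series of daily offline incidents on lagged daily hate-speech counts; a peak in fit quality at a positive lag would suggest online spikes precede physical violence. Everything here (the data, the two-day lag, the plain least-squares fit) is an assumption for illustration, not HateLab’s methodology.

```python
# Toy illustration: do spikes in online hate speech precede offline
# incidents? Regress daily incident counts on hate-speech counts at
# several lags and see where the fit is best. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
days = 365
hate = rng.poisson(100, size=days).astype(float)   # daily hateful-post counts
hate[90] += 400                                    # a "trigger event" spike

# Hypothetical incidents, partly driven by hate speech two days earlier.
lagged_hate = np.concatenate([hate[:2], hate[:-2]])
incidents = 5 + 0.02 * lagged_hate + rng.normal(0, 1, days)

for lag in range(5):
    x, y = hate[:days - lag], incidents[lag:]      # hate[t] vs incidents[t+lag]
    X = np.column_stack([np.ones_like(x), x])
    coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - res[0] / np.sum((y - y.mean()) ** 2)
    print(f"lag={lag} days: slope={coef[1]:+.4f}, R^2={r2:.3f}")
```

In this synthetic setup the fit should peak at a lag of two days, mirroring the built-in delay; with real data, establishing causality rather than mere correlation is of course far harder.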
The recent case of ex-President Trump and his online incitement to march on the Capitol building is a chilling example of the need for this sort of model.
We asked Matt for his take on the new owner at Twitter, and on how Musk’s views on free speech might affect his research and his overall objective of reducing hate speech…
“Twitter have been really busy since 2015 trying to manage the whole online harm issue and frankly they’ve done a pretty good job. They’ve employed huge numbers of moderators who have ensured that a lot of the more unpleasant material that is ON the platform (and that we have access to via the API for research purposes) is not VISIBLE on the platform, where ordinary users can be harmed by it. There is obviously a trade-off between the notion of online harm and freedom of speech, and we’ll have to wait and see what effect Elon’s new policies have on the resurgence of what is thought to be harmful content. Certainly we’ve seen a reduction in the amount of hate speech across the Twitter API over recent months and years, but it’s unclear whether users have migrated to more tolerant platforms or whether the Twitter filtering is now being reflected in the API output. Overall we’ve had a very positive relationship with Twitter and we’d obviously like to continue to work with them.”
DISCLOSURE:
I have to admit to being just a tiny bit disappointed that Matt is not also the brains behind HateLab: the London-based cyberpunk band which I stumbled on when googling more about his work 😉