Book: Seven Veils of Privacy

Here, Kieron O’Hara details a framework of seven levels to help separate the effects and affects of privacy from the facts. By examining when a privacy boundary is or is not crossed, the framework helps citizens think about when that crossing is problematic, and why the answer differs not only across cultures but also across generations, and even for the same individual over time.

Privacy is one of the most contested concepts of our time. This book sets out a rigorous and comprehensive framework for understanding debates about privacy and our rights to it.

Much of the conflict around privacy comes from a failure to recognise divergent perspectives. Some people argue about human rights, some about social conventions, others about individual preferences and still others about information and data processing. As a result, ‘privacy’ has become the focus of competing definitions, leading some to denounce the ‘disarray’ in the field.

But as this book shows, disagreements about the role and value of privacy obscure a large amount of agreement on the topic. Privacy is not a technical term of law, cybersecurity or sociology, but a word in common use that adequately expresses a few simple and related ideas.




  • Format: Hardcover
  • ISBN: 978-1-5261-6302-8
  • Pages: 384
  • Price: £85.00
  • Published Date: July 2023

WSTNet Interview: Dhiraj Murthy

Q. Dhiraj – thanks for taking the time to speak to us today. Even though your Lab is our newest WSTNet member, you’ve been analysing media for a long time. Can you talk to us about your journey into social media and Web Science?

A. Of course. My interest started during post-graduate research in Sociology at Cambridge, where I was looking at how traditional (non-technical) social spaces and interactions were being supplemented, augmented and even replaced by what (at the time) were new technologies such as blogs, newsgroups and forums (all asynchronous, pre-web technologies). I was particularly interested in how group identities were affected by the technologies that were mediating the interactions. I looked at the way in which musicians collaborated using technology vs. live interactions and how this might impact participants’ sense of identity, say, in terms of race/ethnicity. My work went on to look at impacts and opportunities around natural disasters such as hurricanes, and we’ve gone on to study several US and international incidents. All of this work required the collection and analysis of what we could broadly call (in today’s terms) social network data, which could be classified, mapped and visualised to help uncover and understand the observed effects. That process has been going on for me since 2006.

Q. Even those early projects and research questions still sound very relevant and quite contemporary. Would you say that during that time the process of developing research questions has remained relatively stable whilst the types and volumes of data that can be employed have changed rather more substantially? 

A. Working internationally during the dot-com era, I witnessed a great deal of competitive system development for social messaging systems. The key changes seemed to be the ubiquity of messaging standards like the SMS text message (whose constrained format inspired Twitter’s original 140-character limit), driven by the growth of mobile networks, and, critically, the rise of social “platforms” like Twitter (and later WhatsApp). These transformed the culture of what had been a private point-to-point messaging model into high-speed, real-time shared messaging spaces with APIs that (initially) disclosed information (both data and metadata) about networks of content and networks of users across multiple locations. Twitter was instrumental in developing this model into what became a new opportunity for disciplines like Web Science to do detailed analysis on huge data sets.

Q. So if the availability of data and data types have driven/enabled the research in this way what has been your experience with the transition of Twitter to X and the loss of access to the Twitter API? 

A. The broader issues with social media APIs, data-scraping bans and the resulting legal battles have obviously shaped the way in which data can be gathered and analysed and, arguably, have transformed what it means to do Web Science. But equally we have seen a continuing trend in which the Web/Internet overall has become less overtly text-based and much more visual, with the enormous growth in video platforms. This means that as Web Scientists we have had to innovate and develop new and better techniques around computer vision, video analysis and the currently available data sets in order to do quality research. We now combine our data archives with new data and new ways to annotate and analyse it, using mixed methods to work with “small data” at a more personal level, rather than the firehose (i.e., complete) data sets that are no longer available.

Q. If you are looking at more video data will the recent rise of high-quality (deep fake) AI generated video cause you particular difficulties?

A. Well, bots and fake data have been around (in a smaller way) since the very beginning – there were simple bots to be found in early newsgroups – so fake data and bots are not a new thing at all. The scale and sophistication of the most recent examples is obviously more concerning, though, and hence we are also looking at how we might better detect bad data and misinformation.

Q. Is that your main area of interest?

A. Not only that. We continue to look at ways in which the Web may (dis)empower society and how we might identify and promote (or inoculate against) those effects. We continue to look at group social behaviours during natural disasters, where we have followed a number of US and international hurricane events. We’ve studied how cancer is reported on Twitter and how this relates to disease incidence across regions and groups, as well as the enculturation of young people into vaping (i.e., e-cigarettes) and how much impact social media images and messages may have in that process. But we have also been looking at identifying misinformation, and at tools (beyond labelling) to help users recognise misleading information and understand how it spreads.

Q. Many thanks for spending time to talk about your work and the Computational Media Lab. We have listed some papers and a link to your website below.


Dhiraj Murthy is the head of the WSTNet Computational Media Lab at the University of Texas at Austin.

To read more about Dhiraj’s work and the Austin Lab click below


Interest in hosting WebSci’26

The Steering Committee of ACM WebSci is seeking statements of interest from organizations or consortia interested in hosting the 18th ACM Web Science Conference (WebSci 2026). The conference series usually moves between continents. We will accept bids from all locations, but for the 2026 conference we will give preference to bids from within Europe. We expect the conference to take place in May-June. Co-location with other ACM conferences will be considered, and hosting the conference as a hybrid event is encouraged. Please include a statement on how you would propose offering remote attendance.

The process consists of two stages. During this first stage, the Steering Committee solicits informal statements of interest through an open call. We will prefer statements that commit to running an event with low registration costs, encouraging participants from all disciplines, including those with fewer financial resources.

Organizations wishing to host the conference should contact Susan Davies with a short paragraph outlining their interest, which should include the main organizer, the proposed venue and potential dates. Any organization can apply to host the conference, but the local organizing committee must include a representative of a local research group.

Once the first phase is complete, the Steering Committee will shortlist applicants, who will be invited to submit a full proposal.

The important dates for applying to host the Conference are:

Friday 10 May:  Deadline for receiving statements of interest
Friday 24 May:  Notifications to shortlisted bids are sent out
Friday 5 July:  Formal applications received from shortlisted bids
Friday 19 July: Shortlisted applicants informed

The hosts for ACM WebSci 2025 will be announced at this year’s conference in Stuttgart, Germany from 21-24 May.

WSTNet PhD Interview: Sungwon Jung

Q. Thanks for joining me Sungwon – could you tell us a little about yourself?

A. I’m a doctoral student in Journalism & Media at the University of Texas at Austin, with a particular focus on social media. I’m really interested in what social media can tell us about group behaviour.

Q. How did you come to be interested in the Web and Web Science methods?

A. I guess, like a lot of other colleagues, it comes from an interdisciplinary background. My Bachelor’s was in Sociology – I got interested in how people come together to take collective actions (so-called network actions) and the processes underlying that. To understand that, I thought computational methods would be really helpful, so I got a Master’s in Data Science, which ultimately led me to researching social media data as a proxy for how people act and interact.

Q. What shape does that take?

A. Broadly speaking, I am using computational methods to look at how people behave on social media platforms, where individual actions may become collective actions (via networks), and the extent to which this might predict or explain larger societal actions.

Q. What projects have you been working on?

A. Initially I worked on political polarisation between different Indian groups using TikTok data. The chief focus was on polarisation between Indian diaspora groups and Indian homeland groups, though there were also religious divisions between Hindu and Muslim groups.

Q. So within religious groups there would have been a common cultural background, but differences in social environment coming from local influences in India or overseas. Interesting.

A. We were looking to develop new techniques to study social media data, both in terms of the content of the messages and the metadata from hashtags. This can be quite challenging to interpret as a researcher without an Indian cultural background, as in the case of group hashtags such as #NRI and #Modi (NRI being “Non-Resident Indian” and Modi a leading figure in Indian politics), so we are dealing with a user-developed “folksonomy” rather than a more formal taxonomy.

Q. What is your current research focussing on?

A. Now I am working with AI-based vision and data science techniques to study the impact of social media on health, using social media data on vaping and e-cigarettes. We believe social media influences and shapes young people’s understanding of smoking and vaping health outcomes, and at this early stage of understanding vaping health issues, social influence and peer pressure are potentially very important.

Q. In the same way that media depictions (Movies and TV) shaped the perception of tobacco usage for earlier generations of young people?

A. Exactly. Most users in this TikTok group are aged 18-25 and may well be significantly affected by peer pressure on social media. For example, vape cloud competitions confer bragging rights and status based on the size of cloud that can be produced.

Q. Presumably, whilst this is less harmful than, say, competitive self-harm or anorexia support groups, it nonetheless involves group behaviour and peer pressure.

A. Exactly. We also observed significant amounts of co-reporting (tacking on) of vaping to other activities: e.g., “I am playing X + vaping” or “I am doing Y + vaping”. So I am also interested in why these groups are reporting vaping in other contexts.

Q. How are you looking at the data?

A. I’m using TikTok metadata around each posting and developing computer vision techniques to look at images and video. That way we analyse the post itself in terms of the image or video as well as any annotation from metadata and tags. We analyse the post with image analysis, video speech-to-text conversion, plus user text descriptions and tags. There is no TikTok API, so we need to scrape manually.
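The multi-modal analysis described above can be sketched in outline. This is a hypothetical, simplified illustration, not the Lab’s actual pipeline: the `PostAnnotation` record, its field names and the keyword list are all invented for the example, and the real work would use trained vision and speech-to-text models rather than pre-supplied labels.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a per-post record: each modality (image
# labels, transcribed speech, user caption, hashtags) is annotated
# separately, then merged so the post can be analysed as a whole.
@dataclass
class PostAnnotation:
    post_id: str
    image_labels: list = field(default_factory=list)  # from image analysis
    transcript: str = ""                              # from speech-to-text
    description: str = ""                             # user-written caption
    hashtags: list = field(default_factory=list)      # from scraped metadata

    def mentions_vaping(self, keywords=("vape", "vaping", "e-cigarette")):
        """Flag the post if any modality mentions a target keyword."""
        text = " ".join(
            [self.transcript, self.description]
            + self.image_labels + self.hashtags
        ).lower()
        return any(k in text for k in keywords)

post = PostAnnotation(
    post_id="12345",
    image_labels=["person", "vapour cloud"],
    transcript="watch this cloud",
    description="big clouds today",
    hashtags=["#vape", "#fyp"],
)
print(post.mentions_vaping())  # True: "#vape" appears in the hashtags
```

Combining signals this way is what lets the cross-checking mentioned below catch posts that carry a target hashtag without any vaping content in the video itself.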

Q. What are the challenges here?

A. Whilst it is not hard to get data, it may be harder to confirm that it is valid and complete. We may not be looking at all the relevant hashtags (and these may change over time), and posts may include target hashtags such as #vape even when the post is not actually focussed on vaping – perhaps users are including popular hashtags to get more likes.

The data itself is largely unstructured, so we have to do more cross-checking: however good our analytical approach may be, if the source data is flawed then we are going to get unreliable results – garbage in, garbage out. This is especially true for image and video analysis, as we are starting to see challenges from fake data produced by bots and LLMs, and with the current rise of AI video, AI content (deep fakes etc.) is polluting data streams in ways that may distort research findings.

Ultimately we can try to analyse what is happening, but the causes may remain elusive. Why do they vape, and even compete at vaping? What are the underlying models driving the behaviour? Social science research at this scale was previously not possible (think of analysing 50 paper questionnaires vs. 50 million social media data points). This is the new norm and seems impressive, but whilst it is much easier than ever to gather data, we need to worry more than ever about quality.

Q. What are the future objectives for this research?

A. Understanding vaping as a “normal” activity vs. a deviant one; understanding social bonding and competitive behaviour; exploring the idea of “vape” vs. “vape challenge”; and looking at how social rewards correlate with individual behaviour, creating larger network (group) behaviours, and the extent to which these behaviours buy group membership, getting the user more attention and higher status.

Q. Thanks for speaking to me today and good luck with the rest of your research.

Sungwon Jung is a doctoral student in Journalism & Media at the University of Texas at Austin.

She is interested in the impacts of social media on health and in studying how individual actions can become collective (network) actions.

Can this approach shed any light on future health trends and the importance of messaging for young people as they form more/less healthy habits as part of social learning? 

Research shows AI watermarks easily removed

Researchers at ETH Zurich have discovered that watermarks used to identify AI-generated text can be easily removed and copied, making them ineffective. These attacks undermine the credibility of watermarks and could therefore deceive people into trusting misleading text. Watermarking involves hiding patterns in AI-generated text to indicate its origin, but this recent research suggests there are flaws in this technology.

Watermarking algorithms categorize words into “green” and “red” lists and make AI models choose words primarily from the green list. However, attackers can reverse-engineer these watermarks by analyzing AI responses and comparing them with normal text. This enables them to steal the watermark and launch two types of attacks: spoofing, where fake watermarked text is created, and scrubbing, where watermarks are removed from AI-generated text.
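The green/red-list idea can be illustrated with a toy detector. This is a deliberately simplified sketch, not the scheme studied at ETH Zurich: the secret key is invented, whole words stand in for model tokens, and real schemes typically seed the green list from the preceding token rather than hashing each word independently.

```python
import hashlib

# Hypothetical secret key shared by the watermarker and the detector.
KEY = b"secret-watermark-key"

def is_green(word: str) -> bool:
    """Deterministically assign each word to the green or red list."""
    digest = hashlib.sha256(KEY + word.lower().encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is "green"

def green_fraction(text: str) -> float:
    """Fraction of words drawn from the green list."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

# Unwatermarked text should hover near 0.5; watermarked text, where
# the model preferentially chose green words, sits well above it.
# An attacker who can query the model repeatedly can estimate
# is_green() for common words and then spoof or scrub the mark.
```

The attacks described above exploit exactly this structure: once enough of the green list is reverse-engineered, writing fake “watermarked” text (spoofing) or paraphrasing green words away (scrubbing) becomes straightforward.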

The research team successfully spoofed watermarks 80% of the time and stripped watermarks from text 85% of the time. Other researchers, such as Soheil Feizi from the University of Maryland, have also highlighted the vulnerability of watermarks to spoofing attacks.

Despite these challenges, watermarks remain a promising method for detecting AI-generated content, but further research is needed to improve their reliability. Until then, caution should be exercised when deploying such detection mechanisms on a large scale. Managing expectations regarding the reliability of these tools is crucial, as they are still considered useful even if imperfect.

This article is summarised from the original, which appeared in MIT Technology Review.