WSTNet Interview: Dhiraj Murthy

Q. Dhiraj – thanks for taking the time to speak to us today. Even though your Lab is our newest WSTNet member, you’ve been analysing media for a long time. Can you talk to us about your journey into social media and Web Science?

A. Of course. My interest started during post-graduate research in Sociology at Cambridge, where I was looking at how traditional (non-technical) social spaces and interactions were being supplemented, augmented and even replaced by what (at the time) were new technologies such as blogs, newsgroups and forums (all async, pre-web technologies). I was particularly interested in how group identities were affected by the technologies that were mediating the interactions. I looked at the way in which musicians collaborated using technology vs. live interactions and how this might impact participants’ sense of identity, say, in terms of race/ethnicity. My work went on to look at impacts and opportunities around natural disasters such as hurricanes, and we’ve gone on to study several US as well as international incidents. All of this work required the collection and analysis of what we could broadly call (in today’s terms) social network data, which could be classified, mapped and visualised to help uncover and understand the observed effects. That process has been going on for me since 2006.

Q. Even those early projects and research questions still sound very relevant and quite contemporary. Would you say that during that time the process of developing research questions has remained relatively stable, whilst the types and volumes of data that can be employed have changed rather more substantially?

A. I witnessed a great deal of competitive system development for social messaging systems working internationally during the dot-com era, and the key changes seemed to be the ubiquity of messaging standards like the SMS text message (whose constraints inspired Twitter’s original 140-character format) based on the growth of mobile networks and, critically, the rise of social “platforms” like Twitter (and later WhatsApp), which transformed the culture of what had been a private point-to-point messaging model into high-speed, real-time shared messaging spaces with APIs that (initially) disclosed information (both data and metadata) about the networks of content and networks of users across multiple locations. Twitter was instrumental in developing this model into what became a new opportunity for disciplines like Web Science to do detailed analysis on huge data sets.

Q. So if the availability of data and data types has driven/enabled the research in this way, what has been your experience with the transition of Twitter to X and the loss of access to the Twitter API?

A. The broader issues with social media APIs, data-scraping bans and the resulting legal battles have obviously shaped the way in which data can be gathered and analysed and, arguably, have transformed what it means to do Web Science; but equally we have seen a continuing trend in which the Web/Internet overall has become less overtly text-based and much more visual, with the enormous growth in video platforms. This means that as Web Scientists we have had to innovate and develop new and better techniques around computer vision, video analysis and the currently available data sets to do quality research. We now combine our data archives with new data and new ways to annotate and analyse data using mixed methods, so that we can work with “small data” at a more personal level vs. the firehose (i.e., complete) data sets that are no longer available.

Q. If you are looking at more video data will the recent rise of high-quality (deep fake) AI generated video cause you particular difficulties?

A. Well, bots and fake data have been around (in a smaller way) since the very beginning – there were simple bots to be found in early newsgroups, so fake data and bots are not a new thing at all – though the scale and sophistication of the most recent examples is obviously more concerning, and hence we are also looking at how we might better detect bad data and misinformation.

Q. Is that your main area of interest?

A. Not only that. We continue to look at ways in which the Web may (dis)empower society and how we might identify and promote (or inoculate against) those effects. We continue to look at group social behaviours during natural disasters, where we have followed a number of US and international hurricane events. We’ve studied how cancer is reported on Twitter and how this relates to disease incidence across regions/groups, as well as the enculturation of young people into vaping (i.e., e-cigarettes) and how much impact social media images and messages may have in that process. But we have also been looking at identifying misinformation and tools (beyond labelling) to help users identify misleading information and how it spreads.

Q. Many thanks for spending time to talk about your work and the Computational Media Lab. We have listed some papers and a link to your website below.


Dhiraj Murthy is the head of the WSTNet Computational Media Lab at the University of Texas at Austin.

To read more about Dhiraj’s work and the Austin Lab click below


WSTNet PhD Interview: Sungwon Jung

Q. Thanks for joining me Sungwon – could you tell us a little about yourself?

A. I’m a doctoral student in Journalism & Media at the University of Texas at Austin, with a focus on social media. I’m really interested in what social media can tell us about group behaviour.

Q. How did you come to be interested in the Web and Web Science methods?

A. I guess, like a lot of other colleagues, it comes from an interdisciplinary background. My Bachelor’s was in Sociology – I got interested in how people come together to take collective actions (so-called network actions) and the processes underlying that. To understand that, I thought computational methods would be really helpful, and so I got a Master’s in Data Science, which ultimately led me to researching social media data as a proxy for how people act and interact.

Q. What shape does that take?

A. Broadly speaking, I am using computational methods to look at how people behave on social media platforms, where individual actions may become collective actions (via networks), and the extent to which this might predict/explain larger societal actions.

Q. What projects have you been working on?

A. Initially I worked on issues of political polarisation between different Indian groups using TikTok data, where the chief focus was on polarisation between Indian diaspora groups vs. Indian homeland groups, though there were also religious divisions between Hindu and Muslim groups.

Q. So within religious groups there would have been a common cultural background but differences in social environment coming from local influences in India or overseas. Interesting.

A. We were looking to develop new techniques to study social media data, both in terms of the content of the messages and the metadata from hashtags. This can be quite challenging to interpret as a researcher without an Indian cultural background, as in the case of group hashtags such as #NRI and #Modi (NRI being “Non-Resident Indian” and Modi a leading figure in Indian politics), so we are dealing with a user-developed “folksonomy” vs. a more formal taxonomy.

Q. What is your current research focussing on?

A. Now I am working with AI-based vision and data science techniques to study the impact of social media on health, using social media data on vaping and e-cigarettes. We believe social media influences and shapes young people’s understanding of smoking/vaping health outcomes, and at this early stage of understanding vaping health issues, social influence and peer pressure are potentially very important.

Q. In the same way that media depictions (Movies and TV) shaped the perception of tobacco usage for earlier generations of young people?

A. Exactly. The average age of users in this TikTok group is 18–25, and they may well be significantly affected by peer pressure on social media: e.g., VapeCloud competitions display bragging rights/status based on the size of cloud that can be produced.

Q. Presumably, whilst we would observe that this is less negative than, say, competitive self-harm or anorexia support groups, it nonetheless involves group behaviour and peer pressure.

A. Exactly. We also observed significant amounts of co-reporting (tacking on) of vaping to other activities: e.g., “I am playing X + vaping” or “I am doing Y + vaping”. So I am also interested in why these groups are reporting vaping in other contexts.

Q. How are you looking at the data?

A. I’m using TikTok (meta)data around the posting and developing computer vision techniques to look at images and video. That way we analyse the post itself in terms of the image/video as well as any annotation from metadata/tags. We analyse the post with image analysis and video speech-to-text conversion, plus user text descriptions and tags. There is no TikTok API, so we need to scrape manually.
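The combination of hashtag metadata and converted speech that Sungwon describes could be sketched, very roughly, as follows. This is a hypothetical illustration only: the term list and function names are my own, and the computer-vision and speech-to-text stages are stubbed out as plain strings.

```python
import re

# Illustrative seed terms for vaping-related content; a real study would
# build and validate this list from the data itself.
VAPING_TERMS = {"vape", "vaping", "ecig", "vapetricks"}

def extract_hashtags(caption):
    """Pull hashtags out of a post caption (lower-cased, '#' stripped)."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", caption)]

def annotate_post(caption, transcript):
    """Combine caption hashtags with (stubbed) speech-to-text output and
    flag whether the post plausibly relates to vaping."""
    tags = extract_hashtags(caption)
    words = set(re.findall(r"\w+", transcript.lower()))
    matched = sorted((set(tags) | words) & VAPING_TERMS)
    return {"hashtags": tags, "matched_terms": matched,
            "vaping_related": bool(matched)}

post = annotate_post("Cloud comp tonight #vape #fyp", "watch this cloud grow")
print(post["vaping_related"])  # True: the #vape hashtag matched
```

In practice each post would be annotated from several channels at once (image labels, transcript, caption, tags), which is what makes the mixed-methods cross-checking described below possible.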

Q. What are the challenges here?

A. Whilst it is not hard to get data, it may be harder to confirm that it is valid/complete. We may not be looking at all the relevant hashtags (and these may change over time), and posts may include target hashtags such as #vape even when the post is not actually focussed on vaping – perhaps users are including popular hashtags in the post to get more likes. The data itself is largely unstructured, so we have to do more cross-checking, since we know that however good our analytical approach may be, if the source data is flawed then we are going to get unreliable results: garbage in, garbage out. This will be especially true for image/video analysis, as we are starting to see challenges in terms of fake data from bots and LLMs, and the current rise of AI video, where AI content (deep fakes etc.) is polluting data streams, which may distort our research findings.

Ultimately we can try to analyse what is happening, but the causes may remain elusive. Why do they vape, and even compete at vaping? What are the underlying models driving the behaviour? Social science research at this scale was previously not possible (i.e., analysing 50 paper questionnaires vs. 50 million social media data points). This is the new norm and seems impressive, but whilst it is much easier than ever to gather more data, we need to worry about quality more than ever.
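One simple cross-check of the kind described here, catching posts that carry the target hashtag only to ride popular tags for likes, could be sketched like this. It is an illustrative heuristic, not the Lab's actual method; the generic-tag list and threshold are made up.

```python
# Hypothetical "like-farming" cross-check: flag posts that mention the
# target hashtag but whose tag list is dominated by generic trending tags,
# so they can be routed to manual review rather than trusted blindly.
POPULAR_GENERIC_TAGS = {"fyp", "foryou", "viral", "trending"}  # illustrative

def needs_review(tags, target="vape", max_generic_ratio=0.5):
    """Return True when the post carries the target tag but generic
    hashtags make up more than max_generic_ratio of all its tags."""
    tags = [t.lower() for t in tags]
    if target not in tags:
        return False  # not a candidate post at all
    generic = sum(1 for t in tags if t in POPULAR_GENERIC_TAGS)
    return generic / len(tags) > max_generic_ratio

print(needs_review(["vape", "fyp", "viral", "trending"]))   # True
print(needs_review(["vape", "vapetricks", "cloudchasing"])) # False
```

A real pipeline would combine several such checks (and human annotation) precisely because no single signal survives the noise Sungwon describes.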

Q. What are the future objectives for this research?

A. Understanding vaping as a “normal” activity vs. a deviant activity. Understanding social bonding and competitive behaviour. Looking at the idea of “vape” vs. “vape challenge”. Looking at how social rewards correlate with individual behaviour, creating larger network (group) behaviours, and the extent to which these behaviours buy group membership, getting the user more attention and higher status.

Q. Thanks for speaking to me today and good luck with the rest of your research.

Sungwon Jung is a doctoral student in Journalism & Media at the University of Texas at Austin.

She is interested in the impacts of social media on health and in studying how individual actions can become collective (network) actions.

Can this approach shed any light on future health trends and the importance of messaging for young people as they form more/less healthy habits as part of social learning? 

In Conversation with: George Metakides

In this interview we sit down with Prof. George Metakides, one of our esteemed WST trustees, to talk about democracy in the digital space and why you should be concerned.

Ian: George, thanks very much for taking the time to chat with me today.

George: Always pleased to take the opportunity to talk about Web Science and Digital Enlightenment.

Ian: George, you’ve been linked to both Web Science and Digital Enlightenment; perhaps we could start by contrasting the two.

George: Well, we founded an organisation called the Digital Enlightenment Forum 12 years ago, around the same time as WST was founded (as the Web Science Research Institute, WSRI, back then), and we had a great deal in common: both groups have been looking at the digital space to move beyond the idea of what CAN be done to focus more on the notion of what SHOULD be done. Modern global networked technologies like the Web have a tremendous capacity to help and improve the quality of our lives, but at the same time there is the capacity for them to be misused to exploit, control and undermine our privacy, freedoms and democracy itself.

Ian: Wasn’t it Kranzberg who said that “technology is neither good nor bad – nor is it neutral”? Do you see it that way?

George: Indeed. I should note that it is no accident that, historically, new technologies had the military as their first and major users. Many types of technology can be turned to negative uses whilst retaining their potential for good, and so we must understand that technology needs to walk hand-in-hand with regulation so as to promote the good while minimizing the bad.

Ian: So it’s not enough to ask HOW we do something – we must also ask WHY we should do it – in effect, IF it’s a good idea at all? In some sense moving from what is possible to what is socially desirable?

George: We have both OVERestimated the inherent goodness of technology and UNDERestimated the potential for exploitation, and so we must remain very cautious about the types of technology that we allow to flourish unchecked in the digital ecosystem.

Ian: Can you give some examples?

George: We need only look at the way in which the (reasonable) pursuit of profit by businesses has generated an (unreasonable) reduction in personal privacy through what has been called “Surveillance Capitalism”. For example, the big tech platforms did not start out explicitly wanting to invade our privacy per se – they merely wanted to make better-quality recommendations about things we might want, based on things we had already purchased. In the drive to know more and more about customers, companies have started to track and identify us across multiple apps, systems, identities and locations, and have built chillingly accurate profiles from which they deduce/predict a great deal more about our behaviour than we know ourselves, and without our knowledge of what those predictions are. This can be benign or threatening depending on how, when and by whom it is used.

Ian: Given we can vote, can we not rely on the democratic process to restrain and control this sort of snooping by corporates and governments?

George: We recently ran a summer school in Vienna, co-organized by the Digital Humanism and Digital Enlightenment organizations, looking at democracy in the digital age, and the conclusions are quite disturbing.

There has been a level of optimism (or even euphoria) around liberal democracy ever since the end of the Cold War – the assumption that the ideological war for democracy, free speech, capitalism and freedom had been “won” and would eventually be universally (and irrevocably) accepted as the de facto way to live.

The euphoria of the 90s (over-optimistically considered the “end of history”) was primarily caused by the dissolution of the Soviet Union, as analyzed by many. What few realized at the time was that there was another factor generating optimism: the blossoming of the web into a vision of an “e-agora” (in the tradition of the public marketplace) where well-informed citizens would engage in democratic processes enabled by the Web. Alas, this was not to be.

Today, practically all surveys (EIU, Freedom House and others) document a “backsliding” of democracy worldwide, with young people, in particular, participating less and less in democratic processes, and more and more people expressing support for “anti-systemic” political parties and/or so-called “strong leaders”.

Younger people (though not only younger people) surveyed express little patience for the four- or five-year cycles of government, which seem unresponsive to their needs and goals, and they become increasingly drawn to charismatic, go-getting and even aggressive “rule-breakers” and self-styled “strong men” (Trump, Putin et al.) in what has been called the “Age of the Strong Man”. They are frustrated that politicians no longer seem to represent their constituents but are instead driven to act along party political lines and those of party backers (corporates, unions and other interest groups) that operate outside (or even counter to) the communities that politicians are supposed to represent. The growth of Anti-Political-Establishment Parties (APEPs) seems a good indication that people are looking for alternatives that they are not seeing in mainstream politics.

Ian: You are painting a fairly dark picture of where this is all heading – is there anything we can/should be doing to combat this trend?

George: Democracy requires “participation”, engagement and discussion – but there are issues with the way this is carried out on social media, which can leave us vulnerable to being provoked, nudged and even radicalised if we have no broader framework of social groups and peers with whom to engage. Filter bubbles can and do simply reinforce extreme views, as those with extreme views are more predictable customers when it comes to the tech platform choosing the ads they are most likely to click.

Besides the “standard” tools of democracy, such as elections and referenda, there has been a rise in the last few years of other forms of participation, such as “citizen assemblies” and other “deliberative democracy” processes, that encourage multiple viewpoints and sources of reliable information and feature respectful debate, compromise and sharing (a win:win mindset) rather than aggressive posturing and brinkmanship (a win:lose mindset). We should definitely be encouraging these forms of engagement.

Ian: What would your summary message be to those reading this interview?

George: Well, whilst it is clear that there is plenty of inequality and dissent around the world that has little or nothing to do with the Web, I would say that keeping a firm hold on how Web-enabled technologies develop is important, as the Web reflects and reinforces so many aspects of modern society:

  1. Don’t take democracy for granted – it is fragile, has only “lived” for a relatively short period and always carries the seeds of its own destruction. Democracy will not live or die by digital alone. Issues like economic inequality need to be addressed, alongside regulation that limits the most deleterious effects of “socialmediocracy”.
  2. Don’t over- or underestimate the power of digital technology to both nurture and destroy cherished values. Don’t think governments are immune to the lure of ever more surveillance of their citizens, or that big tech is going to put protection of democracy over its profits. Both regulation and an alert, educated citizenry are needed.
  3. Complacency is the enemy here – the biggest danger for democracy is to believe there is no danger. 

Ian: Thanks for a fascinating discussion George.


In addition to being a WST Trustee, George is a well-known academic and author, and was the director of the EU ESPRIT program from 1993 to 1998.

In conversation with: Jennifer Zhu Scott

In conversation this time is well-known finance and digital economy expert, Jennifer Zhu Scott. Jen recently joined the WST Board of Trustees and we are delighted to welcome her. Ian Brown sat down to find out a little more about Jennifer’s (Jen’s) path to Web Science, why she thinks we’ve invented a whole new kind of poverty, and what we should be doing about it.

Ian: Hi Jen, welcome to the Trust and thanks for joining us today to give the WST members and supporters an idea of who you are and where your interests lie.

Jen: No problem – I’m really pleased to be joining the board at a time when there is so much important work to do.

Ian: Like many of us you didn’t start out as a Web Scientist but reading your Bio you have studied very widely across different disciplines in Sichuan, Manchester and many top institutions – that’s quite a journey – can you tell us a little about it?

Jen: I was brought up in an environment where my father was always tinkering, disassembling and reassembling radios, fixing lights and telephones. I was very comfortable with technology. When I was in university, I bought the parts and built my own PC. Technology and science are my native language. I remember being fascinated by what technology could do. Today, as a professional, it is evident that technology has transformed every aspect of our lives. Whilst our understanding of technology leaped ahead at a breakneck pace, our understanding of the social impacts of technology (the socio-technological aspect) has been moving much, MUCH slower. I knew there must be trade-offs between what technology could do and what it should do, but there didn’t seem to be any good models or guidelines for that. Arguably there still aren’t.

My studies started with Applied Maths & Computer Science, and when I left China I came to the UK to work, later studying Finance in my master’s degree. Data is the essence of every discipline I’ve studied. I moved into industry, working for some big FinTech data companies looking at how advanced technologies could be applied to individual businesses and what the key trends would be in (digital) value. However, I was still interested in how all these benefits could be distributed across society more broadly, and I continued my studies, branching into public policy – trying to understand how policy is formed and how change is driven on a larger scale.



Ian: You mentioned the importance of data, and you gave a TED talk in 2019 about data and why we should be getting paid for it.

Jen: Absolutely. We are supposed to work towards a more inclusive and equitable economy, but in terms of data ownership, most of us are just equally poor. Most people haven’t understood the concept or implications of data poverty. The thing I learned in China as a child is that ownership, personal ownership, brings a form of liberty and the opportunity for improvement. At a time when seven of the top 10 companies on the planet derive their wealth from data about us, the conclusion is that data is immensely valuable – but the power struggle for the ownership and control of that data has only been between corporates and governments; individuals have no seat at the table, yet the vast majority of data is generated by individuals. My proposal for establishing the economic value of individuals’ data, with a degree of pricing power, is a way to grant individuals rights in the digital economy and reflect each individual’s nuanced need for privacy.

Ian: I think it’s widely accepted that when a product is offered for free it is generally the users who are actually the product. I like to think of it as receiving “free shovels” that we use to dig up all the vegetables in our garden and give away to the supermarket where we can go to buy them back! 

Jen: I would argue that in the current economy we are not even a product. Shoshana Zuboff writes in her book “The Age of Surveillance Capitalism” that we are only raw materials in the current digital economy, and I tend to agree with her. We also give away our time, privacy and mental wellbeing to constantly produce data for big tech. I argue that in many ways our “free will” is an illusion – a result of algorithms designed to extract more attention and more ad clicks. Therefore, a nuanced reflection of our privacy, health and individual priorities in our digital life is an important pillar of a fair and inclusive digital economy.

Ian: That is a constant problem on the Web – finding models that fit everyone globally.

Jen: In Europe, California and, increasingly, China, the regulators approach this problem with more and more limitations and regulations. In China, in response to centralized collection and control of sensitive data, the regulators are introducing data localization rules to protect national security. There are more than 60 regulators around the world working on more than 150 different data localization rules. But the web is supposed to transcend borders and jurisdictions. Instead of forcing a balkanization of the World Wide Web, we should enable and empower decentralized data control and ownership that puts the individual at the center. With a decentralised model, it would be harder for one corporate to put national security at risk.

Ian: We are seeing a lot of debate about Elon Musk’s proposal to change policy at Twitter if/when he buys it. In simplistic terms are we trading free speech against hate speech?

Jen: Twitter has become a tremendously powerful platform, with its algorithm driving political and social discussions around the world, and whether or not Elon believes he is championing free speech for all the right reasons, we have to question whether one person should be making decisions with such a huge potential impact for hundreds of millions of people around the world. Elon is using his position to improve things as he sees them, but ultimately even a “better Emperor” is still an Emperor.

Ian: So you are suggesting more regulation of these types of technologies?

Jen: As we discussed, global regulation may not always be appropriate at the local level – this is where public policy comes in. There is an important difference between asking HOW something is done and whether something SHOULD be done. Technology is a bit like medicine – we should be exploring, developing and investigating what is possible without necessarily automatically licensing/approving every discovery, everywhere, before understanding the costs, trade-offs and local impacts. This is about value-driven leadership – moving beyond profits towards benefits and improvements for society as a whole.

Ian: But would you support the large-scale use of personal data in some cases? Some people argue that small amounts of data “don’t count” ..

Jen: Arguing that individual data doesn’t count is like arguing that one vote doesn’t count – it’s the principle that counts and it certainly matters to the individual. Data at scale is valuable of course – the question is who has the control. I chair The Commons Project, a tech non-profit that’s working towards interoperability and global health data standards that will allow us to respond to national and international events like pandemics by quickly sharing data between different countries and labs globally so the borders won’t need to shut down for so long. Covid has shown us the need to be able to react quickly and globally. At The Commons Project, we do not monetize individuals’ data. While there is a large amount of data in the mix, we minimize the data collection and maximize privacy protection. With the right governance model, you can build tech that puts the people at the center.

Ian: So with use cases like this that employ global technical standards for health data where is the place for Web Science?

Jen: Web Science brings together a host of interdisciplinary approaches from technology, law, philosophy, medicine, government (and many more) to examine the issues and decide the most important questions; even if we can do something, when/where is it appropriate to do so? How can we do it so there is clear accountability to the people and society? 

Historical medical data about a terminated pregnancy might inform health policy generally and future medical treatment for that one patient specifically but it might also get that patient prosecuted, imprisoned (or worse) in certain legal jurisdictions, or where policy/public opinion may change over time. We need to think beyond the narrow impact (or profit) in the present and consider the longer-term, wider strategic impact of these decisions.  

Ultimately the question is much more nuanced than “how can we capture/store the data?”.

In China, the ride service DiDi collected detailed journey/location information on over 550 million passengers and tens of millions of drivers. DiDi’s aggregated data on billions of journeys offered detailed maps/models of locations that were not even on official maps and showed who had been where and when. When DiDi attempted a foreign (US) listing in 2021, the Chinese government became uncomfortable about the security and privacy implications of the data and moved to restrict DiDi’s operations by removing the associated apps from mobile platforms, as well as opening an investigation into the company’s potential abuses of personal data.

It goes to show that data and networks of data “at scale” have very different social implications to smaller private data stores – Web Science focuses on these types of networks at global scale.

Ian: What do you see as the role of Web Science going forward? What would you like to see happen?

Jen: We should be looking to educate users about how their data is used, how valuable it is, and why they should be managing it better. In Web2, companies like Facebook have data monetization baked into their business model. Their algorithm is designed to hook users into spending more time on their site, because ‘time on site’ is an important determinant of advertising pricing. What the algorithm discovered is that when people are angry they tend to stay engaged for the longest time. This is why platforms like Facebook are full of divisive, provocative content that’s designed to trade your rage for advertising dollars. We live in a more and more polarised and divided world so Mark Zuckerberg can become a multibillionaire. Web Science Trust should gather the brightest minds in the world in our field to actively educate, debate, participate and build a healthier digital world. There are so many more issues to address – how AI interacts with our data, the responsibility for the algorithms, the crypto-asset bubble, the lack of security and value models for NFTs, and the list goes on. It all centers around data: the use of data, the value of data, the ethics of data, and the ownership of data.

If our view of the world on the Web (what we see and what we are served via search and social media) remains so strongly controlled by a combination of a data-centric 360-degree profile of our activities and profit-centered algorithms, then I would argue that it’s not only a huge privacy issue, as people have argued – our freedom of information, our freedom to choose and, with it, our free will are severely impacted. Does free will actually become an illusion?

We need an impactful, multi-disciplinary conversation about data: its value, its uses, its ownership, and its potential benefits for society – that is where Web Science can and must make an impact.

Ian: Jen – thanks for joining us and once again welcome to the Web Science Trust!

In Conversation with: Bill Thompson

What do you get when you mix Philosophy, Applied Psychology, AI, Political activism and Unix programming with the Web?

In conversation this time is well-known BBC journalist, author and technology pundit Bill Thompson, who is surely an obvious candidate for the titles of both renaissance man and Web Scientist – he recently joined the board of Trustees at WST and we are delighted to welcome him. Ian Brown sat down to find out a little more about Bill’s road from Philosophy to Web Science and why he has been “thinking about the way the network is changing the world”.

Ian: Bill, you left Cambridge with a degree in Philosophy (with a side interest in Experimental Psychology) and decided to stay in Cambridge (post grad) to take a Diploma in Computer Science – how did that mix of disciplines shape your thinking?

Bill: I had initially been interested in the philosophy of mind and, from there, to how minds work (psychologically) and then whether it might be possible to build minds (machines that sense and think) using neural networks and artificial vision. From there I became interested in human-computer interaction and started to think more about how to build machines that might amplify our own minds.

Ian: What was the state of the tools available at that time to tackle those goals?

Bill: Well, the technologies were starting to emerge – I joined Acorn just as someone was saying "what if we did something different and created a RISC processor..?", which was pretty interesting. As I moved through roles at Pipex and The Instruction Set I learnt more about programming, databases and networking, and I attended the very first WWW Conference, meeting Tim Berners-Lee (one of WST's founders) in the process. Looking back, I was on the periphery of some very interesting projects and impressive characters in AI and the Web throughout much of my education and early career.

Ian: Did you have a sense back then of how important these technologies were going to be and did you have a feeling whether the people were driving the technology or vice versa?

Bill: I think my views came together slowly over the decade from '84 to '94, culminating in my helping to run a national body called the Community Computing Network, driven by a growing sense of what computers could do for society and of the social and political impact of technology. We wanted to help people see computing for what it could do socially as well as technically.

I think there was a sense of anticipation that technology could level the playing field between big businesses (or even oppressive states) and the rest of us – we were telling charities to embrace the same computing technologies as the big players with our slogan "If it can do it for them – it can do it for you!". We realised we had to consider how technology is applied, not only the tools themselves. We wanted people to get engaged in owning and shaping their technologies for better social outcomes.

Whilst I had initially developed my thinking in the HCI world, I started to run into people (including Nigel Shadbolt – a fellow WST trustee) talking about Web Science – an approach that seemed to crystallise many of the things I had been thinking about in terms of interdisciplinary boundaries and adaptive models to describe fluid conditions and new technologies – in effect “thinking about the way the network is changing the world”.

Ian: I typically ask my "In conversation" guests which part(s) of Web Science particularly interest and attract them, but I understand you've come up with a different definition of Web Science, one which addresses its moving-target problem.

Bill: I've really side-stepped the difficulty of defining an ever-changing Web Science by taking a cue from pragmatic Philosophy and focussing instead on what Web Science does and, more importantly, asking "What do we need from Web Science?". Web Science can usefully be defined by what we need it to do at any given point.

Ian: So let me ask you instead: what do we need from Web Science now, and is it the same as what we needed when Web Science was founded over a decade ago?

Bill: Whilst it's difficult to point to specific examples, I think we need to understand (in a changing environment) where we can have most leverage to deliver the outcomes we think are most desirable for society as a whole. With 3 billion extra people coming online soon and technologies becoming more pervasive every year, I think we are going to see a number of "step changes" in the Web we know today and a need to determine which aspects of this vast and growing system of interacting technologies will need to be regulated. We can't expect to build technologies with global reach and so many effects, both positive (e.g. economic) and negative (e.g. social/climate), and simply leave the world to cope. Web Science needs to research, reflect and advise on the impacts and (dis)benefits of these approaches, bringing a strong evidence-based historical viewpoint which will allow us to learn effectively from the past as we plan for the future – something which seems sadly lacking from the approach of some modern tech companies.

Web Science can help us to see that technology can be grounded in humanity and human processes in a rigorous and useful way. We can help people who aren't really "noticing" these invisible, pervasive technologies by making clear to them that whilst society is indeed moulding the Web, the Web is also moulding society at the same time. I've been saying for the last 20 years that we need to stop thinking of "the Web" and "Cyberspace" as distinct places – they are simply new ways of expressing society and humanity, with everything ultimately grounded in the real world with real-world costs and consequences.

There are many new freedoms (both positive and negative) that become possible on the Web. We need a level of rigour to balance those personal freedoms against the social responsibilities that maintain the Web as a viable and positive experience. Perhaps we need to be the “anti-poets” in this venture.

Ian: Bill, thanks for joining me in conversation – we look forward to another session soon.

Bill Thompson is an English technology writer, best known for his weekly column in the Technology section of BBC News Online and his appearances on Digital Planet, a radio show on the BBC World Service. 

He is a Trustee of the Web Science Trust (WST) and an Honorary Senior Visiting Fellow at City University London's Journalism Department. He is chair of the Centre for Doctoral Training advisory board, a member of the main advisory board of the Web Science Institute at the University of Southampton, and writes for BBC Webwise.