European Parliament advised to build its own ‘European Internet’ to block services supporting unlawful activities

This article originally appeared in Computing on 9th June 2020

A policy paper requested by the European Parliament’s committee on the Internal Market and Consumer Protection recommends that the European Union develop a “European Internet” which, like the “Great Firewall of China”, would block services that support unlawful activities in other countries.

Many governments and human rights groups in Europe currently criticise the Chinese government for its use of a firewall that denies Chinese people open access to the Internet and the free exchange of information and ideas. Critics argue that this “Great Firewall of China” helps the Chinese government suppress opposition to its one-party system.

But it now appears that policymakers in the EU have also started to notice some advantages of this approach.

“The EU should include an action plan for a digital cloud – a European Internet – in the DSA”, suggests the policy document [pdf], authored by experts from the Hamburg-based consultancy Future Candy.

According to these experts, the EU’s own firewall/cloud/internet would help foster a digital ecosystem based on data and innovation in the European region. Unlike the Chinese approach, which enables Beijing to suppress democratic movements in the country, the EU’s firewall would be founded on the pillars of democratic values, transparency, user-friendliness, data protection and data accessibility. It would also help set standards and drive competition in the region.

Foreign web services would be allowed to join the EU’s digital ecosystem, but to do so they would need to adhere to the rules and standards set by the European Parliament.

The document further advises the parliament to take various measures ahead of the proposed Digital Services Act (DSA), which will eventually update a directive introduced nearly 20 years ago to govern online services in the EU.

The policy document recommends starting a funding programme for European firms to help build state-of-the-art eGovernment services. This programme would invest in start-ups and other firms that demonstrate a strong desire to create the infrastructure and digital services needed for digital government.

The policy paper also recommends building a Visionary Communication Programme that would include regular legislative updates on the DSA and would also inspire European citizens about digital developments going on in the region.

NHS data contract gives Palantir access to medical records of Covid-19 patients

This article originally appeared in Computing on 9th June 2020

A data deal signed between the NHS and the US technology firm Palantir granted the controversial American data-mining company access to sensitive personal data of hundreds of thousands of patients, employees and members of the public.

The revelation came last week after the government finally released details of multiple data deals it had signed with Palantir, Microsoft, Google, and UK-based AI firm Faculty earlier this year.

The government published the details of contracts [pdf] after the campaigning website openDemocracy and law firm Foxglove threatened to take legal action against the NHS for withholding the information.

As part of the government contracts, Faculty and Palantir were granted certain intellectual property (IP) rights, openDemocracy said. The technology firms were allowed to train their algorithms on, and to profit from, their access to NHS data.

The data shared with those firms includes personal contact details, race, occupation, gender, physical and mental health conditions, religious and political affiliation and past criminal offences.

While the government now claims that the contracts have been modified to address those issues, the new contracts have yet to be released, according to openDemocracy.

The Faculty contract reveals that the NHS is paying the firm over £1 million to provide AI services. Palantir, on the other hand, charged just £1 for the NHS’s use of its Foundry data management software. Palantir is known for its surveillance work with US law enforcement and immigration services, while Faculty played a part in the Brexit Vote Leave campaign.

Earlier, in March, the NHS announced that it was working on the Covid-19 Data Store project, which would collate data from multiple health and social care organisations to “provide a single source of truth” about the outbreak.

openDemocracy raised doubts over the NHS deals at the time, pointing to the track record of the tech firms involved and the British government’s lack of transparency around contracts of this size.

Several MPs asked questions in parliament about the deals with private companies, and over 13,000 people also joined a call for transparency on those contracts.

While announcing the Data Store project, the NHS said that the data collected “will only be used for Covid-19” and that “only relevant information will be collected.”

The health service also stated that all the data collected would either be destroyed or returned in line with the law once the public health emergency has ended.

IBM to pull out of Facial Recognition

This article originally appeared in Computing on 9th June 2020

IBM is quitting the controversial facial recognition software market over concerns that the technology could be used to promote racial injustice and discrimination.

In a letter to the members of the US Congress, IBM CEO Arvind Krishna said that the company would no longer sell general purpose facial recognition software and would also oppose use of such technology for racial profiling, mass surveillance, violations of basic human rights or any purpose “which is not consistent with our values and principles of trust and transparency”.

IBM’s decision to quit the facial recognition business comes at a time when the US faces countrywide demonstrations over the death of George Floyd, a black man who died while in police custody in Minneapolis.

Several lawmakers and government officials in the US have called on the government to introduce reforms to address police brutality and racial injustice.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna said.

He added that vendors and users of AI-based systems have a collective responsibility to ensure that such systems are “tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported”.

Since its inception, facial recognition technology has faced intense criticism from lawmakers and privacy advocates in different countries. Critics cite multiple studies that have found the technology can suffer from bias along lines of race, age and ethnicity and could result in abuses of human rights. They further argue that the technology has the potential to become an invasive form of surveillance.

Earlier this year, Clearview AI came under heavy scrutiny after it emerged that its facial recognition tool, with over 3 billion images compiled from scraping social networking websites, was being used by a number of private firms and law enforcement agencies.

Clearview has since faced multiple privacy lawsuits in the US.

In January, Facebook was also ordered to pay $550 million to settle a class-action lawsuit over its unauthorised use of facial recognition technology.

In March, the Metropolitan Police’s facial recognition deployment in Oxford Circus led to the wrongful apprehension of seven innocent members of the public who were incorrectly identified by the system.

Last year, the UK Information Commissioner’s Office (ICO) issued a warning to police over the use of live facial recognition and called for a statutory code of practice to be introduced to govern police use of the technology.

ICANN rejects sale of .org registry to for-profit investor group


SAN FRANCISCO (Reuters) – A body overseeing web addresses said it has vetoed a $1.1 billion deal to sell control of domain names ending in .org to a private investment firm after an outcry from internet pioneers and officials including California’s attorney general.

The inventor of the Web, Sir Tim Berners-Lee, had spoken out against the sale of the domain, which is intended specifically for charity and non-profit use, and appeared relieved at the decision when he tweeted:

“Phew. That sale would have been a travesty of governance of public things.”

Read the story at Reuters here

COVID-19 – a networks perspective

Whilst at its heart the coronavirus SARS-CoV-2, and the associated COVID-19 pandemic, is a biological event, the impacts of and reporting around it are social in many ways: in how we react to the event, cope with it, and judge our governments and health services.



Earlier this week several major social media providers, including Google, Twitter and Facebook, released a joint statement (https://about.fb.com/news/2020/03/coronavirus/#joint-statement) in support of handling fake news and misinformation related to COVID-19. Our perceptions of the outbreak (as much as any objective facts) have led to fake cures, conspiracy theories, stock market panic selling (even across normally negatively correlated instruments) and panic buying of, and price gouging (profiteering) around, hard-to-find supplies; all of which have figured prominently in recent news reports. A notable common thread running through many of these issues (and also through the evidence-based approach to modelling the spread of the virus) is the perspective that many of them can be considered networks: epidemiological networks, supply-chain networks, financial networks, social networks, academic/business networks and “social machines” (the interaction between human and machine actors in large networks). Command-and-control hierarchies are simply overwhelmed by the movement of information at scale through these networks.

Web Science seeks to study the effects of social + technical forces in large networks, and the current pandemic has arguably triggered impacts and responses at many levels of society’s networks, internationally, across governments, academia, businesses and individuals.

Whilst we seem to live in an “age of correlation” underpinned by Big Data and Machine Learning, the most interesting (and in some cases most damaging) issues around the pandemic have strong behavioural (psycho-social) elements that deserve research into their causation:

– the (un)willingness of governments to disclose the existence and size/severity of a pandemic, affecting the timing of responses
– the issue of social compliance vs legal measures and enforcement to contain/control the rates of infection while respecting personal freedoms
– the creation/exacerbation of price and supply/demand volatility due to panic selling in financial markets (due to perceived risk) and panic buying in retail markets (due to perceived shortages), neither behaviour necessarily underpinned by realistic market conditions.

How then does a study of networks offer insight here?

Following historic market crashes caused by computer or human error, stock markets introduced a level of “scepticism” into data validation when unexpectedly high or low prices, or unusually high volumes, are input to trading systems, in an attempt to avoid automated network cascades of intelligent (sic) agents that might interact to collapse an entire market quicker than human regulators can react. Perhaps a more socio-technical perspective on supply-chain networks would be valuable with respect to emergency/disaster relief and supply-chain resilience, beyond the typical optimisation models that use minimum quantities and reduced unit pricing. Could we automatically dampen excess demand that is unhelpful to the inhabitants of areas where panic buying is taking place?
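To make the market example concrete, the kind of “sceptical” validation described above can be sketched as a simple price-band and volume check applied before an order reaches automated agents. This is a minimal, hypothetical Python sketch: the thresholds, the Order fields and the validate_order function are illustrative assumptions, not any exchange’s actual rules.

from dataclasses import dataclass

# Hypothetical thresholds -- real venues tune these per instrument.
MAX_PRICE_DEVIATION = 0.10   # hold prices more than 10% away from a reference price
MAX_VOLUME_MULTIPLE = 20     # hold orders more than 20x the recent average volume

@dataclass
class Order:
    symbol: str
    price: float
    volume: int

def validate_order(order: Order, reference_price: float, avg_volume: float) -> bool:
    """Return True if the order passes basic sanity checks, False if it should be held for review."""
    # Unexpectedly high or low price: hold it rather than let automated agents react to it.
    deviation = abs(order.price - reference_price) / reference_price
    if deviation > MAX_PRICE_DEVIATION:
        return False
    # Unusually large volume relative to recent activity: likewise hold for review.
    if avg_volume > 0 and order.volume > MAX_VOLUME_MULTIPLE * avg_volume:
        return False
    return True

# Example: a plausible order passes; a fat-finger price does not.
print(validate_order(Order("ACME", 101.0, 500), reference_price=100.0, avg_volume=400))   # True
print(validate_order(Order("ACME", 150.0, 500), reference_price=100.0, avg_volume=400))   # False

The same shape of check could, in principle, be applied to retail supply chains (for example, damping orders that greatly exceed a customer’s usual purchase volume), which is the “automatic dampening” question raised above.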

Can social networks be used to identify players who are pushing fake news, stirring social unrest or exploiting market shortages for unfair gain?

Can social networks and smartphone sensors be used to gauge the level of social distancing at a regional level without compromising individual privacy?

Is there a ‘killer app’, such as https://covid.joinzoe.com/, that can help us track, or even manage, behaviour during the pandemic?

Several Web Science-related themes emerge for COVID-19:


1. Misinformation, e.g. www.theguardian.com/world/2020/mar/18/russian-media-spreading-covid-19-disinformation

2. Surveillance and contact tracing, e.g. https://www.wsj.com/articles/to-track-virus-governments-weigh-surveillance-tools-that-push-privacy-limits-11584479841

3. The role of data, e.g. blog.schema.org/2020/03/schema-for-coronavirus-special.html

4. The changing patterns of Internet use and network congestion as everyone works from home (it’s a finite resource, even though it doesn’t appear to be), e.g. www.politico.eu/article/brussels-in-talks-with-netflix-about-reducing-internet-congestion/

5. The psychology of panic buying/selling e.g. www.bbc.com/worklife/article/20200304-coronavirus-covid-19-update-why-people-are-stockpiling

The Internet, and the connections it allows us to make, is at the forefront of our understanding of COVID-19, and must also be part of the solution.

A Contract for the Web

Sir Tim Berners-Lee has launched a new website called contractfortheweb.org, intended to lay out the behaviour and responsibilities of international internet giants such as Google and Facebook, national governments, and individual web citizens.

The document is 32 pages long and describes itself as “a global plan of action to make our online world safe and empowering for everyone”. It lists nine principles for the Web (three aimed at governments, three at companies and three at individuals) and can be downloaded here as a PDF.

Why add these social rules 30 years after the technical rules were released? The paper claims that whilst the Web “has changed the world for good and improved the lives of billions… (it) comes with too many unacceptable costs”.

The Contract is supported by more than 150 organisations, including GitHub, Reddit and DuckDuckGo, and, perhaps surprisingly, Facebook and Google, which were recently cited by Professor Berners-Lee as examples of companies that should be broken up.