The “normalisation of mass surveillance” could pose a threat to social mobilisation, warns digital rights advocate Diego Naranjo

Diego Naranjo at EDRi’s headquarters in Brussels, Belgium. (Marta Checa)

Long before the arrival of 2020, the ‘year zero’ of Covid-19, safeguarding digital rights, and the public debate on the fundamental rights that new technologies so often disregard, were far from public priorities.

After eight months of a pandemic without precedent in recent history, the debate over the use of surveillance technologies (to prevent and reduce the spread of the coronavirus) and over our digital rights in general (the right to privacy and the protection of personal data, among other issues) remains less prevalent and less comprehensive than one might hope. While acceptance of a ‘Big Brother’ state is widespread in many East Asian countries, democratic and otherwise, resistance in Europe has recently been eroded, often by fear for personal safety (formerly terrorism, now health), or by ignorance and exhaustion, rather than by informed consent.

In an interview with Equal Times, Diego Naranjo, head of policy at European Digital Rights (EDRi), a non-governmental organisation comprising 44 human and digital rights associations in Europe (as well as some based in the United States and others that are globally active), described the measures that protect us, and those we can take ourselves, to keep our fundamental rights from being violated in the digital sphere, starting with the General Data Protection Regulation (GDPR) implemented two years ago.

At present, the organisation (which defends rights and freedoms in the digital sphere, from data protection to access to information and freedom of expression) is questioning the need for many of the measures proposed or improvised by various governments (from the use of drones to ensure compliance with quarantines to immunity passports), as well as their proportionality. Their concerns also include how collected data is secured, for how long it is stored, how it is obtained and processed, whether it will be used for other purposes and by whom.

Against the backdrop of a pandemic such as the present one, how can we reconcile digital surveillance with the values that underpin Europe? Are we seeing more debate now than prior to January? Are the responsibilities of everyone involved in the process clear? How necessary, proportionate, transparent and lawful is such surveillance, and what safeguards accompany it?

The pandemic has underlined the need to discuss the surveillance already taking place (facial recognition, data retention, etc.) within the context of this new reality, which (at least in its initial phase) has prompted governments to increase monitoring of the population for health reasons, but at times using disproportionate measures. We reacted quickly with an analysis of what should not be done. Fortunately, the European Commission appears to have at least taken note and its recommendations to the member states were very good. Now it’s up to the states to implement them correctly.

At the same time, major corporations have leapt at the opportunity to present themselves as the saviours of this crisis, attempting to establish themselves as the ‘safe option’ in times of pandemic while shoring up their dominance with more public money, a dynamic that Naomi Klein’s work lays bare in all its crudeness. For the time being we’ve withstood the worst of the privatising and pro-surveillance tendencies. Now we must continue to fight for new freedoms and not lose in a matter of months what took us years to gain.

In numerous statements you’ve stressed that, with limited oversight and opposition, the measures currently being taken to deal with the pandemic – big data and artificial intelligence systems among others – will shape our future. Can you tell us what is of particular concern to you?

We are generally worried about the normalisation of extraordinary measures. Decades ago, security cameras began to be installed on our streets and in other public spaces. The justification at the time was general public security (the economic crisis had increased certain types of poverty-related offences, such as theft), later joined by counter-terrorism. Now we find ourselves in a situation in which the same cameras are being used for facial recognition under the pretext of checking whether people are wearing masks, finding lost children, or whatever else. The ultimate goal is control, and the short-term goal is public funding for private surveillance industries.

If state and private surveillance continue or increase, the upcoming struggles involving cuts to social funding, climate change, racial justice and defence of democracy will be undermined by a system in which everything is recorded and many will prefer to remain silent and at home rather than see their precarious employment or health insurance jeopardised by participating in resistance movements.

Beyond the current health crisis, where are you directing your efforts in the area of video surveillance and facial recognition technology?

At the local level, we want to make sure that some activities remain surveillance-free and that the use of such systems is prohibited even in places where they are not yet deployed, specifically public spaces: not just squares and streets, but also stations and shopping centres.

At the national level we are working to ensure that laws prohibit such surveillance as well, and that the EU, if necessary, launches infringement proceedings against member states for breaches of fundamental rights, when the measures they have imposed have a serious impact on privacy and on the freedoms of association and demonstration, and are neither necessary nor proportionate to the ends they are meant to achieve. The most modest success [of such efforts] would be guaranteeing that no new systems are put into place and that existing ones are eliminated.

And if such measures prove to be useful without doing harm, what is the problem?

Just because something may be useful for certain purposes doesn’t make it necessary or proportionate. It would be very useful to have a camera in every home to prevent violence against women, but it’s clear that this would not be proportionate, that it would be an abuse of power.

In reality we all have something to hide, so the ‘nothing to hide’ argument doesn’t work. And in any case it’s not about whether you have something to hide: it’s that you need to live in freedom, not under the eye of constant surveillance that sees who you meet, which bars you go to and with whom, which unions you join and who you go on strike with. That is a dystopian world we must avoid. But yes, I see a normalisation of mass surveillance taking place. Five or ten years ago we were shocked at what was taking place in China. Today, Slovenia, Germany, the United Kingdom, the Netherlands, Greece, France, Hungary, Italy and Sweden all use facial recognition. And there is zero public debate.

In the case of Serbia, cameras that were already on the streets of Belgrade are now being used for facial recognition with Chinese technology and, as if that weren’t enough, with Chinese police officers on patrol (owing to the many Chinese nationals working on Chinese investment projects).

The development of 5G and artificial intelligence raises ethical concerns that differ from country to country, from continent to continent, and from organisation to organisation. What is your interest in this debate?

The framing around ethics is fundamentally driven by business. Companies do not want to talk about rights, because rights can be exercised and enforced in court. They prefer to talk about ethics, because each company and each country has its own set of ethics. All AI and other technology has an impact on human rights (the right to privacy, data protection, freedom of association and demonstration, freedom of expression), so we have to talk about rights, not ethics. Focusing the debate on ethics distracts attention from fundamental rights, which are what matter most to us.

Are our political representatives up to the task?

The representatives here in Brussels and in the member states fear that Europe is being left behind, that there is not enough innovation and that this affects employment. But it is also true that dissenting views from civil society in general have grown weaker. This was most clearly seen during the discussions on the data protection regulation: those of us advocating stricter rules were in the minority. It was a David and Goliath battle (fewer than 100 people against an army of lobbyists). When you can amplify your message a hundredfold, as the companies do, it is heard far more than the voices of citizens.

In the digital sphere, how vigilant, suspicious or far-sighted do we have to be with respect to the use of our data? What should we assume they can do with it if we use, for example, a free-of-charge (non-open-source) application?

You don’t have to imagine the future, you just have to look at the past. Edward Snowden published his revelations in 2013, and what they make clear is that what is evolving is not some dystopian system of the future but a kind of time machine: the idea of constant surveillance of all your activity, online and offline, who you talk to, what pictures you upload, where and how you travel, what you consume. Everything, 24 hours a day, so that if you ever become a problem for a certain government, it can climb into that ‘time machine’ and see where it can find fault. Because we all have flaws in our lives, things that we don’t want to reveal, things that we don’t want to become public. The idea is to have total control for the day they need to take action against someone.

What measures can protect us in this scenario?

The data protection regulation was a step in the right direction and established Europe as a global leader in data protection. And although it is a European instrument, the Council of Europe’s Convention 108 can be ratified by any country in the world; as many countries as possible should be encouraged to adopt it. It would also help if the US had adequate data protection, privacy and surveillance laws, which it currently does not.

But on a personal level, you can also take action beyond fighting for regulations that protect you: encrypt all your devices, choose your services carefully (Signal rather than WhatsApp, for instance), avoid Dropbox, use Bitwarden as a password manager, and use free software whenever you can.
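By way of illustration, here is a minimal Python sketch of the principle behind the ‘encrypt your data’ advice. The choice of library is an assumption on our part (the interview names no tools beyond those above), and real device encryption would rely on tools such as LUKS, FileVault or BitLocker; this only shows the idea of encrypting content before it leaves your machine.

```python
# A minimal sketch of client-side encryption, assuming the third-party
# "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

# Generate a secret key; in practice, store it in a password manager.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt some sensitive content before uploading it anywhere.
secret = b"union meeting, Thursday 19:00"
token = fernet.encrypt(secret)

# Only someone holding the key can recover the plaintext.
assert fernet.decrypt(token) == secret
```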

[In any case,] security is not an objective, it’s a process. None of this technology can be trusted blindly; it has to be evaluated and audited. If it’s not free, open-source software, we can’t know whether it does what it says it does. At the European level, we are calling for public investment in free software. Free Software Foundation Europe, which is one of our members, has a campaign called ‘Public Money? Public Code!’, which sums up what we want very well.

Companies manoeuvring to their own advantage is nothing new. But considering the potential risks, what is stopping us from adequately protecting our rights?

If what benefits companies like Google and Facebook most is surveillance of the entire population that uses their services, they will continue to do so until we tell them that we don’t like it, that this is not an acceptable business model in a democratic society.

The issue of developers is also an interesting one. We recently published a guide to ethical web development, because we have found that many web developers, meaning no harm, insert Google Analytics and Google Fonts by default into websites (because it is standard practice and because they are free of charge). But these services track and monitor users on Google’s behalf. I think it’s a matter of ignorance; there is educational work to be done. But yes, part of the problem is that developers are in their own world and not inclined to work on data protection and privacy by design and by default, which for us are basic principles.
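To make the point concrete, here is a short Python sketch of the kind of check a developer could run before shipping a site. The helper name and the list of domains are illustrative assumptions, not EDRi’s tooling; it simply flags common Google endpoints embedded in a page’s HTML, the kind of ‘default’ snippets the guide warns about.

```python
import urllib.request

# Domains commonly pulled in by "default" analytics and font snippets.
TRACKER_DOMAINS = [
    "google-analytics.com",
    "googletagmanager.com",
    "fonts.googleapis.com",
    "fonts.gstatic.com",
]

def find_trackers(url: str) -> list[str]:
    """Return the tracker domains referenced in the page at `url`."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return [domain for domain in TRACKER_DOMAINS if domain in html]

if __name__ == "__main__":
    # Example: audit a site before shipping it.
    print(find_trackers("https://example.org"))
```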

When it comes to Amazon and cloud storage, what should we be looking at or not losing sight of?

One of the interesting proposals made by the European Commission is to create a kind of European cloud to compete with Amazon. I don’t believe that creating a European Amazon, a European Facebook or a European Google is the answer, but ensuring technological sovereignty in the sense that we are – ideally – able to publicly monitor who handles our data, who finances these algorithms and who has access to storage on the network, would be positive for us. [As opposed to] this being in the hands of a foreign company where we don’t know who monitors their servers, who has access. But resources and investment are lacking, as well as development at the local and state level.

But is there an awareness of the need for such measures?

There appears to be no specific interest beyond the Commission’s proposal. Even for us it is not a central question. There are issues that take priority. Projects such as Gaia-X [presented in early June], proposed for Europe, do not necessarily benefit us in terms of respect for fundamental rights. It depends on who has access to this data, to the cloud, who owns it, etc.

What is the current state of net neutrality (in which the internet is understood as a public service and internet providers cannot stop or block data traffic according to their interests) in Europe?

Maintaining net neutrality is a major battle, and the companies fight it especially fiercely (ending neutrality would bring them many benefits). EU law, under the 2015 Open Internet Regulation, enshrines net neutrality in Europe, but implementation differs between member states and depends on the regulators.

We have to make clear that this is not allowed. Otherwise we could end up in a situation like the one in India, where zero-rating practices gave people free access to Facebook, meaning that for a large part of India’s population, the internet is Facebook. Zero-rating essentially restricts access to the internet under the false pretence of providing a better or free service.

Does it help that certain parts of the population, or certain countries, adopt every measure to protect personal data while others move in the opposite direction? Doesn’t that make it a thankless task that makes people want to throw in the towel?

At the European level the aim is for people to be protected by default. For example, if you take an elevator you are protected by default, and if the elevator falls down you can sue the company that built it, the company that maintains it, or the owner of the building. It should be the same with technology; you shouldn’t have to be a hacker or a geek to use the internet. There should be data protection and privacy by design and by default so that you don’t have to think about these kinds of things.
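As a conceptual sketch of what ‘privacy by design and by default’ can look like in code (the class and its settings are hypothetical, not any specific product’s API), every data-sharing option starts disabled, and nothing is shared without an explicit opt-in:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Illustrative settings only: all sharing is off unless the user opts in.
    share_analytics: bool = False
    personalised_ads: bool = False
    location_history: bool = False

    def opt_in(self, setting: str) -> None:
        """Enable a single setting only after an explicit user choice."""
        if setting not in vars(self):
            raise ValueError(f"unknown setting: {setting}")
        setattr(self, setting, True)

# By default, a new user shares nothing:
user = PrivacySettings()
assert not any(vars(user).values())
```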
