In the news

Technology’s lack of transparency is a human rights issue, says Nani Jansen Reventlow from the Digital Freedom Fund

on 12 November 2020



As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of Europe’s leading experts why online privacy matters. 

Nani Jansen Reventlow is a human rights lawyer based in Berlin, specialising in strategic litigation and freedom of expression. She is the founding Director of the Digital Freedom Fund, a lecturer in law at Columbia Law School, and has been an advisor to Harvard’s Cyberlaw Clinic since 2016. Here she discusses how strategic litigators are fighting for digital rights in Europe, the lack of transparency around technology, and how the notion of socially responsible artificial intelligence is really just good marketing.

Why does online privacy matter?

Privacy is a key human right. It matters, of course, in and of itself, but it also enables the enjoyment of other rights – such as the right to freedom of expression. The Coronavirus pandemic and international lockdowns have made us rely on technology for many aspects of our lives, including work, education, and healthcare, and we’re seeing just how much online privacy matters there too. For example, which platforms are being used for homeschooling? What data is being collected from children, and who is that data shared with? The answer to these questions can have a far-reaching impact on the future lives of young people.

Another good example, of course, is elections. Events such as the Cambridge Analytica revelations have shown what can happen when our privacy rights aren’t sufficiently safeguarded. We’ve seen how the information that others hold on us, either on its own or combined with information held by other parties, can make us vulnerable to all sorts of attacks on our human rights. That includes the right to freely make up our minds about who we're going to vote for.

Can we talk a bit about why the Digital Freedom Fund was set up, and your new project Catalysts for Collaboration? 

The Digital Freedom Fund supports strategic litigation on digital rights in Europe, financially and also by connecting people who engage in litigation, in advocacy and across other work to protect our human rights. By digital rights, we mean all human rights as engaged in the digital context, including privacy, data protection, and freedom of expression, but also economic, social and cultural rights.

Strategic litigation is litigation that is seeking to bring about broader change, rather than just a good result in the case itself. You're trying to pursue systemic change, a change in the law, a change in policy, or a change in the way that policy is being applied in practice. In cases where the regulators can't or won't move quickly enough, litigation can be an important tool. And the case doesn't have to be won, necessarily. If it manages to spark public debate, public interest, and sometimes even public outrage about an issue, you could very well achieve the goals underlying the project.

Catalysts for Collaboration is a project that tries to encourage people to work together on such cases, across disciplinary silos. It promotes the idea that a good strategic case is more than a good court strategy alone, it has to be accompanied by a good advocacy strategy, a good public campaign, good policy work, lobbying, etc. And when it comes to tech cases, you also often need to explain complicated technology to a judge, and to a wider audience. We provide best practice advice on how to do that.


Have you seen an increase in strategic litigation since the General Data Protection Regulation (GDPR) came into effect? 

The GDPR opens the door to a different kind of litigation than we are used to seeing: first, because it enables individuals to really enforce their data rights themselves, and also because of the possibility of initiating collective action. That's a game changer for a lot of jurisdictions. But I do think that a lot of the GDPR is currently underexplored. While there's more activity in that area, I don't think we're seeing as much activity as we could at the moment.

How is technology in general, and AI in particular, being used by governments to make decisions, and what are the implications of that? 

I think the main issue is the lack of transparency – we quite often do not know in which part of the process what kinds of decisions are being made on the basis of technology. And that makes it difficult to scrutinise and determine whether our human rights have been violated, how they've been violated, and whether we can do anything about it. There are really no clear rules on transparency that apply across the board.

There’s a very seductive narrative that has been spun about the black box in technology, which supposedly exists of its own volition and cannot be examined. But technology has been created by people. So you can choose to make something transparent or you can choose not to. Companies want to guard their IP, so they choose not to make it transparent. And there’s actually very little pressure right now from regulators to make that possible. But it's not outside the realm of possibility. Companies could build in transparency and the opportunity to review what is going on inside those systems and algorithms, if they wanted to.

There are many examples of technology being used to make decisions that have disadvantaged people. The debate on facial recognition and the disproportionate impact it has on marginalised and racialised groups is really underexplored. And there is the whole issue of access. In the UK with Universal Credit, for example, people are unable to access benefits that they have a right to because they can't navigate the system. And in the Netherlands, the SyRI (system risk indication) case showed how different databases had been connected with each other to detect the likelihood that someone would commit fraud, subjecting them to further investigation. The court found the use of SyRI violated individuals’ right to privacy.

So what does socially responsible AI look like and what limits do we need to make that possible? 

The term “socially responsible AI” is really good marketing. For me, the only socially responsible AI is human rights compliant AI, and AI that is used within clearly set boundaries. We need red lines to be drawn as to where we do not want to even contemplate using technology at all. Sadly, that is not the focus of the debate at the moment. 

These are big philosophical questions – where do we not want technology involved to make decisions for us? I've been thinking about this for a long time, and I haven't come to a conclusive decision about where those red lines should be. But I have the feeling that some of it has to lie in the realm of wherever we make life and death decisions. 

Have you been concerned about the ways this technology is being used as part of the fight against the Covid-19 pandemic?

Very much so. The tech solutionist narrative took hold very successfully, very early on. That’s been enthusiastically supported by big technology companies of course. But I also completely understand that governments have been faced with an unprecedented situation, and they have wanted to figure out a quick fix. It’s almost become a reflex – head towards technology, it’s going to save us.

The bottom line is that a lot of these technology tools are being rolled out without a plausible case being made for how they are going to make a difference. There’s also a shocking willingness in societies to leave certain groups of people behind. If you look at Covid-19 apps, for example, and how many people in the population would have to use them in order for them to be effective – that entails all sorts of assumptions about everyone having a new smartphone, being constantly connected to the Internet, and having a phone of their own. What happens to all the people who don't meet those criteria? You're actually only focusing on protecting a very specific group of people in your society.

What can people do to educate themselves and protect their online data today? 

Depending on where you are, make sure to follow your local digital rights organisations. In the UK, there’s the Open Rights Group, Big Brother Watch, Privacy International and Liberty, for example. In the Netherlands, there’s Bits of Freedom and others. These organisations have been very vocal about the introduction of technology in the Covid-19 context, for example, and about other issues.

Also, be critical. Whenever you engage with technology, ask yourself questions: is this really free? Or am I paying in some other way? Am I making a trade-off with my data? Nobody has time to actually read all of these sites’ terms and conditions, but you can find information about what certain apps and platforms do with your data pretty easily. And actually look at your privacy settings on whatever devices you use. It's a chore, but at the same time, you wouldn't go on holiday and leave your windows open and your door unlocked, either. It’s about creating awareness within yourself that there are places where other people can intrude and abuse your rights, just as there are in the physical world.

Your data should not be for sale. We’re taking Oracle and Salesforce to court for illegally selling millions of people’s data, and we need your help! If you believe that tech giants should be held accountable for their use of people’s data, please support our claim here. We’re fighting for change, because your privacy matters.

Support our case against Oracle and Salesforce.


Show your support

By leaving your details here, you support our case against Oracle and Salesforce. We may use your full name and email address in the legal proceedings to demonstrate how many people have actively given their support to The Privacy Collective.


- You support The Privacy Collective's case against Oracle and Salesforce.


- You are part of the group of affected individuals.

You are part of this group if you have accepted cookies from Oracle and/or Salesforce from the Netherlands since 26 May 2018 and currently live in the Netherlands. These cookies are present on popular websites such as nu.nl, booking.com, marktplaats.nl, bol.com, buienradar.nl, telegraaf.nl, funda.nl and amazon.nl.


- The Privacy Collective uses your first and last name and email address to demonstrate your support in the proceedings against Salesforce and Oracle, and to contact you about the progress of the proceedings.

Where possible, personal data will be pseudonymised. Read TPC's full privacy policy here.


By clicking 'Sign up', you confirm the above.
