In the news

Can we really call this a free society? We ask Amnesty International’s Matt Mahmoudi

1 October 2020


As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of the UK’s leading experts why online privacy matters. 

Matt Mahmoudi is a researcher and adviser at Amnesty International on artificial intelligence and human rights, and Jo Cox PhD Scholar at the University of Cambridge. He co-founded and presents Declarations, the human rights podcast. Here, he discusses the need to reclassify privacy from being an individual issue to a community problem, the danger of becoming desensitised to surveillance technologies, and the risks of decision making by algorithms.

Why does online privacy matter?

The right to privacy is often framed as “the right to turn away” – but arguably, privacy is about our right to participate. To choose what we share about ourselves, with whom, on our own terms. This is such an important distinction to make. Otherwise we get stuck in a misleading narrative, where the online platforms – e.g. Facebook and Google – who supposedly are all about “connection” and “participation” are on one side, and on the other side are the privacy advocates, who are made out to be against those things.

And this means that Facebook and Google – and governments too, in the wake of the Snowden disclosures – have easily reinforced a narrative whereby privacy is only for those with something to hide. 

But this is all wrong. What we want is to be able to participate – to exercise the full range of rights that the digital world makes possible – and to do that on our own terms. That means being able to choose when we turn away and when and in what way we want to participate. Privacy is a bedrock right that allows us to exercise autonomy and dignity in the digital world.

What do you think are the biggest risks to online data privacy at the moment? 

I think our understanding of data privacy at a general level is quite one-dimensional. I don't think we necessarily have the right tools or education to fully understand it. We're not just talking about what is convenient; we're talking about people's ability to access information, about economic and social rights, about questions of life and death for especially vulnerable populations, many of whom rely on anonymous identities. That's one of the major risks.

Another risk is the concentration of the means to classify individuals in the hands of a handful of tech giants and data brokers. That’s a determining factor in how you're treated and what information you're served – and indeed how you might be manipulated. And thirdly, there is the fact that our behaviour and data underpin lucrative business models.

I don’t think this is just about awareness. Even though The Social Dilemma [a new documentary on Netflix that Amnesty International partnered with the filmmakers on] does a great job at creating awareness, it's not like we weren't aware that this could happen. Channel 4 recently revealed that 3.5 million Black Americans were profiled and categorised as ‘Deterrence’ by the Trump campaign in an attempt to put them off voting. Cambridge Analytica and the Snowden revelations also happened not too long ago. But are you someone who believes that you are fundamentally affected by how your data is surveilled? Does this have a consequence in your everyday life? Those are the questions that will impact how we think about data and tech. 

You’ve also spoken about the dangers of surveillance technology – particularly facial recognition software. How do you think the Covid-19 pandemic has shifted the public view on tracking technologies? 

Global emergencies are conditions under which states of exception become possible. During the pandemic, we’ve seen a rollout of contact tracing technologies, we’ve seen a rollout of not just facial recognition but also, in some contexts, emotion recognition, and really quite diverse forms of biometric identification that we haven't seen used at this scale before. Covid-19 has almost created an innovation race for surveillance technologies. However, that crisis element also intersects with our nature to adapt, move on and try to make normal what we can in exceptional circumstances. Before we know it, the invasive measures that were put in place during the pandemic could outlive it. I think the contact tracing debate has opened up a lot more discourse around this issue, but it’s not at the same level as the response to the Cambridge Analytica scandal or the Snowden revelations.

We had those great debates. Those debates were never settled. They made us worried, but at some point, we just accepted that some surveillance is ongoing. Things like certain contact tracing apps risk significantly stifling our ability to move freely. There are so many things that can go wrong when you're dealing with a centralised means of controlling people’s movement. The ways in which these technologies can be used are not kept in check. And I think that is one of my biggest grievances around contact tracing technologies and around facial recognition. But we won't have the kinds of regulatory frameworks that we need as long as there isn't outrage at the scale that we saw before.


Talking about outrage, it was interesting to see the uproar over the A Level results fiasco recently – where else are algorithms being used that we might not be aware of? 

There are many other contexts to think about. The A Levels moment gave some hope to cynical minds like mine, who are waiting for people to challenge the delegation of human judgment to a machine. Algorithms are often used to quantify unquantifiable things, in contexts where there should really be a more extensive consultative process to figure out what the truth is and how best to respond.

There was another example recently in the Netherlands, where an algorithm was used to try to predict who was more likely to commit tax fraud and used dual citizenship as a variable. That largely resulted in people with dual nationality, often ethnic minorities, being flagged as committing tax fraud even though there was no evidence to support it. Another example is predictive policing, which tends to double down on existing structural prejudices – Amnesty International recently published a report on this. And algorithms are used in welfare assessments, which often come down to indicators designed to encourage reductive decision-making, rather than augmenting the expertise of a caseworker or social worker.
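A toy sketch can make the mechanism described here concrete. The code below is purely hypothetical and is not the actual system used by the Dutch tax authority; the function, weights and threshold are invented for illustration. It simply shows how including a protected attribute such as dual nationality as an input variable causes people in that group to be flagged more often, independent of any evidence of fraud.

```python
# Hypothetical illustration only – not the real Dutch tax-fraud system.
# A toy risk score that (wrongly) uses dual nationality as an input variable.

def fraud_risk_score(income_irregularities: int, dual_nationality: bool) -> float:
    """Toy weighted score; the weights below are invented for illustration."""
    score = 0.3 * income_irregularities
    if dual_nationality:
        # The problematic step: group membership raises the risk score directly,
        # even though it carries no evidence of fraud.
        score += 0.5
    return score

THRESHOLD = 0.6  # cases scoring above this are flagged for investigation

# Two taxpayers with identical financial behaviour (one minor irregularity each):
flag_single = fraud_risk_score(1, dual_nationality=False) > THRESHOLD  # False
flag_dual = fraud_risk_score(1, dual_nationality=True) > THRESHOLD     # True

print(flag_single, flag_dual)  # only the dual national is flagged
```

The same structural problem recurs in predictive policing and welfare assessments: when the input variables encode existing prejudice, the output reproduces it at scale.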

You recently co-authored a book called Digital Witness about using open source information for human rights investigation and accountability. What are your thoughts on whether technology leaves people more open to human rights abuses when it comes to the use of their data?

I don't think we can make an unequivocal assertion that technologies leave people open to greater human rights abuses. They do. Of course they do. But they also give people greater means to contest oppressive power relations and systems that fail to respect human rights and that destroy societal engagement and free public discourse. Contexts in which we've used technology to accomplish that include Amnesty’s reporting on war crimes, human rights atrocities and crimes against humanity. Some of the same technologies that allow for heavy surveillance and the abuse of human rights also enable the effective pursuit of and fight for human rights.

So I do think it's double-edged. I think the way that we shift that dial is by challenging the circumstances under which these technologies exist. Technology should not be operating according to a surveillance-based business model, which privileges the abuse of human rights over the fight for them. Does the tech industry have to be based on generating an income from our online behaviour and the data that we share, in ways that we don't always understand? Are we willing to accept that society should be commodified in this way? In lots of ways, this is an exercise in imagining a different world. One we should not shy away from.

Do big technology companies have a responsibility to be more ethical when it comes to handling our data? 

I think social media platforms tend to rely on the language of ethics to justify their model. Ethics can do a lot to steer the conversation in the right direction and there are incredible digital ethicists, scholars and activists out there doing fantastic work on this. However, my fear is that it can also be a normalising discourse when deployed from within the corporate responsibility side of things, where there is very little accountability and practically no enforcement.

It's taken for granted, for example, that these asymmetric relations of power are birthed by tech and merely need a tech fix in order to be corrected, which is simply not enough from my perspective. Not only do we need social media platforms to do better than engage in the theatre of ethics washing, but we need stronger and stricter regulation that acknowledges the structural inequities within which the technology industry is embedded. The human rights framework – a legally, politically and morally binding set of principles – is the best universal language we have to that end.

Your data should not be for sale. We’re taking Oracle and Salesforce to court for illegally selling millions of people’s data, and we need your help! If you believe that tech giants should be held accountable for their use of people’s data, please support our claim here. We’re fighting for change, because your privacy matters.

Support our case against Oracle and Salesforce.

Sign up

Show your support

By leaving your details here, you support our case against Oracle and Salesforce. We may use your full name and email address in the legal proceedings to demonstrate how many people have actively given their support to The Privacy Collective.


- You support The Privacy Collective's case against Oracle and Salesforce.

- You are part of the group of affected parties.

You are part of this group if you have accepted cookies from Oracle and/or Salesforce from the Netherlands since 26 May 2018 and currently live in the Netherlands. These cookies are present on popular websites such as nu.nl, booking.com, marktplaats.nl, bol.com, buienradar.nl, telegraaf.nl, funda.nl and amazon.nl.

- The Privacy Collective uses your first and last name and email address to demonstrate your support in the proceedings against Salesforce and Oracle and to contact you about the progress of the proceedings.

Where possible, personal data is pseudonymised. Read TPC's full privacy policy here.

By clicking 'Sign up', you confirm the above.
