The internet is f*cked: Mozilla Fellow Frederike Kaltheuner on the failures of a rigged system

17 December 2020

As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of Europe’s leading experts why online privacy matters.

Frederike Kaltheuner is a Mozilla Tech Policy Fellow and Fund Manager of the European Artificial Intelligence Fund. Previously she was the Programme Director of Privacy International. Here, she discusses with The Privacy Collective the shift in thinking about privacy as protection from exploitative company practices, how cookie notices make use of deceptive design, and why we can’t make individuals responsible for technology companies’ disregard for data rights.

Why does online privacy matter? 

It's often forgotten that privacy is a human right, just like freedom of expression. It is one of the key factors that protects us from abuse, and that’s especially true for people who are vulnerable. What’s really interesting is that privacy as a concept is closely intertwined with technology, and as such it is constantly evolving and changing.

Can you tell me about your work at Mozilla, and your role in establishing the European AI Fund? 

Until November 2018 I worked at Privacy International, with a team of technologists, policymakers, researchers, lawyers and advocacy people, looking at the various ways in which companies use data and how privacy intersects with corporate power. I was also given the opportunity to do a technology policy fellowship with Mozilla this year, looking at the use of AI to make inferences and predictions about people, and how AI intersects with identity.

And I’m also the fund manager of the European AI Fund, which I joined in June. It’s a new philanthropic initiative to support critical work that's being done in Europe around the use of automated decision-making, AI and algorithms. One of the core motivations for this fund is that a lot of the work around digital rights and privacy in Europe is being driven by digital rights organisations. We want to bring in new voices, specifically, organisations that represent or work on behalf of communities that are disproportionately affected.

The internet is largely paid for by ads that track people across the web – how has this business model evolved, and what are the implications of this?

It is true that advertising supports a large proportion of the open web as we currently know it. But the discussion around targeted ads and tracking is not about whether this should be monetised. It’s about whether collectively, as a democratic society, we find the price acceptable. 

The current price is that everything you do online, whatever app you download, whatever device you use, your detailed behaviour is being tracked and shared with hundreds of companies that you've probably never even heard of. This is problematic for a number of reasons. On a purely principled level, there’s no natural law that says this is how technology has to be built. It gets really complicated when you download things like mental health apps, and you can’t work out whether you can trust the app not to share your data with advertisers.
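To make that mechanism concrete, here is a minimal sketch of how a cross-site tracking pixel works: each publisher embeds a tiny invisible image served from a tracker’s domain, and the tracker’s third-party cookie ties visits from the same browser together across otherwise unrelated sites. Everything here is illustrative – the domain, cookie name and endpoint are invented for this sketch, not taken from any real company.

```typescript
// Minimal sketch of a cross-site tracking pixel (all names are illustrative).
// Publishers embed <img src="https://tracker.example/pixel.gif"> in their pages;
// the tracker’s third-party cookie links visits from one browser across sites.
import express from "express";
import { randomUUID } from "crypto";

const app = express();

// 1x1 transparent GIF, so the embed renders as an invisible image.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

app.get("/pixel.gif", (req, res) => {
  // Reuse the tracking ID if this browser already carries our cookie,
  // otherwise mint a new one. This single ID follows the user everywhere
  // the pixel is embedded.
  const existing = /uid=([\w-]+)/.exec(req.headers.cookie ?? "")?.[1];
  const uid = existing ?? randomUUID();

  // The Referer header reveals which site (and often which exact page)
  // the browser was on when it loaded the pixel.
  console.log(`browser ${uid} seen on ${req.headers.referer ?? "unknown page"}`);

  res.setHeader(
    "Set-Cookie",
    `uid=${uid}; Max-Age=31536000; SameSite=None; Secure; HttpOnly`
  );
  res.type("image/gif").send(PIXEL);
});

app.listen(8080);
```

In real ad tech the resulting ID is then typically synced and shared with other companies, for example through real-time bidding, which is how a detailed behavioural profile ends up with hundreds of firms the user has never heard of.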

For the vast majority of people, the fact that you're being tracked is annoying, but it's not immediately associated with any tangible negative harm. But just because the harm is invisible doesn’t mean it doesn’t exist. We've seen what can happen when data is sold on or shared for unrelated purposes. In the US, for example, the authority responsible for deportations purchased location data in order to track people. There are also examples of Muslim prayer apps sharing data with the US military, and of diet ads targeting people with eating disorders. These risks have an impact on how societies, democracies and public spaces work. But they also affect certain people more than others.

How aware do you think the public is that their data is being shared in this way? 

A lot of the examples that I've mentioned so far are actually in violation of European privacy and data protection laws, which classify certain kinds of data as sensitive. Data about your ethnicity, political beliefs, sexual orientation, religion, health and trade union membership are special category data that deserve a higher level of protection. In reality, however, there's a massive enforcement gap between what laws like the GDPR say and how data is used in practice. Unfortunately that gives data protection a bad reputation. I don’t even like using the phrase ‘data protection’ – it’s about protecting people.

That gap exists because of a combination of factors. Everything we do runs on data, so there is a disproportionate relationship between the size of the data protection authorities and the scale of what they're supposed to regulate. Despite some prominent and very visible fines, the chances of receiving a big fine are currently still quite small. That turns non-compliance into a business case. You can’t blame companies for deciding to take their chances.

Nowhere is this more evident than when it comes to cookie notices. The idea behind these notices is that users can decide and choose whether they want to be tracked - or not. That's clearly the spirit of the law. In practice, however, most websites and apps have chosen to interpret this as a box-ticking exercise. Their tactic is: ‘how can we design consent boxes in a way that makes it more likely for people to say yes?’ Now the web is full of incredibly annoying cookie notices that disrupt the entire browsing experience. If websites, apps and operating systems truly wanted to give people agency and control, they would allow them to make their choice once, and respect it at a systems level. The reason we’re not seeing this is that it would very likely reduce tracking drastically.
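One concrete example of what respecting a choice at a systems level could look like already exists: the Global Privacy Control (GPC) signal, which lets a browser send an opt-out with every request via the Sec-GPC header. The sketch below shows how a site could honour that signal instead of rendering a banner; the middleware structure and variable names are my own illustration, not an existing implementation.

```typescript
// Sketch: honouring a browser-level privacy preference (the Global Privacy
// Control header, "Sec-GPC: 1") instead of asking via a per-site cookie
// banner. The middleware and flag names are illustrative.
import express from "express";

const app = express();

// Check the signal once, on every request: if the browser signals an opt-out,
// disable all tracking for this visitor – no banner, no stored consent record.
app.use((req, res, next) => {
  res.locals.trackingAllowed = req.headers["sec-gpc"] !== "1";
  next();
});

app.get("/", (_req, res) => {
  if (res.locals.trackingAllowed) {
    // Only in this branch would analytics or ad scripts be loaded (omitted).
  }
  res.send("Page served; the browser-level preference was respected.");
});

app.listen(8080);
```

The point is the design rather than the library: the preference is expressed once, in the browser, and every compliant site can read it – exactly what per-site cookie banners fail to offer.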

What makes you hopeful for the future? I see, for example, that there are signs Europe is trying to ban micro-targeted advertising. Are you heartened by these sorts of measures?

The EU has a really ambitious plan to regulate tech companies and uses of specific technologies over the next few years. One of the key goals of the European AI Fund is to make sure that civil society actors have a voice in these debates. Over the past few years, there's been a lot of focus on making sure that the way in which AI, for example, is deployed becomes more responsible. That’s important work but we also need people to be asking the broader questions – do we even want or need to automate everything? And what can we do about market concentration and market dominance?

I’m also hopeful that we've seen a real move away from privacy as a very libertarian idea - the right to be completely left alone - towards a broader demand that protects people from exploitative and predatory practices. People need privacy because it protects them from abuse, and it allows them to have agency over their lives. But change in this space takes a lot of time. In 2018, when I was still at Privacy International, we filed complaints against a number of data brokers, ad tech companies and credit scoring agencies. It took until 2020 for the Information Commissioner’s Office (ICO) to issue a notice against even one of those companies.

Is, as Mozilla says, the internet f*cked? What needs to change? 

The platforms and systems that we depend on - now more than ever - are really f*cked, because they aren’t ready for the challenges that are being thrown at them. We’re still in the midst of a global health crisis, and countries like the US don’t even have a data protection law that would stop employers, predatory lending companies or advertisers from exploiting people's health data. Misinformation about vaccines and Covid is spreading on all networks, but platforms and governments haven’t yet figured out how to mitigate misinformation without hampering freedom of expression. This isn’t an abstract problem; there are lives at stake.

There are also some broader lessons from the current situation. The moment personal circumstances or the political climate changes, technology that’s convenient in normal times can very suddenly become incredibly risky - or an actual liability. We’ve never been more dependent on the internet and technology, but we’re in quite a fragile, difficult situation. If you’re asking what keeps me up at night, this is really high on the list.

What steps can users take today to educate themselves and protect their online data?

I'm always really nervous about shifting responsibility onto individuals. It's a rigged system – those cookie disclaimers are designed for you to click yes. So people shouldn't feel guilty about the fact that they don't always have the time or patience to go out of their way to express their actual preferences. Even as someone who has worked on this for years, I sometimes can’t be bothered. So many things are designed in a way that makes it virtually impossible for most people to understand what's happening - which means truly informed consent is very difficult.
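To make “designed for you to click yes” concrete, here is a small, hypothetical sketch of the asymmetry typical of deceptive consent banners: accepting everything takes one prominent click, while refusing means spotting a low-key link and unticking boxes that are pre-checked. The function and purpose names are invented for illustration; this is not code from any real consent platform.

```typescript
// Illustrative sketch of a dark-pattern consent banner (hypothetical names).
// Accepting is one click; refusing requires a second screen where every
// purpose is pre-ticked, so "doing nothing" still amounts to full consent.
function renderBanner(root: HTMLElement, onConsent: (purposes: string[]) => void) {
  const purposes = ["analytics", "personalised ads", "data sharing"];
  const banner = document.createElement("div");

  const acceptAll = document.createElement("button");
  acceptAll.textContent = "Accept all"; // styled big, bright and primary
  acceptAll.onclick = () => onConsent(purposes); // one click, everything on

  const manage = document.createElement("a"); // styled as small grey text
  manage.textContent = "Manage options";
  manage.onclick = () => {
    banner.innerHTML = "";
    const boxes = purposes.map((p) => {
      const box = document.createElement("input");
      box.type = "checkbox";
      box.checked = true; // opt-out by default, not opt-in
      banner.append(box, document.createTextNode(p));
      return { p, box };
    });
    const confirm = document.createElement("button");
    confirm.textContent = "Confirm choices";
    confirm.onclick = () =>
      onConsent(boxes.filter((b) => b.box.checked).map((b) => b.p));
    banner.append(confirm);
  };

  banner.append(acceptAll, manage);
  root.append(banner);
}
```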

But where possible, I think it is important that we make decisions that communicate our preferences when it comes to online privacy, because companies do react to that. They’re already recognising that people care a lot more about this than they used to, and are becoming more educated about it. There are some companies that are doing a really good job. Litigation also plays an important role in bridging the enforcement gap between the spirit of the law and how it has been implemented. It’s also important to test the strength of the law in how it applies to new technologies that we haven't seen before.
