The future of tech should be fair, transparent, accountable, says Meg Foulkes from the Open Knowledge Foundation

12 November 2020

As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of Europe’s leading experts why online privacy matters. 

Meg Foulkes is the director of the Open Knowledge Foundation’s Justice Programme, a new initiative set up to support legal professionals in holding AI and algorithms to account and improving their deployment. Here, she talks to The Privacy Collective about why it’s important to hold power to account, how the boundaries between online and offline spaces are becoming blurred, and the need for data literacy skills to be taught in schools.

Why does online privacy matter?

Privacy is a really difficult concept because it doesn’t have one fixed meaning. Edward Snowden said in 2016 that privacy is the right to a self. It gives you the ability to share who you are with the world on your own terms.

The problem with this framing is that it obscures the hidden power and privilege behind it. People on the margins of society regularly experience privacy invasions that more privileged people don’t: anyone who asks for government assistance, or who has interactions with social services or law enforcement, for example, has to divulge intimate details of their lives to access that support. The digital age opens their lives up for scrutiny in ways that people in a more privileged position just don’t have to encounter.

You also have to take a step back and consider what even counts as ‘online’ anymore. The boundaries are becoming increasingly blurry. We’re not just accessing the internet on our phones or computers; there are also smart devices in our homes, such as virtual assistants like Alexa. The space that we need to protect is wider than ever before.

Can you tell me about the work of the Open Knowledge Foundation and its role in championing open data? What does ‘open knowledge’ mean?

Open data is the idea that some data – non-personal, non-private data – should be freely available for everyone to use, republish and amend as they wish, without restrictions from methods of control such as copyright. This definition was formulated by the Open Knowledge Foundation in 2006 and is still widely used today. Open data advocates encourage governments to give citizens access to all manner of data, such as budget spending, and as a movement we have had a real impact around the world. Open data has highlighted the extensive use of tax havens, for example, and revealed bribery and corruption in the public contract bidding process.

We use the term ‘open knowledge’ because we want to ensure that opening up that information doesn’t become a hollow act. Data is only useful and meaningful when it becomes knowledge, which means you need more than just a spreadsheet full of numbers. You need skills to interpret and analyse the information, to be able to verify and check the quality of the data, and to put it into context. So alongside our advocacy work, we’ve trained thousands of individuals, including journalists and those from civil society, to give them the data literacy skills they need.

Can we talk a bit about the Open Knowledge Justice Programme? Are you seeing an increase in litigation around these subjects?

The Justice Programme focuses on improving accountability when artificial intelligence (AI) and algorithms are used by governments or corporate agents to make decisions about people on a large scale. We call them public impact algorithms and they can have very serious consequences. 

To improve accountability – to hold these bodies to account if something goes wrong – we’re doing two things. First, we’re training barristers, solicitors and legal activists to give them a working knowledge of how AI and algorithmic decision-making works. This is pretty alien to the legal profession, and lawyers need some understanding even to identify when a client has been impacted by it. Second, we’ve written a practical checklist of questions that legal professionals can direct to the body that deployed the algorithm, such as the Home Office in a visa decision, or the Department for Work and Pensions in a benefits calculation. These questions might be: has a human verified this result at any stage of the process? Can you tell me whether a data impact assessment has been conducted? They’re not very technical questions, but the answers can be very revealing.

What we’re starting to see is that often the deployers say, “We don’t know”. And that’s not good enough as far as the law stands in the UK. We’re also working with partners to create standards for best practice in the use of algorithmic decision-making. This work focuses on mitigating the potential harms to affected individuals throughout the whole process, from design through to deployment and roll-out.

You recently expressed concern about privacy issues around remote tests for students. What are your thoughts on these sorts of initiatives that have been introduced at pace during the pandemic? Has appropriate scrutiny been applied?  

I don’t think these technologies are being scrutinised appropriately at all. Governments around the world are operating with expanded “emergency powers” at the moment to fight the coronavirus pandemic, and I think there’s a certain amount of public goodwill for that to happen in the face of these perilous times. But the fear is that such temporary measures could quietly become permanent – what we call the “mission creep” effect.

Too often, the assessment of the risks and the benefits associated with these technologies happens behind closed doors. The people who have the most to lose when these things go wrong aren’t consulted or given the opportunity to have input. But there have been some successes in challenging these hastily deployed systems, even while the pandemic is ongoing. Norway, for example, stopped using its contact-tracing app Smittestopp after a damning Amnesty International report identified it as one of the most invasive in the world.

What does trustworthy AI look like? What key principles/practices need to be considered? 

The key principles are ones with which we’re all familiar: fairness, transparency, accountability. We’ve got a plethora of ethics frameworks and guidelines in the field of trustworthy AI, which help to inform public debate and raise the profile of this issue, but we also need effective regulatory and legislative provisions to govern these technologies. Otherwise we really have no means by which to hold governments and corporate actors to account when they don’t behave ethically, or when the AI isn’t as trustworthy as it should be. It’s not enough to just tell these players to be a bit more trustworthy when the potential harms are so serious.

In 2018, the cross-party Science and Technology Committee at the House of Commons recommended that a register of all government use of AI and algorithmic decision-making be published. That still hasn’t happened, and it would be a good first step. But again, if you’re asking for that kind of transparency, we also need the skills to deal with it. A list of algorithms or source code isn’t, on its own, going to be useful to or of interest to the general public.

That’s why we see strategic litigation as a key way we can make meaningful headway on this issue: legal professionals can act on behalf of the general public. Collaboration with different groups – grassroots organisations, the legal sector, trade unions, the media – is also essential.

The Open Knowledge Foundation has campaigned for a focus on teaching data skills to the British public – why is that so important? What can the public do to protect their privacy today? 

We've seen during the pandemic how important data can be as a source of knowledge – we’ve been bombarded with graphs and slides over the past six months. But if governments published this data openly and the public had data literacy skills, this information could become much more powerful. 

With awareness of the dangers of fake news and misinformation rising, it’s now commonly accepted that fact checkers and verifiers are important. Data literacy skills allow ordinary people to conduct the same kind of reliability checks when data is presented to them by the state – especially when it’s done in a hurry and there are huge implications. It’s been amazing to see schools start to introduce this thinking at a really young age. Children are learning that you shouldn’t trust every result that Google spits out at you, and that you have to exercise some discretion and a healthy amount of scepticism when online.

As well as building up data literacy, another really great way to get involved is to support civil society organisations that are doing this work. I don’t think members of the general public necessarily have to be personally involved, but they could donate a little money or support a campaign by sharing it on their social media. There’s also a huge open data community for those wanting to learn more. Open Knowledge, for instance, holds Open Data Day every year on 6 March, which is a great place to start.

Your data should not be for sale. We’re taking Oracle and Salesforce to court for the misuse of millions of people’s data, and we need your help! If you believe that tech giants should be held accountable for their use of people’s data, please support our claim here. Because your privacy matters.
