As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of the UK’s leading experts why online privacy matters.
Carsten Maple is a professor of cyber systems engineering at the University of Warwick, where he leads the Academic Centre of Excellence in Cyber Security Research. He’s also a fellow of the Alan Turing Institute, a co-investigator in the PETRAS National Centre of Excellence for IoT Systems Cybersecurity, and former co-director of the National Centre for Cyberstalking Research. Here he discusses who the public trusts when it comes to data, how the coronavirus pandemic has changed the conversation about privacy, and why it’s almost impossible for the public to make an informed choice about sharing data.
Why does online privacy matter?
There are many different types of information collected about people online, and in many different ways. Some information is provided by citizens, who might provide it openly. Other data is observed, derived or inferred from our behaviour and activity online. We also talk about passive and active data collection.
On the commercial side, there is a huge market for data brokerage. These are legitimate firms that buy, sell and trade in your information, and make a lot of money. There’s also the black market for information. If your password or credit card details are compromised, for example, that information can be sold. We know there are lists of people who’ve been scammed before, so if you’ve been the victim of some kind of fraud or identity theft, there’s a good chance that you’ll be re-victimised. That information can also become important in building a fraudulent relationship with you – it’s quite interesting what one can infer from data and how that can be used for nefarious, or at least questionable, purposes.
You could argue that users consent to share their data all the time. But consent is very difficult because typically we negotiate it at the time of a transaction. If you need to send an email, and the terms of service are not too favourable for privacy purposes, then you may well consent because you really need to send that email. That’s why you’ve got the option to opt out later, or the right to be forgotten under the General Data Protection Regulation (GDPR), but many companies recognise the value of information and want to try and circumvent the spirit of GDPR.
Do you think there’s enough protection in place around how people’s data can be used?
I think the GDPR is fantastic regulation, but I do think there might need to be some revision because of the caveats around legitimate use. That’s typically relied on by governments, who can argue public interest rather than needing to obtain consent from every citizen they collect data about. But companies can use it too. When GDPR was first introduced, many experts asked: how are we going to manage this consent? And a lot of people said this will actually be resolved in the courts, because companies will argue legitimate interest rather than seek consent.
So if I’ve got a legitimate interest in providing you a better service, then it could be legitimate for me to collect that data. And of course, you have the option to opt out. The problem is that this isn’t clear to people. The Information Commissioner’s Office (ICO) is charged with the enforcement of GDPR, but how many minor issues are reported? And if they all were, could they all be taken up by the ICO? Probably not. So to answer your question, I believe that the regulations being introduced are a step in the right direction; however, more needs to be done to ensure correct enforcement.
You’ve spoken in the past about the privacy paradox – can you tell me a bit about what that means?
It’s been around as a concept for a long time – Susan Barnes first wrote about it in 2006 – and it’s this idea that we say we want to protect information, but we willingly give it away. In 2014, a report from the Oxford Internet Institute talked about a new privacy paradox, which found that people know what the risks are and are making informed choices.
I believe that actually it is not an informed choice – it can be a conscious choice, but not an informed one. The most common example is Facebook. You know that they’re going to use your data but, even after Cambridge Analytica, no one could foresee or predict what was really happening. It’s very difficult even for me, as an academic in this area, to make an informed choice.
Transparency is one thing. I can give you terms and conditions, and they are transparent, but they also consist of 70 pages that you will never read. Are they apparent? I think the conversation needs to move from transparency to apparency – trying to make sure they’re understandable. Regulation is one lever we have (although it would be a challenge – do you test how readable something is? Do you test for length and font size?), but another is economic. If there are economic incentives to being more apparent with how data is collected – increased revenues from customers as they trust a service more – that will also drive the behaviours of these technology platforms. Some companies may also be able to use fair and ethical behaviours as a way to justify a higher price.
And when we think about the rise of smart technology and the Internet of Things (IoT) – which you focus on as part of your work with PETRAS – do you see potential ethical and privacy issues around technology companies having access to this sort of data in our homes?
It’s really interesting, because IoT systems are proliferating and we need to do more research into trustworthy autonomous systems. Part of that is around the ethics. The Internet of Things is a rich source of data – we’re collecting far more information than we’re creating. Cars, for example, are creating and transmitting so much data, it’s incredible. Warwick University is trying to lead on some of this work to see how we can share automobile data effectively. And I really think that coming together of social and technical experts is really important. Privacy risk and ethical considerations should be measured in all of these scenarios.
You recently co-authored an interesting project called Speak for Yourself about who the public trusts to be in charge of our data – can you tell me a bit about the study? And do you think the pandemic has changed that conversation about who we’re happy to share that data with?
The pandemic is an interesting situation because it is unprecedented for most people in this lifetime, and it’s certainly unprecedented in the information age that we’re currently in. In the Speak for Yourself project, we gave a number of scenarios around the type of data that someone would happily share (location data, data from your contact list) and who they would share it with. Overwhelmingly, the NHS was the body that people in the UK were most likely to share with, and effective control of the pandemic (at the time of the survey) was more important than data privacy. But that shouldn’t necessarily mean that status quo persists beyond the pandemic. Just because we are comfortable sharing data in one situation doesn’t mean we will always be comfortable sharing it.
Your data should not be for sale. We’re taking Oracle and Salesforce to court for illegally selling millions of people’s data, and we need your help! If you believe that tech giants should be held accountable for their use of people’s data, please support our claim by “liking” our support button. We’re fighting for change, because your privacy matters.