Is the entire online data ecosystem a breach of privacy rights?
Dr Garfield Benjamin responds to news reports regarding a new lawsuit against online advertising.
The Irish Council for Civil Liberties (ICCL) has launched a lawsuit against the online ad industry. The case is targeted at the Interactive Advertising Bureau (IAB) Tech Lab, as well as tech giants like Google and Facebook who follow the standards set by the IAB. The ICCL’s case is built on huge amounts of evidence around how data about us is collected and misused every time we go online.
The lawsuit calls the online advertising industry’s misuse of data a data breach. This framing matters because it allows legal complaints to be triggered under regulations like the General Data Protection Regulation (GDPR). But it does not mean that the companies involved have lost data or had it stolen. Rather, the problems are built into the entire online advertising industry.
Whenever we click on something online, our activities are logged and built into a ‘secret dossier’ about us. This can build up a picture of our lives. Where do you live? What age are you? What gender are you? What is your job? What are your interests? What’s your financial situation? Your living situation? Your family situation? All this and more is collected about us, mostly without us even knowing.
The data in these secret dossiers is used to target us with ads. This might seem harmless, but, for starters, personalised ads have only a tenuous claim to effectiveness. STER, the Dutch public broadcast advertising agency, found that personalised ads based on web cookies were actually less effective than placing good quality ads in relevant places (called contextual targeting rather than personalised targeting).
But most websites still run outsourced advertising based on the personalised targeting system. This is often called Real-Time Bidding (RTB), as advertisers pay based on keywords or characteristics that are matched to specific target audiences in real time. You’ve probably noticed that clicking on a certain product means related adverts immediately start following you around other websites. But beyond having the same targeted ads stalking you across the web, there are a number of other problems with personalised ad targeting.
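The auction at the heart of RTB can be sketched in a few lines of code. This is a deliberately simplified illustration, not any vendor’s actual protocol: the profile fields, advertiser names, and bidding rules below are all hypothetical, and real systems exchange far richer data through standardised bid requests.

```python
# Illustrative sketch of a Real-Time Bidding (RTB) auction.
# All profile fields, bidders, and prices are hypothetical.

def run_auction(user_profile, bidders):
    """Collect a bid from each advertiser for this user and pick the highest."""
    bids = []
    for bidder in bidders:
        amount = bidder(user_profile)  # each bidder prices the ad slot from the profile
        if amount > 0:
            bids.append((amount, bidder.__name__))
    return max(bids) if bids else None

# Hypothetical advertisers bidding on traits inferred into the 'secret dossier'.
def baby_products_ad(profile):
    return 2.50 if "pregnancy" in profile.get("interests", []) else 0.0

def luxury_car_ad(profile):
    return 4.00 if profile.get("income_bracket") == "high" else 0.0

profile = {"interests": ["pregnancy", "cooking"], "income_bracket": "high"}
winner = run_auction(profile, [baby_products_ad, luxury_car_ad])
print(winner)  # the highest bidder wins the ad slot for this user
```

The point of the sketch is that the decision is made per user, per page load, from whatever the dossier claims about you, which is why the same ads follow you from site to site.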
Personalised ads and RTB are run by algorithms fed with the data collected about us. These algorithms don’t understand the data they receive; they just process it. As a result, they can introduce harmful content into our ad feeds and other online recommendations. Imagine you have recently miscarried, but the ad system saw that you were pregnant and continues showing you ads for baby products. Left uncorrected, the system could carry this on automatically through the assumed age of the lost child, surfacing painful memories every time you go online. Social biases come through in these systems as well: better job adverts or financial rates are not offered to women or racially minoritised people. Discrimination might not be overtly programmed in, but because the algorithms match target audiences using historical data, the same human biases are replayed over and over again.
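How biases replay without anyone programming them in can be shown with a toy example. The dataset and fields here are entirely made up for illustration: a matching rule derived from biased historical decisions simply repeats those decisions for new users.

```python
# Minimal sketch of how matching on historical data reproduces past bias.
# The dataset, groups, and outcomes are invented for illustration only.

historical_offers = [
    {"group": "A", "offered_premium_rate": True},
    {"group": "A", "offered_premium_rate": True},
    {"group": "B", "offered_premium_rate": False},
    {"group": "B", "offered_premium_rate": False},
]

def learned_rule(user):
    """'Learn' by matching a new user to the majority outcome for their group."""
    outcomes = [r["offered_premium_rate"] for r in historical_offers
                if r["group"] == user["group"]]
    return sum(outcomes) > len(outcomes) / 2  # majority vote from history

# No line of code says 'treat group B worse', yet the old pattern repeats:
print(learned_rule({"group": "A"}))  # True
print(learned_rule({"group": "B"}))  # False
```

Real targeting systems are vastly more complex, but the structural problem is the same: optimising to match historical data means inheriting whatever discrimination that history contains.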
The entire data ecosystem is built on extracting data from as many people as possible and using it to feed algorithms that not only shape our digital lives but can also determine what opportunities we have offline. It is long past time to address these issues. Progress so far has been limited, but it is growing: just think of the protests against the exam algorithms last year that caused a biased system to be overturned.
Real-Time Bidding has previously been a focus of the UK’s Information Commissioner’s Office (ICO), which wrote a significant report on the problems it creates but has so far taken little meaningful action against any of the problems it identified. The entire investigation was put on hold during the pandemic, but resumed this year with a focus on auditing digital marketing platforms. One persistent limitation of this work is that it enforces only bare minimum legal requirements, which still leaves many loopholes and does not really challenge the underlying issues.
Last year, I wrote a Digital Society report that brought together thinking about privacy and online content. In it, I identified huge gaps in regulation. Different aspects of our digital lives fall under different regulators, which means the bigger problems have so far fallen through the gaps. Regulators may acknowledge the issues, but they don’t have the powers or the political will to tackle them. I conducted surveys of the UK public that found widespread concern about privacy and information online, and a clear appetite for action. I outlined seven steps towards regulating more cohesively, acknowledging that privacy and online content aren’t just linked; they are part of the same issue. Addressing it requires systemic change: an overhaul of the exploitative business models and personalised ad hype that are causing so many problems.
At a time when the government’s Taskforce on Innovation, Growth and Regulatory Reform (TIGRR) is pushing for the UK to abandon the protections of the GDPR, it is vitally important that we think more strategically and more cohesively about how data is regulated. It is not a time to relax the rules. It is time to take a more systemic look at how our data is being exploited to increase inequality. It is time to take action against tech companies who have designed the web in a way that exploits people. It is time to take action against data exploitation and algorithmic injustice. Hopefully the ICCL case will kickstart some of these major changes.
Dr Garfield Benjamin is a Postdoctoral Researcher at Solent University.