Immunizing European democracy against disinformation: Why fake news is about more than truth
The events of the last three years have taught us one key lesson: democracy can be hacked.
The upcoming European Parliament elections will be a bellwether for the severity of disinformation in society today. They are uniquely vulnerable, and any attempt to manipulate them could put the entire process in jeopardy. Indeed, events in the UK and the U.S. demonstrate how fake news can throw entire societies into disarray. But despite the gravity of the threat, as a society we remain digitally naïve. Profiling firms and social media behemoths continue to monetize and abuse personal data. As a result, the disinformation targeted at us is more prevalent than ever.
On 26 March, ICF Next, in collaboration with its academic partner, the think-tank Protagoras, welcomed Paul-Olivier Dehaye, a mathematician, data specialist, and one of the first people to uncover the Cambridge Analytica scandal, and Janak Kalaria, a technology and analytics specialist at ICF, to share their thoughts on the resilience of European democracy in the age of disinformation.
Content is not king
The public narrative surrounding fake news focuses on the truth value of the content itself—is what we’re being told true or false? But Dehaye argues the issue is not that binary. “Content is only a very narrow part of the problem,” he says. “The problem goes much further and goes into targeting and personalization. Why? Because the targeting helps amplify the speed at which this information circulates.” By focusing less on the content itself, and more on how that content is delivered to specific users, we can attack the problem at its root.
The combination of profiling, targeting, and virality is dangerous in a political context, and one of Facebook’s profiling services—Facebook Lookalike Audiences—is questionable in this respect. It delivers thousands of relevant leads to businesses by analyzing demographic trends in the profiles of people who have bought or searched for their products. Although useful from a business perspective, the practice becomes legally and ethically dubious when used for political purposes. “Lookalike Audiences is what propels a lot of disinformation,” says Dehaye. “It’s using outsourced profiling for political purposes without the explicit consent of the individuals.” In principle this isn’t legal, but in the absence of enforced regulation in the field, the tool continues to be used in these contexts.
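To make the mechanics concrete, here is a minimal, purely illustrative sketch of the general idea behind lookalike expansion: given a "seed" audience, score every other user by similarity to the seed's average profile and keep the closest matches. Facebook's actual system is proprietary and far more sophisticated; the features, data, and function below are invented for illustration.

```python
# Illustrative sketch only: not Facebook's implementation. Shows the general
# idea of expanding a seed audience by profile similarity.
import numpy as np

def lookalike_audience(seed_profiles, candidate_profiles, top_k=1000):
    """Rank candidates by cosine similarity to the seed audience centroid."""
    centroid = seed_profiles.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    norms = np.linalg.norm(candidate_profiles, axis=1)
    scores = candidate_profiles @ centroid / np.maximum(norms, 1e-12)
    return np.argsort(scores)[::-1][:top_k]  # indices of the most similar users

# Hypothetical data: each row is a user, each column an interest/demographic signal.
rng = np.random.default_rng(0)
seed = rng.random((500, 40))          # users who engaged with a page or product
candidates = rng.random((100_000, 40))
audience = lookalike_audience(seed, candidates, top_k=5000)
```

The political danger Dehaye describes follows directly from this mechanic: a campaign only needs to profile a small seed group, and the platform extrapolates to thousands of similar people who never consented to being profiled.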
These tools are why the debate needs to be broader than the truth value of content alone. “Information is not just true or false but is aimed at particular people to try to answer needs or weaknesses they may have with a view towards virality,” comments Dehaye.
Technical problem, technical solution
Developing a solution is made harder by the fact that fake news is constantly evolving, becoming more sophisticated and harder to identify. Fraudsters and trolls continue to devise new methods of deception, such as deepfakes: videos that, although seemingly real, are in fact fabricated. Kalaria, an expert in the technological landscape surrounding fake news, believes the solution to the problem is almost paradoxical. “The irony of fake news is that the technology that caused this crisis is the same technology that can solve it,” he says. “We need to trust the algorithm to be one step ahead of those trying to misuse it.”
According to Kalaria, we already possess the technology to successfully combat fake news. Artificial intelligence can detect suspicious articles by applying techniques from computational linguistics, machine learning, keyword analytics, and link analysis. If a piece of content raises red flags under each of these lenses, there is a high chance it is false.
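As a rough illustration of one such lens (the specific systems Kalaria refers to are not named), a supervised text classifier can be trained on labeled articles and used to score new content. The examples, labels, and model choice below are placeholders; real deployments combine many signals and vastly more data.

```python
# Minimal sketch of a single "lens" in a detection pipeline: a text classifier
# that flags linguistic patterns associated with dubious content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Scientists confirm new exoplanet after peer-reviewed study",    # credible
    "SHOCKING: miracle cure THEY don't want you to know about!!!",  # dubious
    # ...thousands of labeled examples in practice
]
labels = [0, 1]  # 0 = credible, 1 = suspicious

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(articles, labels)

# Probability that a new headline raises a red flag under this one lens.
print(model.predict_proba(["You won't BELIEVE what happened next"])[0][1])
```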
Virality is also a key indicator from a technological standpoint. According to Kalaria, artificial intelligence can track the speed at which content circulates: “Anything that is fake is 70 percent more likely to be retweeted than a true story.” Working in tandem with a rapid alert system, the artificial intelligence can detect and remove fake news in real time, before the disinformation has a chance to mislead public opinion. Despite the power of these algorithms, however, Kalaria stresses that we still have a part to play: human expertise is still needed to verify complex and intricate cases of fake news.
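A toy version of such virality monitoring might look like the following: compare how quickly a story accumulates shares against a baseline rate and flag outliers for human review. The baseline, threshold, and data are invented for illustration; a real rapid alert system would model cascade shape, account credibility, and much more.

```python
# Toy virality monitor: flag content whose share rate far exceeds a baseline,
# then hand it to human reviewers, as Kalaria suggests.
from collections import defaultdict

BASELINE_SHARES_PER_HOUR = 50.0   # hypothetical average for organic content
ALERT_MULTIPLIER = 10.0           # hypothetical anomaly threshold

share_counts = defaultdict(list)  # story_id -> list of (hour, cumulative shares)

def record(story_id, hour, cumulative_shares):
    share_counts[story_id].append((hour, cumulative_shares))

def needs_review(story_id):
    """Flag stories spreading an order of magnitude faster than the baseline."""
    points = share_counts[story_id]
    if len(points) < 2:
        return False
    (t0, s0), (t1, s1) = points[0], points[-1]
    rate = (s1 - s0) / max(t1 - t0, 1e-9)
    return rate > BASELINE_SHARES_PER_HOUR * ALERT_MULTIPLIER

record("story-42", hour=0, cumulative_shares=10)
record("story-42", hour=1, cumulative_shares=1200)
print(needs_review("story-42"))  # True: spreading suspiciously fast
```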
Educate, equip, empower
Both Kalaria and Dehaye agree that, however sophisticated the technological solutions become, the public still needs robust methods of intellectual self-defense against fake news. That means educating people to read news laterally and to verify sensationalist headlines against other sources. Training even a small percentage of the population to be suspicious and skeptical of unverified news can help spread this culture of doubt, in turn de-escalating the problem.
Reversing the trust dynamics of the internet can also help, says Kalaria. Ten years ago, we treated everything on the internet as trustworthy unless we had proof otherwise. Now, given the ‘Wild West’ nature of today’s internet, the opposite should apply—we should assume what we read is false until proven true.
The burden of responsibility
Although these are long-term strategies, there are steps that can be taken in the coming weeks to shield the European elections from disinformation. Dehaye believes that the role of national Data Protection Authorities (DPAs) in each Member State is critically important. They have a responsibility to ensure each social media platform is transparent about how data is used and why users are targeted with particular adverts.
But DPAs can only do so much, and the social media titans also have a responsibility to respond to these calls for transparency. So far, those calls have been ignored—perhaps best illustrated by Mark Zuckerberg’s empty chair at a pan-European hearing following the Cambridge Analytica scandal.
Dehaye says that Facebook’s attitude needs to change. He gives the example of the disclosure of targeting information, a tool Facebook offers to let users understand why they are targeted with certain adverts. But use it more than a handful of times and Facebook will block you from using it again, citing ‘misuse’ of the tool. “We’re supposed to trust these platforms,” he says. “But they’re showing bad faith.”
While fake news presents a clear risk in the context of the European elections, the solution relies on a unified approach: collaboration between governments and content platforms, and between technological algorithms and the wider public. As a society, all of us have opportunities to contribute towards purging our newsfeeds of disinformation. This will not be an instant process, but it can be initiated by the public making a small but important declaration: that our data is valuable, and we expect full control of it.