By Meera Narendra
When it comes to politics, targeted misinformation is rife and cyber-attacks are abundant, particularly given the intense focus on data, privacy and security over the past two years. These attacks and campaigns of targeted misinformation heavily influence public opinion and should therefore be a cause of great concern.
There is no doubt that the UK election was fought largely online, and that the use of data played a significant role. As this was the first election since the implementation of new data protection laws in 2018, including the EU General Data Protection Regulation (GDPR), it raises the question of whether regulators can ensure that all parties play by the rules, and whether the new regulation is fit for purpose when it comes to political campaigning.
It should be general knowledge by now, but unfortunately most of the public remain unaware that, during the election, parties used micro-targeting techniques to appeal to their opinions and interests.
Although using data to target voters has always been part of political campaigning, modern tools allow for highly specific targeting based on demographic, interest-based and behavioural factors. This raises serious concerns about transparency and privacy, and becomes extremely problematic when the content being advertised is inaccurate or heavily misleading.
The BBC conducted an analysis of the techniques being used, asking its audience to send in any political adverts displayed in their Facebook feeds along with Facebook’s explanation of why they were seeing the ad. The analysis revealed that the majority of the ads were closely geographically targeted, and that many more ads were targeted at men than at women.
Last month, the Conservative party faced criticism for blatantly misleading the public after renaming one of its official Twitter accounts “factcheckUK” during the televised leaders’ debate, masquerading as a fact-checking service while tweeting anti-Labour posts and pushing pro-Conservative content.
Just two days after the debate, the Conservative party launched a spoof website purporting to be the Labour Party’s manifesto whilst attacking its pledges.
It is evident that the use of targeting algorithms and of misinformation in political advertising is a cause for concern, and in the wake of the 2016 US presidential campaign (in a post-Cambridge Analytica world) it is an issue that urgently needs to be resolved.
The Cambridge Analytica scandal is a prime example of a “full-service propaganda machine”: the harvesting of millions of US Facebook profiles was used to influence the 2016 US presidential election and, allegedly, to sway the Brexit referendum.
In 2014, the private data of up to 87 million Facebook users began falling into the hands of the UK data analytics firm Cambridge Analytica. The firm had built an app that linked with Facebook and acquired users’ personal information through a personality quiz. The quiz was designed to harvest users’ data in order to target American voters with pro-Trump content in the run-up to the 2016 presidential election.
The app not only collected information from consenting users but also improperly accessed the data of those users’ friends without their consent or knowledge. Following an investigation, the Federal Trade Commission fined Facebook a record $5 billion.
Just last week, the Federal Trade Commission ruled that Cambridge Analytica had deceived Facebook users about how it collected their data to target voters ahead of the US elections, despite its claims to the contrary.
Cambridge Analytica had also worked for Leave.EU on the EU referendum, according to emails supplied by Brittany Kaiser, the firm’s former director of business development, and whistleblower Christopher Wylie.
The data profiling of UK voters enabled increasingly sophisticated targeted messaging around the Brexit campaign. Leave.EU has maintained that it did not pay Cambridge Analytica for any services, despite evidence to the contrary.
Many organisations have urged governments to ban political ads outright. Last month, Twitter announced that it had banned political advertising from its platform, while Google announced in November that it would limit targeting for political ads to broad categories such as age and gender – effectively banning micro-targeting.
Facebook, however, will continue to accept political ads, accurate or not, in the interests of free speech and to avoid positioning itself as an arbiter of democratic debate. The implications of this soon became evident: nearly 90% of the ads posted in the first days of December pushed figures that had been challenged by the UK’s leading fact-checking organisation, Full Fact.
Political parties are now adopting techniques from the advertising industry, and emergency legislation must be passed to ensure transparency in political campaigns. The role of technology has never been more significant than in this general election – and, sadly, its impact helped deliver the Conservative party’s win.