In our last blog post, we discussed the urgent need for an immediate and coordinated cross-government response to the security threat of online foreign disinformation, one that includes defence ministries and leverages both public and private expertise. Although the long-term societal impacts of false narratives propagated by foreign adversaries have yet to be fully understood and quantified, the development and anticipated launch of the UK Government’s Online Safety Bill provides an opportune moment to consider counter-disinformation interventions. These should build on existing conversations with social media platform providers, harness private sector expertise and vibrant startup ecosystems to develop digital defensive tools, and foster population resilience by building core media literacy skills.
Collaborating with Confidence
Because disinformation is now disseminated through increasingly digital means, narratives emerge and evolve constantly and rapidly in real time, often spreading across many platforms and mediums through a multitude of human and non-human actors. As such, while human moderation is irreplaceable, defence today would benefit from a range of holistic digital tools designed to detect, monitor, assess and counteract foreign disinformation.
To source agile solutions and respond urgently to reinvigorated disinformation threats, governments ought to reduce barriers to entry in defence procurement and tap into the wealth of expertise and innovative solutions offered by startups and SMEs. In this regard, PUBLIC has worked closely with DCMS in the UK to launch the Safety Tech Challenge Fund: an initiative designed to stimulate the Safety Tech ecosystem by incentivising companies to develop innovative solutions that protect user safety within end-to-end encrypted environments.
While the Safety Tech Challenge Fund addresses a variety of online harms, counter-disinformation technologies - which tend to focus more on metadata and broader network effects - would undoubtedly benefit from similar mechanisms for funnelling investment. By supporting startup-led disinformation solutions - broadly segmented between detection/monitoring tools and rating/filtering tools - governments can improve their understanding of online narrative patterns and better address the harms they cause.
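To make the detection/monitoring category concrete, here is a deliberately simplified, hypothetical sketch of how such a tool might triage posts using metadata signals rather than message content. Every signal name, weight and threshold below is an illustrative assumption for this post, not the logic of any real Safety Tech product.

```python
# Hypothetical metadata-based triage heuristic for a detection/monitoring tool.
# All signals and weights are illustrative assumptions, not a real product's logic.

def disinformation_risk_score(post):
    """Score a post from 0.0 to 1.0 using metadata alone (no content analysis)."""
    score = 0.0
    if post["account_age_days"] < 30:        # newly created account
        score += 0.3
    if post["posts_per_hour"] > 20:          # automated posting cadence
        score += 0.3
    if post["identical_copies_seen"] > 50:   # coordinated amplification
        score += 0.4
    return min(score, 1.0)

def triage(posts, threshold=0.6):
    """Return the posts that should be escalated for human review."""
    return [p for p in posts if disinformation_risk_score(p) >= threshold]
```

Note that the sketch ends in escalation to a person, not automated removal - consistent with the point above that human moderation remains irreplaceable and that tooling exists to detect, monitor and assess at a scale humans cannot.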
However, for these tech solutions to tackle disinformation effectively on social media platforms, an environment of greater collaboration must be fostered between Safety Tech suppliers and the platforms where disinformation propagates. Indeed, one of the primary barriers to effective counter-disinformation technology is that Safety Tech suppliers at present have limited access to open source datasets through large social media APIs. Effective defence today would benefit from working with these platform providers to reduce data access restrictions for government-affiliated Safety Tech suppliers. And where data quality and data access vary platform by platform, defence against disinformation stands to benefit from collaboratively developed data standards that make a uniform ask of all large social media platforms on which mis/disinformation is disseminated.
Online platforms have historically been reluctant to share datasets that may later be used to demand further safeguards and regulations. However, as public interest in and government scrutiny of the online information environment has intensified - and its real-world impact been realised - this position appears to be changing. The Online Safety Bill, discussed in the last blog post, and the European Union’s Digital Services Act (DSA) seek to hold social media providers accountable for illegal content and disinformation on their platforms. The DSA - agreed upon by the European Parliament and EU Member States on 23 April 2022 - proposes to introduce a “notice and action” mechanism that would mandate urgent removal of illegal content upon receipt of a notice. However, because harmful disinformation is not always illegal, the DSA would also oblige Very Large Online Platforms (VLOPs) to perform mandatory risk assessments, risk mitigation measures, and independent audits on such harmful content.
Crucially, Article 31 of the DSA mandates that VLOPs provide data access to vetted independent researchers investigating the systemic risks posed by disinformation and other online harms. These measures will undoubtedly help bridge the gap between policymakers and digital innovators while supporting the growth of the Safety Tech sector by addressing some of the current data access barriers.
These legal obligations serve to further encourage social media platform providers to become proactively responsible contributors to technology-enabled societal discourse. Irrespective of these nudges, there are standalone user-driven incentives for platform providers to implement robust counter-disinformation measures. The proliferation of fake news on social media poses a reputational threat: consumer trust in news on social media fell to a 10-year low in 2021, in part due to platforms’ inability to tackle mis/disinformation. And as social media platforms primarily generate revenue through advertising, reducing false news also serves to minimise the brand risks posed to potential advertisers. The reduction of disinformation online is therefore not just an obligation but an opportunity for platform providers to rehabilitate consumer trust, improve user experiences, and minimise potential risks.
Getting Wise to Fake News
These technological and regulatory interventions should help bring the long-standing reactive defences of flagging, fact-checking, and removing or correcting false narratives into the digital age. Even so, the sheer volume of mis/disinformation online and the innumerable channels through which false narratives can propagate mean that we must expect some disinformation to slip through the cracks and reach its intended audiences. Government must therefore also build population resilience by empowering and equipping ordinary citizens with the skills and knowledge to be critical consumers of information.
In the UK, Covid-19 mis/disinformation prompted DCMS to publish an Online Media Literacy Strategy in July 2021, which recognises that the current ‘national curriculum does not include media literacy specifically.’ On this front, almost two years later, it is clear that there is still much work to be done: the Year 2 Online Media Literacy Action Plan notes that ‘there is limited understanding about the factors contributing to audience vulnerability to misinformation and disinformation, and little consensus about the most effective ways to build resilience in audiences.’ Maggie Feldman-Piltch, a national security expert and founder of Unicorn Strategies, argues that this needs to change if we intend to counteract reinvigorated foreign disinformation threats, which aim not only to spread false or misleading information but are also carefully curated to undermine national security by eroding citizen trust in democratic and journalistic institutions. She therefore suggests that the essential skill of media literacy must be taught continually - in a hygiene-like manner - and explicitly to counter foreign disinformation.
In light of the Russian invasion of Ukraine and the proactivity of lawmakers internationally in tackling the challenge, there has never been a more opportune time for a holistic counter-disinformation strategy that considers, engages and incorporates cutting-edge digital solutions.
Join us at Defence Disrupted as we explore how to partner up and work to tackle disinformation and clean up information pollution now and in the years ahead. Register now to get your ticket!