Defeating Disinformation in the Digital Age

The use of disinformation is a long-standing phenomenon of war that has historically fallen under the umbrella of political propaganda. However, the modern tools and technologies used to create, disseminate, and propagate false narratives have vastly transformed its ease of creation, speed of spread, and the range of audiences that disinformation can reach. To counter this reinvigorated and digitally-empowered threat, population resilience should be actively fostered so that the fundamental pillars of democracy are not subverted, twisted and weaponised in the theatre of war.

Based on our work on the Online Safety Data Initiative (OSDI), we have defined disinformation as:

‘False and/or manipulated information that has been deliberately created and disseminated with the intention to deceive or mislead audiences…for the purposes of causing harm, or for political, personal or financial gain’.

In the defence context, the focus is on malicious state and non-state actors that seek to advance political agendas by targeting specific but large audiences abroad. These propagators of disinformation therefore often utilise social media to bridge geographical borders, leverage digital tools such as bots, and take advantage of a poorly regulated environment where the profusion of information sources makes it difficult to identify falsity. This also serves to amplify the reach and potential harm of false narratives, as social media has made it easier to inadvertently and unknowingly share false information - a phenomenon defined as misinformation.

The technological advances of the last two decades have vastly transformed the ways in which civilian populations consume - and are now able to directly create and disseminate - information. In 2021, Ofcom found that 33% of UK adults mostly get their news from social media and 21% get their news equally from social media and traditional news outlets. This trend towards real-time digital news in the context of conflict presents both risks and opportunities, as it means that information is created bottom-up: generated by citizens on the ground with a phone camera and disseminated on social media platforms such as Facebook and the 'de facto public square' of Twitter. Indeed, while the Vietnam War is often dubbed the first television war, commentators and experts, such as Ciarán O'Connor of the Institute for Strategic Dialogue, have characterised the recent Russian war in Ukraine as a social media war of unprecedented scale.

While technologically-empowered information creation and consumption trends have pierced the 'fog of war' by giving citizens deep insight into conflict unfolding on the ground, they also provide an opportunity for malicious state and non-state actors to easily target a global online audience - particularly the underserved and marginalised - with doctored narratives that aim to 'drive wedges in society'. Indeed, these seemingly citizen-generated narratives may prove more insidious than traditional state-disseminated propaganda, as there are strong concerns that regulatory interventions to track disinformation would infringe upon privacy rights, and that stronger measures such as content flagging or removal would infringe upon freedom of speech.

The interventions needed to address these digital threats may also prove difficult to justify, as governments face the challenge of quantitatively measuring the impact and harm caused by online mis/disinformation. Why is this vital? To enable more efficient decision-making, to inform the development of policy, and to communicate to the public and international governments the vital importance of countering this threat. This is no easy task, however, as attributing harms to malicious actors and establishing a clear causal chain that results in real-world harm have proved notoriously difficult. Nonetheless, although there is work to be done to quantify and understand the extent of these harms, the undeniable intent of foreign adversaries to cause harm means that ministries of defence should proactively recognise disinformation as a distinct and digitally-enabled threat that requires a cross-government response.

The need for a holistic approach has begun to be recognised around the world. Sweden, as an example of good practice, established the Swedish Psychological Defence Agency in January 2022 to centrally coordinate efforts to 'identify, analyse, meet, and prevent undue influence on information and other misleading information that is directed at Sweden.' The United Kingdom has similarly recognised the need for a 'whole government approach to protecting democracy in the UK.' The Department for Digital, Culture, Media and Sport (DCMS) Counter Disinformation Unit (CDU) - which has the 'core function of disinformation monitoring, analysis and response' - has been established to lead and coordinate counter-disinformation research and initiatives across government. One such initiative is the Online Safety Bill - currently in the final stages of the legislative process - which proposes to create an advisory committee on disinformation and misinformation, leveraging private and public sector expertise to provide independent advice to government on tackling this emerging security threat.

There is certainly a need for greater intervention to drastically decrease the reach and potential impact of foreign disinformation campaigns. Much of this depends on being better empowered to understand and quantify the extent of that impact. With the landscape evolving in real time, defence today requires more proactive coordination of both cross-government functions and private sector expertise - across startups and industry - to mount a digital defence.

Register for Defence Disrupted today to learn how to combat weaponised information through innovation, and protect tomorrow.