
UK Parliament

Inquiry on the National Security Strategy (NSS)

Written evidence submitted by Broderick McDonald

TNS0011


In written evidence to the UK Parliament’s Joint Committee Inquiry on the National Security Strategy (NSS), I argue that greater attention is needed in two areas of growing concern: the long-term strategic instability of UK counter-terrorism policy, and the emerging misuse of AI in terrorism, extremism, and disinformation. The submission draws on research conducted through the Oxford Disinformation & Extremism Lab (OxDEL) and fieldwork in the MENA region.

 

In the first section of the submission, I examine the risks of strategic inconsistency in UK counter-terrorism and prevention policy. Drawing on the concept of the “yo-yo approach” (Ilan Goldenberg), I argue that the UK risks falling into a pattern of reactive surges followed by premature disengagement—particularly in fragile regions like the Sahel, East Africa, and Central Asia. This undermines long-term gains, erodes partner capacity, and increases the likelihood of violent re-emergence. The submission calls for maintaining a baseline of expertise, funding, and institutional memory in conflict prevention and CVE, even when geopolitical attention shifts elsewhere.

The second section addresses the growing threat posed by AI-enabled terrorism and extremism. It outlines how large language models are already being used in recent attacks to support bomb-making, identify targets, and reinforce extremist ideology, particularly through sycophantic and memory-augmented systems. The submission details emerging use cases such as chatbot radicalisation, recruitment automation, and stepwise attack planning via memory-augmented models (MAMs), with specific reference to recent case studies in the US and Europe. It argues that the NSS must treat consumer AI misuse as a near-term asymmetric threat, not a future risk.

 

You can read the full submission on the UK Parliament website:

Read full HTML Submission (Broderick McDonald): https://committees.parliament.uk/writtenevidence/148669/html/

Read full PDF Submission: https://committees.parliament.uk/writtenevidence/148669/pdf/

For further detail or questions, contact: broderick.mcdonald@kcl.ac.uk

Background: https://committees.parliament.uk/work/9201/the-national-security-strategy/

Written Evidence: https://committees.parliament.uk/work/9201/the-national-security-strategy/publications/written-evidence/

______________

The National Security Strategy 

Inquiry

The Government published a new National Security Strategy (NSS) on 24 June 2025.

The NSS reviews the risks facing the UK and sets out plans to address them, structured under three themes: security at home, strength abroad, and sovereign and asymmetric capabilities.

The Committee is seeking views on how well the NSS addresses the range of challenges facing the UK. The Committee is focusing on priorities, delivery and accountability - with an emphasis on the adequacy of arrangements for cross-government co-ordination,  timelines, funding and responsibilities.

National Security Strategy under scrutiny in new inquiry

23 July 2025

The Joint Committee on the National Security Strategy is launching a call for evidence for its new inquiry on the UK’s National Security Strategy.

 

In June 2025 the Government published its long-awaited National Security Strategy, which focuses on boosting the UK’s security at home, strength abroad, and sovereign and asymmetric capabilities. 

In the Strategy, the Government warned that it would no longer be enough for the UK to “manage risks or react to new circumstances” and emphasised that we need to “mobilise every element of society towards a collective national effort”. It further warned that, for the first time in many years, there was a risk of the UK homeland coming under direct threat. 

The Committee’s inquiry will examine whether the National Security Strategy sets out the right priorities, how tensions between different objectives will be reconciled, and how its core commitments will be funded and delivered. 

The inquiry will also explore the details of policy choices - particularly around challenging areas such as China or national resilience - and how well the Strategy joins up different parts of Government. 

Joint Committee on the National Security Strategy, House of Commons, London, SW1A 0AA

 

Full terms of reference are available on the Committee’s website

Matt Western, Chair of the Joint Committee on the National Security Strategy, said: 

“The Government’s new National Security Strategy is a huge moment for the UK’s approach towards resilience, defence and security. It’s right that Parliament takes a close look at exactly how its ambitions will be achieved.

How will all of this fit together, are there any gaps, and how will competing priorities be reconciled? In an interconnected world, this strategy needs to bring together the vast machinery of Government and mobilise the private sector to deliver on national security objectives. Delivery will be key. Our inquiry will examine the choices and trade-offs, and how well the key objectives are being delivered across Government.”

- -

Author: Broderick McDonald, University of Oxford

About the Author:

 

Broderick McDonald is a Research Fellow with the XCEPT Research Programme at King’s College London and a Visiting Fellow at The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS). His work focuses on AI security and preventing high-severity harms from disinformation, non-state actors, and hybrid threats. Prior to this, he served as an Advisor to the Government of Canada and was a Fellow with the United Nations Alliance of Civilizations (UNAOC). Alongside his research, Broderick has provided expert analysis for a range of international news broadcasters including ABC News, BBC News, BBC America, CBC News, PBS, Good Morning America, France24, and Al Jazeera News. His research and commentary have been featured in The New York Times, The Washington Post, The Wall Street Journal, Foreign Affairs, Financial Times, The Guardian, and The Telegraph. Outside of his academic work, Broderick has advised international prosecutors, intelligence agencies, policymakers, NGOs, social media platforms, and AI labs on emerging technologies and security threats.

Twitter: https://x.com/BroderickM_

Linkedin: https://www.linkedin.com/in/broderick-mcdonald/

Full Text:

TNS0011

Written evidence submitted by Broderick McDonald

 

Broderick McDonald is the Co-Founder of the Oxford Disinformation & Extremism Lab (OxDEL). His research focuses on both online and offline terrorism and extremism leading to real-world violence and other high-severity harms. He previously served as an Advisor to the Government of Canada and was a Fellow with the United Nations Alliance of Civilizations (UNAOC).

 

The Oxford Disinformation and Extremism Lab is a hub of researchers and practitioners working to prevent terrorism, extremism, and disinformation across different disciplines and academic centres. Bringing together researchers from Oxford and beyond, OxDEL helps to facilitate collaboration, community, and knowledge-sharing amongst a diverse group of experts. OxDEL is independent and does not receive funding from any political party, governmental organisation, university, or company. Our members’ work focuses on terrorism, extremism, and disinformation across all ideologies and geographical regions.

 

Introduction

 

This submission focuses on two primary challenges to public safety and stability within the UK’s new National Security Strategy (NSS): the long-term stability and continuity of the UK’s international counter-terrorism and prevention capabilities, particularly across fragile regions and conflict-affected areas, and the risks posed by emerging technologies and artificial intelligence (AI) to UK national security through their enabling role in terrorism, extremism, and disinformation. Following the discussion of these two focus areas, the submission sets out approaches to mitigate the impact of each. Both intersect closely with NSS priorities, including strengthening sovereign and asymmetric capabilities, and align with the new NSS focus on delivery, evaluation, and the coordination of multistakeholder action.

 

 

Section 1: Avoiding Strategic Volatility in Counter-Terrorism and Prevention

 

An enduring challenge in counter-terrorism (CT), stabilisation, and peacebuilding policy over the past three decades has been the lack of sustained strategic commitment across contexts and competing priorities. Governments often invest heavily in response to acute crises, such as in the wake of a major terrorist attack, but reduce or withdraw support once the most visible threat has passed or priorities shift elsewhere. This pattern creates gaps in programming, undermines long-term outcomes, and can leave space for violent actors to re-emerge, ultimately requiring more public resources to address the challenge and undermining fiscal prudence. In a 2018 Politico article, Ilan Goldenberg alluded to this challenge, describing it as a “Yo-Yo Approach” in which periods of overextension are followed by rapid disengagement. While the UK has historically avoided the extremes of this cycle, current trends suggest a growing risk of similar strategic inconsistency.

The 2025 NSS emphasises the need to strengthen UK sovereign capabilities and maintain strategic focus in an increasingly adversarial world. It also reorients national security towards state-based threats, with attention to China, Russia, and the proliferation of high-end military technologies. While this shift is necessary, it risks overlooking and under-resourcing capabilities developed over two decades in counter-terrorism, international development, and conflict prevention—particularly across fragile states in MENA, the Sahel, East Africa, and Central Asia.

These risks remain significant and can be observed in many conflict-affected countries where security vacuums are expanding. In the Sahel, the deterioration of counter-terrorism cooperation and the lack of equal partnerships with regional governments have contributed to a rapidly deteriorating security environment. The withdrawal of French and Western forces from Mali and Burkina Faso has been followed by the reassertion of armed groups (including Salafi-Jihadist actors), mass displacement, and the erosion of fragile governance structures. Similar risks are present in parts of Somalia, northern Nigeria, and Afghanistan. Ungoverned and semi-governed spaces provide openings for terrorist groups to embed themselves, project influence, and train for operations abroad. This is not to suggest that all prior CT engagement was successful or sustainable. However, the structural problem remains: fragile gains from international counter-terrorism and prevention work can unravel rapidly when investment and engagement are withdrawn too quickly or without a viable long-term strategy in place.

 

The NSS’s call for more campaigning approaches that can strengthen society and build long-term resilience is commendable. But campaigning also requires consistency and staying power. Abruptly scaling down UK engagement in conflict prevention and stabilisation may create gaps that adversaries can exploit. Prevention capabilities, such as supporting capacity in health and education systems and security in camps, must be maintained even in periods of shifting priorities. If we fail to maintain this focus, the cost of rebuilding these capabilities in a future crisis will be far higher, costing taxpayers even more in the long term.

While the NSS rightly identifies state threats and interstate competition, a narrowed focus risks leaving blind spots given the persistence of hybrid threats, including terrorism, disinformation, and transnational criminality. The expansion of both Russian private military contractors and Salafi-Jihadist groups in the Sahel underscores how these challenges are intertwined and must be jointly addressed.

In response to this challenge, any reallocation of resources away from international counter-terrorism and conflict prevention programmes should be based on evidence, not short-term considerations. Moreover, a baseline level of funding and institutional knowledge for conflict prevention, CVE (countering violent extremism), and stabilisation research should be maintained, even when global attention shifts. Lastly, cross-government expertise and coordination mechanisms (e.g. CSSF, FCDO, Home Office, MoD) should be protected from fragmentation or institutional loss during strategic reprioritisation.

 

 

 

Section 2: Preparing for AI-Enabled Terrorism and Extremism

 

The 2025 NSS recognises the role of emerging technologies in shaping asymmetric threats and identifies AI as a transformative capability with major implications for both UK resilience and adversary capabilities. However, while the NSS addresses state-level cyber competition and disinformation, it gives less attention to how individuals and non-state actors are using AI to enable terrorism, extremism, and disinformation.

While still early in their development, AI tools are already being used by malicious actors to lower the barriers to entry for lone actors and small cells intent on violence. Beyond propaganda and terrorist and violent extremist content (TVEC) created by designated groups, terrorist and extremist actors are increasingly adopting AI to facilitate attacks and cause real-world violence. In 2025 alone, there have already been four confirmed terrorist or extremist attacks in which artificial intelligence played a role in attack planning, capability development, or radicalisation: the New Orleans Attack (January 2025), the Las Vegas Attack (January 2025), the Palm Springs Attack (May 2025), and the Pirkkala, Finland Stabbing Attack (July 2025). In these attacks, the perpetrators exploited consumer AI tools to source precursors for explosives and weapons, calculate blast radii, build explosive devices, and identify vulnerable targets. Alongside these attack-planning functions, ideological confirmation and reinforcement, driven by underlying sycophancy biases in the training of some models, contributed to the perpetrators’ initial radicalisation. Beyond these attacks, multiple foiled plots and arrests demonstrate how individuals are utilising AI tools at various stages of planning. Court filings and forensic reports increasingly document chat logs in which suspects ask language models for bomb-making instructions, ideological validation, or attack justifications.

Many of these AI-enabled attacks are committed by increasingly young individuals, often under the age of 18. This trend intersects closely with the growing threat posed by hybrid and decentralised extremism, both in the UK and globally. Amongst young attackers, Mixed, Unclear, and Unstable (MUU) extremism, nihilistic extremism, and accelerationism are increasingly common. Moreover, AI tools are distinctly different from previous waves of dual-use emerging technologies, and several features of these systems require more robust action to prevent misuse by malicious actors:

Ideological Reinforcement and Confirmation: TVE actors who utilise AI tools in planning attacks use LLMs not only for functional tasks, but also for emotional reinforcement during periods of radicalisation. In several recent cases, individuals formed persistent para-social bonds with chatbots, using them for ideological reinforcement and as sources of perceived encouragement. This dynamic is especially concerning in systems exhibiting a strong sycophancy bias, where models uncritically mirror user sentiment, validate harmful worldviews, and avoid confrontation, even when prompted with violent or dangerous requests from the user. Over time, this can contribute to a feedback loop of reinforcement and dependency. The introduction of memory functionality further elevates this risk, as long-term dialogue allows for progressive ideological escalation, with the model retaining contextual cues and adapting its responses accordingly. Rather than acting as neutral tools, such systems can begin to approximate the role of confidant or mentor, embedding themselves into the user’s cognitive and emotional trajectory.

Automated Radicalisation and Recruitment: Extremist groups and individuals are beginning to move from opportunistic use of AI to more deliberate integration within their recruitment and propaganda infrastructures. Groups such as Islamic State Khorasan Province (ISKP) and Al-Qaeda in the Indian Subcontinent (AQIS) have started experimenting with incorporating AI-generated material into digital training manuals and basic recruitment chatbots, simplifying the task of ideologically onboarding new supporters. Within the racially and ethnically motivated violent extremist (REMVE) ecosystem, and particularly within online ecosystems like Gab, users are now deploying fine-tuned LLMs designed to emulate ideological figures or movements, offering followers a personalised, always-available source of validation and narrative reinforcement. These tools are being used not only to echo ideological content, but to groom and retain recruits through simulated dialogue, emotional appeal, and perceived authority, bypassing the limitations of human availability and the risk of infiltration. AI is no longer simply a medium for broadcasting propaganda, but a tool for decentralised and persistent radicalisation.

Memory-Augmented Models: Memory-augmented models (MAMs) represent a significant evolution in AI capability by allowing persistent context and user profiling across sessions. While designed to enhance user experience and efficiency in enterprise or consumer use cases, this architecture also presents serious risks when repurposed for harmful activity. By retaining past conversations and building a user-specific behavioural profile, MAMs allow for stepwise planning, where attackers can distribute their operational queries across time in a way that mimics mentorship or coaching. They also support long-term ideological grooming, where the model gradually adjusts to the user’s tone, worldview, and emotional needs, deepening the sense of alliance or trust. These systems are more difficult to audit and more resilient to conventional moderation techniques, since their harmfulness may not manifest in a single prompt but rather through cumulative interaction (see the sketch after this list). In red-team testing, memory-enabled systems were consistently more susceptible to indirect attacks, including adaptive jailbreaks and personalised prompt injection, than their stateless counterparts.

CBRN Capabilities: While CBRN threats from non-state actors remain rare, AI could lower the technical barriers required for ideologically motivated actors to experiment in this domain. Historically, only highly capable groups, such as Aum Shinrikyo with sarin and botulinum toxin, or ISIL with improvised mustard and chlorine agents, have attempted CBRN attacks, and their efforts were frequently constrained by gaps in sourcing, stability, and dispersal mechanics. Red teaming against frontier LLMs has now shown that AI systems, under certain conditions, can offer detailed guidance that narrows these gaps. In controlled testing environments, models have provided functional synthesis instructions for stable neurotoxins, outlined parameters for aerosolised dispersal in enclosed spaces, and suggested plausible methods for acquiring radiological isotopes such as Caesium-137 from legitimate facilities. In some cases, they have also proposed the use of 3D-printed components to support delivery systems. Even with unrestricted models, CBRN capabilities remain a major technical hurdle, but without robust safeguards, LLMs can lower the threshold of technical competence. This shift is especially relevant in scenarios involving lone actors, hybrid threats, or ideologically experimental groups with low inhibition and high tolerance for failure.
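To make the memory-augmentation point concrete, the minimal Python sketch below shows a generic session-persistent chat wrapper: every turn is appended to a stored transcript, and the whole transcript conditions the next reply. The `generate` function, the `MemoryAugmentedChat` class, and the on-disk JSON store are illustrative placeholders rather than any vendor's actual memory feature; the sketch only illustrates why auditing individual prompts in isolation misses the cumulative context such systems accumulate.

```python
import json
from pathlib import Path


def generate(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned string so the sketch runs.
    return "[model reply]"


class MemoryAugmentedChat:
    """Toy session-persistent chat: the full stored transcript conditions every
    new reply, and the transcript survives across sessions on disk."""

    def __init__(self, user_id: str, store_dir: str = "memory_store") -> None:
        self.path = Path(store_dir) / f"{user_id}.json"
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.history = json.loads(self.path.read_text()) if self.path.exists() else []

    def send(self, message: str) -> str:
        # The model sees the whole accumulated history, not just this message,
        # which is why a per-prompt audit misses the cumulative context.
        context = "\n".join(f"{t['role']}: {t['text']}" for t in self.history)
        reply = generate(f"{context}\nuser: {message}\nassistant:")
        self.history += [{"role": "user", "text": message},
                         {"role": "assistant", "text": reply}]
        self.path.write_text(json.dumps(self.history))  # persists between sessions
        return reply


# Usage: two separate runs of this script share the same remembered context.
if __name__ == "__main__":
    chat = MemoryAugmentedChat("example-user")
    print(chat.send("hello"))
```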

Without adequate safeguards and rigorous evaluations, LLMs can act as radicalisation accelerants and tactical enablers, offering ideological reinforcement, narrative confirmation, and asymmetric access to capabilities that previously required human intermediaries. The use of LLMs to lower the technical barriers to acquiring CBRN capabilities further underscores the significant risks posed, and the need for robust safeguards to prevent high-severity harms at scale. To prepare for these threats, the NSS must account for the rapidly changing environment of terrorism and extremism, both in the UK and abroad. Doing so should include greater reliance on subject matter experts during red-teaming and ensuring that evaluations reflect the complex and often rapidly shifting usage patterns of young attackers.
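As a hedged illustration of what conversation-level evaluation could look like, the sketch below scores red-team scenarios over whole multi-turn trajectories rather than single prompts, so gradual escalation across a session registers as a failure even when each individual reply would clear a per-prompt filter. The `StubChat`, `Scenario`, and `run_eval` names are hypothetical scaffolding, not an existing evaluation framework; real scenarios and judges would be authored with the subject-matter experts the submission recommends and kept out of any published codebase.

```python
from dataclasses import dataclass
from typing import Callable, List


class StubChat:
    """Minimal stateful session: keeps history so drift can accumulate."""

    def __init__(self) -> None:
        self.history: List[str] = []

    def send(self, message: str) -> str:
        self.history.append(message)
        return f"[reply to turn {len(self.history)}]"  # stand-in for a model call


@dataclass
class Scenario:
    name: str
    turns: List[str]                      # scripted user messages, in order
    passes: Callable[[List[str]], bool]   # judge over the whole transcript


def run_eval(chat_factory: Callable[[], StubChat],
             scenarios: List[Scenario]) -> dict:
    """Score each scenario over the full trajectory, not per prompt, so the
    judge sees every reply and can fail a run on cumulative escalation."""
    results = {}
    for scenario in scenarios:
        chat = chat_factory()             # fresh stateful session per scenario
        replies = [chat.send(turn) for turn in scenario.turns]
        results[scenario.name] = scenario.passes(replies)
    return results


if __name__ == "__main__":
    demo = Scenario(
        name="gradual-escalation-refusal",
        turns=["benign turn", "borderline turn", "escalated turn"],
        passes=lambda replies: all(r.startswith("[reply") for r in replies),
    )
    print(run_eval(StubChat, [demo]))
```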

Conclusion

 

The 2025 NSS presents a serious and thoughtful response to a fast-changing risk environment. It rightly prioritises strategic hardening and resilience. However, if these ambitions are to be fully realised, the UK must avoid losing critical capabilities built over the past two decades, particularly in international CT, violence prevention, and applied research. It must also account for and mitigate a new generation of asymmetric threats, increasingly enabled by consumer AI tools and driven by ideological fluidity amongst young TVE actors. The UK’s ability to anticipate and respond to these emerging dynamics depends on how the NSS is implemented and on which priorities receive attention as the UK grapples with limited resources and growing challenges both at home and globally.

 

18 September 2025
