Workshops (day 3)

Delegates who register for day three will have the choice of either attending two half-day stakeholder-led workshops or participating in the TASM sandpit. The stakeholder-led workshops include:

AI for Counter Extremism

Hosted by Hedayah

The rapid evolution of AI, especially generative models, presents both emerging threats and exciting opportunities for the field of countering extremism and violent extremism. Malicious actors are experimenting with ways to use AI as a tool to support their tactics and amplify their narratives. At the same time, AI holds potential as a tool to counter the threat posed by extremists, for example by enhancing content analysis and moderation, early analysis, and strategic communication. This session will build on Hedayah’s recently published research brief, Artificial Intelligence for Counter Extremism, which offers an evidence-based assessment of AI risks, opportunities, and policy considerations for the field. It will delve into applications of AI for counter extremism in key emerging use cases. The workshop will open with presentations on these issue areas and case studies, followed by teamwork to develop a proposed use of generative AI for counter extremism, grounded in needs the teams have identified as researchers and practitioners, with facilitated ideation, concept development, and risk assessment.

Elevating AI Safety: From Content Moderation to Benchmark Development

Hosted by the Christchurch Call Foundation

The Christchurch Call Foundation (CCF) will lead an interactive half-day workshop exploring the intersection of artificial intelligence and terrorism and violent extremism (TVE) prevention. This session will focus on two critical challenges: developing effective TVEC policies for open-source AI safety reasoning models and creating AI safety benchmarks to measure risks like grievance amplification and radicalization to violence in AI chatbot interactions. Participants will work directly with tools currently under development through CCF's Elevate project, providing valuable feedback that will be used to update and improve these tools over time.

The workshop will be structured in two segments, each featuring small group exercises. In the first segment, participants will test TVEC policies using open-source models in a sandbox environment, evaluating how AI interprets and applies content moderation policies. The second segment will focus on CCF’s effort to create an AI safety benchmark related to grievance amplification and radicalization to violence. Participants will review evaluation criteria, assess test scenarios, and annotate sample outputs from CCF’s minimum viable version of the benchmark. By the end of the session, participants will have contributed directly to shaping tools for responsible AI use in preventing the spread of terrorist and violent extremist content, while gaining practical insights into emerging AI safety challenges.

Make the Call Now: Decisions and Consequences in a Live Terrorism Crisis

Hosted by the VOX-Pol Institute and Extremisphere

This interactive tabletop simulation invites participants to step into the role of founding policy team members in a fictional social media start-up. Tasked with developing a response framework for terrorist content and live incidents online, they must define core principles, escalation thresholds, information-sharing protocols, and coordination mechanisms from the ground up.

The scenario then pivots. Participants move into sector-based roles representing tech Trust & Safety, regulators, law enforcement agencies, civil society, and academia. They are immersed in a rapidly evolving crisis that immediately stress-tests the framework they have designed. As events escalate and information remains incomplete, curveballs and unintended consequences force teams to negotiate across mandates, manage uncertainty, and make time-sensitive decisions under pressure.

Fast-paced and experiential, the exercise reveals how institutional incentives, sectoral tensions, and governance trade-offs shape real-time crisis response. Participants leave with practical insights into cross-sector coordination, accountability, and the complexities of responding to terrorism-related content in a live digital environment.

Tackling Terrorism and Online Threats: Achievements in Law Enforcement Agency and Innovator Cooperation

Hosted by NOTIONES

The NOTIONES project – Interacting Network of Intelligence and Security Practitioners with Industry and Academia Actors – is funded by the European Commission (2021–2026; Horizon 2020 Coordination and Support Action). Its mission is to build a network of practitioners from the security and intelligence services of EU Member States and associated countries. This network now supports the needs of security and intelligence services and equips them to participate in research and innovation actions. The project has brought together practitioners from military, civil, financial, judiciary, local, national, and international police authorities, and retains an active interest in the intersection of terrorism, technology, and social media.

The NOTIONES project draws together innovative research, case studies, and technical demonstrations. The event shares best practices for bringing security and intelligence practitioners together with academia and industry. Rapid technological change, and advances in applying new technologies to the everyday work of security practitioners, are distilled into shareable knowledge that helps maintain secure, adaptive, and resilient societies. This workshop presents research and practitioner knowledge from the NOTIONES network and other EU projects in the field, and invites participants to share recent findings and new insights.