Panels (days 1 & 2)
Alongside individual papers, days 1 & 2 will feature a selection of thematic panels from a range of researchers and stakeholder organisations. These include:
NVE, Mixed, Salad Bar? Towards a Definitional Framework for Modern Attackers
Dr Nagham El Karhili (GIFCT): LinkedIn
Jessa Mellea (GIFCT): LinkedIn
Recent attacks, the rise in prominence of networks like Com, and irony-poisoned, incoherent motives have challenged the ideological structures used to classify and analyse terrorism. In this roundtable session, participants will work through semi-structured discussions to collaboratively craft a working definition capturing the motivations and characteristics of these emerging networks, movements, and attackers.
Exit Strategies for Violence Fascination
Christopher Stewart (Human Digital): LinkedIn
Gareth Harris (Human Digital)
Jake Dixon (Human Digital): LinkedIn
Immersion and participation in digital ecosystems, often from an early age, and the convergence of overlapping harm sets such as pornography, violent misogyny, gore and school massacre ideation have contributed to the emergence of individuals with a non-ideologically motivated fascination with violence. Globally, governments and law enforcement are working to collect evidence on the function violent content plays within pathways to violence, and to use that evidence to design proportionate intervention and exit strategies. This session is designed to get attendees thinking practically about how existing strategies could be applied to this growing harm set.
The Current Violent Extremist Threat Landscape: Networks, platforms, and technologies
The VOX-Pol Institute
This session will provide participants with an overview of recent developments in the use of digital technologies by violent extremist actors. First, it will examine the current tactics and capabilities of jihadist networks online, including how they operate across a range of online platforms and services, their strategies for evading detection (and the extent to which such strategies are in fact necessary), and their uses of AI. Second, it will explore the latest trends in how right-wing violent extremist groups and networks are using digital platforms and services, as well as their connections to violent communities with less clear ideological motivations, including those that are fixated with violence.
Online Safety Regulation in Action: Balancing Rights and Risk
Alistair Pullar (Ofcom): LinkedIn
Tom Caleb (Ofcom)
Laura Waters (Ofcom)
Charlotte Hall (Ofcom)
In this session, after each convenor/panellist gives a brief presentation on their team and its functions, we will provide workshop participants with a fictional service, along with information about it and about where risk arises on it. This risk could stem from a risky functionality, a particular user demographic, a cultural proclivity, and so on. We will then split participants into groups, each acting as one of the internal Ofcom teams represented by the speakers above, and each speaker will take their group through the process of Ofcom engagement by that team. This will include identifying data needs, forming the stakeholder partnerships necessary to assess compliance, engaging with services to identify what changes can be made through informal engagement, and establishing appropriate and proportionate policy responses. Ofcom colleagues will circulate among the groups, providing insight and support as the groups identify the activities needed to ensure compliance and deliver user safety, especially for child users. The room will then reconvene to identify where the process is effective, and where it could or should be improved to better ensure user safety and/or affirm user rights while fulfilling Ofcom’s regulatory duties under the law. We hope this breakout session will improve both our internal processes and stakeholders’ understanding of them, and help stakeholders see where engagement with Ofcom could be valuable and appropriately targeted in the future.
Project Catalyst: Comparative interventions tackling violent extremism & misogyny in gaming communities and platforms
Galen Lamphere-Englund (Christchurch Call Foundation): LinkedIn
Anne Craanen (Institute for Strategic Dialogue): LinkedIn
Amee Wurzburg (American University): LinkedIn
Hesbone Ndungú (Search for Common Ground): LinkedIn
In 2025, the Christchurch Call Foundation (CCF) started work on Catalyst, an 18-month initiative designed to address online misogyny and gender-based violence linked to violent extremism, focused largely on gaming and gaming-adjacent platforms and communities. Implemented through a Consortium led by CCF, Catalyst brings together expert partners including the Institute for Strategic Dialogue (ISD), Meedan, Search for Common Ground, the Polarization and Extremism Research & Innovation Lab (PERIL) at American University, and the Blavatnik School of Government at the University of Oxford.
The project encompasses 12+ programmatic, research, and policy tracks in Canada, Jordan, and Kenya, as well as across transnational gaming communities. During this panel, we will share preliminary insights from research in Arabic, Swahili, and English, along with comparative offline and online interventions addressing online misogyny and its connections with violent extremism.
At TASM, the Catalyst Consortium will present key pillars of work exploring Catalyst research and interventions designed to support online communities in better tackling misogyny and extremism. Members of the Consortium will present findings from their respective areas of intervention: livestreamer-led campaigns; bystander training for community moderators on gaming platforms; offline peer-mentor programmes; building LLM classifiers to detect tech-facilitated gender-based violence content in low-resource languages; and the mapping of, and responses to, the ‘manosphere’ across the three contexts. Other efforts on civil servant policy training and global policy dialogues are also underway. The session is intended to share research findings and lessons learned from direct interventions designed to build resilience among gaming communities, along with support mechanisms for platforms working to address harmful content.
Anonymity, Security and Resilience: The creative tension between regulatory policy and terrorist exploitation of the internet
Chair: Prof Maura Conway (Dublin City University; Swansea University): LinkedIn; BlueSky
Panellists:
Prof Miron Lakomy (University of Silesia): LinkedIn
Dr Ali Fisher (Università Cattolica del Sacro Cuore): LinkedIn
Arthur Bradley (VOX-Pol Institute): LinkedIn
Alessandro Bolpagni (Università Cattolica del Sacro Cuore): LinkedIn
Eleonora Ristuccia (Università Cattolica del Sacro Cuore): LinkedIn
Grazia Ludovica Giardini (Università Cattolica del Sacro Cuore): LinkedIn
This session examines original research situated at the nexus of Internet policy and the adaptive communication strategies of terrorist organizations, with particular emphasis on the operational security (OpSec) tradecraft these groups have developed in response to regulatory and technological pressures.
Despite regular announcements of successes against terrorist actors ranging from Salafi-Jihadi to Far-Right and Far-Left, these actors frequently remain online and, in many cases, have expanded their activity across major platforms.
In pursuit of their objectives of galvanizing core supporters and mobilising a mass movement, the Media Mujahidin have refined the operational security needed to preserve anonymity and secure communications with core supporters, while also surviving the disruption now common across mainstream media platforms.
Far-Left and Far-Right networks have also engaged in a complex interplay between terrorism, violent extremism, and cybercriminality. They have exploited alternative or fringe social media platforms, the decentralised web, and gaming and adjacent platforms.
Drawing on lessons from the continued presence of these diverse groups, this session examines three interrelated areas that shape how terrorist groups continue to exploit social media and the internet: (1) the policy and regulatory environment in which these groups operate; (2) the OpSec tradecraft they adopt, which makes individuals difficult to identify and disrupt; and (3) the multiplatform networks that provide resilience and a persistent online presence.
Distinct remits, but shared mission: How regulation can protect users from terrorism content online
Dr Murtaza Shaikh (Ofcom): LinkedIn
Jonathon Deedman (Ofcom): LinkedIn
ATKM participants
Colleagues from the UK’s Ofcom and the Netherlands’ Autoriteit online Terroristisch en Kinderpornografisch Materiaal (ATKM) will deliver a joint session outlining the divergent approaches the two regulatory bodies take to a shared objective: preventing users’ exposure to terrorism content online. Working from different national legislative and regulatory frameworks, Ofcom and ATKM each seek to protect users in their respective jurisdictions from terrorism content in distinct ways. We hope this session will inform TASM attendees about how these different systems operate, and improve our own understanding of the harms we seek to regulate, through solicitation of prevalence data, definitional frameworks and other theoretical work on borderline content, and insight into users’ experiences and perspectives. Colleagues from Ofcom and ATKM will set out their remits, their duties and responsibilities under the different legal frameworks, the challenges they have experienced in the last year, and how they intend to move forward and improve, including with input from breakout session participants.
Anti-gender ideology and the social media landscape in and beyond Canada
Prof Amy Mack (University of Lethbridge): LinkedIn; BlueSky
Dr Audrey Gagnon (University of Ottawa): LinkedIn; BlueSky
Dr Kathy Kondor (Norwegian Center for Holocaust and Minority Studies): LinkedIn; BlueSky
Kayla Preston (University of Toronto)
Luc Cousineau (Dalhousie University): BlueSky
Over the last decade, with the rise of incels, MRAs, and male supremacist influencers, scholars have increasingly turned their attention to digital or cyber misogyny in order to understand the relationship between violence, technology, and gender. In this panel, we expand this work to explore how “anti-gender ideology” groups and movements, which span the political spectrum and include populist parties, use social media to target and harass trans and gender-diverse people in Canada. Our first three papers use empirical case studies in Canada to demonstrate the complexity of these movements in terms of membership and tactics. This includes discourse analysis of the social media and web presences of populist parties, content analysis of social media-based far-right and far-right adjacent groups, and qualitative interviews with teachers and activists grappling with social media-fuelled gender-based violence in their classrooms. The fourth paper makes connections between this Canadian context and the international flows of anti-gender ideology discourse. The final paper presents research on how activists and educators are pushing back against hate-based and “anti-gender ideology” forces in their communities, providing an empirical and data-driven path forward to countering hate.
“The Digital Bazar”: Understanding How Extremist Organisations are Exploiting TikTok to Share Propaganda and Radicalise Young Individuals
Alessandro Bolpagni (Università Cattolica del Sacro Cuore): LinkedIn
Silvano Rizieri Lucini (Università Cattolica del Sacro Cuore): LinkedIn
Alessandra Pugnana (Università Cattolica del Sacro Cuore): LinkedIn
Eleonora Ristuccia (Università Cattolica del Sacro Cuore): LinkedIn
Grazia Ludovica Giardini (Università Cattolica del Sacro Cuore): LinkedIn
Dr Kristin Weber (Centre for Criminological Research Saxony): LinkedIn; BlueSky
Erik Hacker (SCENOR): LinkedIn
This breakout session explores how extremist groups exploit TikTok to share terrorist propaganda and violent narratives. The panellists will shed light on the large number of users actively sharing extremist content and propaganda material to spread extremist ideologies, incite violence, radicalise young individuals, and recruit new members. In particular, the panel will present content related to extreme right-wing accelerationism, Salafi-jihadism, the True Crime Community (TCC), and Christian extremism. The authors will examine the typologies of actors involved and the tactics and techniques users employ to share propaganda and extremist content, before presenting possible techniques and approaches to effectively counter the spread of extremist propaganda.
The Only Way Is Ethics: An Open Conversation on Safety, Standards, and Solutions in Extremism Research
Dr Ashton Kingdon (University of Southampton)
Dr Ashley Mattheis (University of Manchester)
Dr Alyssa Czerwinsky (University of Manchester)
Bradley Galloway (Ontario Tech University)
Dr Antonia Vaughan (Moonshot)
Dr Elizabeth Pearson (Royal Holloway University)
Dr Nicola Mattheison (University of Liverpool)
Dr Audrey Gagnon (University of Ottawa)
Dr Meili Criezis (American University)
Extremism research has expanded rapidly, but ethical frameworks and institutional support lag behind. Researchers face challenges beyond standard protocols: accessing sensitive data, exposure to traumatic content, physical and digital safety risks, and the danger of amplifying extremist narratives. Institutional responses remain inadequate and inconsistent, with protection gaps varying by resources, location, and discipline.
This interactive roundtable critically assesses the current state of research ethics in extremism studies and charts a path forward. Using a ‘fishbowl’ format, audience members are actively encouraged to share experiences and ideas alongside panellists, generating honest reflections and healthy debate.
Key questions to be explored include:
- How do we balance understanding extremism with risks of amplifying harmful content?
- What protections are needed for researchers’ mental health when engaging with traumatic material?
- As platforms restrict data access, how can we maintain research transparency while respecting ethical boundaries?
- How do we better prepare and protect the next generation of extremism researchers?
- What does robust ethical infrastructure look like?
Participants will examine the adequacy of current ethical guidelines, researcher wellbeing and mental health support, cybersecurity challenges, the ethics of working with closed-source data, and institutional responsibilities in safeguarding researchers.
This session moves beyond problem identification toward actionable solutions, including specialised training, dedicated safety resources, and recognising digital security as integral to research ethics. Through cross-disciplinary knowledge exchange, we will establish a shared vision for ethical practice that prioritises both research integrity and researcher wellbeing while advancing understanding of critical societal challenges.
Researcher Safety by Design: Modelling practical OPSEC for studying digital extremism
Dr Michael Loadenthal (University of Cincinnati): LinkedIn; BlueSky
This interactive, hands-on workshop invites researchers, practitioners, and policymakers into a practical exploration of how to work safely, sustainably, and confidently in the study of violent extremism and contested digital environments. Rather than treating risk as an abstract concern, the session equips participants with concrete tools to identify, contextualize, prioritize, and mitigate threats that emerge across physical, digital, and psychosocial dimensions of research. Drawing on threat modeling and public health–inspired harm reduction frameworks, the workshop emphasizes resilience, adaptability, and long-term well-being.
The session unfolds in three dynamic stages. First, participants build individualized threat models using accessible, creative techniques such as asset inventorying, mind mapping, and structured analytic tools. This process helps attendees clarify how risks vary by research topic, method, and positionality. Second, these models are transformed into actionable mitigation strategies using a harm reduction lens that prioritizes realistic, proportional interventions over one-size-fits-all solutions. Participants learn how to decide what matters most, what can wait, and where small changes can meaningfully reduce harm.
The final stage delivers an engaging “101 crash course” in operational and digital security, tailored specifically to research workflows. Through live demonstrations, participants explore tools such as VPNs, secure cloud storage, password managers, virtual machines, browser extensions, identity separation techniques, and secure messaging platforms. These tools are applied directly to everyday research practices, with a focus on protecting identity, safeguarding sensitive data, and using technology to maintain healthy boundaries when engaging with violent or disturbing material.
Blending case studies, scenario-based exercises, and collaborative discussion, the workshop is designed to be energetic, inclusive, and immediately useful. Participants leave with both a clear risk-management framework and practical skills they can deploy right away, making this session especially valuable for early-career scholars and anyone seeking to build safer, more sustainable research practices.