Critical Perspectives on AI
2:30 PM - 4:00 PM, K8669
Chair: Prem Sylvester
Large Language Models (LLMs) are increasingly explored as tools for identifying online radicalization and disinformation, but less is known about how reliable they are. We evaluated six LLMs - three online and three offline - across five datasets spanning extremism and disinformation. The extremism datasets include StormFront.org (n=383K), a white supremacist forum whose posts are labelled “Violent” (posted by users known to have committed violence) or “NonViolent” (posted by users known not to have committed violence), and Incels.is (n=455K), a misogynistic radicalization forum whose labels were inferred using StormFront-trained classifiers. The disinformation datasets consist of Facebook (n=1.2K) and X (n=80K) corpora linked to Iranian and Russian information operations, labelled as real or fake. Each dataset was standardized and scored by the LLMs using a single prompt that produces ordinal ratings (0-3) capturing the intensity of theoretically relevant signals: grievance, ideology, violence, emotion, conspiracy, and nihilism for the extremism datasets; and absolutism, emotion, urgency, moralization, causal simplicity, and political agenda for the disinformation datasets. The LLM-generated scores were then used as features for classification. Performance - the ability to distinguish violent from non-violent users and real information from disinformation - was measured with cross-validated F1 scores (the harmonic mean of precision and recall). Results inform where LLMs improve predictive accuracy and where they introduce risks and limitations.
*Additional Authors: Richard Frank (Director, ICCRI and Professor, School of Criminology, SFU), Andy Liu (MSc graduate, University of Toronto), Jyotir Mayor (BA graduate, Simon Fraser University)
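For illustration, here is a minimal sketch of the scoring-as-features approach described above, assuming a hypothetical per-post file layout; the file name, column names, and classifier choice are assumptions, not the authors' pipeline:

```python
# Minimal sketch: treat LLM ordinal ratings (0-3) as classifier features and
# estimate performance with cross-validated F1 (harmonic mean of precision/recall).
# File name, column names, and classifier choice are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("scored_posts.csv")  # one row per post (hypothetical layout)
signals = ["grievance", "ideology", "violence", "emotion", "conspiracy", "nihilism"]
X = df[signals].to_numpy()    # LLM-generated ordinal scores in {0, 1, 2, 3}
y = df["violent"].to_numpy()  # 1 = posted by a user known to have committed violence

clf = LogisticRegression(max_iter=1000)
f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {f1.mean():.3f} (+/- {f1.std():.3f})")
```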
Public discourse around artificial intelligence is increasingly framed in apocalyptic terms. Across academic, professional, and everyday contexts, AI is described as a threat to work, creativity, meaning, and even humanity itself. What is striking, however, is not the diversity of these concerns but their remarkable uniformity. Regardless of background or expertise, similar anxieties repeat, suggesting a shared affective response rather than a purely technical or political disagreement.
This presentation reframes AI not as an external catastrophe but as a mirror. Rather than introducing something fundamentally alien, AI interfaces with existing social systems such as capitalism, authorship, intellectual property, and human exceptionalism, amplifying tensions that were already present. In this sense, AI may represent humanity’s first sustained attempt to communicate with a non-human intelligence. The discomfort it generates may be less about replacement and more about recognition.
Drawing on posthumanist theory and experiential insights from recovery, this talk approaches the current moment as a form of collective reckoning. Transformation does not begin with clarity or consensus, but with discomfort and the breakdown of familiar narratives. The hostility directed at AI often targets not the technology itself, but what it reveals about us.
Situated within the theme of modulation, this presentation explores how small shifts in perception, away from panic and toward reflection, may open new ways of relating to both AI and ourselves.
The rapid advancement of artificial intelligence has introduced a new class of image generation tools capable of producing highly realistic and aesthetically sophisticated visual content. As AI-generated imagery becomes increasingly indistinguishable from human-created artwork, questions arise about how accurately people can identify its origin. This study examines accuracy rates in distinguishing AI-generated images from human-created artwork, with a focus on the relationship between the two accuracy rates.
Data were drawn from Cunningham's 2025 dissertation, "The AI of the Beholder: A Quantitative Study on Human Perception and Appraisal of AI-Generated Images." A chi-square analysis revealed a significant association between accuracy in identifying AI-generated images and accuracy in identifying human-created artwork, with a moderate effect size. Given this association, a logistic regression was conducted to further understand the relationship between the two accuracy rates. Results indicated an inverse relationship, meaning that participants who were more accurate at identifying AI-generated images tended to be less accurate at identifying human-created ones, suggesting a trade-off in how people classify visual content.
These findings point to a perceptual bias in which increased attunement to the features of AI-generated imagery may come at the cost of recognizing what distinguishes human-created work. As AI-generated imagery becomes more embedded in everyday life, understanding how people perceive and misclassify it has meaningful implications for media literacy, digital authenticity, and public trust in visual content.
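As a rough illustration of the two reported analyses, the sketch below pairs a chi-square test (with Cramér's V as an effect size) with a logistic regression; the file and column names are hypothetical assumptions, and this is not the dissertation's code:

```python
# Minimal sketch of the two reported analyses; file and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

df = pd.read_csv("perception_study.csv")  # one row per participant (hypothetical)

# Chi-square: association between accuracy on AI images and on human artwork.
table = pd.crosstab(df["ai_correct"], df["human_correct"])  # binary accuracy flags
chi2, p, dof, _ = chi2_contingency(table)
cramers_v = np.sqrt(chi2 / table.to_numpy().sum())  # effect size for a 2x2 table
print(f"chi2={chi2:.2f}, p={p:.4f}, Cramer's V={cramers_v:.2f}")

# Logistic regression: does accuracy on AI images predict accuracy on human artwork?
X = sm.add_constant(df["ai_accuracy_rate"])  # continuous accuracy rate (assumed)
fit = sm.Logit(df["human_correct"], X).fit(disp=0)
print(fit.params)  # a negative slope indicates the inverse relationship reported
```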
Prem Sylvester is a researcher and co-lead of the Beyond Verification project at the Digital Democracies Institute. He is also a panel manager and consulting scientist for the International Panel on the Information Environment’s Scientific Panel on Global Standards for AI Audits. Prem holds an MA in communication from Simon Fraser University, Canada, and a B.Tech in Information Technology from the College of Engineering Guindy, India. He has previously published in ephemera: theory & politics in organization and has work forthcoming in the International Journal of Communication. He is interested in network politics and cultures, logistical media, and the histories that cross these social and spatial relations, and will soon begin his doctoral studies in this area.
Bomin is a PhD student in the International CyberCrime Research Institute and School of Criminology at SFU. She completed her B.A. Honours (Distinction) in Criminology at SFU and her MPhil in Criminological Research at the University of Cambridge. Her previous work focused on online radicalization patterns. Now, she studies what works to disrupt those patterns. She does so by: interviewing individuals navigating exit, conducting a Campbell Systematic Review of radicalization prevention programs, evaluating Large Language Model performance—and who knows what else, more to come!
Robert Duhaime is a Master's student in the School of Communication at Simon Fraser University. His research explores disinformation, media, and public disengagement, with a focus on how UFO discourse operates as a contested social reality. Before returning to academia, Robert spent years in telecom leadership and as a long-haul truck driver, experiences that shape his practical approach to communication and systems thinking. His work connects theory with lived experience, emphasizing pattern recognition, interdisciplinary thinking, and making complex ideas accessible.
Maggie is currently pursuing her MA in the School of Criminology at SFU, where her research focuses on preventing wrongful convictions caused by faulty forensic evidence. Previously, she completed an Honours Bachelor of Science at the University of Toronto, specializing in Forensic Anthropology, as she is drawn to the intersection of law and forensic science. Lately, Maggie has been pursuing AI passion projects on the side, including research on perceptions of AI psychosis and on distinguishing AI-generated artwork from human-created work, which she will present at this conference.