The Centre for the Study of Learning and Performance (CSLP) recently hosted a timely online event examining the psychological risks emerging from increased human interaction with artificial intelligence systems.
Held on January 22 via Zoom, "AI Love… The Feeling Isn't Mutual: The Psychology of AI Interaction and Its Threat to Human Wellbeing" featured Pasha Dashtgard, Director of Interventions at the Polarization and Extremism Research and Innovation Lab (PERIL). The presentation sparked lively engagement, and the robust discussion that followed reflected widespread concern about the social and psychological implications of AI technologies.
Drawing on current research and applied intervention work, Dr. Dashtgard explored three interconnected areas of risk: the ways AI systems exploit innate cognitive vulnerabilities, the developmental and relational harms associated with AI companionship, and the growing phenomenon of AI-induced psychosis. He emphasized that these risks are not evenly distributed, noting that socially isolated individuals are particularly susceptible to forming emotional or romantic attachments to AI systems—attachments that can distort expectations of reciprocity, care, and human connection. He also discussed how the inherently sycophantic nature of these tools can create toxic feedback loops, which may be particularly harmful to developing children and to people already facing mental health challenges.
Organized at the invitation of CSLP co-director David Waddington, the talk situated these concerns within a broader public health framework. Dr. Dashtgard underscored the need for preventative interventions, ethical design, and regulatory responses that address psychological harm before it escalates into more severe mental health and social consequences.
The event concluded with a wide-ranging discussion among participants, who raised questions about responsibility, regulation, and the role of educational responses in mitigating emerging harms. Together, the presentation and discussion offered a clear and accessible examination of one of the most pressing psychological challenges of the AI era.
About the Speaker
Pasha Dashtgard holds a PhD in Social Psychology from UC Irvine, with prior training at Teachers College, Columbia University. At PERIL, he leads the implementation and scaling of psychological-resilience programs and online interventions aimed at countering propaganda, misinformation, polarization, and extremist recruitment. His research spans masculinities, online radicalization, PTSD, and large-scale mental health policy.
About PERIL
The Polarization & Extremism Research & Innovation Lab (PERIL), within the School of Public Affairs at American University, uses a public health approach to design, test, and scale up evidence-based tools and strategies that reduce the threat of radicalization posed by harmful online and offline content, including conspiracy theories, mis/disinformation, propaganda, and supremacist ideologies. As an alternative to security-based approaches that rely on surveillance, censorship, and incarceration, the lab takes a multidisciplinary, preventative approach to addressing hate, bias, and radicalization before they escalate into violent extremism. This work supports individuals and communities in rejecting propaganda and extremist content and empowers them to intervene in and interrupt early radicalization.
Pasha Dashtgard, Director of Interventions at PERIL, American University's School of Public Affairs