Cyberpsychology Research Pivots to Influencer Impact and AI Chatbot Behavioural Effects

The latest issue of Cyberpsychology: Journal of Psychosocial Research on Cyberspace (Volume 20, 2026) signals a significant shift in academic priorities, with featured articles focusing on social media influencer messaging, algorithmic bias effects, and qualitative investigations into ChatGPT’s psychological impact on young users.

Key Developments

The journal’s 2026 research agenda reflects growing concern among European and UK psychologists about how emerging digital platforms and AI systems shape adolescent and young adult behaviour. Articles now prominently address:

  • Influencer Mental Health Messaging: Studies examining how social media influencers communicate about mental health, wellness, and body image to younger audiences
  • Algorithmic Bias and Exclusion: Research into patterns of digital exclusion, misinformation amplification, and how recommendation systems affect social interaction
  • ChatGPT Qualitative Effects: Direct investigation of how AI chatbots influence user behaviour, attachment patterns, and information-seeking habits
  • Mobile and Social Network Habits: Longitudinal studies on adolescent screen time, notification dependency, and social comparison patterns

Industry Context

This research realignment reflects real-world pressures. As AI systems like ChatGPT become mainstream and social media influence reaches ever-younger demographics, psychologists are racing to understand the long-term behavioural and mental health consequences. The European Union's AI Act implementation timeline adds urgency: researchers need robust psychological evidence to inform the regulation of high-risk AI systems and the compliance requirements placed on social platforms.

The BPS Cyberpsychology Conference (6–7 July 2026, University of York) will deepen this focus, with keynote speakers Prof. Paul Cairns and Prof. Amy Orben addressing AI's intersection with psychological wellbeing and digital literacy.

Practical Implications for Builders and Policymakers

For AI developers: These research findings should inform UX design decisions around engagement patterns, notification frequency, and content recommendation mechanisms, particularly for systems accessed by under-18s.

For social platforms: Insights into influencer messaging patterns can underpin accountability frameworks. European platforms will need to demonstrate algorithmic transparency that meets emerging psychological evidence standards.

For policymakers: The psychological research now underpinning EU AI Act implementation suggests that “high-risk” AI classification may soon extend beyond current categories to include systems demonstrating measurable behavioural or mental health effects on minors.

For parents and educators: These studies provide evidence-based guidance on healthy digital media consumption and AI literacy for young people.

Open Questions

Several critical gaps remain:

  • How do longitudinal psychological effects of ChatGPT use compare to traditional search engines or peer interaction?
  • Can algorithmic bias patterns be quantified in terms of measurable psychological harm?
  • What evidence standards should regulators use when evaluating “high-risk” AI systems under the EU framework?
  • How do cultural differences across EU member states affect young people’s responses to influencer messaging?

The 2026 research agenda suggests cyberpsychology is maturing from descriptive studies into mechanistic investigations that could directly shape technology regulation and platform design standards across Europe.


Source: Cyberpsychology: Journal of Psychosocial Research on Cyberspace