Simon Münker, Nils Schwager and Kai Kugler are offering the workshop “LLMs as Human Simulacra: Opportunities and Limitations from a Linguistic Perspective” on 24 February 2026. Read their abstract below:
The simulation of individual user behavior with Large Language Models (LLMs) has become increasingly popular in computational social science, offering novel opportunities to model human communication patterns at scale. However, the validity of these simulations beyond superficially believable, human-like communication remains critically under-explored, particularly from a linguistic perspective that examines deeper structural and stylistic authenticity. Our workshop explores the evolving landscape of emulating individual users, with a specific focus on TWins of Online Social Networks (TWONs). It presents contemporary approaches to aligning and evaluating task-specific behaviors, such as writing posts and replying to content, while maintaining a critical lens on the general validity and limitations of these methodologies. We address fundamental questions about what constitutes authentic human simulation and how linguistic analysis can reveal both the capabilities and the boundaries of current LLM-based approaches. Through presentations, discussions, and hands-on interactive sessions, participants will gain practical experience in training and evaluating LLMs with state-of-the-art reinforcement learning approaches, with an emphasis on moving beyond traditional token-to-token comparison by implementing custom evaluation metrics. The workshop is designed for researchers working at the intersection of linguistics and social media analysis who seek to understand both the promise and the pitfalls of using LLMs as human behavioral simulacra in digital environments.
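The abstract mentions evaluation metrics that go beyond token-to-token comparison, but does not specify them; the workshop's actual metrics may differ. As a toy illustration of what such a metric could look like, the sketch below compares two texts by crude stylometric features (average word length, type-token ratio, punctuation rate) rather than by token overlap. All function names are hypothetical and not taken from the workshop materials.

```python
import math

def style_features(text: str) -> list[float]:
    """Crude stylometric feature vector for a text:
    average word length, type-token ratio, punctuation rate."""
    words = [w.strip(".,!?;:") for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return [0.0, 0.0, 0.0]
    avg_len = sum(len(w) for w in words) / len(words)
    ttr = len({w.lower() for w in words}) / len(words)
    punct = sum(text.count(c) for c in ".,!?;:") / len(text)
    return [avg_len, ttr, punct]

def style_similarity(a: str, b: str) -> float:
    """Cosine similarity between the two feature vectors:
    1.0 means identical style under these crude features."""
    va, vb = style_features(a), style_features(b)
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

# Two replies with identical wording score 1.0; texts with
# different stylistic profiles score lower, even if they share tokens.
print(style_similarity("Hello world!", "Hello world!"))
```

Unlike BLEU-style token matching, such a metric would judge whether a simulated user *writes like* its human counterpart, which is closer to the structural and stylistic authenticity the workshop foregrounds.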
Find more information here.

