Abstract

Large Language Models (LLMs) are increasingly deployed as autonomous agents capable of sustained, multi-turn interactions. While safety research focuses heavily on content filtering (e.g., preventing hate speech or instructional harm), comparatively little attention has been paid to the structural and social consequences of deploying LLMs with distinct conversational "philosophies." This paper frames the generative social logic of LLMs as "autonomous social engineering." Through a controlled simulation of 500 personas engaging in over 2,400 interactions across five distinct LLM agents, we demonstrate that varying the underlying prompt or conversational goal fundamentally alters the resulting network topology. Different LLM personas construct markedly divergent social structures, reshaping connectivity, cluster density, and assortativity. These findings highlight an urgent need for auditing frameworks that measure not only textual output but also the systemic risk of LLM-mediated social architecture.
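As a minimal illustrative sketch (not the paper's released code), the topology metrics named above can be computed over a logged interaction graph with networkx; the edge list and persona identifiers below are hypothetical placeholders for the simulation's output.

```python
# Illustrative sketch only: computing the network metrics named in the
# abstract (density, clustering, assortativity) with networkx.
# The interaction data below is hypothetical, not from the paper's simulation.
import networkx as nx

# Hypothetical edge list: (persona_a, persona_b) pairs logged from
# LLM-mediated interactions.
interactions = [("p1", "p2"), ("p2", "p3"), ("p3", "p1"), ("p3", "p4")]

G = nx.Graph()
G.add_edges_from(interactions)

# Density: fraction of possible ties that are actually realised.
density = nx.density(G)

# Average clustering coefficient: how tightly neighbourhoods are knit.
clustering = nx.average_clustering(G)

# Degree assortativity: do well-connected personas link to each other?
assortativity = nx.degree_assortativity_coefficient(G)

print(f"density={density:.3f} clustering={clustering:.3f} "
      f"assortativity={assortativity:.3f}")
```

Comparing these summary statistics across graphs produced by different agent prompts would be one way to quantify the divergence in social structure described above.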

Author: Amir Reza Asadi