Abstract

In many professional fields, practitioners develop expertise through hands-on interaction with subjects, gaining essential experience for real-world scenarios. However, access to human subjects for training is often limited due to safety, ethical, and resource constraints. This challenge is particularly significant in probation officer training, where interactions directly impact public safety and civil rights. This work-in-progress report explores large language models (LLMs) as a potential solution, using them to simulate different offender personas for training purposes. The pilot study examines the responses of LLM-generated offender personas to standard probation assessment questions. Five distinct offender personas each responded to 20 assessment questions using two models, LLaMA3.3-70B and GPT-4o. After grading the resulting 200 responses on four criteria (naturalness, plausibility, consistency, and honesty), the report compares the performance of the two models and presents a qualitative analysis of the results.

Authors: Amir Reza Asadi, PhD Student; Vineela Kunapareddi, MSIT; Myrinda Schweitzer Smith, PhD; Hazem Said, PhD