Terrence Neumann

PhD Student at the University of Texas at Austin.


About me

I am a fifth-year PhD candidate at the University of Texas at Austin studying Information Systems, advised by Maria De-Arteaga and Yan Leng. Throughout my PhD, I’ve been fortunate to collaborate with and learn from great researchers like Sina Fazelpour, Matt Lease, and Maytal Saar-Tsechansky.

My research focuses on trustworthy AI. I am particularly interested in the following research streams: (1) mechanistic interpretability of LLM agents, (2) responsible use of LLMs as silicon subjects for academic and social applications, and (3) algorithmic fairness on social media. I pursue research that bridges machine learning and computational social science to address pressing challenges at the intersection of AI and society.

Safe · Secure & Resilient · Explainable & Interpretable · Privacy-Enhanced · Fair with Harmful Bias Managed · Accountable & Transparent · Valid & Reliable

Above: I have found the NIST AI Risk Management Framework’s Characteristics of Trustworthy AI to be a valuable lens for scoping and defining my contributions. Click on a characteristic to explore my work in that area.

2025

  1. Should you use LLMs to simulate opinions? Quality checks for early-stage deliberation
    Terrence Neumann, Maria De-Arteaga, and Sina Fazelpour
    Forthcoming in Proceedings of the Fortieth AAAI Conference on Artificial Intelligence, 2026
  2. From Statistical Patterns Emerge Human-Like Behaviors: How LLMs Learn Social Preferences
    Terrence Neumann and Yan Leng
    Working Paper, 2025

2024

  1. Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation
    Terrence Neumann, Sooyong Lee, Maria De-Arteaga, and 2 more authors
    arXiv preprint arXiv:2401.16558, 2024
  2. PRISM: A Design Framework for Open-Source Foundation Model Safety
    Terrence Neumann and Bryan Jones
    arXiv preprint arXiv:2406.10415, 2024

2023

  1. Does AI-Assisted Fact-Checking Disproportionately Benefit Majority Groups Online?
    Terrence Neumann and Nicholas Wolczynski
    In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023
  2. Mitigating bias in organizational development and use of artificial intelligence
    Proceedings of International Conference on Information Systems, 2023

2022

  1. Justice in misinformation detection systems: An analysis of algorithms, stakeholders, and potential harms
    Terrence Neumann, Maria De-Arteaga, and Sina Fazelpour
    In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022

Background

My path to academia included working as a Data Scientist at the University of Chicago Crime Lab, where I was fortunate to collaborate with exceptional researchers Jens Ludwig and Max Kapustin. Prior to that, I received an MS in Analytics from Northwestern University and a BA in Economics and Mathematics from Indiana University, Bloomington. Outside of research, I love to run (I ran my first 50k in 2024), cook, go to concerts, and play the guitar.

News

Nov 01, 2025 My paper Should You Use LLMs to Simulate Opinions? has been accepted to the AAAI Conference on Artificial Intelligence! I will share whether it will be a poster or a presentation in the AI for Social Impact track once I hear. See you in Singapore this January!
Oct 21, 2025 My paper Should You Use LLMs to Simulate Opinions? received the Best Paper Runner-Up Award for the ISS Cluster at INFORMS 2025. Thank you to the judges for taking the time to review my paper!
Jul 01, 2025 I have been invited to present at INFORMS 2025 in the Generative AI session within the ISS Cluster! See you in Atlanta.