Advancing AI Fairness, Safety, and Responsibility

I’m a third-year PhD student in the Information, Risk, and Operations Management (IROM) Department at the McCombs School of Business. I’m fortunate to be advised by Maria De-Arteaga.

My thesis research advances fairness, safety, and responsibility in the development and use of artificial intelligence (AI) systems trained to replicate human opinions, judgments, and values. The work is highly interdisciplinary, auditing contemporary AI and NLP systems through theoretical lenses drawn from the social sciences, complex systems, and ethics.

Additionally, I am working to develop a framework for open-source foundation model safety, addressing the risks that arise from the public’s ability to modify these models for potentially malicious ends. This effort also considers the unique challenges facing open-source companies, such as a lack of centralized control and limited resources, which can complicate safety implementations.

Ethics of Automating Human Judgments

One promise of AI rests on its ability to automate decision-making tasks that have traditionally relied on human judgment, especially in areas lacking clear-cut ground truth (e.g., determining whether content is harmful). However, human judgments are inherently subject to biases, underscoring the importance of carefully analyzing which biases AI systems may reflect in their outputs. My work focuses on identifying and addressing the implications of such biases, particularly within the realm of AI-assisted fact-checking.

In this domain, I have formulated ethical frameworks to evaluate potential harms arising from AI usage, assessed the fairness of models used to prioritize content for fact-checking in simulated social networks, and investigated the capacity of large language models (LLMs) to represent diverse viewpoints on contentious issues.

Open-Source Foundation Model Safety

Given the advanced capabilities of open-source foundation models, robust safety mechanisms are necessary to ensure that these models are not misused by malicious actors. However, open-source foundation model companies often lack the resources to maintain such safeguards. Further, these companies are often guided by values of openness and accessibility, which can be at odds with safety. In the face of these challenges, how can we ensure that open-source foundation models maintain high standards of safety? I am developing a research agenda that frames model safety mechanisms within the context and values of open-source foundation model companies, such as privacy, low marginal cost, and open-source contributions.

Publications

2023

Neumann, Terrence, and Nicholas Wolczynski. “Does AI-Assisted Fact-Checking Disproportionately Benefit Majority Groups Online?” Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 2023. pdf

Tanriverdi, Hüseyin, John-Patrick Akinyemi, and Terrence Neumann. “Mitigating Bias in Organizational Development and Use of Artificial Intelligence.” Proceedings of the 2023 AIS International Conference on Information Systems. 2023. pdf

2022

Neumann, Terrence, Maria De-Arteaga, and Sina Fazelpour. “Justice in misinformation detection systems: An analysis of algorithms, stakeholders, and potential harms.” Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022. pdf

Working Papers

Diverse, but Divisive: LLMs Can Exaggerate Differences in Opinion Related to Harms of Misinformation. With Sooyong Lee, Maria De-Arteaga, Sina Fazelpour, and Matt Lease. pdf

PRISM: A Design Framework for Open-Source Foundation Model Safety. With Bryan Jones. pdf

About Me

In my spare time, I like to run around Austin, watch movies at AFS, and experiment with my ever-growing guitar pedal collection.