Artificial intelligence (AI) has become increasingly prevalent in daily life, touching areas from healthcare to finance. A new study from Washington University in St. Louis examines the intersection of human behavior and AI, revealing an unexpected psychological phenomenon. The research, led by Ph.D. student Lauren Treiman, shows that people adjust their behavior to appear more fair when they believe they are training an AI to play a bargaining game, a finding with implications for AI development.
The researchers ran five experiments, each with 200-300 participants, built around the “Ultimatum Game,” in which one player proposes how to split a small cash payout and the other either accepts the split or rejects it, leaving both with nothing. Subjects played against either human partners or a computer and were told that their decisions would be used to teach an AI bot how to play the game. Surprisingly, those who believed they were training AI were more inclined to hold out for a fair share of the payout, even when doing so cost them a few dollars. The behavior persisted even after participants were told their decisions were no longer being used for AI training, indicating a lasting effect on decision-making.
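For readers unfamiliar with the setup, the minimal sketch below simulates a single Ultimatum Game round. The function name, the $10 pot, and the acceptance threshold are illustrative assumptions for clarity, not details reported in the study.

```python
def play_ultimatum_round(offer_to_responder, pot=10.0, min_acceptable=0.0):
    """Simulate one round of the Ultimatum Game.

    The proposer offers `offer_to_responder` dollars out of `pot`.
    The responder accepts if the offer meets their threshold
    (`min_acceptable`); otherwise the offer is rejected and neither
    player is paid. Pot size and thresholds are illustrative only.
    """
    if offer_to_responder >= min_acceptable:
        return pot - offer_to_responder, offer_to_responder  # (proposer payout, responder payout)
    return 0.0, 0.0  # rejection: both players get nothing


# Example: a responder who rejects anything below a "fair" 40% share
# forgoes a small payout ($3) to punish an unfair split.
print(play_ultimatum_round(3.0, pot=10.0, min_acceptable=4.0))  # -> (0.0, 0.0)
print(play_ultimatum_round(5.0, pot=10.0, min_acceptable=4.0))  # -> (5.0, 5.0)
```

This illustrates the trade-off described above: rejecting an unfair offer sacrifices a few dollars in exchange for enforcing fairness.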
The motive behind this behavioral shift remains unclear. The researchers did not probe specific motivations and strategies, leaving room for speculation. While participants may have felt a duty to make AI more ethical, it is also possible that their natural tendency to reject unfair offers played a significant role. As Wouter Kool, assistant professor of psychological and brain sciences, pointed out, participants may simply have opted for the “easy way out” without contemplating future consequences.
Chien-Ju Ho, assistant professor of computer science and engineering, emphasized the critical human element in AI training. Many steps in AI development depend on human judgments, so human biases must be accounted for to avoid biased AI outcomes. In recent years, biased AI systems, such as facial recognition software that is less accurate for people of color, have underscored the importance of addressing bias during training. Understanding the psychological dimension of AI training is crucial for developing ethical and unbiased AI technologies.
The Washington University study sheds light on the psychological implications of training AI for fairness. The findings suggest that people deliberately adjust their behavior when they believe they are training AI, pointing to an underlying desire for fairness and ethical decision-making. As AI continues to shape society, understanding the interplay between human behavior and machine learning is essential for building AI systems that are unbiased and equitable for everyone.