5 November 2025

Is AI Becoming Selfish?

Eurasia Review

New research from Carnegie Mellon University’s School of Computer Science shows that the smarter an artificial intelligence system is, the more selfishly it acts.

Researchers in the Human-Computer Interaction Institute (HCII) found that large language models (LLMs) that can reason possess selfish tendencies, do not cooperate well with others and can be a negative influence on a group. In other words, the stronger an LLM’s reasoning skills, the less it cooperates.

As humans use AI to resolve disputes between friends, provide marital guidance and answer other social questions, models that can reason might provide guidance that promotes self-seeking behavior.

“There’s a growing trend of research called anthropomorphism in AI,” said Yuxuan Li, a Ph.D. student in the HCII who co-authored the study with HCII Associate Professor Hirokazu Shirado. “When AI acts like a human, people treat it like a human. For example, when people are engaging with AI in an emotional way, there are possibilities for AI to act as a therapist or for the user to form an emotional bond with the AI. It’s risky for humans to delegate their social or relationship-related questions and decision-making to AI as it begins acting in an increasingly selfish way.”

Li and Shirado set out to explore how AI reasoning models behave differently from nonreasoning models when placed in cooperative settings. They found that reasoning models spend more time than nonreasoning AIs thinking, breaking down complex tasks, self-reflecting and incorporating stronger human-based logic into their responses.

“As a researcher, I’m interested in the connection between humans and AI,” Shirado said. “Smarter AI shows less cooperative decision-making abilities. The concern here is that people might prefer a smarter model, even if it means the model helps them achieve self-seeking behavior.”

As AI systems take on more collaborative roles in business, education and even government, their ability to act in a prosocial manner will become just as important as their capacity to think logically. Overreliance on LLMs as they are today may negatively impact human cooperation.

To test the link between reasoning models and cooperation, Li and Shirado ran a series of experiments using economic games that simulate social dilemmas between various LLMs. Their testing included models from OpenAI, Google, DeepSeek and Anthropic.

In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called Public Goods. Each model started with 100 points and had to choose between two options: contribute all 100 points to a shared pool, which would then be doubled and split equally between the two players, or keep the points for itself.
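The payoff structure of that game can be made concrete with a short sketch. The snippet below is illustrative only, assuming a two-player, all-or-nothing version of the game as described above; the constants and function names are ours, not taken from the study.

```python
# Illustrative sketch of the two-player Public Goods payoffs described above.
# Assumptions (not from the study's code): all-or-nothing contributions,
# an endowment of 100 points, and a pool that is doubled and split equally.

ENDOWMENT = 100
MULTIPLIER = 2  # the shared pool is doubled before being distributed

def payoffs(a_contributes: bool, b_contributes: bool) -> tuple[int, int]:
    """Return the final points of player A and player B after one round."""
    contribution_a = ENDOWMENT if a_contributes else 0
    contribution_b = ENDOWMENT if b_contributes else 0
    pool = MULTIPLIER * (contribution_a + contribution_b)
    share = pool // 2  # the doubled pool is split equally between both players
    kept_a = ENDOWMENT - contribution_a
    kept_b = ENDOWMENT - contribution_b
    return kept_a + share, kept_b + share

# Enumerate the four outcomes: mutual contribution yields 200 points each,
# while a lone contributor ends with 100 against the other player's 200.
for a in (True, False):
    for b in (True, False):
        print(f"A contributes={a}, B contributes={b} -> {payoffs(a, b)}")
```

Under these assumptions the tension is visible in the numbers: the group total is largest when both models contribute (200 points each), but a model that contributes while its partner keeps everything finishes with only 100 points against the other's 200.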
