21 October 2025

Your New Teammate Is a Machine. Are You Ready?

Nelson Lim

Companies across various industries are investing heavily in AI to enhance employee productivity. A leader at the consulting firm McKinsey says he envisions an AI agent for every human employee. Soon, a factory manager will oversee a production line where human workers and intelligent robots seamlessly develop new products. A financial analyst will partner with an AI data analyst to uncover market trends. A surgeon will guide a robotic system with microscopic precision, while an AI teammate monitors the operation for potential complications.

These scenarios represent the forefront of human-machine collaboration, a significant shift that is quickly moving from research labs into every critical sector of our society.

In short, we are on the verge of deploying AI not just as a tool, but as an active partner in our most important work. The potential is clear: If we effectively combine the computational power of AI with the intuition, creativity, and ethical judgment of a human, the team will achieve more than either could alone.

But we aren't prepared to harness this potential. The biggest risk is what's called “automation bias”: the human tendency to over-rely on automated systems and, worse, to favor their suggestions while ignoring correct contradictory information. Automation bias can lead to critical errors of commission (acting on flawed advice) and omission (failing to act when a system misses something), particularly in high-stakes environments.

Even improved proficiency with AI doesn't reliably mitigate automation bias. For example, a study of the effectiveness of clinical decision support systems in health care found that individuals with moderate AI knowledge were the most over-reliant, while both novices and experts showed more calibrated trust. What did lower rates of automation bias was holding study participants accountable for either their overall performance or their decision accuracy.
