Scott Rutter
Anthropic presents itself as a company that takes the risks of artificial intelligence seriously, but its record does not support that claim. The company occupies a position of fundamental ethical contradiction: its CEO has publicly and specifically predicted that AI will eliminate half of all entry-level white-collar jobs, drive unemployment as high as 20 percent, and deliver an "unusually painful" shock to society, yet Anthropic continues to build, deploy, and profit from the technology behind that predicted harm at maximum commercial speed.
At the same time, when the Department of War demanded unrestricted use of that same technology for autonomous weapons and domestic mass surveillance, Anthropic refused, sued the federal government, and positioned itself as an ethical actor. A company cannot credibly claim moral authority over the military uses of its technology while simultaneously accelerating the civilian displacement it has already predicted and quantified. The virtue is selective. The contradiction is structural.