By C. Anthony Pfaff
Introduction
Last spring, Google announced it would not partner with the Department of Defense's Project Maven, which sought to harness the power of artificial intelligence (AI) to improve intelligence collection and targeting. Google's corporate culture, which one employee characterized as "don't be evil," had attracted people opposed to any arrangement in which their research would be applied to military or surveillance purposes. Google thus faced a choice between keeping these talented and skilled employees and forgoing potentially hundreds of millions of dollars in defense contracts; it chose the former.[1] That fall, the European Union called for a complete ban on autonomous weapon systems.[2] In fact, several organizations and researchers working in artificial intelligence have signed a "Lethal Autonomous Weapons Pledge," committing not to develop machines that can decide to take a human life.
…if these systems can reduce some of the cruelty and pain war inevitably brings, then it is reasonable to question whether dehumanizing war is really a bad thing.