Adam Elkus
March 6, 2014
Artificial intelligence (AI) is a hot topic in the defense community. Since the publication of P.W. Singer’s Wired for War, analysts have debated whether or not we are truly moving toward what Manuel De Landa dubbed “war in the age of intelligent machines” in his 1991 book of the same name. In particular, the morality and legality of robots have attracted a great deal of attention. However, much of this debate engages with a pop culture-influenced vision of what AI could be, not how it is currently used. In doing so, the discussion misses the more subtle—but equally groundbreaking—way that AI is transforming today’s defense landscape. Robots may be revolutionizing warfare, but larger ethical and strategic questions loom about artificial intelligence as a whole.
AI research can be broadly divided into two traditions: Strong and Weak. Strong AI seeks to replicate human-like cognition in machines, and is epitomized by the symbol-based methods John Haugeland dubbed “Good Old-Fashioned AI” in the 1980s. Weak AI, on the other hand, simply aims to make computers do tasks that humans can do. As the pioneering computer scientist Edsger Dijkstra once said, “[t]he question of whether Machines Can Think… is about as relevant as the question of whether Submarines Can Swim.”
Just as very few Navy officials worry about whether a Los Angeles-class submarine can move through the water like a fish, very few AI researchers worry about whether an artificial intelligence algorithm is cognitively realistic. This will surprise fans of pop culture who, from Fritz Lang’s silent film Metropolis to the sci-fi TV series Battlestar Galactica, obsess over the philosophical dilemmas posed by machines with human qualities. Most AI researchers (and the tech companies that pay them) couldn’t care less. If you were hoping that science would gift you with your very own Number Six, you’re in for disappointment. Why? Consider the problem of spam classification. How do we make computers better at detecting spam? We could spend a lot of time and money trying to figure out how the human brain detects, classifies, and learns from experience… or we could use a simple machine learning algorithm that we know doesn’t work the same way our brains do.
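To make the contrast concrete, a spam filter of this kind can be sketched as a naive Bayes-style word counter: it classifies by tallying which words appear in known spam versus legitimate mail, with no pretense of modeling human cognition. The toy messages below are invented purely for illustration.

```python
import math
from collections import Counter

# Toy training data (invented examples, not a real corpus)
spam = ["win free money now", "free prize click now"]
ham = ["meeting at noon today", "lunch plans for today"]

def train(messages):
    """Count how often each word appears across a set of messages."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts, n_msgs):
    """Log-probability-style score with add-one smoothing."""
    total = sum(counts.values())
    s = math.log(n_msgs)  # stands in for the class prior
    for word in message.split():
        s += math.log((counts[word] + 1) / (total + len(vocab)))
    return s

def classify(message):
    """Label a message by whichever class scores it higher."""
    if score(message, spam_counts, len(spam)) > score(message, ham_counts, len(ham)):
        return "spam"
    return "ham"

print(classify("free money"))     # → spam
print(classify("meeting today"))  # → ham
```

Nothing here resembles how a brain reads email; the filter works anyway, which is precisely the Weak AI point.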
To be sure, computer scientists can’t completely avoid using humans as models. Google has just invested a substantial amount of money in deep learning, which takes inspiration in large part from neuroscience. But the goal in general is to develop algorithms that do needed jobs (and make money), not to make replicas of human minds. As Facebook’s Yann LeCun writes, the goal of mainstream AI research is to allow humans to focus on the things that make us distinctively human and offload the rest to computers. And LeCun isn’t alone in his dream of using the machine to enhance, not replicate, Homo sapiens. The dream of machines enhancing human potential has been with us ever since the first automated tools sprang to life.
