13 June 2018

Why Artificial Intelligence Won't Be as Bad—or as Good—as Everyone Thinks

Milton Ezrati

Instead, the new technology often creates additional employment. The application of the spinning jenny and machine loom in eighteenth- and early nineteenth-century Britain put hand weavers and others out of work. But by making the textile trade far more profitable, the inventions brought about such an expansion of the industry that it ultimately employed more people than before and accounted for a larger share of the nation's workforce. More recently, the advent of automatic teller machines (ATMs) made everyday banking so much more efficient that the industry came to employ more men and women, including the tellers people had feared would wind up on the unemployment line.

Of greater significance than the employment question is how much wealth each technological advance creates. To be sure, each wave produces a class of the super-wealthy from those lucky or smart enough to have made themselves part of it. The great American railroad barons of the nineteenth century stand as examples, as do today's computer-based tech barons. No doubt AI will create its own class of this sort. But the technology also drives the general economy, creating new demand for products and services of all kinds that in turn creates new jobs. This job-producing leverage is evident in a simple thought experiment. Imagine if AI could lift the growth rate of labor productivity in America from the present pace of about one percent a year to the two to three percent pace averaged in the 1960s, 70s, and 80s. If that were to happen, the overall U.S. economy would expand at something close to 3.5 percent a year instead of the recent two percent pace. Over the next ten years, that difference would produce roughly $3.6 trillion more in annual national product than otherwise, a huge cumulative addition.
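
As a rough check on that figure, here is a minimal sketch of the compounding arithmetic, assuming a starting national product of about $19 trillion (an approximation of 2018 U.S. output; the baseline and the exact growth rates are illustrative assumptions, not figures from the article):

# Compounding comparison under assumed figures: ~2 percent growth versus ~3.5 percent growth.
baseline_gdp = 19.0   # starting national product in trillions of dollars (assumed)
years = 10

slow_path = baseline_gdp * 1.02 ** years    # recent ~2 percent pace
fast_path = baseline_gdp * 1.035 ** years   # hypothetical AI-boosted ~3.5 percent pace

print(f"After {years} years at 2.0%:  ${slow_path:.1f} trillion")
print(f"After {years} years at 3.5%:  ${fast_path:.1f} trillion")
print(f"Difference in annual output: ${fast_path - slow_path:.1f} trillion")
# With these assumptions, the gap works out to roughly $3.6 trillion a year.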

Of course, each U.S. generation, though it has access to this history, worries that its technology is somehow different. The response is understandable. People can see the jobs the new technology will destroy far more clearly than the jobs it will create, especially since some of the new jobs have yet even to be imagined. For instance, when containers put American longshoremen out of work in the late 1950s and early 1960s, it would have taken a bold forecaster indeed to predict that containers would be a good thing. In fact, those containers would foster a 2,000 percent increase in world trade in just five years and create more jobs for higher-paid, better-trained workers in the industry and beyond it. Furthermore, in the 1980s, when the widespread use of word processing displaced U.S. typists, who were mostly women, the number of women in the workforce still increased, as did the proportion of women in paid employment. The natural inability to see the sources of growth shows clearly, if rather comically, in the fact that the 1975 report of the U.S. Council of Economic Advisers (incidentally authored by Alan Greenspan) did not once use the word "computer."


There is no doubt that robotics and AI will cause dislocation and hardship for some. In this case, the nature of the new technology will extend the pain to occupations once thought immune (which may be why the fears have become unusually intense among U.S. journalists and bureaucrats). There is every reason to seek programs and funds to retrain and retool such people for other work, though, as always, it is difficult to identify what that future work will involve (and a fool's game to try). The good thing is that the evidence of history, a small part of which this article reviews, and the logic of economic growth suggest that a jobless future is the least likely of outcomes and that the argument for a UBI misses that point.
