30 March 2024

Navigating the Future of Work

Erik Brynjolfsson, Adam Thierer, and Daron Acemoglu

Introduction

The Workforce Futures Initiative is a research collaboration among the American Enterprise Institute, the Brookings Institution, and the Project on Workforce at Harvard Kennedy School’s Malcolm Wiener Center for Social Policy. The initiative aims to develop concise and actionable reviews of existing research for federal, state, and local policymakers. Since August 2021, the group has provided a forum for researchers and practitioners to discuss policy ideas, evaluate evidence, and identify priorities for new research on the future of work and the public workforce system.

In the first report, Beyond the Turing Test: Harnessing AI to Create Widely Shared Prosperity, Erik Brynjolfsson recounts how he came to reject the Turing Test, criticizing it for equating human mimicry with intelligence and warning of the economic consequences of building machines that merely imitate people. He argues that true technological progress lies in augmenting—not replacing—human capabilities, an approach that has historically increased the value of labor. He criticizes the current trend of developing technology that substitutes for human labor, citing misaligned incentives among technologists, entrepreneurs, and policymakers. He advocates for innovation that complements human abilities, exemplified by companies like Cresta, which uses AI to assist, not replace, human operators. Brynjolfsson emphasizes the need for policy changes—such as equal taxation of capital and labor—to encourage such human-centered technology, arguing that the future of work depends on our choices about technology’s role in the labor market.

In the second report, We Can’t Predict the Future of Work, Adam Thierer explores the skepticism surrounding predictions about technology’s impact on employment. Highlighting the tendency for overly pessimistic forecasts, he challenges the accuracy of such predictions with historical data. As examples of this overestimation, Thierer cites the recalibration of AI-related job loss estimates and the unexpected growth in certain job sectors. His report emphasizes the complexity of predicting future jobs and skills, advocating for flexible, adaptive workforce development rather than rigid government programs to navigate the evolving technological landscape.

In the final report, Automation, AI, and Wages, Daron Acemoglu examines the debate over automation and AI’s impact on job creation and productivity. While some, such as The Economist and the McKinsey Global Institute, view AI as a driver of new jobs and growth, others worry about its potential to cause job loss and exacerbate inequality. Acemoglu argues that automation has not delivered productivity gains or new jobs sufficient to offset the losses it causes. He highlights automation’s limited success in creating good jobs and the growing inequality in labor markets, which is partly attributable to automation. He scrutinizes AI’s role in the labor market, recommending cautious adoption to avoid negative outcomes. His report also touches on the need for complementary investments and a balanced approach to leveraging automation and AI for society’s benefit.

Beyond the Turing Test

Harnessing AI to Create Widely Shared Prosperity

Alan Turing famously asked, “Can we create a machine that imitates humans so well that we can’t tell which is which?” When I was a teenager, I remember thinking, “Oh, that’s really good! If a machine is indistinguishable from a human to a group of testers, that must mean it’s intelligent.”

I have since completely changed my view. The Turing Test is a bad test of intelligence. It’s about as reliable as assessing gravity’s existence by asking whether a magician can levitate someone to the astonishment of a live audience.

But more importantly, making machines that perfectly mimic humans would have some strikingly negative economic effects. First, if a machine closely imitates humans, then it’s an economic substitute for labor, and that tends to drive down wages. In turn, that can create a trap—I call it “the Turing Trap”—in which many workers lose not only economic power but also the political power to reverse their predicament.

Many think that, by definition, tech progress entails this sort of inexorable substitution of machines for humans. However, the historical reality is that most tech progress has not substituted for humans but rather amplified and complemented our capabilities. One marker of this is that for over a century, an hour of human labor has generally increased in value (though not for every worker or every group). For instance, manufacturing workers are paid about 10 times more for each hour of work today than they were in 1860.
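
As a rough back-of-the-envelope check (treating “today” as roughly 2020 is my assumption, not the essay’s), a tenfold increase over about 160 years implies compound real wage growth of only about 1.5 percent per year:

```python
# Back-of-the-envelope check of the "10x since 1860" wage claim.
# Assumption (mine, not the essay's): "today" means roughly 2020,
# so the span is about 160 years.
growth_factor = 10   # hourly manufacturing pay today vs. 1860
years = 2020 - 1860  # ~160 years

annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied compound annual real wage growth: {annual_rate:.2%}")
# -> about 1.45% per year, compounding into a tenfold gain over the span
```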

Why is an hour of labor more valuable now than it was in the past? Because today, we leverage our hands and brains with a lot of technology—hard technologies, such as bulldozers and computers, and soft technologies, such as business-process innovations. Technological progress that augments humans has increased wages.

Second, merely mimicking humans sets a ceiling on progress. If we are simply taking what’s already being done and using a machine to replace what the human is doing, that puts an upper bound on how good you can get. For example, if a business automates the process of, say, making clay pots, then the clay pots can be made more cheaply and, as a result, you have a lot of inexpensive clay pots. However, the bigger value comes from creating an entirely new thing that never existed before, such as a supersonic jet, a nanoscale actuator, or a new way of solving protein folding to create medicines. We have iPhones because somebody invented something new. They didn’t simply make a cheaper telegraph. Most of our increase in living standards comes from the invention of new goods and services, not from making the same things more cheaply.

The third important part of my Turing Trap argument is that three different groups—technologists, entrepreneurs and businesspeople, and policymakers—currently have misaligned incentives. Many technologists, though not all, focus on making machines that match humans in various tasks. It’s an inspiring goal, passing the Turing Test. Some are working to make a robotic hand that’s as dexterous as a human hand.3 Others create technologies that play poker, chess, or other games that humans play.4 Still others work on machines that can handle a telephone reservation or a medical consultation without human help.5 These technologists are asking, “How can we replace humans doing existing tasks?” But in my view, they should more often ask, “What entirely new thing can we now do that we’ve never done before?” One reason they don’t is that the second question requires a lot more imagination.

I spend significant time with entrepreneurs and executives. I visit their organizations to watch them at work and I teach at a business school, where I study their decision-making. Once again, too often I see them focus on a task their business is already doing and think, “How can I replace the human worker with a machine?” as opposed to “How can we do something new?”

Finally, consider policymakers. The tax code, investment tax credits, and many other policy-guided decisions today heavily skew toward encouraging capital and discouraging labor. For instance, marginal tax rates on labor are currently much higher than tax rates on capital. Back in 1986, they were the same. But since then, they’ve changed in a way that discourages innovations that employ and reward labor and favors innovations that shift value to capital owners.

Therefore, for technologists, executives, and policymakers—and thus for our whole economy—innovation and investment do not create a level playing field. They skew toward creating technologies that substitute for humans rather than technologies that complement humans.

It doesn’t have to be that way.

I work with several innovators and entrepreneurs who are doing something different. One company, Cresta, was started by Sebastian Thrun and Zayd Enam to help contact centers. But it’s not a company that has a robot operator answer your call or a robot text generator respond to you. Instead, they keep humans not only in the loop but in charge. Customers talk to a human operator, and that person receives real-time tips from an artificial intelligence system. The system recommends topics that will be most useful to the caller, such as reminding the operator to mention a relevant product or a new price rebate or instructing them how to fix a particular problem. Augmented this way, the operators have done fabulously. They can handle a much broader range of questions. There’s higher customer satisfaction and higher throughput. Even the employees are less likely to quit.
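
To make the division of labor concrete, here is a minimal sketch of the pattern described above, in which the AI only recommends and the human operator decides. All of the names and logic are hypothetical illustrations, not Cresta’s actual system or API:

```python
# Minimal sketch of the human-in-the-loop augmentation pattern described above.
# Every name here (Tip, suggest_tips, handle_call) is a hypothetical
# illustration, not Cresta's actual system or API.
from dataclasses import dataclass

@dataclass
class Tip:
    topic: str     # e.g., a relevant product or a new price rebate
    guidance: str  # e.g., how to fix a particular problem

def suggest_tips(transcript: list[str]) -> list[Tip]:
    """Stand-in for an AI model that reads the live conversation
    and recommends topics likely to be useful to the caller."""
    tips = []
    if any("price" in turn.lower() for turn in transcript):
        tips.append(Tip("new price rebate", "Mention the current rebate."))
    return tips

def handle_call(transcript: list[str]) -> None:
    # The AI only surfaces suggestions; the human operator stays in
    # charge and decides what to actually say to the customer.
    for tip in suggest_tips(transcript):
        print(f"[AI tip] {tip.topic}: {tip.guidance}")

handle_call(["Customer: Can you explain the price on my last bill?"])
```

The design choice that matters is the direction of control: the model writes suggestions to the operator’s screen, and nothing reaches the customer without a human choosing to say it.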

Using AI for augmentation turns out to be much more effective than trying to get the machine to handle the queries alone or having the humans work alone. The Cresta system combines the strengths of humans and machines. Lindsey Raymond, Danielle Li, and I have found that the less-experienced workers benefit the most from this augmentation method, so it also distributes income more equally.7 This approach has been a win in terms of effectiveness, efficiency, and equity.

How can we encourage more companies to innovate toward complementing humans instead of substituting for them? One way is taxing capital and labor equally to create a more level playing field. A tax system that eliminates the existing incentives favoring automation over augmentation would allow millions of managers and technologists to make their own local decisions without the government putting a thumb on the scale. Better yet, we have other tax systems, such as a value-added tax or X tax, that treat investment decisions much more evenly.

I’m not a technological determinist, and I don’t think any particular outcome is inevitable in terms of how technology will affect work. The extent to which we augment human labor is a choice. We need to carefully consider what kind of world we want to live in. Do we want a world with widely shared prosperity? Do we want a world where everybody has some bargaining power? If we do, I believe we can create that. The mission of Stanford University’s Digital Economy Lab is to do the research to understand what economic levers matter, what policies will make a difference, and how we can measure things more carefully so we can build a prosperous society.

We Can’t Predict the Future of Work

In a 2002 speech on speculation, science fiction author Michael Crichton lambasted experts and the media for their “tendency to excess” and “crisisization of everything possible” when predicting the future.8 Others have noted how sensationalism dominates forecasting because not only does bad news dominate media headlines9 but “pessimism has always been big box office,”10 with dystopian scenarios at the center of almost every story involving technology.

Against this backdrop, pundits and politicians continue to make pessimistic predictions about the dangers of technology-induced unemployment. They do so even though the historical record tells a different—and quite positive—story about the relationship between innovation and jobs.12 “Futurists don’t know any more about the future than you or I,” Crichton argued, and when reviewing their past predictions, “you’ll see an endless parade of error” and a record that is “no better than chance.”

Indeed, a coin flip is typically a better predictor of future technology and employment trends. A 2012 report prepared for the Department of Defense evaluated over one thousand science and technology forecasts from academia, industry, government, and others.14 The meta-survey revealed an average success rate of just 33 percent, with short-term forecasts (35 percent) faring only slightly better than long-term predictions (27 percent).

Bad predictions are forgotten quickly, however, and replaced with other headline-grabbing pessimistic prognostications. Over the past decade, two major reports predicted massive job dislocations due to artificial intelligence. In 2013, Carl Benedikt Frey and Michael Osborne of the University of Oxford published a widely discussed study that surveyed hundreds of occupations and considered how likely they were to be automated.16 They analyzed 702 professions and estimated that 47 percent of US jobs were at high risk of being lost. Two years later, the McKinsey Global Institute published a report predicting that as many as 45 percent of jobs (representing about $2 trillion in annual wages) “can be automated by adapting currently demonstrated technologies.”17 Seizing on these reports, headlines lamented, “Robots May Shatter the Global Economic Order Within a Decade.”

These reports were wildly off the mark. McKinsey recalibrated its model just two years later, admitting in 2017 that “very few occupations—less than 5 percent—are candidates for full automation.”19 Meanwhile, almost a decade after Frey and Osborne’s study debuted, the US economy has added 16 million jobs. The profession they said would face the highest risk of technological disruption—insurance underwriters—instead has seen employment grow 16.4 percent since 2013.

AI will cause job dislocations, of course, but no one can accurately predict which or how many jobs will be affected. Forecasting the future workforce is haunted by the same problem experts have always faced: We do not even possess a vocabulary to describe the jobs or skills of the future. When skimming old Bureau of Labor Statistics reports, such as the agency’s mammoth 1969 Tomorrow’s Manpower Needs: National Manpower Projections and a Guide to Their Use as a Tool in Developing State and Area Manpower Projections,21 one finds no mention of any of the jobs that would eventually flow from the personal computing or internet revolutions. Even when old government reports or academic studies made passing mention of the future need for “computer skills,” they offered no detail about what specific skills workers would require.

Employers, workers, and others instead had to master new skills and business models on the fly through constant iteration.22 When mainframe computers dislocated an entire generation of human “calculators,” who did hard math by hand for firms and government agencies, those displaced workers got busy creating more and better computing devices. Once free to do more creative things, the calculators became the programmers who gave us the digital revolution. Some pundits now predict “the end of programming,” with many of those workers losing their jobs to algorithms.23 More likely, AI will once again free up workers to find still better things to do.

A new book, Working with AI: Real Stories of Human-Machine Collaboration, provides almost 30 case studies showing how firms are currently integrating algorithmic technologies in the workplace and “practicing augmentation, not large-scale automation.”25 The common theme across these case studies is that “they involve highly complex collaboration,”26 with humans and machines learning together through positive feedback loops.
