6 April 2024

How One Tech Skeptic Decided A.I. Might Benefit the Middle Class

Steve Lohr

David Autor seems an unlikely A.I. optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years.

But Mr. Autor is now making the case that the new wave of technology — generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans’ voices and writing — could reverse that trend.

“A.I., if used well, can assist with restoring the middle-skill, middle-class heart of the U.S. labor market that has been hollowed out by automation and globalization,” Mr. Autor wrote in a paper that Noema Magazine published in February.

Mr. Autor’s stance on A.I. looks like a stunning conversion for a longtime expert on technology’s work force casualties. But he said the facts had changed and so had his thinking.

Modern A.I., Mr. Autor said, is a fundamentally different technology, opening the door to new possibilities. It can, he continued, change the economics of high-stakes decision-making so more people can take on some of the work that is now the province of elite, and expensive, experts like doctors, lawyers, software engineers and college professors. And if more people, including those without college degrees, can do more valuable work, they should be paid more, lifting more workers into the middle class.

The researcher, whom The Economist once called “the academic voice of the American worker,” started his career as a software developer and a leader of a computer-education nonprofit before switching to economics — and spending decades examining the impact of technology and globalization on workers and wages.

Mr. Autor, 59, was an author of an influential study in 2003 that concluded that 60 percent of the shift in demand favoring college-educated workers over the previous three decades was attributable to computerization. Later research examined the role of technology in wage polarization and in skewing employment growth toward low-wage service jobs.

Other economists view Mr. Autor’s latest treatise as a stimulating, though speculative, thought exercise.

“I’m a great admirer of David Autor’s work, but his hypothesis is only one possible scenario,” said Laura Tyson, a professor at the Haas School of Business at the University of California, Berkeley, who was chair of the Council of Economic Advisers during the Clinton administration. “There is broad agreement that A.I. will produce a productivity benefit, but how that translates into wages and employment is very uncertain.”

That uncertainty usually veers toward pessimism. Not just Silicon Valley doomsayers, but mainstream economists predict that many jobs, from call center workers to software developers, are at risk. In a report last year, Goldman Sachs concluded that generative A.I. could automate activities equivalent to 300 million full-time jobs globally.


A call center in Montgomery, Ala. A research project by two M.I.T. graduate students whom Mr. Autor advised showed that A.I. increased the productivity of all workers, but the less skilled benefited the most.

In Mr. Autor’s latest report, which was also published by the National Bureau of Economic Research, he discounts the likelihood that A.I. can replace human judgment entirely. And he sees the demand for health care, software, education and legal advice as almost limitless, so that lowering costs should expand those fields as their products and services become more widely affordable.

It’s “not a forecast but an argument” for an alternative path ahead, very different from the jobs apocalypse foreseen by Elon Musk, among others, he said.

Until now, Mr. Autor said, computers were programmed to follow rules. They relentlessly got better, faster and cheaper. And routine tasks, in an office or a factory, could be reduced to a series of step-by-step rules that have increasingly been automated. Those jobs were typically done by middle-skill workers without four-year college degrees.

A.I., by contrast, is trained on vast troves of data — virtually all the text, images and software code on the internet. When prompted, powerful A.I. chatbots like OpenAI’s ChatGPT and Google’s Gemini can generate reports and computer programs or answer questions.

“It doesn’t know rules,” Mr. Autor said. “It learns by absorbing lots and lots of examples. It’s completely different from what we had in computing.”

An A.I. helper equipped with a storehouse of learned examples, he said, can offer “guidance” (in health care, did you consider this diagnosis?) and “guardrails” (don’t prescribe these two drugs together).
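To make the “guardrails” idea concrete, here is a minimal sketch, in Python, of the kind of automated check such a helper might run before a suggestion is acted on. The drug names and the interaction list are hypothetical and purely illustrative, not drawn from Mr. Autor’s paper or any medical source.

    # Hypothetical guardrail: flag a risky drug pairing before it is prescribed.
    KNOWN_INTERACTIONS = {
        frozenset({"warfarin", "aspirin"}),
        frozenset({"sildenafil", "nitroglycerin"}),
    }

    def guardrail_check(current_meds, proposed_drug):
        """Return warnings for any known risky pairing between the proposed
        drug and the patient's current medications."""
        warnings = []
        for med in current_meds:
            if frozenset({med.lower(), proposed_drug.lower()}) in KNOWN_INTERACTIONS:
                warnings.append(
                    f"Guardrail: {proposed_drug} may interact with {med}; review before prescribing."
                )
        return warnings

    print(guardrail_check(["Warfarin"], "Aspirin"))
    # -> ['Guardrail: Aspirin may interact with Warfarin; review before prescribing.']

The point of the sketch is only that the check encodes accumulated examples rather than requiring the user to hold every rule in mind — the less experienced worker gets the benefit of the expert’s accumulated knowledge.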

In that way, Mr. Autor said, A.I. becomes not a job killer but a “worker complementary technology,” which enables someone without as much expertise to do more valuable work.

Early studies of generative A.I. in the workplace point to the potential. One research project by two M.I.T. graduate students, whom Mr. Autor advised, assigned tasks like writing short reports or news releases to office professionals. A.I. increased the productivity of all workers, but the less skilled and experienced benefited the most. Later research with call center workers and computer programmers found a similar pattern.

But even if A.I. delivers the largest productivity gains to less-experienced workers, that does not mean they will reap the rewards of higher pay and better career paths. That will also depend on corporate behavior, worker bargaining power and policy incentives.

Daron Acemoglu, an M.I.T. economist and occasional collaborator of Mr. Autor’s, said his colleague’s vision is one possible path ahead, but not necessarily the most likely one. History, Mr. Acemoglu said, is not with the lift-all-boats optimists.

“We’ve been here before with other digital technologies, and it hasn’t happened,” he said.

Mr. Autor acknowledges the challenges. “But I do think there is value in imagining a positive outcome, encouraging debate and preparing for a better future,” he said. “This technology is a tool, and how we decide to use it is up to us.”
