Adam Becker’s “More Everything Forever” begins by describing the ideas of Eliezer Yudkowsky, an AI guru who Sam Altman thinks deserves a Nobel Prize. Yudkowsky’s ambitions for humanity include “[p]erfect health, immortality,” and a future in which “[i]f you imagine something that’s worse than mansions with robotic servants for everyone, you are not being ambitious enough.” According to Yudkowsky and his peers, a “glorious transhumanist future” awaits us if we get AI right, although we face extinction if we get it wrong.
“AI” and “transhumanist” are new terms for rather older ambitions. As the seedy occultist Dr. Trelawney remarks in Anthony Powell’s 1962 novel, “The Kindly Ones,” “[t]o be forever rich, forever young, never to die … Such was in every age the dream of the alchemist.” Renaissance alchemists won the support of monarchs like Rudolf II, the Holy Roman Emperor who squandered his realm’s money on a futile quest to discover the Philosopher’s Stone. Now, as Becker explains, AGI, or artificial general intelligence, has become the means through which philosophers might transubstantiate our mundane reality into a realm in which the apparently impossible becomes possible: living forever, raising the dead, and remaking the universe in the shape of humanity.
These ideas would be a mere curiosity if they weren't reshaping the world and policymakers' understanding of national security. Our epoch is quite as strange as Rudolf II's Prague. Like a John Crowley novel, it has its own deathless golems and wizards who hope to speak to divine beings through a medium. In Ezra Klein's description, AI's coders see themselves as casting spells of summoning, even if they are not sure what lurks on the other side of the portal.
Just as they did centuries ago, rulers listen to them. The Biden administration bet Americans' national security on the proposition that AGI was right around the corner, while the Trump administration and its allies in the Gulf seem to believe that AI will help make a world where they will be in charge.
Becker’s excellent and lively book is not about AI as a working technology. It has little to say about the combinations of machine learning and “neural networks” (statistical processing engines that loosely resemble systems of neurons) that, for example, are used to simulate protein folding and complex weather systems. Instead, it is about the idea of AI and other closely related ideas. If it sometimes feels as though we live in a dark self-ramifying fairy tale, it is because the often mundane realities of AI have become interwoven with a set of fantastical notions that long predate the working technologies we have today.