30 July 2023

Our Oppenheimer Moment: The Creation of A.I. Weapons

Alexander C. Karp

In 1942, J. Robert Oppenheimer, the son of a painter and a textile importer, was appointed to lead Project Y, the military effort established by the Manhattan Project to develop nuclear weapons. Oppenheimer and his colleagues worked in secret at a remote laboratory in New Mexico to discover methods for purifying uranium and ultimately to design and build working atomic bombs.

He had a bias toward action and inquiry.

“When you see something that is technically sweet, you go ahead and do it,” he told a government panel that would later assess his fitness to remain privy to U.S. secrets. “And you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” His security clearance was revoked shortly after his testimony, effectively ending his career in public service.

Oppenheimer’s feelings about his role in conjuring the most destructive weapon of the age would shift after the bombings of Hiroshima and Nagasaki. At a lecture at the Massachusetts Institute of Technology in 1947, he observed that the physicists involved in the development of the bomb “have known sin” and that this is “a knowledge which they cannot lose.”

We have now arrived at a similar crossroads in the science of computing, a crossroads that connects engineering and ethics, where we will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend.

The choice we face is whether to rein in or even halt the development of the most advanced forms of artificial intelligence, which some argue may threaten or someday supersede humanity, or to allow more unfettered experimentation with a technology that has the potential to shape the international politics of this century in the way nuclear arms shaped the last one.

The emergent properties of the latest large language models — their ability to stitch together what seems to pass for a primitive form of knowledge of the workings of our world — are not well understood. In the absence of understanding, the collective reaction to early encounters with this novel technology has been marked by an uneasy blend of wonder and fear.

Some of the latest models have a trillion or more parameters, tunable variables within a computer algorithm, representing a scale of processing that is impossible for the human mind to begin to comprehend. We have learned that the more parameters a model has, the more expressive its representation of the world and the richer its ability to mirror it.
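For readers who want a feel for where such numbers come from, a rough sketch may help: the parameters of a transformer-style language model sit mostly in its attention and feed-forward weight matrices, so multiplying layer counts by layer widths quickly reaches into the hundreds of billions. The layer counts and widths below are hypothetical, chosen only to illustrate the arithmetic, not to describe any particular published model.

```python
# Back-of-the-envelope parameter count for a decoder-only transformer.
# The sizes used here are hypothetical and purely illustrative; they do
# not correspond to any specific model mentioned in the article.

def transformer_params(layers: int, d_model: int, vocab: int, ff_mult: int = 4) -> int:
    """Approximate parameter count, ignoring biases and layer norms."""
    attention = 4 * d_model * d_model                   # query, key, value, output projections
    feed_forward = 2 * d_model * (ff_mult * d_model)    # up- and down-projection
    per_layer = attention + feed_forward
    embeddings = vocab * d_model                        # token embedding matrix
    return layers * per_layer + embeddings

if __name__ == "__main__":
    for layers, d_model in [(12, 768), (96, 12288), (128, 26624)]:
        n = transformer_params(layers, d_model, vocab=50_000)
        print(f"{layers:>3} layers, width {d_model:>6}: ~{n / 1e9:,.1f} billion parameters")
```

Scaling the width and depth of such a network by a factor of ten or so is enough to move from roughly a hundred million tunable variables to more than a trillion, which is why the latest systems operate at a scale no person can inspect by hand.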
