
24 April 2026


Oliver Whang

When Deep Blue, IBM’s chess-playing supercomputer, beat Garry Kasparov in 1997, computers were still just computers. Deep Blue weighed more than a ton, had 32 central processing units and could evaluate 200 million board positions in a second, but everyone knew what it was doing: The computer determined the best next move by simulating, and assigning values to, board positions up to 12 moves ahead (amounting to billions of positions). This ability was programmed into Deep Blue directly by its makers, just as the first modern computer, the Electronic Numerical Integrator and Computer, or ENIAC, was programmed in 1945 to add numbers. These were “white box” systems. There was no mystery around what was going on inside them, even though they were, in a way, intelligent: What else would you call something that was good at chess?

Fifteen years later, in 2012, a research group from the University of Toronto developed a program called AlexNet (named after one of its creators, Alex Krizhevsky) that identified objects in images far more accurately than any previous program — a capability demonstrated when it handily won an image-classifying competition. It was a curious victory because, in most ways, AlexNet hadn’t really been programmed at all.
