27 July 2023

In Race for AI Chips, Google DeepMind Uses AI to Design Specialized Semiconductors

Belle Lin

Researchers at Google DeepMind have developed a more efficient, automated method of designing computer chips using artificial intelligence, which the lab’s parent company, Alphabet, said could improve its own specialized AI chips.

The focus on building faster, more-efficient chips comes as semiconductor heavyweights like Nvidia and AMD race to provide the computing power for businesses’ ever-growing demand for generative AI capabilities. But cloud-computing giants like Google and Amazon, too, have been designing their own AI chips, and betting that their homegrown hardware can be faster and less costly to run than the competition.

Google said it is exploring the use of its “latest AI breakthroughs” to improve its custom AI chips, called Tensor Processing Units or TPUs. “AI is improving everything we do such as composition, understanding, coding and robotics, and the same is becoming true with hardware design,” a spokesperson said.

For London-based DeepMind, which recently unveiled an AI system that can discover faster algorithms, a goal of using AI techniques like deep learning is to make computing systems—from network resources to data centers and chips—more efficient and sustainable, said DeepMind research scientist Vinod Nair.

“As society is becoming increasingly digital, we need more and more powerful chips, more and more specialized chips for various applications,” he said.

The traditional approach to improving chip performance relies on a computing principle known as Moore’s Law, under which the number of transistors on a chip doubles roughly every two years. But some experts say that as transistors reach their physical limits, performance gains will come from designing smaller, specialized chips. Applications like ChatGPT, drones and self-driving cars now run on task-focused chips such as digital-signal processors and Nvidia’s coveted graphics processors.

DeepMind’s AI-based approach, which it began working on about 18 months ago, focuses on making improvements to logic synthesis, a chip-design phase that involves turning a description of a circuit’s behavior into the actual circuit. Computer chips are made up of millions of logic circuits or “building blocks,” said Sergio Guadarrama, a DeepMind senior staff software engineer. While it is easy to optimize a few of them manually, it is impossible to tackle millions of them, he said.
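To make that concrete, here is a minimal sketch, in Python, of the kind of problem logic synthesis tackles: two gate-level circuits that implement the same behavioral specification, one using fewer gates than the other. The specification and function names are illustrative only, not drawn from DeepMind’s work.

    # A toy illustration (not DeepMind's method) of what logic synthesis optimizes:
    # two gate-level circuits that implement the same behavioral specification,
    # where the second uses fewer gates. All names here are illustrative.
    from itertools import product

    def spec(a, b, c):
        # Behavioral description: output 1 when a is 1 and at least one of b, c is 1.
        return a & (b | c)

    def naive_circuit(a, b, c):
        # Direct translation: (a AND b) OR (a AND c)  -> three gates.
        return (a & b) | (a & c)

    def optimized_circuit(a, b, c):
        # Equivalent by the distributive law: a AND (b OR c)  -> two gates.
        return a & (b | c)

    # Exhaustively check that both circuits match the specification; an automated
    # synthesis tool searches for the smallest or fastest circuit that still
    # passes this kind of check.
    for a, b, c in product([0, 1], repeat=3):
        assert naive_circuit(a, b, c) == spec(a, b, c)
        assert optimized_circuit(a, b, c) == spec(a, b, c)
    print("both circuits implement the specification")

Finding a smaller equivalent circuit is easy at this scale; DeepMind’s approach applies AI to automate that kind of optimization across the millions of building blocks on a modern chip.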

By applying AI to speed up the design of logic circuits, DeepMind aims to make the design of specialized chips more automated and efficient, and less reliant on the work of human hardware engineers alone. The difference amounts to thousands of designs generated by AI in one week, compared with one design produced by a human in a few weeks, Guadarrama said.

Key to DeepMind’s breakthrough is its use of deep learning, a technique for classifying patterns using large training data sets and AI neural networks—in other words, a way for machines to learn from data that is loosely modeled on the way a human brain learns to solve problems. The AI lab has applied the same technique to biology, culminating in last year’s announcement that its algorithm AlphaFold had predicted the structure of nearly all known proteins.
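As a rough illustration of that pattern-learning idea, the sketch below fits a single artificial neuron to labeled example points using gradient descent; deep learning stacks many such units into much larger networks. The data, learning rate and step count here are arbitrary choices made for the illustration.

    # A minimal sketch of the "learn patterns from examples" idea the article
    # describes: a single-neuron classifier fitted to labeled data points.
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training data: points labeled 1 if the sum of their features exceeds 1.
    X = rng.uniform(0, 1, size=(200, 2))
    y = (X.sum(axis=1) > 1.0).astype(float)

    w, b = np.zeros(2), 0.0
    for _ in range(500):
        # Forward pass: sigmoid of a weighted sum, i.e. one artificial neuron.
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        # Gradient of the cross-entropy loss, then a small descent step.
        grad_w = X.T @ (p - y) / len(X)
        grad_b = (p - y).mean()
        w -= 1.0 * grad_w
        b -= 1.0 * grad_b

    accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
    print(f"training accuracy: {accuracy:.2f}")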

For chip design, DeepMind used an approach it calls “circuit neural networks,” allowing the researchers to “shape the problem to look like we are training a neural network, but in fact we’re designing a circuit,” Nair said.
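The article does not spell out how circuit neural networks operate, but the general idea of framing circuit design as network training can be sketched as follows: each node in a small, fixed circuit topology softly mixes a few candidate gates, the mixture weights are trained by gradient descent against a target truth table, and rounding the trained weights yields one concrete gate per node. The gate set, topology and training details below are assumptions made for illustration, not DeepMind’s published method.

    # A hedged sketch of "train a network, read off a circuit" (not DeepMind's
    # published architecture). Two trainable nodes in a fixed topology each mix
    # three candidate gates; gradient descent fits the mixture to a target truth
    # table, and rounding the weights picks one concrete gate per node.
    import numpy as np

    rng = np.random.default_rng(0)

    def gate_outputs(x, y):
        # Soft relaxations of two-input gates; exact on 0/1 inputs.
        return np.array([x * y,               # AND
                         x + y - x * y,       # OR
                         x + y - 2 * x * y])  # XOR

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Target behavior to synthesize, given as a truth table: (a XOR b) AND c.
    inputs = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
    target = np.array([(int(a) ^ int(b)) & int(c) for a, b, c in inputs], float)

    def forward(logits, a, b, c):
        # Fixed topology: node 1 combines a and b, node 2 combines node 1 and c.
        h = softmax(logits[0]) @ gate_outputs(a, b)
        return softmax(logits[1]) @ gate_outputs(h, c)

    def loss(logits):
        preds = np.array([forward(logits, *row) for row in inputs])
        return ((preds - target) ** 2).mean()

    logits = rng.normal(scale=0.1, size=(2, 3))
    eps, lr = 1e-4, 1.0
    for step in range(3000):
        grad = np.zeros_like(logits)
        for i in range(2):
            for j in range(3):
                # Central-difference gradient keeps the sketch dependency-free.
                plus, minus = logits.copy(), logits.copy()
                plus[i, j] += eps
                minus[i, j] -= eps
                grad[i, j] = (loss(plus) - loss(minus)) / (2 * eps)
        logits -= lr * grad

    # Round each soft mixture into a discrete gate. The only zero-loss circuit
    # for this target is XOR at node 1 feeding AND at node 2, so a near-zero
    # final loss means the rounded readout recovers exactly that circuit.
    names = ["AND", "OR", "XOR"]
    print("final loss:", loss(logits))
    print("node 1:", names[int(softmax(logits[0]).argmax())])
    print("node 2:", names[int(softmax(logits[1]).argmax())])

For this particular target, the only exact solution is an XOR gate feeding an AND gate, so driving the loss toward zero amounts to designing that two-gate circuit by training what looks like a neural network.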

Last month, DeepMind’s approach won, by a significant margin, a programming contest focused on developing smaller circuits, demonstrating a 27% efficiency improvement over last year’s winner and a 40% efficiency improvement over this year’s second-place finisher, said Alan Mishchenko, a researcher at the University of California, Berkeley, and an organizer of the contest.

The DeepMind team’s results were a sort of “Eureka moment,” indicating that logic synthesis still has substantial room for progress, said Mishchenko, whose research focuses on computationally efficient logic synthesis. As with other scientific breakthroughs, he said, it is likely that within a few years researchers and academics will use DeepMind’s results to push the field forward.

David Pan, a professor of electrical and computer engineering at the University of Texas at Austin and an adviser to X, an Alphabet company, said that while existing design automation tools speed up and assist this stage of chip design, such tools are still far from optimal.

And DeepMind’s results, while they focus on just a small aspect of chip design, are a fundamental step in the entire process of creating a chip, he said.

“DeepMind’s deep learning approach for logic synthesis opens a very interesting new direction to solve the classical logic synthesis problem,” Pan said. “The improvements are generic to all chips, whether specialized ASICs or CPUs or GPUs.”
