
Open-source platform breaks a bottleneck for high-tech chip design

While Congress has invested billions of dollars to support the U.S. semiconductor industry, a little-known bottleneck deep in the chip-design process still threatens to stymie innovation. Today’s high-tech chips have become increasingly hard — and prohibitively expensive — for startups and academics to design and test.

“It used to be the fabrication was the expensive part,” said David Wentzlaff, an associate professor of electrical and computer engineering at Princeton. But he said with modern chips, the balance has shifted. The costs to explore cutting-edge design features, integrate them into an efficient architecture and verify that the chip works as designed have grown faster in recent years than fabrication costs. “The verification costs get very high.”

Wentzlaff, an expert in creating tools that improve and broaden access to the chip-design process, said that, due to rising complexity, the process has become unaffordable for most startups and academic labs — the wellsprings of disruptive technologies.

Now he and graduate student Grigory Chirkov have developed an open-source platform that allows researchers to explore new design features and more easily test high-tech chips using inexpensive and widely available cloud-based systems, lowering the barriers to innovation. The new platform works in conjunction with the open-source processor project his team launched in 2015.

“It’s a tool to let you explore building large chips and building large chips made out of multiple chiplets, which is a new exciting field that’s going on right now,” Wentzlaff said. “And do it at high speed and for low cost,” he added. “You don’t even need to go buy the test hardware. You just rent it from Amazon.”

Chirkov presented the details of the new platform, called SMAPPIC, on March 28 at the 2023 International Conference on Architectural Support for Programming Languages and Operating Systems, known as ASPLOS.

Grigory Chirkov. Photo courtesy the researchers

Complexity is expensive

Anyone who had a desktop computer with a dial-up connection and who now wears a smart watch on her wrist can intuitively grasp the implications of the industry’s long period of exponential growth — computers have become much more powerful, much smaller and much cheaper for six straight decades.

According to Wentzlaff, that trend will come screeching to a halt in a few years, if it hasn’t already. Transistors are about as small as they can be, so the traditional route to better performance, shrinking transistors, is about to fizzle out.

In response, architects are grappling with new ways to advance hardware performance without sacrificing too much on size or cost. While most chips over the past 60 years have used a monolithic design, with a central processor handling all tasks, many manufacturers today are turning to a more modular approach that can distribute resource-intensive tasks like AI and image processing to specialized components.

Companies including Apple, AMD, Intel and NVIDIA have all rolled out modular chips. Generally, they combine smaller processor tiles called chiplets into a coherent and efficient architecture that shifts the work around according to computational demand.

But this kind of architecture is extremely complex, according to the researchers. The chiplets are each designed to handle specialized tasks and so they aren’t necessarily optimized for the larger system. Approaching chip design from a systems level takes a lot of time. And testing those systems, which includes verifying the architecture works to specification and then validating that it works in practice, on silicon, is one of the most arduous processes in all of computing.

Chirkov said that even experts who specialize in the architecture of chiplet-based systems find this process time-consuming and cumbersome, especially when working in the hardware-description language Verilog, which demands an inordinate amount of effort. All of this costs a lot of money.

Chirkov and Wentzlaff’s new platform makes it easier to explore the fine details of these complex architectures. It’s especially useful for small teams looking to sprint to a deadline and test many ideas simultaneously. Conventionally, those teams would be limited to the hardware they own. With the new platform, they can quickly expand to meet short-term needs without investing large amounts of capital.

“One of the big things that happens is when you get close to a deadline you don’t actually just want one of these things, you want five of these things,” Wentzlaff said. With the new platform, he said, “you sort of rent the five that you need for the week before the deadline.”
