World’s largest AI chip with 1.2 trillion transistors


A huge processor has been designed to slash the training time of the “deepest” neural networks from months to minutes. It is a task that will be handled by no fewer than 1.2 trillion transistors packed on a chip that is roughly the size of an iPad.

In today’s world, though, it takes something more than a mere transistor count to turn heads in the processor community. AMD’s new EPYC Rome server CPUs come with a total of 32 billion transistors. Xilinx’s new Virtex UltraScale+ VU19P FPGA raises that to 35 billion, and Intel’s Nervana NNP-T weighs in with 27 billion transistors. Such sizes appear to have become run-of-the-mill.

Redundant cores and interconnects keep the yield high. (Image: Cerebras).

With the new chip made by the AI start-up Cerebras, visitors at the Hot Chips Conference at Stanford found the show-stealing sensation they were looking for. A total of 1.2 trillion transistors have found a home on the largest single chip ever made. With this work, the semiconductor company is running against current trends. Chip making is in a transition away from the increasingly cost-ineffective path of large monolithic dies and toward scalable chiplet concepts: processors produced in this manner no longer consist of an individual silicon die, but of several chiplets.

One wafer for one AI chip

But not at Cerebras. Its Wafer Scale Engine (WSE) is a square chip with an edge length of 215 millimeters and a footprint of 46,225 mm² — a larger square cannot be cut from a 300-mm wafer. As a full-fledged processor, it does not need a host CPU the way other AI accelerators do.
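The quoted dimensions are internally consistent: a square die with a 215-mm edge gives exactly the stated footprint. A quick sanity check (illustrative Python, not from the article):

```python
# Verify the WSE footprint from its edge length.
edge_mm = 215
footprint_mm2 = edge_mm ** 2
print(footprint_mm2)  # 46225 mm², matching the stated figure
```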

The “monster” is produced by TSMC, a Taiwanese contract manufacturer for semiconductor products, using a 16 nm process. The 1.2 trillion transistors are distributed across 400,000 programmable computing cores. The chip has 18 GB of SRAM memory. Its accumulated memory bandwidth is listed as 9 PByte/s.
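Dividing the headline figures out gives a feel for the scale of each core. The averages below are back-of-the-envelope numbers derived from the article's totals (Cerebras's own per-core specifications may differ slightly):

```python
# Per-core averages from the article's aggregate figures (illustrative).
cores = 400_000
sram_bytes = 18e9            # 18 GB of on-chip SRAM
bandwidth_bytes_s = 9e15     # 9 PByte/s aggregate memory bandwidth

sram_per_core_kb = sram_bytes / cores / 1e3
bw_per_core_gb_s = bandwidth_bytes_s / cores / 1e9
print(sram_per_core_kb)      # 45.0 KB of SRAM per core
print(bw_per_core_gb_s)      # 22.5 GB/s of bandwidth per core
```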

At this size, no conventional cooling system will work. Instead, a circulation system pumps cooling fluid vertically through a cold plate mounted on the chip.

The start-up remains tight-lipped about the chip’s energy consumption and price. Experts estimate that production costs alone total about $15,000, and they put its power draw at roughly 15 kW.

AI Chip Cerebras (Image: Cerebras).

Larger than an Apple keyboard: the biggest processor yet for artificial intelligence. (Image: Cerebras).