TSMC is chasing a trillion-transistor AI bonanza – Asia Times

The earthquake that struck Taiwan on April 3 briefly slowed TSMC's semiconductor production operations, but the company's revenue target for 2024 remains unchanged. The company's fabs were built with a high degree of earthquake resistance.

Management is still conducting a detailed assessment of the situation, but as things stand, we can step back and consider the 10-year technology development plan that Chairman Mark Liu and Chief Scientist Philip Wong have in mind.

On March 28, IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, published an essay, "How We'll Reach a 1 Trillion Transistor GPU," which explains how "advances in semiconductors are feeding the AI boom."

First, take note that Nvidia's new Blackwell architecture AI processor combines two reticle-limited graphics processing unit (GPU) dies of 104 billion transistors each, linked by a 10-terabyte-per-second interconnect, with other circuitry in a single package.

Reticle-limited means as large as the lithography tools' printing process can transfer a pattern onto a silicon wafer, the largest possible chip size. Over the coming decade, TSMC aims to increase the number of transistors per GPU to roughly one trillion.
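The figures above imply a rough growth rate. As a back-of-the-envelope sketch (the die counts come from the article; the trillion-transistor goal from its title; the ten-year horizon and the implied annual rate are my own arithmetic, not TSMC's):

```python
# Illustrative arithmetic only: how fast must transistor counts grow
# to go from today's dual-die Blackwell package to one trillion?
BLACKWELL_DIE = 104e9                # transistors per reticle-limited die
BLACKWELL_TOTAL = 2 * BLACKWELL_DIE  # 208 billion in the dual-die package
GOAL = 1e12                          # one trillion transistors
YEARS = 10                           # the decade-long roadmap horizon

growth_needed = GOAL / BLACKWELL_TOTAL       # overall multiplier, ~4.8x
annual_rate = growth_needed ** (1 / YEARS)   # compound annual growth, ~17%/yr
print(f"overall growth needed: {growth_needed:.1f}x")
print(f"implied annual growth: {(annual_rate - 1) * 100:.0f}%")
```

A roughly 5x jump over ten years works out to a comparatively modest 17 percent per year, which is why TSMC frames the goal as achievable through packaging rather than transistor shrinks alone.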

The article begins with a review of the state of semiconductor production and artificial intelligence:

  • The IBM Deep Blue computer that defeated world chess champion Garry Kasparov in 1997 used 0.6- and 0.35-micron node technology.
  • The AlexNet neural network that won the ImageNet Large Scale Visual Recognition Challenge in 2012, launching the era of machine learning, used 40-nanometer (nm) technology.
  • The AlphaGo system that defeated European Go champion Fan Hui in 2015 was implemented, like the first edition of ChatGPT, in 5-nm technology.
  • Blackwell GPUs are made with a refined version of the 4-nm process that TSMC used to make their predecessor, the Nvidia Hopper GPU.

With the computation and memory capacity required for AI training increasing by orders of magnitude, Liu and Wong note that "If the AI revolution is to continue at its current pace, it's going to need even more from the semiconductor industry."

This will require not just moving to the 2-nm process node, scheduled for 2025, and then to the 1.4-nm (or 14A, A for angstrom) node in 2027 or 2028, but also advancing from 2D scaling to 3D system integration:

"We are now putting many chips together into a tightly integrated, massively interconnected system. This is a paradigm shift in semiconductor-technology integration," say the two executives. They explain this as follows:

In the era of AI, the capability of a system is directly proportional to the number of transistors integrated into that system. One of the main limitations is the reticle limit, which caps chips made with lithographic chipmaking equipment at no more than about 800 square millimeters. However, the size of the integrated system can now be extended beyond the reticle limit of lithography.

By placing several chips on top of a larger interposer (a piece of silicon into which interconnects are built), we can integrate a system containing many more devices than is possible on a single chip. For instance, TSMC's chip-on-wafer-on-substrate (CoWoS) technology can accommodate up to six reticle fields' worth of compute chips, along with a dozen high-bandwidth-memory (HBM) chips.
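The area arithmetic here is simple enough to spell out. A minimal sketch, using the roughly 800 mm² reticle limit and the six-reticle-field CoWoS figure quoted above (both values are from the article; the comparison is illustrative):

```python
# Illustrative sketch: how much compute silicon CoWoS packaging can
# carry compared with a single reticle-limited chip.
RETICLE_LIMIT_MM2 = 800      # approximate max printable chip area, mm^2
COWOS_RETICLE_FIELDS = 6     # reticle fields' worth of compute chips on CoWoS

max_single_chip = RETICLE_LIMIT_MM2
max_cowos_compute = COWOS_RETICLE_FIELDS * RETICLE_LIMIT_MM2  # 4800 mm^2
print(f"single chip: {max_single_chip} mm^2")
print(f"CoWoS compute area: {max_cowos_compute} mm^2 "
      f"({max_cowos_compute // max_single_chip}x)")
```

In other words, packaging sidesteps the lithography ceiling: the system grows sixfold in compute area without any single die exceeding the reticle limit.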

TSMC has already used CoWoS in its shift from 7-nm to 4-nm technology, putting 50% more transistors in the same area for Nvidia and other customers. HBM can be combined with GPUs using a technology known as system-on-integrated-chips (SoIC).

A high-bandwidth memory chip consists of a stack of vertically connected dynamic random-access memory chips atop a control logic integrated circuit. According to TSMC, 12-layer HBM test structures have been built using 3D SoIC technology.

Next, we are told, optical interfaces based on silicon photonics "will allow the scaling up of energy- and area-efficient bandwidths for direct, optical GPU-to-GPU communication, such that hundreds of servers can behave as a single giant GPU with a unified memory."

These developments, along with advances in materials science and fab equipment, should keep the energy-efficient performance (EEP) of semiconductor systems rising at a historical rate of about three times every two years. EEP is a metric that combines energy efficiency and processing speed.
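Compounded over the decade-long roadmap discussed earlier, that trend is dramatic. A quick sketch of the compounding, assuming the three-times-every-two-years rate holds (the rate is from the article; the ten-year projection is illustrative arithmetic, not a TSMC figure):

```python
# Illustrative compounding of the EEP trend: ~3x every two years.
RATE_PER_PERIOD = 3.0    # EEP multiplier per two-year period
PERIOD_YEARS = 2
YEARS = 10

periods = YEARS // PERIOD_YEARS          # five two-year periods in a decade
eep_gain = RATE_PER_PERIOD ** periods    # 3^5 = 243x over ten years
print(f"EEP gain over {YEARS} years: {eep_gain:.0f}x")
```

A 243-fold improvement in a decade is the kind of exponential that makes the trillion-transistor target look plausible despite slowing transistor-level scaling.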

If this sounds complicated, that's because it is. Liu and Wong themselves say, "From here, semiconductor technology will get harder to develop." But help is on the way in the form of 3Dblox, an open-standard 3D IC design system sponsored by TSMC, Intel, EDA companies Cadence, Siemens and Synopsys, and engineering software company Ansys. They call this "A Mead-Conway Moment for 3D Integrated Circuits."

In 1978, Professor Carver Mead of the California Institute of Technology and Lynn Conway of the Xerox PARC research and development company created a computer-aided design system that enabled engineers to design very large-scale integrated circuits without much knowledge of the semiconductor process technology required to make them. 3Dblox does the same for 3D ICs and packaging, say Liu and Wong, giving designers "a free hand to work on a 3D IC system design, regardless of the underlying technology."

According to Liu and Wong, in the era of artificial intelligence "an integrated AI system can be composed of as many energy-efficient transistors as possible, have a suitable system architecture for specialized compute workloads, and have an optimal relationship between software and hardware." That sounds like AI-enabled design of AI processors, most of them made by TSMC.

Meanwhile, Taiwanese media report that most of TSMC's manufacturing capacity is back online. Buildings, some pieces of equipment and wafers in process were damaged, but the most important parts of the production lines, including the advanced (and very expensive) EUV lithography systems, were not.

To protect its operations from earthquakes, TSMC has been putting seismic mitigation measures in place for the past 25 years. As an indicator of their success, Taiwan's DigiTimes reports that TSMC's estimated loss from the April 3 earthquake, after insurance payments, is likely to be about NT$2 billion, or only US$62.2 million at the current exchange rate.

Follow this writer on X: @ScottFo83517667