3D Inspection Solutions

AI is helping create the chips that design AI chips

Today’s microchips have connections that are 20 times smaller than the COVID-19 virus, spurring the need for new AI and machine learning approaches to their manufacture. | GETTY IMAGES

As artificial intelligence drives demand for more advanced semiconductors, new techniques in AI are becoming crucial to continued progress in chip manufacturing.

The entire semiconductor supply chain, from design through to final fabrication, is now dominated by data. More than 100 petabytes of information are created and collated during the manufacturing process, according to one estimate by Intel Corp. That’s equivalent to a 170-year-long YouTube video.

Data analytics and machine learning, a discipline within AI, are so integral to the process of making and testing chips that Taiwan Semiconductor Manufacturing Co. employs dozens of AI engineers and has its own machine-learning department. Whereas humans were once trained to visually inspect a chip for defects, the small scale and increasing complexity of electronic components have seen that function handed over to AI systems.


Photolithography is one of the most critical steps. This is the process of shining a light through a glass mask onto a chemically treated slice of silicon to create a circuit. It’s similar to old-school photography, where a final print is developed in a darkroom.

The problem is that light diffracts, which means that the lines actually drawn on the surface of a chip differ from the mask’s pattern. At larger geometries these flaws didn’t matter too much because the design had enough wiggle room to still be functional. But as dimensions shrank in line with Moore’s Law, tolerance for errors disappeared. For decades engineers tackled these distortions by deploying a technique called optical proximity correction (OPC), which adds extra shapes to the original design so that the final result more closely matches the intended circuitry.
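The idea behind OPC can be sketched in a toy model. This is not production lithography software — it's a minimal one-dimensional illustration that stands in for diffraction with a simple box blur and for the photoresist with a hard threshold, then shows how widening the drawn feature recovers the intended printed width:

```python
import numpy as np

def print_pattern(mask, kernel, threshold=0.7):
    """Toy 'printing' step: diffraction blurs the mask, then the
    photoresist thresholds the blurred light into a hard edge."""
    blurred = np.convolve(mask, kernel, mode="same")
    return (blurred > threshold).astype(float)

# A one-dimensional slice of a design: a line meant to be 8 units wide.
target = np.zeros(32)
target[12:20] = 1.0

# Crude stand-in for diffraction: a 5-tap box blur.
kernel = np.ones(5) / 5.0

# Printing the design exactly as drawn loses material at the edges:
# only 6 of the intended 8 units survive the blur-and-threshold step.
naive = print_pattern(target, kernel)

# OPC-style correction: bias the drawn line one unit wider on each side
# so the printed result recovers the intended 8-unit width.
opc_mask = np.zeros(32)
opc_mask[11:21] = 1.0
corrected = print_pattern(opc_mask, kernel)

print(naive.sum(), corrected.sum(), target.sum())  # 6.0 8.0 8.0
```

Real OPC works on two-dimensional polygons and adds serifs and assist features rather than a uniform bias, but the principle — pre-distort the mask so the physics lands on the intended shape — is the same.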

Today’s chips have connections as thin as 5 nanometers, 20 times smaller than the COVID-19 virus, spurring the need for new approaches. Thankfully the errors between design and result aren’t entirely random. Engineers can predict the variations by working backward: Start with what you hope to achieve and crunch a lot of numbers to work out what the photolithography mask should look like to achieve it. This technique, called inverse lithography, was pioneered 20 years ago by Peng Danping at Silicon Valley software startup Luminescent. That Peng, who has since moved to TSMC as a director of engineering, completed his Ph.D. not in electrical engineering but applied mathematics hints at the data-centric nature of inverse lithography technology (ILT).
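That "work backward" step is, at heart, an optimization problem. The sketch below is a hypothetical illustration, not Luminescent's or TSMC's actual algorithm: it models the optics as a fixed blur and uses gradient descent to find a mask whose blurred image matches the desired printed pattern:

```python
import numpy as np

def blur(mask, kernel):
    """Stand-in for the optical model: light spreads as it passes the mask."""
    return np.convolve(mask, kernel, mode="same")

# A smooth, symmetric toy blur kernel (assumed for illustration).
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

# The pattern we want printed on the wafer.
target = np.zeros(32)
target[12:20] = 1.0

# Work backward: start from the target itself and repeatedly nudge the
# mask so the *blurred* mask matches the target (gradient descent on a
# least-squares objective).
mask = target.copy()
step = 0.5
for _ in range(500):
    residual = blur(mask, kernel) - target
    # Because the kernel is symmetric, convolving the residual again
    # gives the gradient of 0.5 * ||blur(mask) - target||^2.
    mask -= step * np.convolve(residual, kernel, mode="same")

before = np.linalg.norm(blur(target, kernel) - target)
after = np.linalg.norm(blur(mask, kernel) - target)
print(before, after)  # the optimized mask prints far closer to the target
```

The optimized mask ends up with overshoots near the edges — curvy, non-intuitive shapes rather than the clean rectangles a designer drew, which is exactly why real ILT masks are so much more expensive to compute and store.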

With hundreds of different parameters to consider — such as light intensity, wavelength, chemical properties, width and depth of circuitry — this process is extremely data intensive. At its core, inverse lithography is a mathematical problem. The design of an ILT mask takes 10 times longer to compute than older OPC-based approaches, with the size of a file holding the pattern up to seven times larger.

Collating data, formulating algorithms and running thousands of mathematical computations is precisely what semiconductors are made for, so it was only a matter of time before artificial intelligence was deployed to try to more efficiently design artificial intelligence chips.

It is, in many respects, a very complicated graphics problem. The goal is to build a microscopic three-dimensional structure from multiple layers of two-dimensional images.

Nvidia Corp., now the world’s leader in AI chips, started out designing graphics processing units for computers 30 years ago. It stumbled upon AI because, like graphics, it’s a sector of computing that requires massive amounts of number-crunching power. The company’s central role in AI was underscored when it recently forecast quarterly sales that surpassed expectations, driving the stock up around 25% in premarket trading and pushing it toward a $1 trillion valuation.

Images on a computer screen are little more than a superfine grid of colored dots. Calculating which dots to light up as red, green or blue can be done in parallel because each point on the screen is independent of every other. For a graphics-heavy computer game to run smoothly, these calculations need to be done quickly and in bulk. While central processing units are good at performing a variety of operations, including juggling multiple tasks at once, modern GPUs are built specifically for parallel computing.
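That per-pixel independence can be shown in a few lines. The toy "shader" below is a hypothetical example (vectorized NumPy standing in for a GPU): every pixel's color is a function of its own coordinates alone, so the entire frame is computed as one bulk operation — the same data-parallel pattern a GPU spreads across thousands of cores:

```python
import numpy as np

# Coordinate grids: ys[i, j] = i (row), xs[i, j] = j (column).
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]

# Toy shader: red ramps left to right, green top to bottom, blue fixed.
# Each output value depends only on that pixel's own (i, j).
red = (255 * xs / (w - 1)).astype(np.uint8)
green = (255 * ys / (h - 1)).astype(np.uint8)
blue = np.full((h, w), 128, dtype=np.uint8)

frame = np.stack([red, green, blue], axis=-1)  # shape (64, 64, 3)
print(frame.shape, frame[0, 0], frame[-1, -1])
```

Because no pixel reads another pixel's result, the work partitions perfectly — which is why the same hardware that renders game frames also excels at the matrix arithmetic behind both AI training and lithography simulation.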

Now Nvidia is using its own graphics processors and a library of software it created to make semiconductor lithography more efficient. In a blog post last year, the Californian company explained that by using its graphics chips it could run inverse lithography computations 10 times faster than on standard processors. Earlier this year, it upped that estimate, saying its approach could accelerate the process by 40 times. With a suite of design tools and its own algorithms, collectively marketed under the term cuLitho, the company is working with TSMC and semiconductor design-software provider Synopsys Inc.

This collection of software and hardware wasn’t developed by Nvidia for altruistic reasons. The company wants to find more uses for its expensive semiconductors, and it needs to ensure that the process of bringing its chip designs to market remains smooth and as cheap as possible. While we all marvel at ChatGPT’s ability to write software, we’ll increasingly see AI chips playing a role in creating AI chips.

Tim Culpan is a Bloomberg Opinion columnist covering technology in Asia.
