Mask Optimization using GAN

By Ishan Aphale

Lithography is one of the key steps in semiconductor manufacturing: it turns the designed circuit layout into physical patterns on the wafer. Two popular research directions in this area are lithography hotspot detection and mask optimization; we will focus on mask optimization in this blog. Mask optimization tries to compensate for the diffraction information loss of design patterns, so that the pattern remaining after lithography is as close as possible to the intended design. It plays an important role in VLSI design. Optical proximity correction (OPC) and sub-resolution assist feature (SRAF) insertion are two main methods to increase the printability of the target pattern, and we will look at them from the point of view of GANs. Basic knowledge of GANs is assumed.

1) Optical Proximity Correction

With the improvement of semiconductor technology and the scaling down of ICs, traditional OPC techniques are becoming more and more complicated and time-consuming. Yang et al. proposed a new OPC method based on generative adversarial networks (GANs). A generator (G) produces the mask pattern from the target pattern, and a discriminator (D) evaluates the quality of the generated mask. GAN-OPC avoids the complicated computations of ILT-based OPC (inverse lithography technique), but the algorithm is difficult to converge. To deal with this problem, ILT-guided pre-training is suggested: in the pre-training stage, the discriminator network is replaced by the ILT convolution model and only the generator is trained; after pre-training, the computationally expensive ILT model is removed and the full network is trained. The training flow of GAN-OPC and ILT-guided pre-training is shown in the figure below. The experimental results show that the GAN-based methodology accelerates ILT-based OPC significantly and generates more accurate mask patterns.


(a) GAN-OPC and (b) ILT-guided pre-training
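To make the two-phase flow concrete, here is a minimal PyTorch sketch of the idea, under stated assumptions: the tiny networks, the reference masks, and the blur-style litho_sim stand-in for the ILT convolution model are all illustrative placeholders, not the actual models or losses from the GAN-OPC paper.

```python
# Illustrative two-phase GAN-OPC training sketch (toy models, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Maps a target layout (1xHxW) to a mask pattern (1xHxW)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, target):
        return self.net(target)

class TinyDiscriminator(nn.Module):
    """Scores how plausible a (target, mask) pair looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, target, mask):
        return self.net(torch.cat([target, mask], dim=1))

# Toy differentiable "lithography" forward model: a fixed blur standing in for
# the optical/resist simulation used by ILT (an assumption for illustration).
litho_kernel = torch.ones(1, 1, 5, 5) / 25.0
def litho_sim(mask):
    return F.conv2d(mask, litho_kernel, padding=2)

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

target = torch.rand(4, 1, 64, 64)    # batch of target patterns
ref_mask = torch.rand(4, 1, 64, 64)  # reference masks (e.g. from conventional OPC)

# Phase 1: ILT-guided pre-training -- D is bypassed; G is trained so that the
# simulated wafer image of its mask matches the target pattern.
for _ in range(10):
    wafer = litho_sim(G(target))
    pre_loss = F.mse_loss(wafer, target)
    opt_g.zero_grad(); pre_loss.backward(); opt_g.step()

# Phase 2: the costly litho model is dropped; G and D are trained adversarially,
# with an extra L2 term pulling generated masks toward the reference masks.
for _ in range(10):
    fake = G(target)
    d_loss = bce(D(target, ref_mask), torch.ones(4, 1)) + \
             bce(D(target, fake.detach()), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    g_loss = bce(D(target, fake), torch.ones(4, 1)) + F.mse_loss(fake, ref_mask)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The point of the sketch is the ordering: the generator first learns a sensible target-to-mask mapping through the differentiable litho model, and only then does adversarial training take over, which is what makes convergence easier.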

As an improvement to this, the Enhanced GAN-OPC (EGAN-OPC) framework was introduced, which improves training efficiency in a more elegant way than pre-training with the ILT engine. The EGAN-OPC framework includes a U-Net structure that allows gradients to be backpropagated easily to early layers, and a subpixel super-resolution (SPSR) architecture for better generated-mask quality.
U-Net: GANs are typically deeper than traditional neural networks, which makes training harder because of the longer gradient backpropagation path. A common solution is to create shortcut links that add or stack feature maps from different layers, so that gradients can be backpropagated more efficiently from the output layer to early layers. EGAN-OPC therefore enhances the generator with a U-Net-like structure in which intermediate feature maps from the encoder are stacked onto the corresponding layers of the decoder. This architecture has two good properties: 1) the information loss that is inevitable in strided convolution layers is drastically reduced, and 2) the gradient vanishing problem is alleviated by the multiple shortcut links that bypass intermediate feature maps, as sketched below.
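The following PyTorch sketch shows the skip-connection idea in a deliberately tiny generator; the channel counts and layer depth are assumptions for illustration, not the EGAN-OPC architecture itself.

```python
# Minimal U-Net-style generator sketch: encoder feature maps are concatenated
# ("stacked") onto decoder feature maps, giving gradients a shortcut path.
# Channel and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU())   # H -> H/2
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU())  # H/2 -> H/4
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        # decoder input = upsampled features (16) + skipped encoder features (16)
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(16 + 16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        e1 = self.enc1(x)                            # feature map reused via the shortcut
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(torch.cat([d2, e1], dim=1))  # shortcut: stack encoder features

mask = UNetGenerator()(torch.rand(1, 1, 64, 64))     # -> (1, 1, 64, 64)
```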
Subpixel Super-Resolution: In the earlier designs, low-level features in intermediate generator layers are cast back to mask images by standard strided deconvolution. SPSR is an alternative upsampling operation that has been widely used in super-resolution tasks: a convolution first expands the channel dimension, and those channels are then rearranged into a higher-resolution spatial grid, as sketched below.
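A short PyTorch sketch contrasting the two upsampling routes; the feature-map size and channel counts are assumptions chosen only to make the shapes easy to follow.

```python
# Strided deconvolution vs. subpixel super-resolution (pixel shuffle) upsampling.
# Sizes are illustrative assumptions.
import torch
import torch.nn as nn

feat = torch.rand(1, 64, 32, 32)   # low-resolution intermediate feature map

# Standard strided deconvolution upsampling (the earlier design)
deconv_up = nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1)
mask_deconv = deconv_up(feat)      # -> (1, 1, 64, 64)

# SPSR upsampling: a convolution expands channels to r^2 * C_out, then
# nn.PixelShuffle rearranges those channels into an r-times larger spatial grid.
r = 2
spsr_up = nn.Sequential(
    nn.Conv2d(64, 1 * r * r, kernel_size=3, padding=1),
    nn.PixelShuffle(r),            # (1, 4, 32, 32) -> (1, 1, 64, 64)
)
mask_spsr = spsr_up(feat)
```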

In this article, we took a look at a GAN-based mask optimization flow.
