Posts

FAST STATIC IR DROP PREDICTION USING MACHINE LEARNING

By Anmol Salvi

INTRODUCTION: Logic synthesis, high-level synthesis, IR drop estimation, and lithography hotspot detection all fall under one broad category, Electronic Design Automation (EDA), and are all related to each other to some extent. The IR drop constraint is one of the fundamental requirements enforced in almost every chip design. However, its evaluation takes a large amount of time, and the mitigation techniques for fixing any violations may require a number of iterations. Thus, fast and, more importantly, accurate IR drop prediction becomes essential for reducing the overall design turnaround time. Very recently, machine learning (ML) techniques have been studied actively to allow for fast IR drop estimation, owing to their promise and recorded success in many fields. IR drop, also called voltage drop, is defined as the deviation of a power supply level from its specification, which occurs when current begi...
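As a rough illustration of framing static IR drop estimation as a supervised learning problem (not the specific method in the post above), the sketch below fits a linear model on synthetic per-cell features. The feature names, the toy V = I·R ground truth, and all numeric values are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: predict per-cell static IR drop from two
# hand-crafted features (all names and values are illustrative).
rng = np.random.default_rng(0)
n_cells = 200
# Feature 1: local current demand (A); Feature 2: effective
# resistance to the nearest power pad (ohm).
current = rng.uniform(0.001, 0.01, n_cells)
resistance = rng.uniform(1.0, 5.0, n_cells)
# Toy ground truth: drop follows V = I * R plus small noise.
ir_drop = current * resistance + rng.normal(0, 1e-4, n_cells)

# Fit a linear model on the product feature via least squares.
X = np.column_stack([current * resistance, np.ones(n_cells)])
coef, *_ = np.linalg.lstsq(X, ir_drop, rcond=None)
pred = X @ coef

mae = np.abs(pred - ir_drop).mean()
print(f"mean absolute error: {mae:.6f} V")
```

The appeal is exactly what the post argues: once trained, inference like this is near-instant compared to a full power-grid simulation.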

Machine Learning for Verification and Testing

Introduction

Coverage is the primary concern when designing a test set for verification and testing problems. However, the definition of "coverage" differs from problem to problem. For example, for digital design, the test set should cover as many states of the finite state machine (FSM), or as many input situations, as possible. For analog/RF design, since the input is continuous and the system can be very sensitive to environmental disturbance, a sampling strategy that covers most input values and working situations is needed. As for the test of semiconductor technology, a test point is a design that needs to be synthesized or fabricated, and the test set needs to cover the whole technology library.

Test Set Redundancy Reduction for Digital Design Verification

The verification of a digital design is carried out at each stage of the EDA flow, in which the verification space of a digital design under test (DUT) usually includes...
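The FSM notion of coverage above can be made concrete with a toy example: run each input sequence of a test set through an FSM and report the fraction of states reached. The 4-state machine and input sequences here are invented for illustration only.

```python
import random

# Toy illustration of "coverage" for digital verification: fraction
# of FSM states a set of input sequences actually visits.
TRANSITIONS = {            # state -> {input_bit: next_state}
    "S0": {0: "S0", 1: "S1"},
    "S1": {0: "S2", 1: "S1"},
    "S2": {0: "S0", 1: "S3"},
    "S3": {0: "S3", 1: "S0"},
}

def state_coverage(test_set):
    """Fraction of FSM states visited by a set of input sequences."""
    visited = {"S0"}                     # reset state always covered
    for seq in test_set:
        state = "S0"
        for bit in seq:
            state = TRANSITIONS[state][bit]
            visited.add(state)
    return len(visited) / len(TRANSITIONS)

random.seed(0)
tests = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
print(f"state coverage: {state_coverage(tests):.2f}")
```

A redundancy-reduction method would then look for a smaller test set that keeps this coverage number unchanged.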

Test & Diagnosis Complexity Reduction using GCNs

By Ishan Aphale

There are mainly two ways of accelerating the verification and testing process: 1) reducing the test set redundancy; 2) reducing the complexity of the testing, verification, and diagnosis process. To reduce test set redundancy, or to optimize the generation of test instances, coverage-directed test generation has been studied for a long time and can be aided by many ML algorithms. Recently, test set redundancy reduction for analog/RF design, and even for the test of semiconductor technology, has attracted a lot of attention, and more ML methods are being applied to these problems. As for reducing verification and test complexity, some studies adopt low-cost tests for analog/RF design, while others focus on fast bug classification and localization. Recently, GCNs have been used to solve the observation point insertion problem in the testing stage. Inserting an observation point between the output of module 1 and the input of module 2 will make the...
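To give a feel for the GCN machinery mentioned above (without reproducing any specific paper's model), the sketch below computes one graph-convolution layer in the standard Kipf & Welling form, ReLU(Â X W), over a tiny 3-node chain graph standing in for a netlist. The graph, features, and weights are all toy values.

```python
import numpy as np

# One GCN message-passing layer over a tiny 3-node chain graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                     # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization

X = np.eye(3)                             # one-hot node features
W = np.full((3, 2), 0.5)                  # toy weight matrix
H = np.maximum(A_norm @ X @ W, 0)         # ReLU(A_norm @ X @ W)
print(H)  # each node now carries a 2-dim embedding
```

In an observation-point-insertion setting, such node embeddings would feed a classifier that scores each net as "needs an observation point" or not.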

Machine Learning for Circuit Topology Design Automation

In analog circuit design, the two important steps are topology design and the determination of device sizes and parameters. The topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. The process of topology design is time consuming, and an unsuitable topology leads to redesign. Even today, automation tools for topology design remain much less explored due to the high degree of freedom involved. The three cases where ML is used in topology design are:

1. Topology Selection

Fuzzy logic is an approach to computing based on "degrees of truth" rather than the usual binary values of 1 (true) or 0 (false). A fuzzy-logic-based topology selection tool, FASY, was proposed in 1996. It uses fuzzy logic to describe relationships between specifications (e.g., DC gain) and alternatives, and uses backpropagation to train the optimizer. Along with that, CNN (convolutional neural ...
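A minimal sketch of fuzzy-logic topology selection, in the spirit of (but not taken from) FASY: each candidate topology gets a membership function mapping a DC-gain spec to a degree of truth in [0, 1], and the topology with the highest degree wins. The topology names, membership shapes, and dB ranges are all invented for illustration.

```python
def triangular(x, a, b, c):
    """Triangular membership: degree of truth in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical membership functions mapping a DC-gain spec (dB) to
# how well each candidate topology suits it (ranges are made up).
topologies = {
    "single-stage":   lambda g: triangular(g, 0, 20, 45),
    "two-stage":      lambda g: triangular(g, 35, 60, 85),
    "folded-cascode": lambda g: triangular(g, 50, 80, 110),
}

spec_gain_db = 65
scores = {name: f(spec_gain_db) for name, f in topologies.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # -> two-stage 0.8
```

The "degrees of truth" idea shows up directly: a 65 dB spec is partially compatible with two topologies at once, rather than matching exactly one.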

High Level Synthesis using Linear Regression

High-Level Synthesis (HLS) works at the algorithmic level in hardware design flows. HLS tools provide automatic conversion from a C/C++/SystemC-based specification to hardware description languages like Verilog or VHDL. These HLS tools have been shown to improve productivity in customized hardware design. In certain cases, the long synthesis time for each design restricts design space exploration (DSE). DSE is the process of finding a design solution that optimally meets the design requirements from a space of tentative design points. This exploration is quite complex considering the various levels of abstraction and the entire process of selecting parameter values, choosing an algorithm, and then optimizing it. Machine Learning for High-Level Synthesis: Machine learning techniques are applied to improve the performance of HLS tools, with three major advantages: 1. Fast and accurate result estimation 2. Refining the conventional ...
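As a toy version of using regression to speed up DSE, the sketch below fits a linear model mapping two hypothetical HLS directive settings (loop unroll factor and pipeline initiation interval) to latency on synthetic data; the fitted model can then rank unexplored design points without invoking the slow synthesis flow. The features and the latency formula are invented for illustration.

```python
import numpy as np

# Toy sketch: learn latency as a function of HLS directive settings.
rng = np.random.default_rng(1)
unroll = rng.integers(1, 9, 50).astype(float)   # unroll factor 1..8
ii = rng.integers(1, 5, 50).astype(float)       # pipeline II 1..4
# Synthetic "measured" latency: unrolling divides work, II adds stalls.
latency = 1000 / unroll + 50 * ii + rng.normal(0, 5, 50)

# Linear regression on transformed features via least squares.
X = np.column_stack([1 / unroll, ii, np.ones(50)])
coef, *_ = np.linalg.lstsq(X, latency, rcond=None)
pred = X @ coef

ss_res = ((latency - pred) ** 2).sum()
ss_tot = ((latency - latency.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"fit R^2: {r2:.3f}")
```

In a real DSE loop, one would synthesize a small sample of design points, fit such a model, and use its predictions to decide which points deserve a full synthesis run.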

Machine Learning for Result Estimation

Introduction

High-Level Synthesis (HLS) tools are widely used in hardware design. The reports from HLS tools provide important guidance for tuning the high-level directives. However, acquiring accurate result estimates at an early stage is difficult due to complex optimizations in the physical synthesis, imposing a trade-off between accuracy (waiting for post-synthesis results) and efficiency (evaluating at the HLS stage). ML can be used to improve the accuracy of HLS reports by learning from real design benchmarks.

Estimation of Timing, Resource Usage, and Operation Delay

The overall workflow of timing and resource usage prediction is shown in the following figure. The main methodology is to train an ML model that takes HLS reports as input and outputs a more accurate implementation report without conducting the time-consuming post-implementation flow. The workflow can be divided into two steps: data processing and training estimation models.

Step 1 - Data Processing: To enable ML...
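The two-step workflow described above can be sketched on synthetic data: step 1 standardizes features pulled from HLS reports, and step 2 trains an estimator mapping them to a post-implementation result. The feature names (HLS LUT estimate, DSP count) and the over-estimation pattern are assumptions for illustration, not from a real tool.

```python
import numpy as np

# Sketch of the two-step estimation workflow on synthetic data.
rng = np.random.default_rng(2)
n = 80
hls_lut_est = rng.uniform(500, 5000, n)        # LUTs per HLS report
dsp_count = rng.integers(0, 20, n).astype(float)
# Pretend post-implementation truth: HLS over-estimates LUTs,
# each DSP brings some glue logic (toy model).
post_impl = 0.7 * hls_lut_est + 30 * dsp_count + rng.normal(0, 20, n)

# Step 1: data processing -- standardize the report features.
X = np.column_stack([hls_lut_est, dsp_count])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([X, np.ones(n)])

# Step 2: train the estimation model (ordinary least squares).
w, *_ = np.linalg.lstsq(X, post_impl, rcond=None)
err = np.abs(X @ w - post_impl).mean()
print(f"mean absolute calibration error: {err:.1f} LUTs")
```

The point of the exercise mirrors the post: a cheap learned correction of the HLS report approaches post-implementation accuracy without running implementation at all.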

Machine Learning for Analog Layout

By Ishan Aphale

In the back-end analog/mixed-signal (AMS) design flow, well generation persists as a fundamental challenge for layout compactness, routing complexity, circuit performance, and robustness. The immaturity of AMS layout automation tools stems to a large extent from the difficulty of comprehending and incorporating designer expertise. To mimic the behavior of experienced designers in well generation, we can use a generative adversarial network (GAN) guided well generation framework with a post-refinement stage that leverages previous high-quality, manually crafted layouts. Guiding regions for wells are first created by a trained GAN model, after which the well generation results are legalized through post-refinement to satisfy design rules. The network learns and mimics designers' behavior from manual layouts. Experiments show that generated wells have post-layout circuit performance comparable to manual designs on an op-amp circuit. The power of machine learning algorithms ha...
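A full GAN is beyond a short sketch, but the post-refinement (legalization) stage lends itself to a minimal illustration: take a raw well edge pair proposed by a generator, snap it to the manufacturing grid, and enforce a minimum-width rule. The grid pitch, minimum width, and 1-D simplification are all invented for illustration; real design rules are far richer.

```python
# Toy 1-D legalization of a generator-proposed well (values invented).
GRID = 5        # nm manufacturing grid (illustrative)
MIN_WIDTH = 40  # nm minimum well width rule (illustrative)

def legalize(x0, x1):
    """Snap well edges to the grid, then widen to the minimum width."""
    x0 = round(x0 / GRID) * GRID
    x1 = round(x1 / GRID) * GRID
    if x1 - x0 < MIN_WIDTH:
        x1 = x0 + MIN_WIDTH     # grow the right edge to satisfy the rule
    return x0, x1

print(legalize(12.0, 33.0))  # -> (10, 50)
```

This mirrors the division of labor in the framework: the learned model proposes a guiding region, and a deterministic rule-based pass guarantees the output is design-rule clean.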