LOGIC SYNTHESIS USING DNN
By Anmol Salvi
INTRODUCTION:
With advancements and growth in the data science industry, the applications of DNNs (Deep Neural Networks) and various other machine learning techniques seem limitless. In the same spirit, DNNs also find application in logic synthesis, where they can be used to better optimize the entire process and to reduce the time it consumes. Before looking into how DNN techniques are applied to logic synthesis, a few key terms need to be discussed:
A) Design Space Complexity of ALS (Approximate Logic Synthesis): ALS explores the design space through gate replacement, visiting each node, approximating it, and calculating the error rate that this approximation causes at the primary outputs. Its complexity analysis therefore has two parts (a cost sketch follows this list):
1) Node replacement
2) Error propagation
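To make the cost of this exhaustive exploration concrete, here is a minimal Python sketch; the netlist interface (circuit.nodes(), candidate_approximations(), simulate_error_rate(), and so on) is assumed purely for illustration and is not a real library API. Every candidate replacement triggers a full error propagation to the primary outputs, which is exactly the expense the DNN-based flow described later tries to avoid.

def exhaustive_als(circuit, error_threshold):
    """Sketch of exhaustive ALS exploration; all circuit helpers are assumed."""
    for node in circuit.nodes():                      # part 1: node replacement
        for candidate in candidate_approximations(node):
            circuit.replace(node, candidate)
            # Part 2: error propagation, i.e. re-computing the error rate
            # observed at the primary outputs after this replacement.
            error = simulate_error_rate(circuit)
            if error > error_threshold:
                circuit.undo_last_replacement()       # keep only safe replacements
    return circuit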
B) Deep Neural Networks: A DNN has an input layer, a number of hidden layers (groups of neurons used for feature extraction), and an output layer. The outputs of the hidden layers are passed through a non-linear activation function such as ReLU, leaky ReLU, or sigmoid. The network's prediction error is minimized through backpropagation, which updates the weights at the important nodes.
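As a concrete illustration of this structure, the short tf.keras sketch below builds such a network; the 64-dimensional input and the layer widths are assumptions chosen only for illustration, not values taken from a specific ALS flow.

import tensorflow as tf

def build_error_predictor(num_features=64):
    """A small feed-forward DNN: input layer, ReLU hidden layers, sigmoid output."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),    # features describing a candidate replacement
        tf.keras.layers.Dense(128, activation='relu'),   # hidden layers for feature extraction
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),  # predicted worst error rate at a primary output
    ])
    return model

The sigmoid output keeps the prediction in [0, 1], which is convenient both as an error-rate estimate and for the binary cross-entropy loss used during training (see the implementation section).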
IMPLEMENTATION:
The input circuit to be optimized is first mapped using the cut-based technology mapping module of ABC. Depending on the optimization target (e.g., power), critical nodes are then extracted and passed to the approximation module. This module replaces a node and consults the trained DNN model to predict the worst error rate at a primary output. If this worst error rate does not violate the error constraint predetermined by the user, the approximation module accepts the replacement; otherwise it undoes the last replacement and moves on to another node. The approximation process ends when either the error budget is fully consumed or all nodes in the circuit have been visited.
For training the DNN, the learning rate should be set to a small value, and an optimizer such as Adam is preferred. The number of epochs can be determined by monitoring the behaviour and accuracy on the validation set; starting with more than 10 epochs is preferred. Binary cross-entropy is the preferred loss function.
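A rough sketch of both the training setup and the accept/undo loop is given below, reusing build_error_predictor from the earlier sketch. The dummy training data, the way the error budget is consumed, and the circuit helpers (extract_critical_nodes, node_features, replace_with_approximation, undo_last_replacement) are assumptions made purely for illustration; in a real flow the features and labels would come from the ABC-mapped netlist.

import numpy as np
import tensorflow as tf

# --- Training: small learning rate, Adam, binary cross-entropy, validation
# monitoring, as suggested above. X_train/y_train are placeholder data. ---
X_train = np.random.rand(1000, 64).astype('float32')
y_train = np.random.randint(0, 2, size=(1000,)).astype('float32')

model = build_error_predictor()                       # from the earlier sketch
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2)

# --- Greedy approximation driven by the trained predictor; the circuit
# interface below is hypothetical. ---
def approximate(circuit, error_budget):
    remaining = error_budget
    for node in extract_critical_nodes(circuit):       # e.g. power-critical nodes
        circuit.replace_with_approximation(node)
        features = node_features(node).reshape(1, -1)  # shape (1, num_features)
        predicted_error = float(model.predict(features, verbose=0)[0, 0])
        if predicted_error <= remaining:               # within the error constraint
            remaining -= predicted_error               # accept and consume budget
        else:
            circuit.undo_last_replacement()            # reject, move to the next node
        if remaining <= 0:                             # budget fully consumed
            break
    return circuit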
ADVANTAGES:
A) Saves time, since the trained DNN predicts the worst error rate directly instead of propagating errors through the circuit for every candidate replacement.
B) Ensures greater accuracy in estimating the error introduced at the primary outputs.
C) Thereby ensures better overall optimization performance for the chosen target (e.g., power).
CONCLUSION:
One can thus conclude that logic synthesis using DNNs is one of the many boons of the advancement of the data science industry. Making the most of this technique to better optimize power consumption, time consumption, and other parameters is essential, and it can serve to revolutionize how logic synthesis is carried out in the VLSI industry.