High Level Synthesis using Linear Regression
High Level Synthesis (HLS) works at the algorithmic level of hardware design. HLS tools automatically convert C/C++/SystemC-based specifications into hardware description languages such as Verilog or VHDL, and they have been shown to improve productivity in customised hardware design. However, the long synthesis time for each design limits design space exploration (DSE). DSE is the process of finding a design solution that optimally meets the design requirements from a space of candidate design points. This exploration is complex given the various levels of abstraction and the entire process from selecting parameter values to choosing an algorithm and then optimising it.
Machine Learning for High Level Synthesis:
Machine Learning techniques are applied to improve the performance of HLS tools, with three major advantages:
1. Fast and accurate result estimation
2. Refining conventional DSE algorithms
3. Reformulating DSE as an active learning problem
Performance Prediction using ML:
While HLS-generated register-transfer-level (RTL) models are generally not human-readable, HLS tools provide a set of reports that quantitatively convey the expected performance, timing, resource usage, and composition of the synthesized RTL design. ML can improve the accuracy of these reports by learning from real design benchmarks. Supervised learning techniques such as Linear Regression, Random Forest, and Support Vector Machines are used to estimate the quality of results by training estimation models on the processed data. Features extracted from the HLS reports can be used to predict post-implementation results accurately without actually running the implementation flow. Training ML models on these features yields high accuracy for the estimation tasks.
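As a minimal sketch of this idea, the snippet below trains a regressor to predict a post-implementation resource count from HLS-report-style features. The feature columns, the synthetic dataset, and the choice of Random Forest are illustrative assumptions, not a prescribed flow; a real setup would parse these values from the HLS tool's reports and pair them with measured implementation results.

```python
# Sketch: predict post-implementation LUT usage from HLS-report features.
# The feature columns and data are synthetic stand-ins for a real benchmark set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_designs = 200

# Each row is one design's HLS-report features
# (e.g. estimated LUTs, FFs, DSPs, BRAMs, latency, target clock).
X = rng.uniform(0, 1, size=(n_designs, 6))
# Synthetic "post-implementation LUTs" correlated with the HLS estimates.
y = 5000 * X[:, 0] + 800 * X[:, 2] + rng.normal(0, 100, n_designs)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))
```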
Linear Regression for Performance Prediction:
Linear regression is a prediction-oriented machine learning algorithm that models the relationship between a target variable and one or more explanatory variables by fitting a linear equation to observed data.
To enable ML techniques for HLS estimation, a dataset of implementation results is first built. Features are extracted from this dataset, and redundant or irrelevant features are removed. Regression models are then trained to estimate post-implementation resource usage, and classification models are trained to predict whether the target clock period is met for the implemented design, as sketched below.
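A hedged sketch of this step follows: it prunes near-constant and highly correlated features, then fits a regression model for resource usage and a classifier for timing closure. The thresholds and synthetic data are assumptions chosen only for illustration.

```python
# Sketch: prune redundant features, then train a regressor (resource usage)
# and a classifier (target clock met or not). Data here is synthetic.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 10))
X[:, 3] = 0.5                       # near-constant column (redundant)
X[:, 4] = X[:, 0] * 0.99            # highly correlated column (redundant)
y_resource = 1000 * X[:, 0] + 200 * X[:, 1] + rng.normal(0, 10, 300)
y_timing_met = (X[:, 2] + 0.1 * rng.normal(size=300) > 0.5).astype(int)

# Drop near-constant features.
vt = VarianceThreshold(threshold=1e-3)
X_pruned = vt.fit_transform(X)

# Drop one of each pair of highly correlated features.
corr = np.corrcoef(X_pruned, rowvar=False)
keep = [i for i in range(X_pruned.shape[1])
        if not any(abs(corr[i, j]) > 0.95 for j in range(i))]
X_pruned = X_pruned[:, keep]

reg = Lasso(alpha=0.1).fit(X_pruned, y_resource)           # resource estimate
clf = LogisticRegression().fit(X_pruned, y_timing_met)     # timing-met prediction
print("Kept features:", len(keep), "| R^2:", round(reg.score(X_pruned, y_resource), 3))
```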
The classic linear regression model, $\hat{y}_i = x_i^\top w$, models the target $\hat{y}_i$ as a linear combination of the features $x_i$. Linear regression fits this model to the training data to determine $w$, the vector of coefficients of the learned model, such that a loss function is minimized. We use the Lasso linear model with the loss function $\|Xw - y\|_2^2 + \gamma \|w\|_1$, which minimizes the least-squares penalty on the training data with L1 regularization. By tuning the hyperparameter $\gamma$, the L1 regularization term $\gamma \|w\|_1$ allows us to induce varying degrees of sparsity in $w$ and, in turn, regulate the complexity of the model.
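The sketch below illustrates this with scikit-learn's Lasso, whose alpha parameter plays the role of $\gamma$; the synthetic data and the grid of alpha values are assumptions chosen only to show how stronger regularization drives more coefficients of $w$ to zero.

```python
# Sketch: L1-regularized (Lasso) regression; larger alpha (gamma) -> sparser w.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:4] = [3.0, -2.0, 1.5, 0.5]           # only 4 features truly matter
y = X @ true_w + rng.normal(0, 0.1, 200)

for alpha in [0.001, 0.01, 0.1, 1.0]:        # alpha corresponds to gamma
    model = Lasso(alpha=alpha).fit(X, y)
    nonzero = np.count_nonzero(model.coef_)
    print(f"alpha={alpha:<6} nonzero coefficients: {nonzero}")
```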
The accuracy of the model can be calculated using the relative absolute error (RAE).
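As a small worked example, the snippet below computes RAE, taken here as the sum of absolute prediction errors divided by the sum of absolute deviations of the true values from their mean (one common definition; other variants exist). The example values are hypothetical.

```python
# Sketch: relative absolute error (RAE) of predictions against true values.
import numpy as np

def relative_absolute_error(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sum(np.abs(y_pred - y_true)) / np.sum(np.abs(y_true - y_true.mean()))

y_true = [100, 250, 300, 420]      # e.g. measured post-implementation resource counts
y_pred = [110, 240, 310, 400]      # model estimates
print("RAE:", round(relative_absolute_error(y_true, y_pred), 3))
```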
References:
- Machine Learning for Electronic Design Automation: A Survey
- Pyramid: Machine Learning Framework to Estimate the Optimal Timing and Resource Usage of a High-Level Synthesis Design
- Fast and Accurate Estimation of Quality of Results in High-Level Synthesis with Machine Learning
- https://www.sciencedirect.com/topics/computer-science/design-space-exploration