
hls4ml Optimization API [Part 1]

Created by: bo3z

This pull request introduces the first part of the hls4ml Optimization API - an automated workflow for hardware-aware model compression. By formulating pruning and weight sharing as a linear optimisation problem, the workflow iteratively selects redundant weights, considering their overall impact on hardware. The tool currently supports Keras and QKeras models, with hardware objectives for both GPUs (FLOPs) and FPGAs (DSP, BRAM, FF) through the Vivado hls4ml backend. However, the tool is both hardware- and framework-agnostic - most of the concepts readily generalise to other frameworks (e.g. PyTorch) and other hardware (e.g. the Quartus backend). This allows end users to write custom objectives (e.g. Quartus latency optimisation) following a similar template.

Furthermore, this tool aims to bridge the gap between hls4ml and other libraries for model compression, such as TensorFlow Model Optimization and QKeras. The tool is directly integrated with QKeras and an updated version of Keras Surgeon, to aid model compression. Finally, this tool provides out-of-the-box support for structured pruning (filters, neurons), as well as gradient-based ranking methods.

The exact implementation and motivations are further explained in the attached presentation. Initial results are shown on both classification and regression, with various objectives including sparsity, GPU FLOP reduction, and Vivado DSP and FF utilisation. Since this is a large PR, it is recommended to review the commits one by one; each commit is self-contained and can be checked out on its own. They are briefly explained below.

Supporting document and presentation

Available at: https://indico.cern.ch/event/1278049/

Type of change

  • New feature (non-breaking change which adds functionality)
  • A new research paper code implementation

Description

Contributions:

  • A new pattern pruning approach, inspired by the parallelism of FPGAs. For more information on the inspiration behind this approach, see the attached presentation.
  • Easy, out-of-the-box support for structured pruning and weight sharing.
  • Formulation of pruning as a Knapsack optimisation problem, relative to a hardware objective - maximise network performance while minimising some hardware resource(s).
  • Integration with Keras Surgeon and extended support for QKeras, to further reduce model footprint.
  • End-to-end flow for model optimisation, including an extensible & open-source library for hardware-aware and hardware-agnostic pruning of Keras & TensorFlow models (a usage sketch follows this list).
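
For orientation, the end-to-end flow looks roughly like the sketch below. The top-level entry point and its argument names are placeholders, not the exact API - the full working example lives under the advanced section of the documentation.

```python
# Illustrative end-to-end sketch; the commented-out call is hypothetical.
import hls4ml
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential([Dense(32, activation='relu', input_shape=(16,)),
                    Dense(5, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Per-layer hls4ml configuration, consumed by hardware-aware objectives
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Hypothetical top-level call: choose a hardware objective (GPU FLOPs, or
# Vivado DSP / FF / BRAM), then iteratively mask redundant weights and retrain.
# optimized_model = optimize_keras_model_for_hls4ml(
#     model, config, objective='vivado_dsp', X=X_train, y=y_train)
```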

Tests

  • Eight new unit tests were written in the PyTest framework. These tests are stored under hls4ml/test/pytest/optimization. Each test covers a single addition to the framework and is best explained by the corresponding commit.
  • Results on unstructured sparsity and Vivado resource estimates are recorded below. GPU FLOP optimisation will form the basis of a future study.
  • A full working example of how to use the tool is provided in the documentation folder, under the advanced section. Benchmark models, data sets and automated scripts will shortly be available in an additional repository, to be made public soon. Raw synthesis results will be available on CERNBox.

Implementation Details

  • 8ba42060 - introduces configuration files for the tool, as well as the model attribute builder. The attribute builder extracts layer information and stores it in a framework-independent class. Depending on the objective, the attribute builder can take additional arguments, such as the hls4ml config dictionary. All of these are stored in a generic class and used for selecting algorithm parameters.
  • 12fba05d - introduces three schedulers for sparsity - constant increment, polynomially decaying and binary halving, where the search space is iteratively halved until the optimal sparsity is found (a minimal sketch of binary halving follows this list).
  • d95c9562 - introduces utils for training Keras models, such as model gradients, back-propagation with weight freezing, calculating per-layer sparsity etc. Additionally, two new regularisers are added - one for Dense-based layers (Dense, QDense, which also works for recurrent layers) and one for Conv2D layers (Conv2D, QConv2D). The regularisers can penalise either weight magnitude (pruning) or variance (weight sharing) at an arbitrary granularity - filter / neuron, block or pattern (a group-level regulariser sketch follows this list).
  • de51797b - introduces various solvers (exact, greedy, MIP etc.) for the Knapsack problem, which is used to formulate model compression. The reasoning behind formulating pruning as a linear program (LP) is given in the attached document. By considering hardware utilisation as the problem constraint and network performance as the objective function, it is possible to remove weights in a more informed way than unstructured pruning. Furthermore, this commit introduces the concept of objectives - one or more metrics, such as hardware utilisation, latency or parameter count, that the optimization problem should minimise (an illustrative greedy formulation follows this list).
  • a49a1134 - introduces the logic behind selecting redundant weights (or groups of weights), in an operation called masking. When ranking weights, both magnitude-based and gradient-based methods are possible. While gradient-based methods might produce better results, they are computationally expensive, so the choice is left to end users.
  • e655ab60 - introduces integration with Keras Surgeon, a library for removing structures (filters, neurons) from a Keras model and rewiring the graph. Keras Surgeon is no longer under active development, so it does not work with TensorFlow 2.3+. An updated version, with added support for QKeras models, is stored in a forked repo on my GitHub: https://github.com/bo3z/keras-surgeon
  • 399a98d4 - introduces a necessary prerequisite for model pruning, the model builder, which adds a regularisation loss to every optimizable layer, to capture the cost of removing some of the (groups of) weights during training. The hyperparameters are automatically set using Keras Tuner.
  • 47392ba9 - introduces the top-level function for Keras model compression and an objective for minimising GPU FLOPs.
  • a778e393 - introduces the wrapper function for compressing a Keras model given an hls4ml config dictionary, making use of the above function. Furthermore, it introduces objectives for minimising DSPs, FFs and BRAM [WIP].
  • 7cd25a01 - introduces documentation for the new tool, with a full working example.
  • f792ea6a and 82779ff7 - minor bug fixes and pre-commit updates.
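
As a flavour of the schedulers introduced in 12fba05d, below is a minimal sketch of binary halving, under the assumption that each step retrains the model at the midpoint sparsity and checks it against a performance tolerance. Class and method names are illustrative, not the actual implementation.

```python
class BinaryHalvingScheduler:
    """Minimal sketch of a binary-halving sparsity scheduler: the interval
    [lo, hi] is bisected each step, keeping the half in which the retrained
    model still meets the performance tolerance."""

    def __init__(self, lo=0.0, hi=1.0, tol=1e-3):
        self.lo, self.hi, self.tol = lo, hi, tol

    def next_sparsity(self):
        return (self.lo + self.hi) / 2

    def update(self, met_target):
        mid = self.next_sparsity()
        if met_target:
            self.lo = mid   # midpoint sparsity works - try pruning more
        else:
            self.hi = mid   # too aggressive - back off
        return self.hi - self.lo > self.tol  # whether to keep searching
```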
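
The group-level regularisers from d95c9562 can be pictured along these lines: a penalty on either the magnitude (pruning) or the variance (weight sharing) of each group of weights. This is only a sketch with illustrative names; the actual regularisers operate on Dense- and Conv2D-style layers directly.

```python
import tensorflow as tf

class GroupRegularizer(tf.keras.regularizers.Regularizer):
    """Sketch of a group-level regulariser. `groups` lists the flat weight
    indices forming each group (a filter, neuron, block or pattern)."""

    def __init__(self, groups, alpha=1e-3, mode='magnitude'):
        self.groups, self.alpha, self.mode = groups, alpha, mode

    def __call__(self, w):
        flat = tf.reshape(w, [-1])
        penalty = 0.0
        for idx in self.groups:
            g = tf.gather(flat, idx)
            if self.mode == 'magnitude':          # drive whole groups to zero
                penalty += tf.norm(g)             # group-lasso style penalty
            else:                                 # drive groups to one value
                penalty += tf.math.reduce_variance(g)
        return self.alpha * penalty
```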
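
To make the Knapsack formulation of de51797b concrete: each prunable group of weights has a value (its estimated importance to network performance, e.g. magnitude or gradient saliency) and a weight (the hardware resource it consumes, e.g. DSPs or BRAM blocks). Groups are kept so that the total resource stays within budget; everything else is masked. Below is a sketch of the greedy density-based solver; the commit also provides exact and MIP solvers.

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy 0-1 knapsack by value density: select the groups of weights
    to KEEP, maximising total importance subject to a resource budget.

    values   - importance of each prunable group (e.g. sum of |w|, saliency)
    weights  - resource cost of each group (e.g. DSPs, BRAM blocks)
    capacity - total resource budget
    Returns indices of groups to keep; all others are masked (pruned).
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / max(weights[i], 1e-9),
                   reverse=True)
    kept, used = [], 0
    for i in order:
        if used + weights[i] <= capacity:
            kept.append(i)
            used += weights[i]
    return kept

# Toy example: 4 groups, each costing 1 DSP, budget of 2 DSPs -> the two
# most important groups are kept and the other two are pruned.
print(greedy_knapsack([0.9, 0.1, 0.5, 0.3], [1, 1, 1, 1], capacity=2))
```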

Results

Comparison with TensorFlow Model Optimization

The proposed method is evaluated on a range of tasks, including jet classification, SVHN classification from the Fast CNNs paper, and a LeNet-like model for Fashion MNIST classification. First, the developed library is compared with TFMOT in terms of unstructured sparsity, across five trials. As seen, the two perform similarly, with hls4ml being significantly better on LeNet.

[Figure: compare_hls4ml_tfmot]

DSP-level pruning

Secondly, the method is evaluated on a range of reuse factors with Strategy set to Resource. These results are after full Vivado synthesis. Latency is reported from co-simulation, not the HLS estimate, in clock cycles, as min and max. Where the model has been pruned, it was accelerated using "Unrolled Dense" #806. The baseline models are accelerated using the current version of master, 0.7 - 0.7.1. The decrease in latency is likely because unrolled dense uses the pipeline pragma, while standard Resource uses dataflow. This is acceptable, as pruning also reduces LUT and FF usage. BM stands for baseline model, quantised to 16 bits (either <16, 6> or <16, 8>, depending on the accuracy); BP-DSP stands for a model optimised for DSP utilisation, again quantised to 16 bits; BP-MO stands for multi-objective optimisation, targeting both BRAM and DSP utilisation.

First, DSP-level pruning is tested - the idea is to verify the effects of "pattern pruning": pruning all the weights processed by the same DSP as the reuse factor varies. This is shown for jet tagging and SVHN, in both cases achieving a significant reduction in DSP utilisation. Furthermore, due to the way hls4ml transposes and stores weights in BRAM, BRAM utilisation is also likely to decrease (in the same way that unstructured pruning can sometimes remove whole structures).

[Figure: dsp_optimisation]
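
As an illustration of the pattern grouping, the snippet below partitions a flattened weight vector into the groups sharing a multiplier. It assumes, for the sketch only, a strided weight-to-multiplier mapping (weight i handled by multiplier i mod n_mult); the real index mapping is backend-specific.

```python
import numpy as np

def pattern_groups(w, reuse_factor):
    """Group a flattened weight vector by the DSP (multiplier) that would
    process it, assuming a strided mapping with
    n_mult = w.size // reuse_factor multipliers. Sketch only."""
    n_mult = w.size // reuse_factor
    return [np.arange(m, w.size, n_mult) for m in range(n_mult)]

w = np.arange(12.0)  # toy layer with 12 weights
for m, idx in enumerate(pattern_groups(w, reuse_factor=3)):
    print(f'DSP {m} processes weights {idx}')  # pruning one group frees 1 DSP
```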

Multi-objective pruning

Next, multi-objective pruning is verified - by pruning all the weights stored in the same BRAM block (precision was set to 18 bits, due to the 36-bit width of BRAM), one block of RAM and two DSPs can be removed for every pruned structure. Results are shown on jet tagging, since streaming CNNs overuse BRAM; the next table shows how this method can also apply to LeNet, significantly reducing DSP utilisation and slightly reducing BRAM.

[Figure: multi_objective]

Heterogeneous multi-objective pruning for fast inference of LeNet

Consider accelerating a LeNet - in its simple form, it is too large to be accelerated fully unrolled, as the dense layers have ~48k and ~10k weights. Therefore, the design is pruned and accelerated heterogeneously - the Conv2D layers use a Latency strategy with RF set to 1. The Dense layers use a Resource strategy - the first Dense layer uses a RF of 25 and the second one a RF of 12. The output layer uses a Latency strategy and RF = 1. The design is accelerated with <18, 8> precision. The effects of multi-objective pruning are shown in the table below. The algorithm will choose to prune some individual weights (a single DSP in the Conv2D layers) and some groups of weights (a single BRAM block and 2 DSPs in the Dense layers), depending on the solution of the Knapsack problem.

[Figure: lenet]
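
The heterogeneous per-layer setup described above can be expressed along these lines. The layer names are hypothetical; the dictionary structure follows hls4ml's name-granularity configuration.

```python
# Sketch of the heterogeneous configuration for the LeNet example above.
hls_config = {
    'Model': {'Precision': 'ap_fixed<18,8>', 'ReuseFactor': 1},
    'LayerName': {
        'conv1':  {'Strategy': 'Latency',  'ReuseFactor': 1},
        'conv2':  {'Strategy': 'Latency',  'ReuseFactor': 1},
        'dense1': {'Strategy': 'Resource', 'ReuseFactor': 25},
        'dense2': {'Strategy': 'Resource', 'ReuseFactor': 12},
        'output': {'Strategy': 'Latency',  'ReuseFactor': 1},
    },
}
```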

Finally, it is shown how multi-objective pruning can be used to accelerate a general-purpose CNN for fast image classification on a medium-range accelerator card, a ZCU102. The latency is reported in clock cycles, and the increase is likely due to writing results out from the accelerator card.

[Figure: Screenshot 2023-06-16 at 14 39 14]

Known limitations

This is the first part of the optimization API, introducing the software and ML-side of things. The second part will focus on hardware-specific implementations and improvements, including:

  • Code generation for Dense layers #809 uses exactly as many DSPs as required. However, there seems to be a small over-utilisation in LUTs / FFs, probably due to some fanout.
  • Streaming CNNs will sometimes overuse BRAM - it is still unclear why this is the case, as HLS estimates for BRAM are the same in unrolled and standard dense multiplication.
  • Lack of support for recurrent and Conv1D layers - the extensions follow a similar implementation to the existing code, but have not yet been tested.

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.
