Adding GarNet and introducing some changes to the core to make GarNet possible

Javier Duarte requested to merge github/fork/yiiyama/graph-hls-paper into master

Created by: yiiyama

Adding support for a graph neural network (GarNet). The compatible Keras implementation of GarNet is not the original but this temporary-ish code. We will discuss how the new implementation can be included in the "official" repository.

hls4ml/templates/vivado/nnet_utils/nnet_garnet.h is the source code for the HLS GarNet. Aside from adding this file and inserting the conversion/configuration lines in keras_to_hls.py, hls_model.py, vivado_template.py, and vivado_writer.py, I introduced some changes to the hls4ml core software to accommodate the needs of graph networks:

  • keras_to_hls.py was written with a linear model (single input, single flow of data through the layers) in mind. In particular, layers being parsed communicated the shapes of their input and output arrays through the current_shape variable. Since GarNet takes two inputs (the array of vertex features and the number of vertices in the given sample), I replaced this communication mechanism with more explicit input_shapes and output_shapes variables. There is still an assumption that each layer outputs a single array, but that is easy to generalize if the need arises.
  • In hls_model.py and vivado_writer.py, I added some more handles to the models and layers that can be steered in the config yml file:
    • GlobalPipelining (model property): By default, the full model is either PIPELINEd or DATAFLOWed, depending on the IOType and Strategy parameters. The GlobalPipelining config allows setting the pipelining directly, or turning it off by setting it to None.
    • OutputPartitioning (layer property): By default, layer outputs are completely partitioned. Graph-network outputs are large arrays, for which complete partitioning would be a problem. To specify a partitioning, add a line like OutputPartitioning: partition,cyclic,4 (which translates to ARRAY_PARTITION cyclic factor=4) under the LayerType or LayerName config. (GarNet layers automatically set their output partitioning from the reuse_factor, so at the moment users don't actually need this config for anything.)
    • CustomizeIO, InterfaceMode (model properties): The input and last layers can be given a custom partitioning through the OutputPartitioning config. Set CustomizeIO to 1 to have the model I/O actually partitioned accordingly. InterfaceMode allows setting the value of the HLS INTERFACE pragma by hand.
  • In vivado_template.py and templates.py, I added an "include list" feature so that only the header files actually used by the given model are included. Whenever we add a layer, we need to specify its required headers in vivado_template.py.
  • C++ template reorganization: I wanted parameters.h to include the weights for GarNet (so that the config structs can refer to the weight arrays). To make this happen, header inclusions in the C++ templates have been shuffled:
    • Added a new file defines.h that has the hls-fpga-machine-learning insert numbers and hls-fpga-machine-learning insert layer-precision lines (originally in parameters.h)
    • The hls-fpga-machine-learning insert weights line is now in parameters.h
    • myproject.h includes defines.h
    • myproject.cpp includes parameters.h
    • myproject_test.cpp no longer includes parameters.h (it only needed the number macros and typedefs, which are now in defines.h and come in through myproject.h)
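The explicit shape bookkeeping described in the first bullet can be sketched as follows. This is a minimal, hypothetical illustration, not the actual keras_to_hls.py code: parse_layers, the toy model dict, and all shapes are made up for the example.

```python
def parse_layers(layer_configs):
    """Toy layer parser: layer_configs maps a layer name to
    (list of input layer names, output shape)."""
    output_shape_by_layer = {}
    parsed = []
    for name, (inputs, out_shape) in layer_configs.items():
        # One input shape per named input: single-input layers get a
        # one-element list, GarNet-like layers get two.
        input_shapes = [output_shape_by_layer[i] for i in inputs]
        parsed.append({
            "name": name,
            "input_shapes": input_shapes,
            "output_shapes": [out_shape],  # still one output array per layer
        })
        output_shape_by_layer[name] = out_shape
    return parsed

# A GarNet-like layer consuming two inputs.
model = {
    "vertex_features": ([], (128, 8)),  # array of vertex features
    "n_vertices": ([], (1,)),           # number of vertices in the sample
    "garnet": (["vertex_features", "n_vertices"], (16,)),
}
layers = parse_layers(model)
# layers[2]["input_shapes"] == [(128, 8), (1,)]
```

A single current_shape variable cannot express the two-input case; carrying a list of input shapes per layer handles it while leaving single-input layers unchanged.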

It's a long list of feature updates; I can split it into multiple PRs if that's more convenient.
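For reference, the new model and layer handles might be combined in the config yml roughly like this. This is a hypothetical sketch: the key spellings follow the description above, but the exact nesting and the layer name garnet_out are placeholders, not the actual config schema.

```yaml
# Hypothetical config sketch; exact nesting in the config yml may differ.
GlobalPipelining: None      # override the default PIPELINE/DATAFLOW choice
CustomizeIO: 1              # actually partition the model I/O accordingly
InterfaceMode: ap_none      # hand-set the HLS INTERFACE pragma value
LayerName:
  garnet_out:               # placeholder layer name
    OutputPartitioning: partition,cyclic,4   # -> ARRAY_PARTITION cyclic factor=4
```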
