WIP: More struct usage for Input Typenames

Javier Duarte requested to merge ejk/more-struct-usage into master

Created by: ejk43

To my surprise, it seems that we can actually define typenames inside a template struct. (Relevant nnet changes here: https://github.com/hls-fpga-machine-learning/hls-fpga-machine-learning/compare/ejk/more-struct-usage?expand=1#diff-b37c065f136460b015788b96b5c25102)

I put together a new proof of concept that pushes the bias, weight, and accumulator type definitions into the config structure. I have not edited the Python generation script yet, so it's not fully compatible with everything else (hence the Work In Progress), but I thought I'd float the idea to see how far you'd like to go with the struct input.
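
For concreteness, here's a minimal sketch of what one of these per-layer config structs could look like. The field names, the ap_fixed widths, and the input size are illustrative assumptions on my part, not the exact code in this branch (see the linked diff for that); N_LAYER_1 is from the example below:

// Hypothetical per-layer config -- widths and names are illustrative only
#include "ap_fixed.h"

#define N_LAYER_1 32   // assumed value, normally from the generated header

struct config1 {
    // Layer dimensions
    static const unsigned n_in  = 10;   // e.g. N_INPUTS in the generated header
    static const unsigned n_out = N_LAYER_1;
    // Per-layer type definitions, now carried by the config struct
    typedef ap_fixed<32, 16> accum_t;
    typedef ap_fixed<16, 6>  weight_t;
    typedef ap_fixed<16, 6>  bias_t;
};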

Here's example code from the main function:

nnet::compute_layer<input_t, layer1_t, config1>(data, logits1, w1, b1);
nnet::relu<layer1_t, layer1_t, N_LAYER_1>(logits1, layer1_out);
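
Inside nnet, compute_layer can then pull the weight, bias, and accumulator types straight out of the config template parameter. This is a hedged sketch of the kind of signature that enables -- the real changes live in the linked diff and may differ in detail:

// Sketch only: a compute_layer that reads its types from CONFIG_T
namespace nnet {

template<class data_T, class res_T, typename CONFIG_T>
void compute_layer(
    data_T data[CONFIG_T::n_in],
    res_T  res[CONFIG_T::n_out],
    typename CONFIG_T::weight_t weights[CONFIG_T::n_in * CONFIG_T::n_out],
    typename CONFIG_T::bias_t   biases[CONFIG_T::n_out])
{
    for (unsigned ii = 0; ii < CONFIG_T::n_out; ii++) {
        // Accumulator type also comes from the config
        typename CONFIG_T::accum_t acc = biases[ii];
        for (unsigned jj = 0; jj < CONFIG_T::n_in; jj++) {
            acc += data[jj] * weights[jj * CONFIG_T::n_out + ii];
        }
        res[ii] = (res_T) acc;
    }
}

} // namespace nnet

The call site then only needs the input/output types and the config struct, as in the snippet above.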

Personally, I think it cleans up the interface nicely and makes the per-layer type definitions very explicit (i.e., each layer has its OWN config structure, with its own internal typedefs). Thoughts?
