
Update to conv1d, now accepting multiple filters and sum over channels

Javier Duarte requested to merge bjk/conv1d into master

Created by: benjaminkreis

This is an update to conv1d that:

  • Allows multiple filters
  • Sums over channels within a filter
  • Makes the stride configurable
  • Makes the padding configurable
  • Adds a flatten layer to connect to an activation or dense layer
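The behavior in the list above can be sketched in plain C++ (a minimal reference model, not the actual hls4ml templates; the function name `conv1d` and the loop structure are illustrative):

```cpp
#include <cassert>
#include <vector>

// Reference sketch: apply n_filt filters to an [n_chan][n_in] input,
// summing over channels within each filter, with configurable stride
// and zero padding. Output shape is [n_filt][n_out].
std::vector<std::vector<float>> conv1d(
    const std::vector<std::vector<float>>& x,               // [n_chan][n_in]
    const std::vector<std::vector<std::vector<float>>>& w,  // [n_filt][n_chan][filt_width]
    const std::vector<float>& b,                            // [n_filt]
    int stride, int pad)
{
    const int n_chan = x.size();
    const int n_in = x[0].size();
    const int n_filt = w.size();
    const int filt_width = w[0][0].size();
    const int n_out = (n_in + 2 * pad - filt_width) / stride + 1;

    std::vector<std::vector<float>> y(n_filt, std::vector<float>(n_out, 0.0f));
    for (int f = 0; f < n_filt; ++f) {
        for (int o = 0; o < n_out; ++o) {
            float acc = b[f];
            for (int c = 0; c < n_chan; ++c) {        // sum over channels
                for (int k = 0; k < filt_width; ++k) {
                    int i = o * stride + k - pad;     // zero-pad outside bounds
                    if (i >= 0 && i < n_in)
                        acc += x[c][i] * w[f][c][k];
                }
            }
            y[f][o] = acc;
        }
    }
    return y;
}
```

Flattening the `[n_filt][n_out]` result row by row into a 1D array is then what connects this to a dense layer.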

I've tested a few examples against Keras for networks similar in size to the now-updated example project "conv-1layer", and the results agree to within a couple of percent. I saw a number of problems with bigger networks (e.g. missing the timing target, missing the pipeline target, using up all the memory ports), but that is exactly what we want to explore, so I think this is ready for testing now that it runs for a small CNN.

Some things that could/need to be improved:

  • Need to implement the multiplier limit for reuse and compression. The equation in there right now isn't correct (it's missing the stride, n_zeros, and possibly more).
  • Some of the parameters of the struct are redundant. E.g. the number of outputs can be derived from the number of inputs, filter size, stride, and padding; and the padding can be derived from the filter size and the type of padding. Etc.
  • If the act of flattening actually takes logic, as opposed to just being "virtual" in the C++ world, then we should only use it when connecting to a dense layer. For the conv1d -> activation -> conv1d case, we should instead make new activation functions that accept 2D arrays (features vs. channels).
  • Still need to write the hls_writer part.
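The parameter redundancy noted above can be made concrete with two small helpers (names and the "same"-padding convention are illustrative assumptions, not the actual struct fields; the "same" formula here assumes stride 1 and odd filter width):

```cpp
// Padding that preserves length for stride 1 and odd filter width
// ("same" padding in the Keras sense).
int same_pad(int filt_width) { return (filt_width - 1) / 2; }

// Number of outputs, derivable from the inputs, filter size,
// stride, and padding; storing it separately in the struct is redundant.
int n_outputs(int n_in, int filt_width, int stride, int pad) {
    return (n_in + 2 * pad - filt_width) / stride + 1;
}
```

So a config struct could carry only `n_in`, `filt_width`, `stride`, and the padding type, and compute the rest.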
