BatchNormalization scale and bias array size
Created by: jmitrevs
The `normalize` function call in both `nnet_batchnorm.h` and `nnet_batchnorm_stream.h` has an interface of:

```cpp
template<class data_T, class res_T, typename CONFIG_T>
void normalize(
    data_T data[CONFIG_T::n_in],
    res_T res[CONFIG_T::n_in],
    typename CONFIG_T::scale_t scale[CONFIG_T::n_in],
    typename CONFIG_T::bias_t bias[CONFIG_T::n_in]
)
```
However, in the case when `CONFIG_T::n_filt != -1`, only the first `CONFIG_T::n_filt` entries of the `scale` and `bias` arrays are actually used, and that value is usually much smaller than `CONFIG_T::n_in`. The declared array sizes should be updated to reflect this (potentially by me).
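To illustrate the mismatch, here is a minimal sketch of the access pattern (the `config` struct and the channels-last `i % n_filt` indexing are assumptions for illustration, not the exact hls4ml implementation): the arrays are declared with `n_in` elements, but when `n_filt != -1` the index wraps at `n_filt`, so only the first `n_filt` entries are ever read.

```cpp
// Hypothetical config mirroring the hls4ml CONFIG_T pattern.
struct config {
    static const unsigned n_in = 8; // total elements per call
    static const int n_filt = 2;    // channel count (-1 for dense layers)
    typedef float scale_t;
    typedef float bias_t;
};

template <class data_T, class res_T, typename CONFIG_T>
void normalize(data_T data[CONFIG_T::n_in], res_T res[CONFIG_T::n_in],
               typename CONFIG_T::scale_t scale[CONFIG_T::n_in],
               typename CONFIG_T::bias_t bias[CONFIG_T::n_in]) {
    for (unsigned i = 0; i < CONFIG_T::n_in; i++) {
        // When n_filt != -1 the index wraps at n_filt, so entries
        // scale[n_filt..n_in-1] and bias[n_filt..n_in-1] are never read
        // even though the arrays are declared with n_in elements.
        unsigned idx = (CONFIG_T::n_filt == -1)
                           ? i
                           : i % static_cast<unsigned>(CONFIG_T::n_filt);
        res[i] = data[i] * scale[idx] + bias[idx];
    }
}
```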
Two options that I can think of are to either introduce another constant, say `CONFIG_T::n_scale_bias`, that is set to the actual size of the `scale` and `bias` arrays, or to have separate `normalize` variants depending on the value of `CONFIG_T::n_filt`.
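The first option could look roughly like the sketch below. Everything here is an assumption about how the fix might be written (`n_scale_bias` is the name proposed in this issue, not an existing hls4ml constant, and the wrapping index is for illustration): the parameter arrays are sized by the new constant, which a code generator would set to `n_filt` for convolutional layers and to `n_in` for dense layers.

```cpp
// Hypothetical config: n_scale_bias sizes the parameter arrays.
struct config {
    static const unsigned n_in = 8;
    static const int n_filt = 2;
    // Assumed convention: n_scale_bias = n_filt when n_filt != -1,
    // otherwise n_scale_bias = n_in.
    static const unsigned n_scale_bias = 2;
    typedef float scale_t;
    typedef float bias_t;
};

template <class data_T, class res_T, typename CONFIG_T>
void normalize(data_T data[CONFIG_T::n_in], res_T res[CONFIG_T::n_in],
               typename CONFIG_T::scale_t scale[CONFIG_T::n_scale_bias],
               typename CONFIG_T::bias_t bias[CONFIG_T::n_scale_bias]) {
    for (unsigned i = 0; i < CONFIG_T::n_in; i++) {
        // Wrapping at n_scale_bias covers both cases: when
        // n_scale_bias == n_in the index reduces to i.
        unsigned idx = i % CONFIG_T::n_scale_bias;
        res[i] = data[i] * scale[idx] + bias[idx];
    }
}
```

This keeps a single `normalize` and only shrinks the declared (and, in HLS, the synthesized) storage, whereas the second option would duplicate the function body per case.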