Bugfix: low precision weight printing
Created by: thesps
Weights are 'quantized' when written to the header / txt file to limit the file size. It seems we have been printing them at a lower precision than what the chosen data type can represent, thereby adding an unnecessary extra quantization to models. I found this when evaluating low-precision QKeras models, but it affects all models. I consider this a bug, so I'm making this PR straight to master rather than to the QKeras development branch.
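How many digits are actually needed follows directly from the type: any multiple of an LSB of 2**-n terminates after at most n decimal digits (since 2**-n = 5**n / 10**n), so printing with n = number of fractional bits decimal places is lossless. A minimal sketch of that rule (the helper name is hypothetical, not code touched by this PR):

```python
def decimal_digits_needed(width, int_bits):
    """Decimal places needed to print any ap_fixed<width, int_bits> value exactly.

    The LSB is 2**-(width - int_bits), and 2**-n = 5**n / 10**n terminates
    after exactly n decimal digits, so n decimal places are sufficient.
    """
    return max(width - int_bits, 0)

print(decimal_digits_needed(4, 0))    # 4  -> LSB 0.0625 needs 4 decimal places
print(decimal_digits_needed(16, 6))   # 10 -> LSB 0.0009765625 needs 10 decimal places
```

This matches the 4 and 10 decimal places seen in the 'after' outputs below.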
Here are some examples:
QKeras model with ap_fixed<4,0> types
The smallest value that can be represented is therefore the LSB:
LSB value 2**-4 = 0.0625
First weights from HLSModel object: array([[ 0.125 , -0.4375, -0.4375
HLS project before (weight header file):
weight2_t w2[1024] = {0.12, -0.44, -0.44, -0.50, 0.00, 0.12,
HLS project after:
weight2_t w2[1024] = {0.1250, -0.4375, -0.4375, -0.5000, 0.0000, 0.1250
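To make the effect concrete, here is a small illustrative snippet (not code from this PR) reproducing the before/after formatting for these weights, which all sit exactly on the 2**-4 grid:

```python
# Weights from the HLSModel above: all exact multiples of the ap_fixed<4,0> LSB (0.0625).
w = [0.125, -0.4375, -0.4375, -0.5]

# Before: 2 decimal digits. 0.12 and -0.44 are not multiples of 0.0625, so extra error is introduced.
print(', '.join('%.2f' % v for v in w))   # 0.12, -0.44, -0.44, -0.50

# After: 4 decimal digits. Every value on the ap_fixed<4,0> grid is reproduced exactly.
print(', '.join('%.4f' % v for v in w))   # 0.1250, -0.4375, -0.4375, -0.5000
```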
Jet tagging model in the branch: default precision ap_fixed<16,6>, LSB value 2**-10 = 0.0009765625
From Keras model:
In [17]: model.layers[1].get_weights()[0]
Out[17]:
array([[ 0.27313474, -0.12113316, 0.4952146 , ...,
The displayed precision is limited to 8 digits by numpy's print options.
HLS project before (w2.h):
model_default_t w2[1024] = {0.2731, -0.1211, 0.4952, 0.0374
HLS project after (w2.h):
model_default_t w2[1024] = {0.2731347382, -0.1211331636, 0.4952146113, 0.0374328494
Note that we don't round onto the data type's grid when printing, which is why the values don't end in 5.
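For comparison, here is what rounding onto the 2**-10 grid before printing would have produced (illustrative snippet, not code from this PR): any value on that grid terminates within 10 decimal digits and, unless it terminates earlier, its last non-zero digit is a 5, because 2**-10 = 5**10 / 10**10.

```python
lsb = 2.0 ** -10                 # 0.0009765625, the ap_fixed<16,6> LSB

w = 0.2731347382                 # first weight as printed in w2.h after this PR
w_on_grid = round(w / lsb) * lsb # what rounding onto the LSB grid before printing would give

print('%.10f' % w)               # 0.2731347382 -> not a multiple of the LSB, so no rounding was applied
print('%.10f' % w_on_grid)       # 0.2734375000 -> on-grid value ends in a 5 (followed by padding zeros)
```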