QDense (and other layers) with `quantized_bits(8, 0, alpha=1)` weights never achieve agreement?
Created by: lgray
Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

- Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
- Check that the issue hasn't already been reported, by checking the currently open issues.
- If there are steps to reproduce the problem, make sure to write them down below.
- If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.
Quick summary
Networks using `quantized_bits(8, 0, alpha=1)` in QKeras do not close with their hls4ml implementations. I've seen this for QSeparableConv2D, QConv2D, and QDense. As soon as I switch to `quantized_bits(8, 1, alpha=1)` for the QDense in the example below, the agreement becomes perfect.
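For reference, with `alpha=1` the quantizer scale is fixed at 1, so `quantized_bits(8, 0)` should behave, to a good approximation, like signed fixed-point rounding with a sign bit, 0 integer bits, and 7 fractional bits. A minimal NumPy sketch of that assumed behavior (the function name and the exact rounding/saturation details are my approximation, not qkeras code):

```python
import numpy as np

def quantized_bits_approx(x, bits=8, integer=0):
    """Rough stand-in for qkeras.quantized_bits(bits, integer, alpha=1):
    signed fixed-point with `integer` integer bits and
    bits - integer - 1 fractional bits; round to nearest, saturate."""
    frac_bits = bits - integer - 1      # one bit is spent on the sign
    step = 2.0 ** -frac_bits            # smallest representable increment
    lo = -2.0 ** integer                # most negative representable value
    hi = 2.0 ** integer - step          # most positive representable value
    return np.clip(np.round(x / step) * step, lo, hi)

# With bits=8, integer=0: step is 1/128, range is [-1, 127/128]
print(quantized_bits_approx(np.array([0.123, 0.5, 0.999, -1.5])))
```

The point is that with `integer=0` every representable value sits in [-1, 127/128], which is exactly the regime where an off-by-one in the integer-bit bookkeeping on the hls4ml side would show up.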
Steps to Reproduce
Install hls4ml from the main branch and then run:

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits

import hls4ml

x_in = Input((16,))
x = QDense(16,
           kernel_quantizer=quantized_bits(8, 0, alpha=1),
           bias_quantizer=quantized_bits(8, 0, alpha=1))(x_in)
x = QActivation("quantized_relu(10,2)")(x)
model = Model(inputs=x_in, outputs=x)

config = hls4ml.utils.config_from_keras_model(model, granularity='name', default_precision='fixed<64,16>')
config['LayerName'][list(config["LayerName"].keys())[0]]['Precision']['result'] = 'fixed<8,1>'
# config['LayerName'][list(config["LayerName"].keys())[1]]['Precision']['depthwise'] = 'fixed<4,2,RND_CONV,SAT_SYM>'
# config['LayerName'][list(config["LayerName"].keys())[1]]['Precision']['pointwise'] = 'fixed<4,2,RND_CONV,SAT_SYM>'
print(config)

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='minimalrepro_hls4ml/hls4ml_prj',
    part='xcu250-figd2104-2L-e', io_type="io_stream",
)
hls_model.compile()

# Quantize the random inputs so both models see exactly representable values
# data = quantized_bits(8, 0, alpha=1)(np.random.rand(10000, 13, 21, 20)).numpy()
data = quantized_bits(8, 0, alpha=1)(np.random.rand(10000, 16)).numpy()

qkeras_out = model.predict(data)
hls_out = hls_model.predict(data)

# Scatter the two predictions against each other; points off the
# diagonal indicate disagreement between QKeras and hls4ml
plt.figure()
plt.scatter(hls_out.flatten(), qkeras_out.flatten(), s=0.2)
min_x = min(np.amin(hls_out), np.amin(qkeras_out))
max_x = max(np.amax(hls_out), np.amax(qkeras_out))
plt.plot([min_x, max_x], [min_x, max_x], c='gray')
plt.xlabel('hls4ml')
plt.ylabel('QKeras')
```
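To quantify the mismatch beyond the scatter plot, a small helper can report the worst-case deviation and the fraction of disagreeing outputs. This is a sketch I use for illustration (the helper name and the stand-in arrays are mine; in the repro above the inputs would be `qkeras_out` and `hls_out`):

```python
import numpy as np

def report_agreement(qkeras_out, hls_out, atol=1e-12):
    """Print the worst-case deviation and the fraction of mismatched outputs."""
    diff = np.abs(np.asarray(qkeras_out) - np.asarray(hls_out))
    print(f"max |qkeras - hls4ml| : {diff.max()}")
    print(f"mismatched outputs    : {np.mean(diff > atol):.2%}")
    return diff.max()

# Stand-in arrays; replace with model.predict(data) and hls_model.predict(data)
a = np.array([0.0, 0.25, 0.5])
b = np.array([0.0, 0.25, 0.5])
report_agreement(a, b)
```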
Expected behavior
Perfect agreement between the QKeras model and the compiled hls4ml model, as is the case with `quantized_bits(8, 1, alpha=1)`.