Unsupported layer type: SlicingOpLambda

Created by: rkovachfuentes

Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
  • Check that the issue hasn't already been reported, by checking the currently open issues.
  • If there are steps to reproduce the problem, make sure to write them down below.
  • If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.

Quick summary

Please give a brief and concise description of the bug.

An "ERROR: Unsupported layer type: SlicingOpLambda" exception is raised when configuring a QKeras model with hls4ml.utils.config_from_keras_model.
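
For illustration, a minimal sketch (the input shape here is hypothetical) of how Keras wraps tensor slicing in a SlicingOpLambda layer:

import tensorflow as tf

inp = tf.keras.Input(shape=(8, 8, 5))  # hypothetical shape
sliced = inp[..., :1]                  # Keras wraps the slice in a SlicingOpLambda layer
model = tf.keras.Model(inp, sliced)
model.summary()  # the slice appears as 'tf.__operators__.getitem' (SlicingOpLambda)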

Details

Please add to the following sections to describe the bug as accurately as possible.

Steps to Reproduce

Add what needs to be done to reproduce the bug. Add commented code examples and make sure to include the original model files / code, and the commit hash you are working on.

  1. Clone the hls4ml repository
  2. Check out the master branch at commit dd18adb1d3fb1ac3bf18c2b7feb37f44c10b6262
  3. Load the QKeras model from the HDF5 file, using: model = qkeras.utils.load_qmodel('/home/rkovachf/hls4ml-tutorial/hls4mltest.h5', custom_objects)
  4. The model architecture is defined by the following function:
# Imports needed to run this snippet (assumed from context; not in the original report):
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.models import Model
from qkeras import QConv2D, QActivation, QAveragePooling2D, QDense

def CreateQModel(shape):
    x = x_in = Input(shape)
    x = QConv2D(5,3, activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", name="conv2d1")(x)
    # Alternative first conv layer (commented out in the original):
    # QConv2D(5, 3,
    #         kernel_quantizer="stochastic_ternary",
    #         bias_quantizer="ternary", name="first_conv2d")(x)
    x = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relu1")(x)
    
    x_conv = QConv2D(5,3, activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", name="conv2d2")(x)
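    # NOTE: each channel slice below (x_conv[..., i:i+1]) is wrapped by Keras
    # in a SlicingOpLambda layer -- the layer type hls4ml rejects.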
    x = QAveragePooling2D(pool_size=(9, 3), strides=None, padding="valid", data_format=None)(x_conv[...,:1])
    y = QAveragePooling2D(pool_size=(3, 17), strides=None, padding="valid", data_format=None)(x_conv[...,1:2])
    cota = QAveragePooling2D(pool_size=(2,2), strides=None, padding="valid", data_format=None)(x_conv[...,2:3])
    cotb = QAveragePooling2D(pool_size=(2,2), strides=None, padding="valid", data_format=None)(x_conv[...,3:4])
    cov = QAveragePooling2D(pool_size=(2,2), strides=None, padding="valid", data_format=None)(x_conv[...,4:5])
    cota = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relucota")(cota)
    
    cotb = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relucotb")(cotb)
    
    cota = Flatten()(cota)
    cotb = Flatten()(cotb)
    x = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relux")(x)
    
    y = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="reluy")(y)
    
    x = Flatten()(x)
    y = Flatten()(y)
    x = QDense(units = 10, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense10x")(x)
    y = QDense(units = 10, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense10y")(y)
    cota = QDense(units = 10, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense10cota")(cota)
    cotb = QDense(units = 10, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense10cotb")(cotb)
    x = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relu2x")(x)
    
    y = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relu2y")(y)
    
    cota = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relu2cota")(cota)
    
    cotb = QActivation(activation="quantized_relu(bits = 16, integer = 4, use_sigmoid = 1)", 
                    name="relu2cotb")(cotb)
    
    x = QDense(units = 2, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense2x")(x)
    y = QDense(units = 2, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense2y")(y)
    cota = QDense(units = 2, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense2cota")(cota)
    cotb = QDense(units = 2, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "dense2cotb")(cotb)
    cov = Flatten()(cov)
    cov = QDense(units = 6, 
               kernel_quantizer="quantized_bits(bits = 8,integer = 4,symmetric=0)",
               use_bias= True,
               bias_quantizer= "quantized_bits(bits = 8,integer = 4,symmetric=0)",
               name = "densecov")(cov)
    xy = tf.concat([x[...,:1],y[...,:1],cota[...,:1],cotb[...,:1],
                    x[...,1:2],y[...,1:2],cota[...,1:2],cotb[...,1:2],
                   cov], axis=1)
    model = Model(inputs=x_in, outputs=xy)
    return model
  5. Call hls4ml.utils.config_from_keras_model(model, granularity='name'); the following exception is raised:
Exception                                 Traceback (most recent call last)
Cell In[23], line 4
      1 import hls4ml
      2 import plotting
----> 4 config = hls4ml.utils.config_from_keras_model(model, granularity='name')
      5 config['LayerName']['softmax']['exp_table_t'] = 'ap_fixed<18,8>'
      6 config['LayerName']['softmax']['inv_table_t'] = 'ap_fixed<18,4>'

File ~/.conda/envs/hls4ml-tutorial/lib/python3.10/site-packages/hls4ml/utils/config.py:138, in config_from_keras_model(model, granularity, backend, default_precision, default_reuse_factor)
    134     model_arch = json.loads(model.to_json())
    136 reader = hls4ml.converters.KerasModelReader(model)
--> 138 layer_list, _, _ = hls4ml.converters.parse_keras_model(model_arch, reader)
    140 def make_layer_config(layer):
    141     cls_name = layer['class_name']

File ~/.conda/envs/hls4ml-tutorial/lib/python3.10/site-packages/hls4ml/converters/keras_to_hls.py:226, in parse_keras_model(model_arch, reader)
    224 for keras_layer in layer_config:
    225     if keras_layer['class_name'] not in supported_layers:
--> 226         raise Exception('ERROR: Unsupported layer type: {}'.format(keras_layer['class_name']))
    228 output_shapes = {}
    229 output_shape = None

Exception: ERROR: Unsupported layer type: SlicingOpLambda
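
As a quick check (a sketch, not part of the original report), printing the model's layer classes shows where the offending layers come from:

for layer in model.layers:
    print(layer.name, type(layer).__name__)
# The channel slices appear as SlicingOpLambda; tf.concat appears as TFOpLambda.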

Optional

Possible fix

If you already know where the issue stems from, or you have a hint, please let us know.

The SlicingOpLambda layers come from the tensor slices in the model (e.g. x_conv[..., :1]), which Keras wraps as SlicingOpLambda layers; these are missing from the supported_layers list checked in hls4ml/converters/keras_to_hls.py. Please add support for SlicingOpLambda.
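
In the meantime, a possible workaround (a sketch; the helper name channel_select is hypothetical, and it assumes a frozen 1x1 convolution is acceptable in the design) is to emulate each channel slice with a non-trainable 1x1 Conv2D whose one-hot kernel selects the channel, since Conv2D is a supported layer. The tf.concat at the end of the model would likewise need to be replaced, e.g. with keras.layers.Concatenate, which hls4ml does support.

import numpy as np
from tensorflow.keras.layers import Conv2D

def channel_select(tensor, index, n_channels, name):
    # Hypothetical helper: a frozen 1x1 Conv2D whose one-hot kernel copies
    # channel `index`, avoiding the SlicingOpLambda that x[..., i:i+1] creates.
    kernel = np.zeros((1, 1, n_channels, 1), dtype=np.float32)
    kernel[0, 0, index, 0] = 1.0
    layer = Conv2D(1, 1, use_bias=False, trainable=False, name=name)
    out = layer(tensor)
    layer.set_weights([kernel])
    return out

# e.g. instead of x_conv[..., :1]:
# x = channel_select(x_conv, 0, 5, 'slice_ch0')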

Additional context

Add any other context about the problem here.