PyTorch conversion issues
Created by: thesps
Improper handling of PyTorch Linear layer with bias=False
When a PyTorch model has a Linear layer with bias=False, conversion crashes when trying to access the non-existent bias data from the model.
Minimal example to reproduce the issue:
import torch
import hls4ml
model = torch.nn.Sequential(torch.nn.Linear(1,1,bias=False))
config = hls4ml.utils.config_from_pytorch_model(model)
hls_model = hls4ml.converters.convert_from_pytorch_model(model, (1,1), hls_config=config)
End of error message:
~/Work/hls4ml/hls4ml-master/hls4ml/model/hls_layers.py in add_bias(self, quantizer)
460
461 def add_bias(self, quantizer=None):
--> 462 data = self.model.get_weights_data(self.name, 'bias')
463 precision = None
464 type_name = None
~/Work/hls4ml/hls4ml-master/hls4ml/model/hls_model.py in get_weights_data(self, layer_name, var_name)
477
478 def get_weights_data(self, layer_name, var_name):
--> 479 return self.reader.get_weights_data(layer_name, var_name)
480
481 def next_layer(self):
~/Work/hls4ml/hls4ml-master/hls4ml/converters/pytorch_to_hls.py in get_weights_data(self, layer_name, var_name)
58 var_name = torch_paramap[var_name]
59
---> 60 data = self.state_dict[layer_name + '.' + var_name].numpy().transpose() #Look at transpose when systhesis produce lousy results. Might need to remove it.
61
62 return data
KeyError: '0.bias'
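One possible direction (a sketch only, not a tested patch): guard the bias lookup so a missing state_dict key returns None instead of raising, and let the caller fall back to zeros. The function below only mirrors the lookup shown in the traceback; its name and the zero-fallback behaviour are assumptions, not the actual hls4ml reader code.
import torch

# Hypothetical guarded lookup mirroring the state_dict access from the traceback.
# Returning None for a missing key would let add_bias substitute zeros.
def get_weights_data(state_dict, layer_name, var_name):
    key = layer_name + '.' + var_name
    if key not in state_dict:
        return None  # e.g. Linear(..., bias=False) has no '0.bias' entry
    return state_dict[key].numpy().transpose()

model = torch.nn.Sequential(torch.nn.Linear(1, 1, bias=False))
print(get_weights_data(model.state_dict(), '0', 'bias'))    # None instead of KeyError
print(get_weights_data(model.state_dict(), '0', 'weight'))  # the (1,1) weight matrix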
Non-handling of ParametrizedActivation parameter
The parameter(s) of ParametrizedActivation layers, e.g. ELU, PReLU, LeakyReLU, don't get propagated through the conversion. Simple demonstration case:
import torch
import hls4ml
model = torch.nn.Sequential(torch.nn.ELU(alpha=0.5))
config = hls4ml.utils.config_from_pytorch_model(model)
hls_model = hls4ml.converters.convert_from_pytorch_model(model, input_shape=[1,1], hls_config=config)
print(list(hls_model.get_layers())[1].attributes)
Gives:
{'class_name': 'ELU', 'activation': 'ELU', 'name': '0', 'accum_t': <hls4ml.model.hls_layers.FixedPrecisionType object at 0x7fa619b242d0>, 'table_t': <hls4ml.model.hls_layers.FixedPrecisionType object at 0x7fa619f10750>, 'table_size': 1024}
The Keras equivalent also has the attribute activ_param: 0.5; the absence of that parameter gives wrong inference results.
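A possible fix, sketched under the assumption that the converter can see the torch module objects: read the parameter off the module and attach it to the layer config as activ_param. The attribute names (alpha, negative_slope, weight) are the real PyTorch ones, but the helper, the activ_param key placement, and the handler structure are illustrative assumptions.
import torch

# Hypothetical extraction of the activation parameter from a torch module.
def get_activation_param(module):
    if isinstance(module, torch.nn.ELU):
        return module.alpha
    if isinstance(module, torch.nn.LeakyReLU):
        return module.negative_slope
    if isinstance(module, torch.nn.PReLU):
        # PReLU can be per-channel; a scalar is assumed here for simplicity
        return float(module.weight.detach().numpy()[0])
    return None

layer = {'class_name': 'ELU', 'activation': 'ELU', 'name': '0'}
param = get_activation_param(torch.nn.ELU(alpha=0.5))
if param is not None:
    layer['activ_param'] = param
print(layer)  # now carries 'activ_param': 0.5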
Tasks (can be assigned / self-assigned)
- Develop simple test cases that currently fail (working on it; a sketch is below)
- Fix bias=False handling
- Fix ParametrizedActivation handling
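For the first task, a minimal pair of tests that should currently fail, assembled from the two reproducers above; the exact assertions and test layout are assumptions, not an agreed test plan.
import torch
import hls4ml

def test_linear_no_bias():
    # Currently raises KeyError: '0.bias' during conversion
    model = torch.nn.Sequential(torch.nn.Linear(1, 1, bias=False))
    config = hls4ml.utils.config_from_pytorch_model(model)
    hls4ml.converters.convert_from_pytorch_model(model, (1, 1), hls_config=config)

def test_elu_alpha_propagation():
    # Currently fails: 'activ_param' is missing from the layer attributes
    model = torch.nn.Sequential(torch.nn.ELU(alpha=0.5))
    config = hls4ml.utils.config_from_pytorch_model(model)
    hls_model = hls4ml.converters.convert_from_pytorch_model(model, input_shape=[1, 1], hls_config=config)
    attrs = list(hls_model.get_layers())[1].attributes
    assert attrs.get('activ_param') == 0.5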