Automatic precision inference

Javier Duarte requested to merge github/fork/vloncar/auto_precision into main

Created by: vloncar

Description

This introduces the ability to specify auto as a precision string, signaling that hls4ml should infer the precision itself. For now this is not exposed by default via the config_from... functions; the goal is to first build out the framework for inferring types within hls4ml (e.g., in the QONNX parser) before fully exposing it to users. An initial precision-inference pass has been added as the infer_precision_types optimizer, based on previous attempts by various people; it is deliberately simple for now. While testing, I also encountered and fixed some issues with the SeparableConv1D templates.
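For illustration, a per-layer config using the new string might look like the sketch below. The layer name dense1 and the explicit ap_fixed type are hypothetical; the nesting follows hls4ml's usual per-layer config layout, but treat this as an assumption rather than the definitive API:

```python
# Sketch of an hls4ml per-layer config mixing 'auto' with explicit types.
# 'dense1' is a hypothetical layer name; 'auto' asks hls4ml to infer the type.
config = {
    'LayerName': {
        'dense1': {
            'Precision': {
                'weight': 'auto',            # precision to be inferred by hls4ml
                'result': 'ap_fixed<16,6>',  # explicit types can still be mixed in
            }
        }
    }
}

print(config['LayerName']['dense1']['Precision']['weight'])
```

A config like this would typically be produced by one of the config_from... helpers and then edited by the user before conversion.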

Type of change

  • Bug fix (non-breaking change that fixes an issue) - applies only to the SeparableConv1D template fix
  • New feature (non-breaking change which adds functionality)

Tests

There are new tests in test_auto_precision.py that cover the currently supported use cases.
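The kind of inference these tests exercise could, at its simplest, pick a fixed-point type from a tensor's value range. The following is a toy sketch of my own to illustrate the idea, not the actual infer_precision_types logic; the function name and the fixed fractional width are assumptions:

```python
import math

def infer_fixed_type(values, frac_bits=10):
    """Toy range-based inference (hypothetical, not hls4ml's algorithm):
    choose enough integer bits, plus a sign bit, to hold max |v|."""
    max_abs = max(abs(v) for v in values)
    if max_abs >= 1:
        # sign bit + bits needed for the integer part of the largest magnitude
        int_bits = 1 + math.floor(math.log2(max_abs)) + 1
    else:
        int_bits = 1  # sign bit only; all magnitudes are below 1
    # ap_fixed<total, integer> in Vivado/Vitis HLS notation
    return f'ap_fixed<{int_bits + frac_bits},{int_bits}>'

print(infer_fixed_type([0.5, -3.0, 1.25]))
```

A real pass would also propagate types through the layer graph rather than look at one tensor in isolation, which is why the framework is being built out inside hls4ml first.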

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.
