Updated QONNX parsing

Javier Duarte requested to merge github/fork/jmitrevs/qonnx_0p8 into main

Created by: jmitrevs

Description

This change updates the ONNX parser and adds support for QONNX. It replaces PR #591. It only supports ONNX models that have been cleaned with the qonnx package, including converting convolutions to channels-last and replacing Gemm with MatMul and Add.
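
For reference, the expected preprocessing can be done with the qonnx package itself. This is a minimal sketch, not part of this PR; the module paths and transformation names are assumptions about the qonnx API:

```python
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup_model
from qonnx.transformation.channels_last import ConvertToChannelsLastAndClean
from qonnx.transformation.gemm_to_matmul import GemmToMatMul

# 'model.onnx' is a placeholder path for the exported QONNX model
model = ModelWrapper('model.onnx')
model = cleanup_model(model)
# convert convolutions (and related ops) to channels-last
model = model.transform(ConvertToChannelsLastAndClean(make_input_channels_last=True))
# replace Gemm with MatMul + Add
model = model.transform(GemmToMatMul())
model = cleanup_model(model)
model.save('model_clean.onnx')
```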

In QONNX, Quant nodes can act on constants as well as on the datapath. To make this easier to handle, constants are placed explicitly in the initial graph, and some helper nodes, such as for MatMul and Conv, are introduced to support the explicit constant nodes. After the convert flow, however, no special ONNX nodes remain in the graph.
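
To illustrate, below is a small hand-built graph in which a Quant node acts on a constant (a weight initializer feeding a MatMul) rather than on the datapath. This is only an illustration; the custom-op domain string and attribute names are assumptions about the QONNX spec:

```python
import numpy as np
from onnx import TensorProto, helper, numpy_helper

# weight constant plus the Quant node's scale, zero point, and bitwidth inputs
w = numpy_helper.from_array(np.random.rand(4, 2).astype(np.float32), name='W')
scale = numpy_helper.from_array(np.array(0.25, dtype=np.float32), name='scale')
zeropt = numpy_helper.from_array(np.array(0.0, dtype=np.float32), name='zeropt')
bitwidth = numpy_helper.from_array(np.array(4.0, dtype=np.float32), name='bitwidth')

# Quant applied to the constant weight, not to the datapath input X
quant = helper.make_node(
    'Quant', ['W', 'scale', 'zeropt', 'bitwidth'], ['W_q'],
    domain='qonnx.custom_op.general',  # assumed QONNX custom-op domain
    signed=1, narrow=0, rounding_mode='ROUND',
)
matmul = helper.make_node('MatMul', ['X', 'W_q'], ['Y'])

graph = helper.make_graph(
    [quant, matmul], 'quant_on_constant',
    [helper.make_tensor_value_info('X', TensorProto.FLOAT, [1, 4])],
    [helper.make_tensor_value_info('Y', TensorProto.FLOAT, [1, 2])],
    initializer=[w, scale, zeropt, bitwidth],
)
model = helper.make_model(graph)
```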

Generally, Quant nodes that have power-of-2 scales and no zero offset are converted to fixed-point data types, either by setting the types of constants or by adding a linear activation that is usually merged into preceding nodes. Non-power-of-2 scales result in ApplyAlpha nodes being added to scale and unscale, with propagation across some layers; this path can be further optimized and has generally been tested less.
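
The logic can be sketched as below. This is not the actual optimizer pass, just an illustration of the power-of-2 check and of how the resulting fixed-point type would be derived (using the ap_fixed<W, I> convention where I counts bits above the binary point, including the sign bit):

```python
import numpy as np

def po2_quant_to_fixed(bitwidth, scale, zeropt, signed=True):
    """Illustrative only: decide whether a Quant node can become a plain
    fixed-point type and, if so, compute that type.

    Returns (width, integer_bits, signed) for an ap_[u]fixed<W, I> type,
    or None if the ApplyAlpha scale/unscale path is needed instead.
    """
    scale = np.asarray(scale, dtype=np.float64)
    zeropt = np.asarray(zeropt, dtype=np.float64)

    # any zero offset rules out a plain fixed-point interpretation
    if not np.all(zeropt == 0):
        return None

    # the scale must be a single power of two
    exponent = np.log2(scale)
    if not np.all(exponent == np.round(exponent)) or np.unique(exponent).size != 1:
        return None

    # scale = 2**e sets the resolution to 2**e, i.e. F = -e fractional bits,
    # so the integer bits are I = W - F = bitwidth + e
    e = int(np.round(exponent.flat[0]))
    width = int(bitwidth)
    return width, width + e, signed

# e.g. bitwidth=8, scale=0.25 (= 2**-2), zeropt=0 -> ap_fixed<8, 6>
print(po2_quant_to_fixed(8, 0.25, 0))
```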

Binary networks are not yet supported.

Currently, some of the automatic type setting depends on attributes set by QONNX. When auto type values are introduced, this should be updated accordingly.

Type of change

  • New feature (non-breaking change which adds functionality)
  • A new research paper code implementation

Tests

The pytest test_qonnx.py is the main test; it builds several models from the QONNX model zoo.
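
A hedged sketch of what such a test looks like is shown below. The model path is a placeholder for a cleaned model-zoo download, and the converter call and hls_config follow the existing hls4ml API as I understand it:

```python
import numpy as np
import hls4ml
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup_model


def test_qonnx_model(tmp_path):
    # 'model_clean.onnx' stands in for a cleaned model downloaded from the
    # QONNX model zoo; the real test covers several such models.
    qonnx_model = cleanup_model(ModelWrapper('model_clean.onnx'))

    # convert_from_onnx_model and its keyword arguments follow the existing
    # hls4ml converter API; this hls_config is deliberately simple.
    hls_model = hls4ml.converters.convert_from_onnx_model(
        qonnx_model,
        output_dir=str(tmp_path / 'hls4mlprj_qonnx'),
        hls_config={'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1}},
    )
    hls_model.compile()

    # one random sample with a placeholder input shape (16 features assumed)
    x = np.random.rand(1, 16).astype(np.float32)
    y_hls = hls_model.predict(x)
    assert np.all(np.isfinite(y_hls))
```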

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.
