Fix loading weights in GarNetStacked and GarNet internal array precisions

Javier Duarte requested to merge github/fork/joshlerner/main into main

Created by: joshlerner

Description

In the change from using the reader to storing weights as attributes, the GarNetStack input feature weights and biases and the output feature weights were missed. Fixed by storing all GarNetStack weights and biases as attributes.
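A minimal sketch of the idea behind the fix, with hypothetical names (the reader interface and weight keys are assumptions, not the actual hls4ml API): every sublayer's weights, including the previously missed input transform weights/biases and output transform weights, are copied to attributes up front so no later code path has to fall back to the reader.

```python
class FakeReader:
    """Stand-in for the hls4ml data reader (illustrative only)."""

    def get_weights_data(self, layer_name, var_name):
        return [0.0]  # placeholder weight array


class GarNetStackSketch:
    """Illustrative layer that stores *all* sublayer weights as attributes."""

    def __init__(self, reader, name, n_sublayers):
        self.weights = {}
        for i in range(n_sublayers):
            # The input transform weights/biases were the pieces missed in
            # the original reader-to-attributes migration.
            for var in ('input_transform_w', 'input_transform_b', 'output_transform_w'):
                key = f'{var}_{i}'
                self.weights[key] = reader.get_weights_data(name, key)


layer = GarNetStackSketch(FakeReader(), 'gar_1', 2)
```

With two sublayers and three variables each, all six weight arrays are available as attributes without touching the reader again.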

All non-default precisions specified for internal GarNet arrays (edge weight, norm, etc.) were not converted to C++ definitions and produced typedef errors in firmware/parameters.h. Fixed by applying an APTypeConverter to all internal array precisions, not just those with default values.
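A rough sketch of the conversion step, assuming precision strings are already in `ap_fixed<W,I>` form (the function and array names are illustrative, not the actual hls4ml implementation): the fix amounts to emitting a typedef for every internal array's precision, rather than only for arrays left at their defaults.

```python
def emit_internal_array_typedefs(layer_name, precisions):
    """Emit C++ typedef lines for GarNet internal arrays.

    `precisions` maps internal array names (e.g. 'edge_weight', 'norm')
    to precision strings. Every entry is converted, non-default or not.
    """
    lines = []
    for array_name, precision in precisions.items():
        # e.g. 'ap_fixed<10,3>' -> 'typedef ap_fixed<10,3> gar_1_edge_weight_t;'
        lines.append(f'typedef {precision} {layer_name}_{array_name}_t;')
    return lines


typedefs = emit_internal_array_typedefs(
    'gar_1',
    {'edge_weight': 'ap_fixed<10,3>', 'norm': 'ap_fixed<14,4>'},
)
```

Before the fix, an entry like `edge_weight` with a non-default precision would be skipped here, leaving an undefined type in firmware/parameters.h.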

Modified contrib/garnet.py to include an output activation for GarNetStack models, which was necessary to test the changes above. It had previously been commented out as unused.

Type of change

  • Bug fix (non-breaking change that fixes an issue)

Tests

Added a test for GarNetStack models, similar to the pre-existing one in test_garnet.py:

  • GarNet internal arrays are automatically included in the generated config at name and type granularity
  • Previous tests overwrote these specifications, whereas the new test keeps non-default internal array precisions in the config
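For illustration, a config fragment of the kind the new test exercises (the internal array names and precision values are assumptions for the sketch): with name granularity, a layer entry carries per-array precisions, so non-default internal arrays stay in the config instead of being overwritten.

```python
# Hypothetical hls4ml-style config with non-default GarNet internal
# array precisions kept at name granularity (values illustrative).
config = {
    'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1},
    'LayerName': {
        'gar_1': {
            'Precision': {
                'edge_weight': 'ap_fixed<10,3>',  # non-default internal array
                'norm': 'ap_fixed<14,4>',         # non-default internal array
                'result': 'ap_fixed<16,6>',
            },
        },
    },
}
```

The earlier tests replaced the `'Precision'` dict wholesale, which is why the non-default internal entries never reached the converter.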

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.
