Add support for flattening to the pytorch parser

Javier Duarte requested to merge github/fork/JanFSchulte/pytorch_flatten into main

Created by: JanFSchulte

Currently, Flatten layers are skipped in the pytorch parser. Because the operation is also not flagged as unsupported, affected models parse without error but produce incorrect results. This PR adds support for these layers by adding them to the parser. The optimizer pass that converts operations to channels_last for pytorch models is adapted to transpose the input of the Flatten layer so that the output elements end up in the correct order.
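The ordering issue the transpose fixes can be illustrated with a small sketch (NumPy standing in for contiguous tensor reshapes; this is not hls4ml code): PyTorch flattens channels-first (N, C, H, W) data, so flattening a channels_last (N, H, W, C) activation directly yields the elements in a different order, and the converted model must transpose back before flattening.

```python
import numpy as np

# PyTorch-side reference: flatten a channels-first (N, C, H, W) tensor.
x = np.arange(2 * 3 * 3).reshape(1, 2, 3, 3)
ref = x.reshape(1, -1)

# hls4ml-side layout: the same data stored channels_last (N, H, W, C).
x_cl = x.transpose(0, 2, 3, 1)

# Flattening channels_last data directly gives a mismatched order...
wrong = x_cl.reshape(1, -1)

# ...so the optimizer pass transposes back to (C, H, W) first.
fixed = x_cl.transpose(0, 3, 1, 2).reshape(1, -1)

assert not np.array_equal(wrong, ref)
assert np.array_equal(fixed, ref)
```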

Type of change

For a new feature or function, please create an issue first to discuss it with us before submitting a pull request.

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change which adds functionality)

Tests

Verified that a simple model with a Conv2d and a Flatten operation gives correct results, for both the torch.nn.Flatten and torch.flatten() interfaces to this operation in pytorch. A pytest has been added to verify this.
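A minimal pytest-style sketch of the interface check described above (test name hypothetical, and the hls4ml conversion and comparison omitted): both PyTorch flatten interfaces produce identical outputs on a Conv2d feature map, so the parser must recognize both forms.

```python
import torch
import torch.nn as nn

def test_flatten_interfaces_match():
    # Feature map from a small Conv2d, as in the PR's test model.
    x = torch.randn(1, 4, 8, 8)
    feat = nn.Conv2d(4, 2, kernel_size=3)(x)

    # Module interface vs. functional interface; both default to
    # flattening everything after the batch dimension.
    out_module = nn.Flatten()(feat)
    out_func = torch.flatten(feat, start_dim=1)

    assert torch.equal(out_module, out_func)
```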

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.
