Model Optimizations and Average Puppi-ML Resolution Calculation
Created by: hanhiller
- removed redundant inputs to the model: px and py (they are still used in the puppi calculation, just not needed as learned features; see the sketch below)
    - NOTE: currently these are still fed into the model
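A minimal sketch of the intended split, assuming a (events, particles, features) input array; the feature names, column order, and helper functions here are hypothetical, not the exact ones in the training code:

```python
import numpy as np

# Hypothetical feature layout for this sketch only.
FEATURES = ["pt", "eta", "phi", "px", "py", "puppiWeight", "pdgId", "charge"]
PX, PY = FEATURES.index("px"), FEATURES.index("py")
MODEL_COLS = [i for i, f in enumerate(FEATURES) if f not in ("px", "py")]

def split_inputs(X):
    """X: (n_events, n_particles, n_features). The model only sees the
    reduced feature set; px/py are kept aside for the weighted MET sum."""
    return X[..., MODEL_COLS], X[..., [PX, PY]]

def weighted_met(weights, pxpy):
    """MET components from per-particle weights (ML or puppi)."""
    met_x = -np.sum(weights * pxpy[..., 0], axis=1)
    met_y = -np.sum(weights * pxpy[..., 1], axis=1)
    return met_x, met_y
```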
- removed redundant/unused pdgID information (see the sketch below)
    - negative and positive pdgIDs now map to the same particle type (charge is already a separate input)
    - removed pdgIDs that do not appear in the data set ([0, 1, 2])
- embedding output dimension reduced to 2 (was 8)
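A minimal sketch of the pdgID handling described above, assuming a Keras Embedding layer; the particular ID set and mapping are illustrative, not the exact ones used in the training code:

```python
import numpy as np
from tensorflow.keras.layers import Embedding

# Map |pdgId| -> compact index; the sign is dropped because charge is a
# separate input, and IDs that never appear in the data set are not listed.
PDGID_MAP = {11: 0, 13: 1, 22: 2, 130: 3, 211: 4}  # illustrative set

def encode_pdgid(pdgid):
    """Replace raw pdgId values with compact embedding indices."""
    lookup = np.vectorize(lambda p: PDGID_MAP.get(abs(int(p)), len(PDGID_MAP)))
    return lookup(pdgid)

# Embedding output dimension reduced from 8 to 2; +1 row for unseen IDs.
pdg_embedding = Embedding(input_dim=len(PDGID_MAP) + 1, output_dim=2)
```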
- fixed a bug in the puppi resolution calculation (it was being scaled by the ML response correction; sketch below)
    - puppi resolutions will now remain fixed between trainings
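A sketch of the intended behaviour, assuming the resolution is taken as the spread of (reconstructed minus generated) MET; the function and variable names are hypothetical:

```python
import numpy as np

def resolution(reco_met, gen_met):
    """Spread of (reco - gen), here half the central 68% interval."""
    lo, hi = np.percentile(reco_met - gen_met, [16, 84])
    return 0.5 * (hi - lo)

def ml_and_puppi_resolutions(ml_met, puppi_met, gen_met, ml_response):
    # The response correction is applied to the ML MET only; puppi is left
    # untouched, so its resolution stays fixed between trainings.
    ml_res = resolution(ml_met / ml_response, gen_met)
    puppi_res = resolution(puppi_met, gen_met)
    return ml_res, puppi_res
```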
- added a calculation of the average difference between the ML and puppi resolutions, taken over all bins and weighted by the number of events in each bin (a greater average difference means a better model; see the sketch below)
    - this metric gives us a way to quantitatively compare resolutions between different models
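A minimal sketch of that metric, assuming per-bin resolutions and event counts are already available and that the difference is taken as puppi minus ML (so a larger value means the ML resolution improves more on puppi):

```python
import numpy as np

def avg_resolution_difference(puppi_res, ml_res, n_events):
    """Event-weighted average of (puppi - ML) resolution over all bins."""
    diff = np.asarray(puppi_res) - np.asarray(ml_res)
    return np.average(diff, weights=np.asarray(n_events, dtype=float))
```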
- added error bars to the METx and METy plots (the upper and lower errors are the widths of the distribution at points slightly wider and narrower than 1 SD; see the sketch below)
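One possible reading of that error definition, assuming the width is the half-width of the central 68.27% interval and the errors come from intervals slightly wider and slightly narrower than that; the delta value here is arbitrary:

```python
import numpy as np

def width_with_errors(diff, delta=0.02):
    """Half-width of the central 68.27% interval of `diff`, with upper/lower
    errors taken from intervals slightly wider / narrower than 1 SD."""
    def half_width(frac):
        lo, hi = np.percentile(diff, [50 * (1 - frac), 50 * (1 + frac)])
        return 0.5 * (hi - lo)
    centre = half_width(0.6827)
    upper = half_width(0.6827 + delta) - centre   # slightly wider than 1 SD
    lower = centre - half_width(0.6827 - delta)   # slightly narrower than 1 SD
    return centre, upper, lower
```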
- for comparison's sake, here are the average resolution differences for the default (current main) model: 3 layers (64, 32, 160)
    - ROOT workflow: average xRes difference = 5.21, average yRes difference = 5.30
    - h5 workflow: average xRes difference = 4.98, average yRes difference = 4.91
- the quantized model trains properly
    - the 16,6-bit model outperforms the default model (average xRes difference = 5.59, average yRes difference = 5.54); see the sketch below
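Assuming the quantization is done with QKeras (as is common for models headed toward hls4ml/FPGA deployment), a 16,6-bit layer configuration might look like the sketch below; the layer widths and feature count are illustrative:

```python
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

bits, integer = 16, 6  # "16,6 bit": 16 total bits, 6 of them integer bits

def qdense(units):
    """Dense layer with 16,6-bit weight and bias quantizers."""
    return QDense(units,
                  kernel_quantizer=quantized_bits(bits, integer, alpha=1),
                  bias_quantizer=quantized_bits(bits, integer, alpha=1))

inputs = Input(shape=(8,))                       # illustrative feature count
x = qdense(64)(inputs)
x = QActivation(quantized_relu(bits, integer))(x)
x = qdense(32)(x)
x = QActivation(quantized_relu(bits, integer))(x)
outputs = qdense(1)(x)                           # per-particle weight
model = Model(inputs, outputs)
```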
- a 2-layer (16, 32) model seems to train well
    - ROOT workflow: 5.5 hrs, average xRes difference = 3.92, average yRes difference = 4.04
    - h5 workflow: ~45 min, average xRes difference = 4.92, average yRes difference = 5.20