hanhiller authored
* "started Han's generator"
* "added Javier's generator"
* "data gen, into train.py"
* "data gen implement"
* "rm files"
* "rm files"
* 3 data gens
* update train.py; 3 generators
* 3 generators; working
* 1 generator class, subsets defined by list files, valid and test names swapped
* small fixes
* added data location for when in l1metml-pod; added keras module to setup file
* added job script
* fixed job yml file
* edited batch commands
* edited conda setup sh
* re-edited conda setup sh
* edit dockerfile
* re-update docker file
* import tensorflow.keras in loss
* edit batch size
* ditto
* added extra script enabling either data generation from root or conversion into h5
* small errors fixed
* ditto
* ditto
* ditto
* ditto
* changed epoch number
* ditto
* 1 epoch test
* small fix
* ditto
* ditto
* ditto
* change indices for puppi_met calc
* trying to figure out puppi_met
* test data for root training
* added ak dependency
* small error fix
* ditto
* ditto
* ditto
* ditto
* update data gen and utils to compute puppi met correctly
* one training file
* small fixes
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* ditto
* change n epochs 30
* change n epochs 35
* change n epochs back to 100
* change n epochs 35
* 100 epochs
* 20 epochs
* Javier's comments
* added / in output path for plots
* removed import utils function
* added quantized model in models.py
* removed several arguments to layers; quantized model now runs
* This should be the mergeable branch
* updated readme with --quantized flag
* ditto
* added / to paths
* fixed h5 training
* added back check if converted files exist
* removed qkeras import and dependencies
* 1 epoch
* fixed os.path.isfile()
* changed comments
* cleanup
* in/out paths fixed
* small updates to readme and arguments help
* kernel/bias quantizers fixed
* fixed issues with quantizers
* h5 puppi appears to be working
* custom_loss for mode 1, changed prefactor to 100
* larger batch for h5s
* h5 batch 1024
* custom loss for h5, 2048 batch
* changed #layers to 3
* job script update
* small error fixed
* ditto
* added comment in models
* no change
* added functions to Write_MET_binned_histogram.py
* with_bias=True
* added quantized variable reference in h5 workflow
* with bias = false
* import qkeras
* import qkeras.layers
* removed import QGlobalAveragePooling2D
* testing quantized model
* ditto
* ditto
* ditto
* update to plots
* trying quantized_relu
* trying 16,6 bits for qmodel
* trying alpha=10
* trying alpha=auto
* create branch
* changed pdgIds
* 2_layers
* hyperparameters 32-16-8
* trying 2layers
* ditto
* cleaning up pdgIds
* added quantized model
* qkeras imports
* trying alpha=10 on h5 quantized
* trying emb_out=2
* ditto
* model compression: nlayers, embout, pdgId; quantized model 16,6 works
* added avg resolutions calculations
* trying hyperparams 16-32
* small fix
* hyperparams 32-16
* hyperparams 32-64
* hyperparams 8-32
* hyperparams 32-64
* updates to resolution plotting / model compression
* small fix
* small fix
* small fix
* added back in pt and phi as inputs
* small fix
* small fix
* small fix
* small fix
* issue with root inputs
* updating models.py
* features=6
* features=6
* Javier's changes
* fixed syntax error
* uncommented training calls
* ditto
* input dim for root model
* update Qmodel workflow
* update Qmodel workflow
* fixed bias layer
* small fix
* fix to concatenate layer
* fix to multiply layer
* fix to multiply layer
* trying 16-32 hyperparams
* Obligatory commit. This branch should be mergeable
* trying 8,52 units
* trying 16,32 units
* trying 52,52 units
* cleanup
* Javier's changes. PR should be mergeable.
* put units variable inside model functions
* added units as training argument; error calculation is 1/rootN, average res difference calculation mistake fixed
* update README and small fix
* fixed bug with feature arrays, changes to res plots/calculations
* small units bug fix
* updates to res plots
* small fix
* obligatory commit
* obligatory
* commit for PR
* code is ready for PR
* response correction bug fix
* autopep8 and rootN calculation
* update readme
* ready for master merge
* removed pxpy as dense layer inputs + small syntax fixes in plots
* autopep8

Co-authored-by: Han Slade Hiller <hhiller@lxplus790.cern.ch>
Co-authored-by: Han Slade Hiller <hhiller@lxplus7117.cern.ch>
Co-authored-by: Han Slade Hiller <hhiller@lxplus7100.cern.ch>
Co-authored-by: Han Slade Hiller <hhiller@lxplus723.cern.ch>
Co-authored-by: hanhiller <hanhiller@me.clm>
Co-authored-by: Javier Duarte <jduarte@ucsd.edu>