- v0.15.2: Force allreduce of all gradients in step(); bugfixes
- v0.13.11: Add compatibility with PyTorch 0.4.1
- v0.13.10: Support for IBM PowerAI DDL and APIs to restore optimizer state
- v0.13.8: Critical bugfix: PyTorch must wait for GPU data before allreduce
- v0.13.7: Critical bugfix: non-fused allreduce produces incorrect results
- v0.13.5: Fix PyTorch master break by using the proper THTensor_storage() API
- v0.13.4: Bugfix for mpi4py: create a private MPI communicator
- v0.13.3: Collective control plane and other low-latency improvements
- v0.13.2: Bugfix: `python setup.py install` requires cffi
- v0.13.1: Support TensorFlow optimizers that override compute_gradients/apply_gradients
- v0.12.1: Avoid deadlock if a worker crashes with an exception
- v0.12.0: New major release
- v0.11.2: Add Keras ImageNet training example and LearningRateScheduleCallback