Blog Posts


December 19, 2021
Training neural networks in parallel is essential for developing larger deep learning models as well as for training on larger datasets. Modern deep learning packages make parallel training easy to switch on, but these convenient abstractions offer little insight into how parallelization of neural network training actually works. This post looks at how to implement parallel training of a neural network from the ground up, to gain a real understanding of the (simple) parallel programming needed. By averaging gradients between processes using libraries such as MPI or NCCL, neural networks can be efficiently trained across multiple GPUs.
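As a taste of what the post covers, here is a minimal sketch of the gradient-averaging idea using mpi4py and NumPy (the toy model, random "gradients", and learning rate are placeholders, not the post's actual training code): each process computes a gradient on its own data shard, an allreduce averages the gradients, and every process applies the same update so the model replicas stay in sync.

```python
# Minimal sketch of data-parallel gradient averaging with mpi4py.
# Each rank computes a gradient on its own shard of data, then all ranks
# average their gradients with an allreduce before taking the same update.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Toy "model": a single weight vector; toy "gradient": a stand-in for backprop.
rng = np.random.default_rng(seed=rank)   # each rank sees different data
weights = np.zeros(4)
local_grad = rng.normal(size=4)

# Sum gradients across all ranks, then divide by world size to get the average.
avg_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, avg_grad, op=MPI.SUM)
avg_grad /= size

# Every rank applies the identical averaged gradient, so weights stay in sync.
lr = 0.1
weights -= lr * avg_grad

if rank == 0:
    print("averaged gradient:", avg_grad)
```

Launched with something like `mpirun -np 4 python grad_average.py`, each process runs the same script; only the allreduce call requires communication.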
May 30, 2020
Koopman observable subspaces provide a unique way to represent a dynamical system that is particularly attractive for machine learning. Many physical systems exhibit extremely non-linear, multi-scale, and chaotic phenomena that are difficult to model and control. Koopman operator theory promises a way to represent any dynamical system through linear dynamics acting on observables. We explore the fundamentals of Koopman operators, the simplifications and challenges they bring to dynamical modeling, and how they can be exploited for developing machine learning models of physical systems.
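To make the "linear dynamics" claim concrete, the standard discrete-time definition (in the usual notation, with state map F and scalar observables g, not necessarily the post's exact symbols) is sketched below: the Koopman operator advances observables of the state, and it is linear even when F is nonlinear.

```latex
% Discrete-time dynamical system and the Koopman operator on observables
\begin{align}
  x_{t+1} &= F(x_t), \qquad x_t \in \mathcal{M} \\
  (\mathcal{K} g)(x_t) &= g\bigl(F(x_t)\bigr) = g(x_{t+1})
\end{align}
% Linearity holds regardless of how nonlinear F is:
% \mathcal{K}(a g_1 + b g_2) = a\,\mathcal{K} g_1 + b\,\mathcal{K} g_2
```

The trade-off is that the operator acts on a function space, which is generally infinite-dimensional; finding a finite set of observables that (approximately) spans an invariant subspace is where machine learning enters the picture.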