Flexible Hardware as the Key to Accelerating Optimized Neural Networks

Tong (Tony) Geng
Postdoc, Physical & Computational Sciences Directorate (PCSD)
Pacific Northwest National Laboratory (PNNL)
ECSE Seminar Series
https://rensselaer.webex.com/rensselaer/j.php?MTID=m1178c85fec3537fefe8ffcaad826bd1d
Mon, December 20, 2021 at 4:00 PM

In the past decade, Artificial Intelligence, powered by Neural Networks (NNs), has penetrated virtually every aspect of human life. A basic problem for many NN deployments is that their target applications impose stringent requirements on latency, throughput, and accuracy. Much research has therefore gone into improving NN performance. My research focuses on the hardware acceleration of NNs.

The problem in creating hardware to accelerate NNs is that optimized NN models, with redundant and superfluous computations largely pruned away, typically exhibit significant irregularity, making them hardware-unfriendly. As these algorithmic optimization methods continue to be developed, NN models become ever more irregular, a trend likely to continue for some time. One common way to manage NN irregularity is to eliminate it through model regularization, i.e., training models to follow regular patterns; however, not all irregularities can be eliminated this way, and many require mechanisms that handle irregularity on the fly.
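
To make the distinction concrete, here is a minimal NumPy sketch (illustrative only, not drawn from the talk) contrasting unstructured magnitude pruning, which leaves an irregular sparsity pattern, with structured row pruning, a simple form of the model regularization described above. The matrix and pruning ratios are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # a made-up dense weight matrix

# Unstructured magnitude pruning: zero out the smallest 75% of weights
# individually. The surviving nonzeros land at arbitrary positions, so
# the sparsity pattern is irregular and hardware-unfriendly.
threshold = np.quantile(np.abs(W), 0.75)
W_unstructured = np.where(np.abs(W) >= threshold, W, 0.0)

# Structured pruning (one form of model regularization): drop whole rows
# by their L2 norm. What remains is a smaller *dense* matrix that maps
# cleanly onto regular hardware, at some cost in flexibility.
row_norms = np.linalg.norm(W, axis=1)
W_structured = W[row_norms >= np.median(row_norms)]

print("irregular nonzeros:", np.count_nonzero(W_unstructured))  # 16 of 64
print("structured shape:  ", W_structured.shape)                # (4, 8)
```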

In this talk, I will discuss my approach: instead of regularizing models to make them hardware-friendly, I leverage the flexibility afforded by reconfigurable hardware to create novel architectures and systems that are friendly to irregular NN models. These architectures can handle irregularity across a wide range of NN domains, from real-time single-node inference to high-performance large-scale training. I will begin by briefly discussing my work on these two topics and giving an overview of my overall framework, in which hardware flexibility is the key to optimized NNs, followed by an in-depth discussion of accelerating Graph Neural Networks (GNNs). GNNs are drastically expanding the applications of machine learning methods and appear to pose the most significant computational challenges yet. Finally, I will present my vision for the future of graph intelligence and the importance of heterogeneity in future architecture and system research, especially for ML in the post-Moore's-Law era.
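
As a toy illustration of why GNNs are computationally challenging, the following NumPy sketch (my own example, not from the talk) shows the neighborhood-aggregation step at the heart of a GNN layer. The gather/scatter indices are dictated by the graph structure rather than by the tensor layout, which is exactly the kind of irregularity that defeats regular accelerators. The graph and features here are invented for illustration.

```python
import numpy as np

# A toy graph as an edge list: each row is (src, dst).
edges = np.array([[0, 1], [2, 1], [3, 1], [1, 4], [0, 4]])
num_nodes, feat_dim = 5, 4
X = np.arange(num_nodes * feat_dim, dtype=np.float64).reshape(num_nodes, feat_dim)

# Aggregation: gather each source node's feature row, then scatter-add it
# into the destination node's row. Repeated destinations accumulate.
out = np.zeros_like(X)
np.add.at(out, edges[:, 1], X[edges[:, 0]])

print(out)  # node 1 sums features of nodes 0, 2, 3; node 4 sums 1 and 0
```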

Tong (Tony) Geng is a postdoc in the Physical & Computational Sciences Directorate (PCSD) at Pacific Northwest National Laboratory (PNNL). He received his Ph.D. in Computer Engineering from Boston University in 2021. His research interests lie at the intersection of computer architecture & systems, machine learning, graph intelligence, and high-performance computing. He is the recipient of the Outstanding Postdoc Award at PNNL in 2021 and the Best Paper Award at ICCD 2021. He has served on the TPCs of HPCA 2022, IPDPS 2021, FPL 2021, and ASAP 2021. His papers have appeared in MICRO, HPCA, SC, TPDS, TC, ICS, and ICCAD.