Improving Robust and Efficient Neural Network Training via Bi-level Optimization

Sijia Liu
Assistant Professor, Department of Computer Science and Engineering
Michigan State University
JEC 3117
Wed, March 01, 2023 at 4:00 PM

This talk will introduce Bi-level Machine Learning, an emerging topic rooted in bi-level optimization (BLO), to tackle neural network training challenges in robustness and efficiency. In the first part, I will revisit adversarial training (AT), a widely recognized training mechanism for gaining adversarial robustness in deep neural networks, from a fresh BLO viewpoint. Building on that view, I will introduce a new theoretically grounded and computationally efficient robust training algorithm termed Fast Bi-level AT (Fast-BAT). In the second part, I will study the problem of how to prune large-scale neural networks for improved generalization and efficiency. Iterative magnitude pruning (IMP) is the predominant sparse learning method for finding ‘winning’ sparse sub-networks, yet its computation cost grows prohibitively as the sparsity ratio increases. I will show that BLO provides a systematic model pruning framework that can close the gap between pruning accuracy and efficiency. Please see the OPTML repository for the associated open-source projects.
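To make the bi-level view of adversarial training concrete, here is a minimal NumPy sketch (not the Fast-BAT algorithm presented in the talk; the model, function names, and hyperparameters are illustrative). The inner, lower-level problem maximizes the loss over an L-infinity-bounded perturbation delta via projected gradient sign steps; the outer, upper-level problem minimizes the loss at the inner solution with respect to the model weights.

```python
# Illustrative sketch: adversarial training of logistic regression as
# bi-level optimization. Not Fast-BAT; standard PGD-based AT for intuition.
import numpy as np

def loss_grad(w, x, y):
    """Logistic loss for one example, with gradients w.r.t. w and x."""
    z = w @ x
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    g = p - y                    # d(loss)/dz
    return loss, g * x, g * w    # loss, grad wrt weights, grad wrt input

def inner_max(w, x, y, eps=0.1, alpha=0.02, steps=10):
    """Inner (lower-level) problem: PGD ascent on delta, ||delta||_inf <= eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        _, _, gx = loss_grad(w, x + delta, y)
        delta = np.clip(delta + alpha * np.sign(gx), -eps, eps)
    return delta

def adversarial_train(X, Y, lr=0.1, epochs=50):
    """Outer (upper-level) problem: minimize loss at worst-case perturbations."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            delta = inner_max(w, x, y)             # solve the inner problem
            _, gw, _ = loss_grad(w, x + delta, y)  # gradient at adversarial point
            w -= lr * gw
    return w
```

The nesting above is the source of AT's cost: each outer weight update requires an inner optimization loop, which is the structure BLO-based methods such as Fast-BAT aim to exploit and accelerate.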

[Headshot of Dr. Sijia Liu]

Dr. Sijia Liu is currently an Assistant Professor at the Department of Computer Science and Engineering, Michigan State University, and an Affiliated Professor at the MIT-IBM Watson AI Lab, IBM Research. His research spans the areas of machine learning, optimization, signal processing, and computational biology, with a focus on Trustworthy and Scalable ML. He received the Best Paper Runner-Up Award at the Conference on Uncertainty in Artificial Intelligence (UAI) in 2022 and the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) in 2017. He has published over 60 papers at top-tier ML/AI conferences and (co-)organized several tutorials and workshops on Trustworthy ML and Optimization for Deep Learning at, e.g., KDD, AAAI, CVPR, and ICML.