There has been extensive interest in using machine learning to control unknown dynamical systems, yet such approaches often suffer from poor sample efficiency when the system dimension is high. In this talk, we show, via two concrete problems, that the underlying system structure can be exploited to significantly improve sample efficiency and avoid the exponential blow-ups known in the literature. (i) In the first part, we consider the learn-to-stabilize problem, in which we learn to stabilize an unknown linear dynamical system. This problem is known to suffer from sample complexity that blows up exponentially in the system dimension, and we propose an approach that leverages the stable/unstable subspace decomposition to avoid the blow-up when the number of unstable eigenvalues is small. (ii) In the second part, we consider reinforcement learning for multi-agent networked systems, and we show that exploiting the sparse network structure can avoid the exponential blow-up in the number of agents.
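To illustrate the structural idea behind part (i), here is a minimal sketch (not taken from the talk, and using a hypothetical helper name) of why the stable/unstable split can shrink the problem: for a discrete-time linear system x_{t+1} = A x_t, only the eigenvalues of A outside the unit circle are unstable, so when their count k is small, learning can focus on a k-dimensional unstable subspace instead of the full state space.

```python
import numpy as np

def unstable_dimension(A, radius=1.0):
    """Count eigenvalues of A outside the unit circle, i.e. the
    dimension of the unstable subspace of x_{t+1} = A x_t."""
    eigvals = np.linalg.eigvals(A)
    return int(np.sum(np.abs(eigvals) > radius))

# A 4-dimensional system with a single unstable mode (1.5 lies outside
# the unit circle; 0.9, 0.5, -0.3 are all stable).
A = np.diag([1.5, 0.9, 0.5, -0.3])
k = unstable_dimension(A)  # k == 1: one unstable direction in a 4-D system
```

The point of the sketch is only the dimension count: an exponential-in-dimension cost applied to the k-dimensional unstable subspace, rather than the full n-dimensional state, is what avoids the blow-up when k is small.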

Guannan Qu has been an Assistant Professor in the Electrical and Computer Engineering Department at Carnegie Mellon University since September 2021. He received his B.S. degree in Electrical Engineering from Tsinghua University in Beijing, China, in 2014, and his Ph.D. in Applied Mathematics from Harvard University in Cambridge, MA, in 2019. From 2019 to 2021 he was a CMI and Resnick postdoctoral scholar in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He is the recipient of the Caltech Simoudis Discovery Award, the PIMCO Fellowship, the Amazon AI4Science Fellowship, and the IEEE SmartGridComm Best Student Paper Award. His research interests lie in control, optimization, and machine/reinforcement learning, with applications to power systems, multi-agent systems, the Internet of Things, smart cities, etc.