Many constrained sequential decision-making processes, such as safe autonomous-vehicle (AV) navigation, wireless network control, caching, and cloud computing, can be cast as Constrained Markov Decision Processes (CMDPs). Reinforcement Learning (RL) algorithms have been used to learn optimal policies for unknown unconstrained MDPs. Extending these RL algorithms to unknown CMDPs brings the additional challenge of not only maximizing the reward but also satisfying the constraints. Further, in many practical applications one has to rely on an offline dataset, as online interaction might be costly or infeasible.
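For concreteness, the standard discounted CMDP objective can be written as follows; the notation here is generic and not specific to the talk:

$$
\max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\,r(s_t,a_t)\Big]
\quad\text{subject to}\quad
\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\,c(s_t,a_t)\Big]\le b,
$$

where $r$ is the reward, $c$ is the constraint cost, $\gamma \in (0,1)$ is the discount factor, and $b$ is the safety budget. In the offline setting, both expectations must be estimated from a fixed dataset collected by a behavioral policy.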
While the unconstrained offline RL setting is relatively well understood, the offline CMDP, or safe offline RL, setting is far less explored. For example, given a dataset collected by a safe behavioral policy, it has remained an open problem to develop an algorithm that provides safety while maximizing the reward with provable guarantees. In particular, existing works on safe offline RL rely on 'all-policy coverability' rather than the gold standard of 'single-policy coverability', meaning that the dataset must contain state-action pairs generated by all policies; this is impractical in safety-critical settings, since the dataset might not contain unsafe state-action pairs. Our recent work closes this gap: we developed a Weighted Safe Actor-Critic (WSAC) algorithm that produces a policy outperforming any behavioral policy while maintaining the same level of safety, a property that is critical for designing safe offline RL algorithms. Additionally, we compare WSAC with existing state-of-the-art safe offline RL algorithms in several continuous control environments. WSAC outperforms all baselines across a range of tasks, supporting the theoretical results.
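To make the actor-critic idea concrete, below is a minimal, hypothetical sketch of one policy-update step that uses a reward critic and a cost critic on a batch of offline data, penalizing the estimated constraint violation. All names, dimensions, and the penalty form are illustrative assumptions; this is not the WSAC algorithm itself.

```python
# Minimal illustrative sketch (NOT the actual WSAC algorithm): one actor update
# that trades off an estimated return against an estimated constraint violation.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2            # hypothetical dimensions
cost_budget, penalty_weight = 0.1, 5.0  # hypothetical safety budget and penalty weight

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
reward_critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))
cost_critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)

# A batch of states from the fixed offline dataset (random placeholders here).
states = torch.randn(256, state_dim)

actions = actor(states)                       # deterministic policy for simplicity
sa = torch.cat([states, actions], dim=-1)
estimated_return = reward_critic(sa).mean()   # critic's estimate of the policy's return
estimated_cost = cost_critic(sa).mean()       # critic's estimate of the cumulative cost

# Maximize estimated return while penalizing cost above the safety budget.
actor_loss = -estimated_return + penalty_weight * torch.clamp(estimated_cost - cost_budget, min=0)

optimizer.zero_grad()
actor_loss.backward()
optimizer.step()
```

The guarantees described in the talk come from how WSAC trains and weights these components, which the sketch above does not attempt to reproduce.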
Arnob Ghosh has been an Assistant Professor in the Department of Electrical and Computer Engineering at the New Jersey Institute of Technology since September 2023. Before that, he was a Research Scientist in the Department of Electrical and Computer Engineering at The Ohio State University, working with Ness Shroff. He obtained his Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania, USA. Prior to joining OSU, he was an Assistant Professor at IIT Delhi.
Arnob Ghosh has worked in diverse areas united by the theme of efficient decision-making in interconnected systems. His current research interests include reinforcement learning, game theory, online learning, and decision theory, and applying these tools to engineering applications such as cyber-physical systems, wireless communication, and dynamical systems. He has published in several top-tier journals and conferences, including top ML venues such as NeurIPS, AISTATS, and ICLR. His paper was a runner-up for the Best Paper Award at IEEE WiOpt 2022. He has also been recognized as a top reviewer at leading ML conferences.