Vulnerability and Robustness of Deep Reinforcement Learning Agents

Sunday, April 26, 2020 - 9:00am - 9:30am
Keller 3-180
Soumik Sarkar (Iowa State University)
With ubiquitous sensing and high-performance computing capabilities, Machine Learning (ML) is poised to play a major role in transforming many engineering systems into sophisticated Cyber-Physical Systems (CPS) that demand complex information processing to extract actionable information. However, the growing prospect of ML models being used in autonomous CPSs (e.g., self-driving cars) has raised concerns around the safety and robustness of autonomous agents. Recent work on creating adversarial attacks has shown that it is computationally feasible for a bad actor to fool a machine/deep learning model into behaving sub-optimally. While the vulnerability of Deep Convolutional Neural Network (CNN) models is critical for the perception of autonomous systems, the exploration of adversarial attacks on Deep Reinforcement Learning (DRL) agents is important in the context of autonomous decision-making. I will discuss some of our recent attempts to address the vulnerability and robustness of DRL agents from a CPS perspective, which can substantially reduce the risks of using ML in autonomous decision-making problems.
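The abstract does not specify a particular attack, but a common illustration of the kind of vulnerability discussed here is a gradient-based perturbation such as the Fast Gradient Sign Method (FGSM), which in the DRL setting is often applied to the agent's observations. The sketch below is purely illustrative (a toy linear classifier with hand-picked weights, not any model from the talk) and shows how a small, bounded perturbation in the direction of the loss gradient can flip a model's decision:

```python
import numpy as np

# Illustrative sketch of an FGSM-style adversarial perturbation on a
# toy logistic classifier. All weights and inputs are made up for
# demonstration; this is not a model or result from the talk.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps along the sign of the loss gradient w.r.t. x.

    For a logistic model p = sigmoid(w . x + b) with cross-entropy
    loss, the input gradient is (p - y_true) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy model and a clean input that the model classifies as positive.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])               # w . x + b = 1.5 > 0 -> class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)

print(sigmoid(np.dot(w, x) + b) > 0.5)      # clean input:       True
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # perturbed input:   False
```

The same idea scales to deep networks (the gradient is obtained by backpropagation) and to DRL, where perturbing the observation at each timestep can steer the policy toward sub-optimal actions.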