Recent work exploring connections between deep learning and partial differential equations (PDEs) has the potential to address fundamental issues in both fields. One such issue in deep learning is adversarial attacks, in which an attacker adds imperceptible noise to an image, causing the network to misclassify it with high confidence. Adversarial attacks pose a significant challenge in application domains where security is paramount, such as video surveillance and self-driving cars, and current approaches to adversarially robust training lead to poor classification accuracy. A second fundamental issue is the connection between stochastic gradient descent (SGD) and generalization in deep learning. At the same time, ideas and algorithms from machine learning have been propagating into other fields, providing, for example, scalable algorithms for solving high-dimensional PDEs and Nesterov-style accelerated PDE solvers. There are also interesting new directions concerned with learning PDEs from data.
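The attack mechanism described above can be illustrated with a minimal toy sketch: perturbing an input in the direction of the sign of the loss gradient (the idea behind gradient-sign attacks such as FGSM) flips a classifier's decision. Everything here, including the hand-picked weights and the step size, is an illustrative assumption and not part of the workshop description; a real attack would use a far smaller perturbation on a trained network.

```python
import numpy as np

# Toy gradient-sign ("FGSM"-style) adversarial perturbation on a hand-built
# logistic classifier. Weights, input, and step size are illustrative only.

w = np.array([2.0, -1.0])  # fixed classifier weights (assumed, not trained)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # P(class 1 | x) under the toy logistic model
    return sigmoid(w @ x + b)

x = np.array([0.3, 0.1])  # clean input; true label y = 1
y = 1.0

# Gradient of the cross-entropy loss with respect to the INPUT x:
#   dL/dx = (sigmoid(w @ x + b) - y) * w
grad_x = (predict(x) - y) * w

# Step in the direction of the gradient's sign to increase the loss.
# eps is deliberately large here so the toy example flips the decision.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # > 0.5: clean input classified as class 1
print(predict(x_adv))  # < 0.5: perturbed input flips to class 0
```

The key point is that the perturbation is chosen adversarially, aligned with the loss gradient, rather than being random noise of the same magnitude.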
This workshop aims to build a bridge between PDEs and deep learning, encouraging ideas to flow in both directions and leading to novel algorithms in both domains. Subsequent working groups will focus on algorithms for defending against adversarial attacks and on understanding the connection between SGD and viscosity solutions.