Hierarchical deep generative networks for Bayesian inverse problems

Monday, October 22, 2018 - 2:00pm - 2:50pm
Keller 3-180
Pengchuan Zhang (Microsoft Research)
Deep generative networks have achieved great success in high-dimensional density approximation, especially in approximating the distributions of natural images and language. In this talk, we propose to leverage this approximation capability to approximate posterior distributions in Bayesian inverse problems (BIPs). To train deep generative networks, we propose a class of methods that can be combined with any sample-based Bayesian inference algorithm and that learn from the incremental improvement between two consecutive steps of the sample-based method. To stabilize training, we further propose a hierarchical deep generative model that first generates a low-resolution posterior and then gradually adds details; a hierarchical training loss is designed accordingly. In our experiments, we compare the performance of our training methods when combined with different sample-based algorithms, including various MCMC algorithms, the ensemble Kalman filter, and Stein variational gradient descent. The results show promise for applying deep generative networks to high-dimensional BIPs.
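The "learn from the incremental improvement between two consecutive steps" idea can be illustrated, very loosely, by amortizing one step of Stein variational gradient descent (SVGD) into a generator. The sketch below is a toy illustration, not the talk's actual method or architecture: it assumes a 2-D Gaussian posterior with a known score, uses a linear generator in place of a deep network, and regresses the generator onto the particles produced by a single SVGD update. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior (an assumption for illustration): N(mu, I), so the
# score grad log p(x) = -(x - mu) is available in closed form.
mu = np.array([3.0, -2.0])

def grad_log_p(x):
    return -(x - mu)

def svgd_direction(x, h=1.0):
    """One SVGD update direction with an RBF kernel of bandwidth h.

    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j)
                             + grad_{x_j} k(x_j, x_i) ]
    """
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]            # diff[i, j] = x_i - x_j
    k = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))  # symmetric kernel matrix
    attract = k @ grad_log_p(x)                      # drives particles toward mass
    repulse = np.einsum('ij,ijd->id', k, diff) / h**2  # keeps particles spread out
    return (attract + repulse) / n

# Linear "generator" G(z) = z W^T + b standing in for a deep network.
W, b = np.eye(2), np.zeros(2)
eps, lr = 0.1, 0.5

for _ in range(1500):
    z = rng.standard_normal((100, 2))
    x = z @ W.T + b                       # current generated particles
    target = x + eps * svgd_direction(x)  # "improved" particles after one SVGD step
    # Regress the generator onto the improved particles: one gradient
    # step on the squared error between G(z) and the moved samples.
    err = x - target                      # equals -eps * phi(x)
    W -= lr * (err.T @ z) / len(z)
    b -= lr * err.mean(axis=0)
```

After training, samples drawn through the generator should concentrate around the posterior mean, with the kernel repulsion term preventing them from collapsing to a point; the hierarchical coarse-to-fine construction and training loss described in the talk are not reproduced here.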