Distributed and Learning Based Methods for Optimizing Non-convex Problems

Tuesday, January 16, 2018 - 1:25pm - 2:25pm
Lind 409
Mingyi Hong (University of Minnesota, Twin Cities)
In this work, we discuss recent progress on distributed and learning-based approaches for non-convex optimization. In the first part of the talk, we consider a distributed setting with multiple connected agents who jointly optimize a possibly non-convex objective function. We propose a proximal primal-dual algorithm that allows the agents to carry out the computation in a distributed manner, utilizing only local information. We provide a tight complexity analysis and discuss a few extensions and open problems. In the second part, we discuss a learning-based approach for non-convex optimization. The key idea is to treat the input and output of an optimization algorithm as an unknown non-linear mapping and to use a deep neural network (DNN) to approximate it. We characterize a class of 'learnable algorithms' and then design DNNs to approximate some algorithms of interest in wireless communications. Extensive numerical simulations are provided to demonstrate the performance of the proposed approach.
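As a rough illustration of the first idea (and not the specific algorithm analyzed in the talk), the following numpy sketch runs a proximal primal-dual update on a toy non-convex consensus problem over a ring graph. Each agent touches only its own variable and those of its neighbors, since A.T @ A is the graph Laplacian. The local costs f_i, the ring topology, and the step sizes alpha, rho and proximal weight beta are illustrative assumptions.

import numpy as np

# Toy consensus problem: n agents on a ring jointly minimize
#   sum_i f_i(x),  f_i(x) = log(1 + (x - a_i)^2)   (smooth, non-convex)
# written in constraint form A x = 0, where A is the edge-node
# incidence matrix of the communication graph.
n = 8
rng = np.random.default_rng(0)
a = rng.normal(size=n)                      # local data held by each agent

# Incidence matrix of the ring graph (one row per edge)
A = np.zeros((n, n))
for i in range(n):
    A[i, i], A[i, (i + 1) % n] = 1.0, -1.0

def grad_f(x):
    d = x - a
    return 2.0 * d / (1.0 + d ** 2)         # gradient of each local cost

x = rng.normal(size=n)                      # primal variables (one per agent)
lam = np.zeros(n)                           # dual variables (one per edge)
alpha, rho, beta = 0.05, 0.5, 1.0           # assumed step sizes / proximal weight

for r in range(3000):
    # Primal step: local gradient + dual feedback + proximal consensus term.
    # A.T @ A is the graph Laplacian, so only neighbor information is used.
    x = x - alpha * (grad_f(x) + A.T @ lam + beta * (A.T @ (A @ x)))
    # Dual ascent on the consensus constraint A x = 0
    lam = lam + rho * (A @ x)

print("consensus gap:", np.abs(A @ x).max())
print("agents' estimates:", np.round(x, 3))

To illustrate the second, learning-based idea, the PyTorch sketch below treats a simple baseline "algorithm" (grid search for sum-rate-optimal power control in a 2-user interference channel) as a black-box input-output mapping and trains a small DNN to approximate it. The 2-user setup, the grid-search baseline, and all network and training parameters are assumptions made for illustration; the algorithms considered in the talk are more involved.

import numpy as np
import torch
import torch.nn as nn

# Baseline "algorithm" to be learned: exhaustive grid search for the
# sum-rate-optimal transmit powers of a 2-user interference channel.
rng = np.random.default_rng(0)
sigma = 0.1                                   # noise power (assumed)
grid = np.linspace(0.0, 1.0, 21)              # candidate power levels
P1, P2 = np.meshgrid(grid, grid, indexing="ij")

def solve_instance(h):
    """Return the power pair maximizing the 2-user sum rate."""
    h11, h12, h21, h22 = h
    rate = (np.log2(1 + h11 * P1 / (sigma + h12 * P2))
            + np.log2(1 + h22 * P2 / (sigma + h21 * P1)))
    i, j = np.unravel_index(rate.argmax(), rate.shape)
    return np.array([grid[i], grid[j]])

# Supervised dataset: inputs = channel gains, labels = algorithm output
H = rng.exponential(size=(2000, 4))           # random channel gains
Y = np.stack([solve_instance(h) for h in H])
X_t = torch.tensor(H, dtype=torch.float32)
Y_t = torch.tensor(Y, dtype=torch.float32)

# Small MLP approximating the unknown non-linear input-output mapping
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2), nn.Sigmoid())   # powers lie in [0, 1]
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X_t), Y_t)
    loss.backward()
    opt.step()

print("final training MSE:", float(loss))

Once trained, the network replaces the iterative solver at run time: a single forward pass produces the powers, which is the source of the speedups this line of work targets.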

Bio
Mingyi Hong received his Ph.D. degree from the University of Virginia in 2011. Since August 2017, he has been an Assistant Professor in the Department of Electrical and Computer Engineering, University of Minnesota. From 2014 to 2017 he was a Black & Veatch Faculty Fellow and an Assistant Professor in the Department of Industrial and Manufacturing Systems Engineering, Iowa State University. He serves on the IEEE Signal Processing for Communications and Networking (SPCOM) and Machine Learning for Signal Processing (MLSP) Technical Committees. His work was selected as a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization by the Mathematical Optimization Society in 2013 and 2016. His research interests are primarily in the fields of optimization theory and its applications in signal processing and machine learning.