Talk Title: Towards Understanding Overparameterized Deep Neural Networks: From Optimization To Generalization
Deep learning has achieved tremendous success in many applications. However, why it is so powerful remains poorly understood. One of the mysteries is that deep neural networks used in practice are often over-parameterized: they can fit even random labels to the input data, yet still achieve very small test error when trained with the real labels. To understand this phenomenon, in this talk I will first show that, with over-parameterization and proper random initialization, gradient-based methods can find global minima of the training loss for DNNs with the ReLU activation function. I will then show that stochastic gradient descent with proper random initialization can train a sufficiently over-parameterized DNN to achieve small generalization error. I will conclude by discussing the implications, challenges, and future directions of this line of research.
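The memorization phenomenon the abstract describes is easy to reproduce. Below is a minimal NumPy sketch (not the speaker's construction; all names and hyperparameters are illustrative): a heavily over-parameterized two-layer ReLU network, with the second layer fixed as in many width-based analyses, trained by full-batch gradient descent on completely random ±1 labels. Despite the labels carrying no signal, the training loss is driven close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 samples in 10 dimensions, hidden width 1000: far more
# parameters than data points, i.e. over-parameterized.
n, d, m = 20, 10, 1000
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
y = rng.choice([-1.0, 1.0], size=n)            # purely random labels

# Two-layer ReLU net: f(x) = a^T relu(W x) / sqrt(m).
# Only W is trained; the output layer a is fixed random signs.
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

lr = 0.3
for step in range(3000):
    pre = X @ W.T                     # (n, m) pre-activations
    H = np.maximum(pre, 0.0)          # ReLU
    pred = H @ a / np.sqrt(m)
    err = pred - y
    loss = 0.5 * np.mean(err ** 2)
    mask = (pre > 0).astype(float)    # ReLU derivative
    # dL/dW[r] = (1/(n*sqrt(m))) * sum_i err_i * a_r * mask[i,r] * x_i
    grad_W = ((err[:, None] * mask) * a[None, :] / np.sqrt(m)).T @ X / n
    W -= lr * grad_W

print(f"final training loss on random labels: {loss:.2e}")
```

Running this, the squared loss on the random labels shrinks by orders of magnitude, illustrating that gradient descent reaches (near-)zero training error in the over-parameterized regime — the optimization side of the puzzle; the generalization side is why the same procedure, trained on real labels, does not overfit.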
Quanquan Gu is an Assistant Professor of Computer Science at the University of California, Los Angeles. His current research is in the area of artificial intelligence and machine learning, with a focus on developing and analyzing nonconvex optimization algorithms for machine learning to understand large-scale, dynamic, complex, and heterogeneous data, and on building the theoretical foundations of deep learning. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2014. He has received several awards for his work, including the Yahoo! ACE Award, a Simons Berkeley Research Fellowship, an NSF CAREER Award, a Salesforce Deep Learning Research Award, and an Adobe Data Science Research Award. He has published 100+ peer-reviewed papers in top-tier machine learning venues such as JMLR, MLJ, ICML, NeurIPS, AISTATS, and UAI. He also serves as a section editor for PLOS ONE; an area chair for ICML, NeurIPS, ICLR, AISTATS, and AAAI; and a senior program committee member for IJCAI, KDD, and ACML.