
Statistical Learning Theory

Tuesday, September 17, 2019 - 9:00am - 10:00am
Andrew Barron (Yale University)
For deep nets we examine contraction properties of complexity for each layer of the network. For any ReLU network there is, without loss of generality, a representation in which the sum of the absolute values of the weights into each node is exactly 1, and the input layer variables are multiplied by a value V coinciding with the total variation of the path weights. Implications are given for Gaussian complexity, Rademacher complexity, statistical risk, and metric entropy, all of which are shown to be proportional to V.
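As a quick check of the normalization claim above, here is a minimal numerical sketch for a bias-free single-hidden-layer ReLU network (the one-layer setup, random data, and variable names are illustrative assumptions; the abstract states the result for general ReLU networks). Using positive homogeneity of the ReLU, it rescales each hidden node's incoming weights to unit l1 norm, the outgoing weights to unit l1 norm, and pushes the remaining factor V, the total variation of the path weights, onto the input variables.

    import numpy as np

    rng = np.random.default_rng(0)

    # Bias-free one-hidden-layer ReLU net: f(x) = sum_j v[j] * relu(w_j . x)
    d, m = 5, 8                      # input dimension, hidden width
    W = rng.normal(size=(m, d))      # incoming weights w_j (rows)
    v = rng.normal(size=m)           # outgoing weights v_j

    relu = lambda z: np.maximum(z, 0.0)

    def f(x, W, v, scale=1.0):
        """Evaluate sum_j v[j] * relu(w_j . (scale * x))."""
        return v @ relu(W @ (scale * x))

    # Normalize: incoming weights of each hidden node to unit l1 norm,
    # outgoing weights to unit l1 norm, and multiply the inputs by V,
    # the total variation of the path weights.
    row_norms = np.abs(W).sum(axis=1)        # ||w_j||_1
    W_bar = W / row_norms[:, None]           # unit-l1 incoming weights
    path_weights = v * row_norms             # v_j * ||w_j||_1 for each path
    V = np.abs(path_weights).sum()           # total variation of path weights
    c = path_weights / V                     # unit-l1 outgoing weights

    x = rng.normal(size=d)
    print(f(x, W, v))                 # original network
    print(f(x, W_bar, c, scale=V))    # normalized network, inputs scaled by V
    # The two outputs agree by positive homogeneity of the ReLU.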
Thursday, March 29, 2012 - 5:30pm - 6:15pm
Nathan (Nati) Srebro (Toyota Technological Institute at Chicago)
I will discuss deep connections between Statistical Learning, Online Learning, and Optimization. I will show that there is a tight correspondence between the sample size required for learning and the number of local oracle accesses required for optimization, and that the same measures of complexity (e.g. the fat-shattering dimension or Rademacher complexity) control both of them. Furthermore, I will show how the Mirror Descent method, and in particular its stochastic/online variant, is in a strong sense universal for online learning.
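For the Mirror Descent method mentioned above, here is a minimal sketch of its online/stochastic variant with the entropic mirror map (exponentiated gradient) over the probability simplex; the linear losses and random data below are illustrative assumptions, not content from the talk.

    import numpy as np

    def mirror_descent_simplex(grad_fn, d, T, eta=0.1):
        """Online/stochastic mirror descent with the entropic mirror map
        (exponentiated gradient) over the probability simplex.

        grad_fn(t, w) returns a (sub)gradient of the loss observed at
        round t, evaluated at the current iterate w.
        """
        w = np.full(d, 1.0 / d)            # uniform starting point
        avg = np.zeros(d)
        for t in range(T):
            g = grad_fn(t, w)
            w = w * np.exp(-eta * g)       # multiplicative (mirror) update
            w /= w.sum()                   # KL/Bregman projection onto the simplex
            avg += w
        return avg / T                     # averaged iterate

    # Illustrative use: linear losses <z_t, w> with random z_t (assumed data).
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(200, 10))
    w_bar = mirror_descent_simplex(lambda t, w: Z[t], d=10, T=200, eta=0.05)
    print(w_bar)

With linear losses this multiplicative update is the familiar exponentiated-gradient / multiplicative-weights scheme, a standard example of mirror descent adapting to the geometry of the simplex.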