NIPS: Spotlight Session 5 – Regression and Time Series Spotlights
M. Lopes A Residual Bootstrap for High-Dimensional Regression with Near Low-Rank Designs B. McWilliams, G. Krummenacher, M. Lucic, J. Buhmann Fast and Robust Least Squares Estimation in Corrupted Linear Models M. Bahadori, R. Yu, Y.…
NIPS: Oral Session 6 – Nishant A. Mehta
From Stochastic Mixability to Fast Rates Empirical risk minimization (ERM) is a fundamental learning rule for statistical learning problems in which the data are generated according to some unknown distribution P; ERM returns a hypothesis f…
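As context for the ERM rule named in the abstract, here is a minimal toy sketch (not from the talk; the hypothesis class, threshold grid, and labels are all illustrative): ERM over a finite class of 1-D threshold classifiers simply returns the hypothesis with smallest average loss on the sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: X uniform on [0, 1], labels sign(x - 0.3) (noise-free).
X = rng.uniform(0.0, 1.0, size=200)
y = np.sign(X - 0.3)

# Finite hypothesis class: threshold classifiers h_t(x) = sign(x - t).
thresholds = np.linspace(0.0, 1.0, 101)

def empirical_risk(t):
    # Average 0-1 loss of h_t on the sample.
    return float(np.mean(np.sign(X - t) != y))

# ERM returns the hypothesis with the smallest empirical risk.
t_hat = thresholds[int(np.argmin([empirical_risk(t) for t in thresholds]))]
```

With noise-free labels the minimizer sits near the true threshold 0.3; the paper's question is how fast such a minimizer's excess risk decays with the sample size.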
NIPS: Oral Session 7 – Odalric-Ambrym Maillard
In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel p. In many problems, a good approximation of p is not needed. For instance, if…
NIPS: Spotlight Session 8 – GP, Kernel, Sampling, and Classification Spotlights
G. Patrini, R. Nock, T. Caetano, P. Rivera (Almost) No Label No Cry O. Koyejo, N. Natarajan, P. Ravikumar, I. Dhillon Consistent Binary Classification with Generalized Performance Metrics D. Steinberg, E. Bonilla Extended and Unscented…
NIPS: Oral Session 5 – Alexandros G. Dimakis
Sparse Polynomial Learning and Graph Sketching Let f : {−1,1}^n → R be a polynomial with at most s non-zero real coefficients. We give an algorithm for exactly reconstructing f given random examples from the uniform distribution on {−1,1}^n…
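One standard route to such a reconstruction is Fourier analysis on {−1,1}^n: each coefficient of f equals E[f(x) · ∏_{i∈S} x_i], which can be estimated by a sample mean over uniform examples. A toy sketch of that idea (the specific polynomial and sample size are illustrative; the paper's algorithm is a more sample-efficient construction than this brute-force scan):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5

# A sparse polynomial on {-1,1}^n: f(x) = 2*x0*x2 - 3*x4.
def f(x):
    return 2.0 * x[0] * x[2] - 3.0 * x[4]

# Uniform random examples (x, f(x)).
X = rng.choice([-1.0, 1.0], size=(20000, n))
y = f(X.T)  # vectorized over the sample

def est_coeff(S):
    # Fourier coefficient hat{f}(S) = E[f(x) * prod_{i in S} x_i],
    # estimated by the sample mean.
    chi = X[:, list(S)].prod(axis=1) if S else np.ones(len(X))
    return float(np.mean(y * chi))

# Brute-force scan over monomials of degree <= 2: only the two true
# monomials survive the threshold.
support = [S for d in range(3) for S in itertools.combinations(range(n), d)
           if abs(est_coeff(S)) > 0.5]
```

The scan over all low-degree monomials is exponential in the degree; avoiding it for general s-sparse polynomials is exactly where a cleverer algorithm is needed.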
NIPS: Oral Session 5 – John Carlos Baez
Networks in Climate Science El Niño is a powerful but irregular climate cycle that has huge consequences for agriculture and perhaps global warming. Predicting its arrival more than 6 months ahead of time has…
NIPS: Oral Session 8 – Brooks Paige
Asynchronous Anytime Sequential Monte Carlo We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional sequential Monte Carlo algorithms that is amenable…
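For context on what the particle cascade is an alternative to, here is a minimal synchronous bootstrap particle filter (standard sequential Monte Carlo) on a toy 1-D linear-Gaussian model; the model, seed, and particle count are illustrative, and this is the traditional baseline, not the paper's asynchronous algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy state-space model:
#   x_t = 0.9 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T, N = 50, 1000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
obs = x_true + rng.normal(0.0, 0.5, size=T)

# Bootstrap particle filter: propagate, weight, resample -- note every
# particle must synchronize at the resampling step each time step,
# which is the barrier an asynchronous scheme relaxes.
particles = rng.normal(0.0, 1.0, size=N)
estimates = []
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N)      # propagate
    logw = -0.5 * ((obs[t] - particles) / 0.5) ** 2           # weight by likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(float(np.sum(w * particles)))            # posterior mean
    particles = particles[rng.choice(N, size=N, p=w)]         # resample
```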
NIPS: Oral Session 1 – Yurii Nesterov
Subgradient Methods for Huge-Scale Optimization Problems We consider a new class of huge-scale problems, the problems with sparse subgradients. The most important functions of this type are piece-wise linear. For optimization problems with uniform sparsity…
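A piecewise-linear function is nonsmooth at its kinks, so gradient descent does not directly apply; the subgradient method substitutes any valid subgradient and uses diminishing step sizes. A minimal sketch on a toy objective (the function, steps, and iteration count are illustrative, not the huge-scale sparse setting of the talk):

```python
import numpy as np

# Nonsmooth piecewise-linear objective f(x) = max_i |x_i - c_i|;
# the gradient is undefined at the kinks, but a subgradient always exists.
c = np.array([1.0, -2.0, 3.0])

def f(x):
    return float(np.max(np.abs(x - c)))

def subgradient(x):
    # Pick one maximizing coordinate; the corresponding signed basis
    # vector is a valid subgradient of the max.
    i = int(np.argmax(np.abs(x - c)))
    g = np.zeros_like(x)
    g[i] = np.sign(x[i] - c[i])
    return g

# Subgradient method with diminishing steps 1/sqrt(k); track the best
# iterate, since f need not decrease monotonically.
x = np.zeros(3)
x_best, f_best = x.copy(), f(x)
for k in range(1, 2001):
    x = x - subgradient(x) / np.sqrt(k)
    if f(x) < f_best:
        x_best, f_best = x.copy(), f(x)
```

Note the subgradient here has a single non-zero entry; sparsity of subgradients like this is what the abstract's problem class exploits at huge scale.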
NIPS: Oral Session 2 – Anshumali Shrivastava
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS) We present the first provably sublinear time hashing algorithm for approximate Maximum Inner Product Search (MIPS). Searching with (un-normalized) inner product as the underlying…
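The key idea behind asymmetric hashing is to apply different transformations to database points and queries so that inner products become distances or cosines, where standard LSH applies. The sketch below shows one simple asymmetric transform in that spirit (pad database points to unit norm, pad the query with zero); it is illustrative and is not the paper's exact ALSH construction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Database vectors and a query; MIPS asks for argmax_i <x_i, q>.
X = rng.normal(size=(500, 8))
q = rng.normal(size=8)

# Asymmetric transform: scale the database into the unit ball, then pad
# each point with a coordinate that makes it unit-norm; pad the query
# with 0 after normalizing it. Inner products are preserved up to a
# positive constant, so the MIPS winner is exactly the cosine winner.
scale = np.linalg.norm(X, axis=1).max()
Xs = X / scale
pad = np.sqrt(np.clip(1.0 - (Xs ** 2).sum(axis=1, keepdims=True), 0.0, None))
P = np.hstack([Xs, pad])               # database transform
Q = np.append(q / np.linalg.norm(q), 0.0)  # query transform (different!)

best_mips = int(np.argmax(X @ q))
best_cos = int(np.argmax(P @ Q))       # same index: the reduction is exact
```

Because P(x) · Q(q) = (x · q) / (scale · ||q||), the argmax is unchanged, and any cosine/nearest-neighbor hash family can now index the transformed database.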
NIPS: Spotlight Session 1 – Optimization Spotlights
W. Su, S. Boyd, E. Candes A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights V. Srikumar, C. Manning Learning Distributed Representations for Structured Output Prediction J. Hernández-Lobato, M. Hoffman, Z. Ghahramani…