Abstract:
Stochastic convex optimization is a model for optimization in a stochastic environment that is also motivated by the theory of machine learning. Empirical risk minimization (ERM) is a fundamental algorithmic principle. We study the sample complexity of ERM in stochastic convex optimization in the standard Euclidean setup in d-dimensional space with accuracy parameter ϵ > 0. A folklore example shows that the sample complexity is at least Ω(1/ϵ²). Shalev-Shwartz, Shamir, Srebro and Sridharan proved that it is at least Ω(log d) for ϵ = 1/2. Later on, Feldman proved that it is at least Ω(d/ϵ). Proving asymptotically stronger lower bounds was left as an open problem. We prove that, in fact, the sample complexity is at most Õ(d/ϵ + 1/ϵ²). The proof builds a mechanism for controlling stochastic convex optimization problems.