General Tail Bounds for Non-Smooth Stochastic Mirror Descent

Khaled Eldowa · Andrea Paudice

MR1 & MR2 - Number 123
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT


In this paper, we provide novel tail bounds on the optimization error of Stochastic Mirror Descent for convex and Lipschitz objectives. Our analysis extends the existing tail bounds from the classical light-tailed Sub-Gaussian noise case to heavier-tailed noise regimes. We study the optimization error of the last iterate as well as the average of the iterates. We instantiate our results in two important cases: a class of noise with exponential tails and one with polynomial tails. A remarkable feature of our results is that they do not require an upper bound on the diameter of the domain. Finally, we support our theory with illustrative experiments that compare the behavior of the average of the iterates with that of the last iterate in heavy-tailed noise regimes.
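The comparison described in the abstract can be illustrated with a minimal simulation. The sketch below runs Stochastic Mirror Descent with the Euclidean mirror map (in which case SMD reduces to stochastic subgradient descent) on the convex, Lipschitz objective f(x) = ||x||_1, with heavy-tailed Student-t gradient noise standing in for a polynomial-tailed noise regime. All parameter choices (dimension, step size, noise distribution) are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_subgradient(x, df=3.0):
    """Subgradient of ||x||_1 corrupted by heavy-tailed (Student-t) noise.

    df=3 gives finite variance but polynomially decaying tails; this is an
    illustrative stand-in for the heavy-tailed regimes studied in the paper.
    """
    return np.sign(x) + rng.standard_t(df, size=x.shape)

def smd(x0, steps=5000, eta=0.01):
    """Run SMD with the Euclidean mirror map.

    Returns both the last iterate and the running average of the iterates,
    the two solution candidates compared in the paper's experiments.
    """
    x = x0.copy()
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = x - eta * noisy_subgradient(x)  # mirror step (Euclidean case)
        avg += (x - avg) / t                # online average of the iterates
    return x, avg

d = 10
f = lambda x: np.abs(x).sum()  # convex, 1-Lipschitz per coordinate (in l_inf)
x_last, x_avg = smd(np.full(d, 5.0))
print(f"f(last iterate) = {f(x_last):.3f}")
print(f"f(average)      = {f(x_avg):.3f}")
```

Running this repeatedly over different seeds would give an empirical picture of the two estimators' error distributions under heavy-tailed noise; a single run only shows one sample from each.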