

Understanding Progressive Training Through the Framework of Randomized Coordinate Descent

Rafał Szlendak · Elnur Gasanov · Peter Richtarik

MR1 & MR2 - Number 151
Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT


We propose a Randomized Progressive Training algorithm (RPT) – a stochastic proxy for the well-known Progressive Training method (PT) (Karras et al., 2017). Originally designed to train GANs (Goodfellow et al., 2014), PT was proposed as a heuristic, with no convergence analysis even for the simplest objective functions. In contrast, to the best of our knowledge, RPT is the first PT-type algorithm with rigorous theoretical guarantees for general smooth objective functions. We cast our method into the established framework of Randomized Coordinate Descent (RCD) (Nesterov, 2012; Richtarik & Takac, 2014), for which (as a by-product of our investigations) we also propose a novel, simple and general convergence analysis encapsulating strongly convex, convex and nonconvex objectives. We then use this framework to establish a convergence theory for RPT. Finally, we validate the effectiveness of our method through extensive computational experiments.
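To make the RCD framework referenced above concrete, here is a minimal sketch of Randomized Coordinate Descent on a smooth quadratic objective. This is not the authors' RPT method or their analysis; it is a generic illustration of the classical RCD template (pick a coordinate uniformly at random, take a step along it scaled by that coordinate's smoothness constant), with all problem data chosen purely for illustration.

```python
import random

def rcd(A, b, x0, steps, seed=0):
    """Randomized Coordinate Descent on f(x) = 0.5 * x^T A x - b^T x,
    where A is symmetric positive definite. Each step samples a
    coordinate i uniformly at random and takes an exact minimization
    step along it; for this quadratic, the coordinate-wise smoothness
    constant is L_i = A[i][i], so the step size is 1 / A[i][i]."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        i = rng.randrange(n)
        # i-th partial derivative of f at x
        grad_i = sum(A[i][j] * x[j] for j in range(n)) - b[i]
        x[i] -= grad_i / A[i][i]
    return x

# Toy instance: the minimizer of f solves A x = b.
A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]
x = rcd(A, b, x0=[0.0, 0.0], steps=500)
# converges to the solution of A x = b, here [0.2, 0.4]
```

For strongly convex objectives, iterations of this form converge linearly in expectation, which is the kind of guarantee the paper's unified analysis covers.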
