Poster

Wasserstein Distributional Learning via Majorization-Minimization

Chengliang Tang · Nathan Lenssen · Ying Wei · Tian Zheng

Auditorium 1 Foyer 110

Abstract: Learning function-on-scalar predictive models for conditional densities and identifying factors that influence the entire probability distribution are vital tasks in many data-driven applications. We present an efficient Majorization-Minimization optimization algorithm, Wasserstein Distributional Learning (WDL), that trains Semi-parametric Conditional Gaussian Mixture Models (SCGMM) for conditional density functions and uses the Wasserstein distance $W_2$ as a proper metric for the space of density outcomes. We further provide theoretical convergence guarantees and illustrate the algorithm using boosted machines. Experiments on synthetic data and real-world applications demonstrate the effectiveness of the proposed WDL algorithm.
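To make the two ingredients named in the abstract concrete, the following is a minimal illustrative sketch, not the authors' WDL implementation: EM for a one-dimensional Gaussian mixture (a classic instance of majorization-minimization) together with a quantile-matching estimate of the $W_2$ distance used to compare the fitted density against held-out samples. The function names, the unconditional (non-covariate) setup, and the two-component example are all assumptions made for illustration.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200, seed=0):
    """Fit a 1-D Gaussian mixture by EM (illustrative sketch).

    EM is a classic majorization-minimization algorithm: the E-step
    builds a surrogate upper bound on the negative log-likelihood
    (majorization), and the M-step minimizes that surrogate in
    closed form (minimization).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    mu = rng.choice(x, size=k, replace=False)   # initialize means at data points
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibilities under current parameters.
        log_p = (np.log(pi)
                 - 0.5 * np.log(2.0 * np.pi * var)
                 - 0.5 * (x[:, None] - mu) ** 2 / var)
        r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means/variances minimize the surrogate.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-8)  # guard against component collapse
        pi = nk / x.size
    return pi, mu, var

def w2_1d(a, b):
    """2-Wasserstein distance between two equal-size 1-D samples.

    In one dimension, W2 reduces to matching sorted observations,
    i.e. the L2 difference of empirical quantile functions.
    """
    a, b = np.sort(a), np.sort(b)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical usage: fit a mixture, sample from it, and score the
# fit against the data with W2.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
pi, mu, var = em_gmm_1d(data)
comp = rng.choice(len(pi), size=data.size, p=pi)
draws = rng.normal(mu[comp], np.sqrt(var[comp]))
print("W2(data, model draws) =", w2_1d(data, draws))
```

The sketch is unconditional for brevity; the SCGMM of the paper additionally makes the mixture parameters functions of covariates, which is where the function-on-scalar structure and the boosted-machine illustration enter.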
