Poster
Unconstrained MAP Inference, Exponentiated Determinantal Point Processes, and Exponential Inapproximability
Naoto Ohsaka
Keywords: [ Probabilistic Methods ] [ Approximate Inference ]
Abstract:
We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., finding a principal submatrix of maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs via an exponent parameter $p$.
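For reference, with $L$ an $n \times n$ positive semi-definite kernel and $L_S$ its principal submatrix indexed by $S \subseteq \{1,\dots,n\}$ (standard DPP notation, assumed here rather than quoted from the paper), the two quantities in question are

$$\max_{S \subseteq \{1,\dots,n\}} \det(L_S) \qquad \text{and} \qquad Z_p = \sum_{S \subseteq \{1,\dots,n\}} \det(L_S)^p,$$

the latter being the normalizing constant of the E-DPP distribution $\Pr(S) \propto \det(L_S)^p$.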
We prove the following complexity-theoretic hardness results that explain the difficulty in approximating unconstrained MAP inference and the normalizing constant for E-DPPs.
(1) Unconstrained MAP inference for an $n \times n$ matrix is NP-hard to approximate within a $2^{\beta n}$-factor, where $\beta = 10^{-10^{13}}$. This result improves upon a $(9/8-\epsilon)$-factor inapproximability given by Kulesza and Taskar (2012).
(2) The normalizing constant for E-DPPs of any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is NP-hard to approximate within a $2^{\beta pn}$-factor. This gives a(nother) negative answer to open questions posed by Kulesza and Taskar (2012); Ohsaka and Matsuoka (2020).
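As a concrete illustration only (not the paper's method), the brute-force sketch below computes both quantities exactly on a toy kernel by enumerating all $2^n$ subsets; the function names and the toy kernel are hypothetical. The exponential enumeration is precisely what the hardness results say cannot be avoided, even if one only asks for approximation within the stated factors.

```python
# A minimal brute-force sketch (not from the paper) illustrating the two
# quantities on a toy kernel: the MAP objective max_S det(L_S) and the
# E-DPP normalizing constant Z_p = sum_S det(L_S)^p. Exact computation
# enumerates all 2^n subsets, so it is only feasible for tiny n.
from itertools import combinations
import numpy as np

def map_inference_bruteforce(L):
    """Return (best_det, best_subset) over all principal submatrices of L."""
    n = L.shape[0]
    best_val, best_set = 1.0, ()  # det of the empty submatrix is 1 by convention
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            val = np.linalg.det(L[np.ix_(S, S)])
            if val > best_val:
                best_val, best_set = val, S
    return best_val, best_set

def e_dpp_normalizing_constant(L, p):
    """Z_p = sum over all subsets S of det(L_S)^p (empty set contributes 1)."""
    n = L.shape[0]
    total = 1.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            total += np.linalg.det(L[np.ix_(S, S)]) ** p
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.normal(size=(4, 4))
    L = B @ B.T  # a random positive semi-definite toy kernel
    print(map_inference_bruteforce(L))
    print(e_dpp_normalizing_constant(L, p=2))
```

Note that for $p = 1$ the normalizing constant has the standard closed form $\det(L + I)$ and is easy to compute; the hardness stated above concerns large fixed exponents $p \geq \beta^{-1}$.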