A Scalable Lift-and-Project Differentiable Approach for the Maximum Cut Problem
Abstract
We propose a scalable framework for solving the Maximum Cut (MaxCut) problem in large graphs using projected gradient ascent on quadratic objectives. Notably, while our approach is differentiable and leverages GPUs for gradient-based optimization, it is not a machine learning method and does not require training data beyond the given problem formulation. Starting from a continuous relaxation of the classical quadratic binary formulation, we present a parallelized optimization strategy that explores multiple initialization vectors in batch, offering a fast, memory-efficient alternative to traditional solvers. We analyze the relaxed objective, showing that it is convex and that its fixed points correspond to local optima, particularly at boundary values, which highlights a key challenge in non-convex optimization. To mitigate these limitations, we introduce a lifted quadratic formulation that over-parameterizes the optimization space, potentially enabling the algorithm to escape poor fixed points, and we theoretically characterize the lifted fixed points. Finally, we propose DECO, a dimension-alternating algorithm that cycles between the unlifted and lifted formulations, combining their complementary strengths with degree-based initialization and hyper-parameter search. Evaluations across various graph families show that our methods achieve results comparable or superior to recent data-intensive, data-less, and GPU-accelerated sampling methods.
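The core idea of the abstract, batched projected gradient ascent on the box relaxation of the quadratic binary formulation followed by sign rounding, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the step size, step count, number of parallel starts, and the uniform random initialization are hypothetical choices, and the lifted formulation, degree-based initialization, and hyper-parameter search of DECO are omitted.

```python
import numpy as np

def maxcut_pga(W, n_starts=32, lr=0.1, steps=200, seed=0):
    """Batched projected gradient ascent on the box relaxation of MaxCut.

    Maximizes f(x) = (1/4) * sum_ij W_ij (1 - x_i x_j) over x in [-1, 1]^n.
    All hyper-parameters here are illustrative, not the paper's settings.
    """
    n = W.shape[0]
    rng = np.random.default_rng(seed)
    # One row per random initialization; the batch dimension is what a GPU
    # implementation would parallelize over.
    X = rng.uniform(-1.0, 1.0, size=(n_starts, n))
    for _ in range(steps):
        grad = -0.5 * X @ W                       # gradient of the relaxed objective
        X = np.clip(X + lr * grad, -1.0, 1.0)     # projection back onto the box
    S = np.sign(X)                                # round to a binary cut assignment
    S[S == 0] = 1.0
    # Cut value of each rounded assignment: (1/4) * (sum(W) - s^T W s).
    cuts = 0.25 * (W.sum() - np.einsum("bi,ij,bj->b", S, W, S))
    best = np.argmax(cuts)
    return S[best], cuts[best]
```

Running many initializations in batch costs only one extra array dimension, which is what makes the approach memory-efficient relative to restarting a traditional solver many times.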