Scaled Proximal Gradient Methods for Multiobjective Optimization: Improved Linear Convergence and Nesterov's Acceleration arxiv.org/abs/2411.07253

Over the past two decades, descent methods have received substantial attention within the multiobjective optimization field. Nonetheless, both theoretical analyses and empirical evidence reveal that existing first-order methods for multiobjective optimization converge slowly, even for well-conditioned problems, due to objective imbalances. To address this limitation, we incorporate curvature information to scale each objective within the direction-finding subproblem, introducing a scaled proximal gradient method for multiobjective optimization (SPGMO). We demonstrate that the proposed method achieves improved linear convergence, exhibiting rapid convergence in well-conditioned scenarios. Furthermore, by applying small scaling factors to the linear objectives, we prove that SPGMO attains improved linear convergence for problems with multiple linear objectives. Additionally, integrating Nesterov's acceleration technique further enhances the linear convergence of SPGMO. To the best of our knowledge, this improvement in linear convergence is the first theoretical result that directly addresses objective imbalances in multiobjective first-order methods. Finally, we provide numerical experiments that validate the efficiency of the proposed methods and confirm the theoretical findings.
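
For intuition, here is a minimal sketch (in Python, not the authors' code) of the mechanism the abstract describes. In the smooth case, first-order multiobjective methods compute a common descent direction at x by solving min_d max_i <grad f_i(x), d> + 0.5*||d||^2, whose dual is a small quadratic program over the unit simplex. The sketch below scales each gradient by a per-objective curvature estimate t_i (e.g., the Lipschitz constant of grad f_i), which is one plausible reading of "scale each objective within the direction-finding subproblem"; the function name, the choice of t_i, and the fixed step are illustrative assumptions, not SPGMO itself.

import numpy as np
from scipy.optimize import minimize

def scaled_descent_direction(grads, scales):
    # grads:  (m, n) array, row i = grad f_i(x)
    # scales: (m,) curvature estimates t_i (here: Lipschitz constants, an assumption)
    # Solves the dual of  min_d max_i <grad f_i(x)/t_i, d> + 0.5*||d||^2:
    #   minimize 0.5*||sum_i lam_i * grad f_i(x)/t_i||^2 over the unit simplex,
    # then returns the primal direction d = -sum_i lam_i * grad f_i(x)/t_i.
    G = grads / scales[:, None]          # curvature-scaled gradients
    m = G.shape[0]
    dual = lambda lam: 0.5 * np.dot(lam @ G, lam @ G)
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(dual, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    return -(res.x @ G)

# Toy bi-objective problem whose curvatures differ by a factor of 100:
#   f1(x) = ||x||^2,  f2(x) = 100 * ||x - (1, 1)||^2
x = np.array([3.0, -2.0])
grads = np.array([2.0 * x, 200.0 * (x - np.array([1.0, 1.0]))])
d = scaled_descent_direction(grads, scales=np.array([2.0, 200.0]))
x = x + 0.5 * d   # fixed step for illustration; a line search is typical

Without the scaling, the second objective's gradient (two orders of magnitude larger) would dominate the subproblem; dividing by t_i lets both objectives contribute on comparable terms, which is exactly the imbalance the abstract targets.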
