Zeroth-order Stochastic Cubic Newton Method with Low-rank Hessian Estimation

This paper focuses on the problem of minimizing a finite-sum loss $\frac{1}{N}\sum_{\xi=1}^N f(\mathbf{x};\xi)$, where only function evaluations of $f(\cdot;\xi)$ are allowed. For a fixed $\xi$, which represents a (batch of) training data, the Hessian matrix $\nabla^2 f(\mathbf{x};\xi)$ is usually low-rank. We develop a stochastic zeroth-order cubic Newton method for such problems and prove its efficiency. More specifically, we show that when $\nabla^2 f(\mathbf{x};\xi)\in\mathbb{R}^{n\times n}$ has rank $r$, $\mathcal{O}\left(\frac{n}{\eta^{7/2}}\right)+\widetilde{\mathcal{O}}\left(\frac{n^2 r^2}{\eta^{5/2}}\right)$ function evaluations guarantee a second-order $\eta$-stationary point with high probability. This result improves the dependence on dimensionality over the existing state of the art. The improvement is achieved via a new Hessian estimation method, which can be computed efficiently by finite-difference operations and does not require any incoherence assumptions. Numerical experiments demonstrate the effectiveness of our algorithm.
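To make the zeroth-order setting concrete, the sketch below shows a generic finite-difference Hessian estimator that uses only function evaluations. This is a minimal illustration of the kind of finite-difference operations involved, not the paper's low-rank estimation scheme (the `fd_hessian` helper and the step size `h` are illustrative choices, not taken from the paper); the paper's method achieves its improved dimension dependence through a different, more efficient estimator.

```python
import numpy as np

def fd_hessian(f, x, h=1e-4):
    """Estimate the Hessian of f at x using only function evaluations,
    via the second-order forward difference
        H[i, j] ~ (f(x + h e_i + h e_j) - f(x + h e_i)
                   - f(x + h e_j) + f(x)) / h**2.
    Costs O(n^2) evaluations; the paper's low-rank scheme is cheaper."""
    n = x.size
    e = np.eye(n)
    fx = f(x)
    # Single-coordinate perturbations, reused across all (i, j) pairs.
    fi = np.array([f(x + h * e[i]) for i in range(n)])
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            fij = f(x + h * (e[i] + e[j]))
            H[i, j] = (fij - fi[i] - fi[j] + fx) / h**2
            H[j, i] = H[i, j]  # symmetrize
    return H

# Sanity check on a quadratic f(x) = 0.5 x^T A x, whose Hessian is A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
f = lambda x: 0.5 * x @ A @ x
H = fd_hessian(f, np.zeros(2))
print(np.allclose(H, A, atol=1e-5))
```

For a quadratic the estimate is exact up to rounding, so the check prints `True`; for general smooth functions the error is governed by the step size and third derivatives.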