reg4opt
A Python module for accelerating online optimization methods using operator regression.
This paper presents a new regularization approach, termed OpReg-Boost, to boost the convergence and lessen the asymptotic error of online optimization and learning algorithms. In particular, the paper considers online algorithms for optimization problems with a time-varying (weakly) convex composite cost. For a given online algorithm, OpReg-Boost learns the closest algorithmic map that yields linear convergence; to this end, the learning procedure hinges on the concept of operator regression. We show how to formalize the operator regression problem and propose a computationally efficient Peaceman-Rachford solver that exploits a closed-form solution of simple quadratically-constrained quadratic programs (QCQPs). Simulation results showcase the superior properties of OpReg-Boost with respect to the more classical forward-backward algorithm, FISTA, and Anderson acceleration, and with respect to its close relative convex-regression-boost (CvxReg-Boost), which is also novel but lower-performing.
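The abstract benchmarks OpReg-Boost against the classical online forward-backward (proximal-gradient) method for time-varying composite costs. As background, here is a minimal sketch of that baseline, not of the paper's solver: one forward (gradient) step plus one backward (proximal) step per time instant, illustrated on a time-varying least-squares cost with an l1 regularizer. The function names and problem data are illustrative assumptions, not part of the reg4opt API.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (closed form for the l1 term).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def online_forward_backward(A_seq, b_seq, lam=0.1, step=None):
    """Track minimizers of f_t(x) + lam * ||x||_1, where
    f_t(x) = 0.5 * ||A_t x - b_t||^2, using one forward-backward
    step per time instant t (warm-started from the previous iterate)."""
    x = np.zeros(A_seq[0].shape[1])
    trajectory = []
    for A, b in zip(A_seq, b_seq):
        # Step size below 1/L, with L the Lipschitz constant of grad f_t.
        alpha = step if step is not None else 1.0 / np.linalg.norm(A.T @ A, 2)
        grad = A.T @ (A @ x - b)                           # forward (gradient) step
        x = soft_threshold(x - alpha * grad, alpha * lam)  # backward (prox) step
        trajectory.append(x.copy())
    return trajectory
```

OpReg-Boost, by contrast, regresses a new algorithmic map close to this one that is guaranteed linearly convergent; the sketch above is only the unboosted baseline it is compared against.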