mirtorch.alg.POGM

class mirtorch.alg.POGM(f_grad: Callable, f_L: float, g_prox: Prox, max_iter: int = 10, restart=False, eval_func: Optional[Callable] = None)

Optimized Proximal Gradient Method (POGM). Ref: D. Kim and J. A. Fessler, "Adaptive restart of the optimized gradient method for convex optimization," J. Optim. Theory Appl., 178(1):240-263, July 2018.

\[\arg\min_x f(x) + g(x)\]

where the gradient of f is L-Lipschitz continuous and g has an easy-to-compute proximal operator (a usage sketch follows the attribute list below).
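As one illustration (not part of this class's API), a least-squares data term satisfies these conditions with

\[f(x) = \tfrac{1}{2}\|Ax - b\|_2^2, \qquad \nabla f(x) = A^H(Ax - b), \qquad L = \sigma_{\max}(A)^2,\]

where σ_max(A) is the largest singular value of A.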

max_iter

number of iterations to run

Type:

int

f_grad

gradient of f

Type:

Callable

f_L

Lipschitz constant L of f_grad

Type:

float

g_prox

proximal operator of g. For plain OGM (no proximal step), Const() can serve as a placeholder here

Type:

Prox

restart

restart strategy, not yet implemented

Type:

Union[…]

eval_func

user-defined function to calculate the loss at each iteration

Type:

Optional[Callable]

TODO: add the restart
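A minimal usage sketch for a LASSO-type problem, argmin_x 0.5‖Ax − b‖² + λ‖x‖₁. The import path of L1Regularizer is an assumption about the surrounding MIRTorch API; adjust it to your install, and any Prox instance works in its place.

import torch
from mirtorch.alg import POGM
from mirtorch.prox import L1Regularizer  # assumed import path; any Prox instance works

# Toy problem: argmin_x 0.5 * ||A x - b||_2^2 + lam * ||x||_1
A = torch.randn(64, 128)
b = torch.randn(64)
lam = 0.1

def f_grad(x):
    # gradient of the smooth term f(x) = 0.5 * ||A x - b||_2^2
    return A.T @ (A @ x - b)

# Lipschitz constant of f_grad: the largest eigenvalue of A^T A,
# i.e., the squared spectral norm of A
f_L = torch.linalg.matrix_norm(A, ord=2).item() ** 2

solver = POGM(f_grad, f_L, L1Regularizer(lam), max_iter=100)
x_hat = solver.run(torch.zeros(128))  # run(x0) iterates from the given initialization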

__init__(f_grad: Callable, f_L: float, g_prox: Prox, max_iter: int = 10, restart=False, eval_func: Optional[Callable] = None)

Methods

__init__(f_grad, f_L, g_prox[, max_iter, ...])

run(x0)

Run the algorithm.

Parameters:

x0 – initialization
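To monitor convergence, pass an eval_func that computes the full objective at each iterate. A sketch continuing the LASSO example above; the value returned by run() when eval_func is set may also include the recorded losses, so check the MIRTorch source for the exact convention.

def objective(x):
    # full cost f(x) + g(x) for the LASSO sketch above
    return (0.5 * (A @ x - b).pow(2).sum() + lam * x.abs().sum()).item()

solver = POGM(f_grad, f_L, L1Regularizer(lam), max_iter=100, eval_func=objective)
x_hat = solver.run(torch.zeros(128))  # may also return the logged objective values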