mirtorch.alg.FBPD

class mirtorch.alg.FBPD(g_grad: Callable, f_prox: Prox, h_prox: Prox, g_L: float, G_norm: float, G: Optional[LinearMap] = None, tau: Optional[float] = None, max_iter: int = 10, eval_func: Optional[Callable] = None, p: int = 1)

Forward-backward primal-dual (FBPD) algorithm.

Ref: L. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms," Journal of Optimization Theory and Applications, 158(2):460-479, 2013.

The cost function is:

\[\arg\min_x \; f(x) + g(x) + h(Gx)\]

where f and h are proper convex functions, and g is a convex function with an L-Lipschitz continuous gradient.
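
For orientation, the sketch below shows one iteration of the Condat primal-dual update behind FBPD: a forward (gradient) step on g combined with the adjoint of G acting on the dual variable, a backward (prox) step on f, and a dual prox step on h obtained via the Moreau identity. This is a hedged standalone illustration, not the library's internal code; it assumes a LinearMap G applied with * and exposing its adjoint as G.H, and prox objects called as prox(v, alpha).

```python
def fbpd_step(x, y, g_grad, f_prox, h_prox, G, tau, sigma):
    # Primal update: forward step on g and on G'y, backward (prox) step on f.
    x_new = f_prox(x - tau * (g_grad(x) + G.H * y), tau)
    # Dual update at the extrapolated point 2*x_new - x. The prox of the
    # conjugate h* follows from the Moreau identity:
    #   prox_{sigma h*}(v) = v - sigma * prox_{h/sigma}(v / sigma)
    v = y + sigma * (G * (2 * x_new - x))
    y_new = v - sigma * h_prox(v / sigma, 1.0 / sigma)
    return x_new, y_new
```

Condat (2013) proves convergence when the step sizes satisfy, roughly, 1/tau - sigma * ||G||^2 >= g_L / 2.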

g_grad

Callable to calculate the gradient of g

f_prox

Prox: proximal operator of f

h_prox

Prox: proximal operator of h

g_L

float, Lipschitz value of g_grad

G_norm

float, norm of G'G; can be computed with power_iter() (see the sketch after this parameter list)

G

Optional[LinearMap], the linear operator G in the composite term h(Gx)

tau

float, step size

max_iter

int, number of iterations to run

eval_func

Optional Callable, user-defined function to evaluate the cost at each iteration
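
G_norm is the spectral norm of G'G, i.e. its largest eigenvalue. The docstring points to power_iter(); as a standalone sketch of the same idea (again assuming a LinearMap applied with * and adjoint .H), a basic power iteration looks like this:

```python
import torch

def power_iter_norm(G, x0, n_iter=30):
    # Estimate ||G'G|| (the largest eigenvalue of G'G, which is symmetric
    # positive semidefinite) by power iteration: repeatedly apply G'G
    # and renormalize.
    x = x0 / torch.norm(x0)
    sigma = torch.tensor(0.0)
    for _ in range(n_iter):
        y = G.H * (G * x)      # apply G'G
        sigma = torch.norm(y)  # eigenvalue estimate, since ||x|| = 1
        x = y / sigma
    return sigma.item()
```

Overestimating this norm is safe for step-size selection; underestimating it can break convergence.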

__init__(g_grad: Callable, f_prox: Prox, h_prox: Prox, g_L: float, G_norm: float, G: Optional[LinearMap] = None, tau: Optional[float] = None, max_iter: int = 10, eval_func: Optional[Callable] = None, p: int = 1)

Methods

__init__(g_grad, f_prox, h_prox, g_L, G_norm)

run(x0)

Run the algorithm.

x0: tensor, initialization
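
Finally, a hypothetical end-to-end sketch for the simple denoising problem arg min_x 0.5*||x - b||^2 + lam*||x||_1, i.e. g is the quadratic term, f = 0, h = lam*||.||_1, and G is left as None (identity, so G_norm = 1). The plain Python callables below stand in for MirTorch Prox objects and assume the calling convention prox(v, alpha); consult the library's Prox classes for the actual interface.

```python
import torch
from mirtorch.alg import FBPD

b = torch.randn(64, 64)        # noisy observation

def g_grad(x):
    # gradient of g(x) = 0.5 * ||x - b||^2; its Lipschitz constant is 1
    return x - b

def f_prox(v, alpha=1.0):
    # f = 0, so its prox is the identity (placeholder for a mirtorch Prox)
    return v

lam = 0.1

def h_prox(v, alpha=1.0):
    # prox of h = lam * ||.||_1 is soft-thresholding (placeholder as well)
    return torch.sign(v) * torch.clamp(v.abs() - alpha * lam, min=0.0)

solver = FBPD(g_grad, f_prox, h_prox, g_L=1.0, G_norm=1.0,
              max_iter=100,
              eval_func=lambda x: (0.5 * torch.sum((x - b) ** 2)
                                   + lam * x.abs().sum()).item())
xhat = solver.run(b)           # x0 = b as the initialization
```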