mirtorch.dic.soup

mirtorch.dic.soup(Y, D0, X0, lambd, numiter, rnd=False, only_sp=False, alert=False)

Efficient patch-based dictionary learning algorithm, following: Ravishankar, S., Nadakuditi, R. R., & Fessler, J. A. (2017). Efficient sum of outer products dictionary learning (SOUP-DIL) and its application to inverse problems. IEEE Transactions on Computational Imaging, 3(4), 694-709.

Generally, the algorithm solves the following problem:

\[\arg\min_{D, X} \|Y - DX\|_F^2 + \lambda \|X\|_0.\]

(Here the ‘0-norm’ counts the number of non-zero elements across the whole matrix X.)
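
For concreteness, a minimal sketch of evaluating this objective with dense NumPy arrays (soup_objective is an illustrative name, not part of MIRTorch):

    import numpy as np

    def soup_objective(Y, D, X, lambd):
        """Evaluate ||Y - D X||_F^2 + lambd * ||X||_0."""
        fidelity = np.linalg.norm(Y - D @ X, ord='fro') ** 2
        sparsity = np.count_nonzero(X)  # the '0-norm': non-zeros over the whole matrix
        return fidelity + lambd * sparsity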

Parameters:
  • Y – [len_atom, num_patch] data matrix with (self-)training signal as columns (real/complex, numpy matrix)

  • D0 – [len_atom, num_atom] initial dictionary (real/complex, numpy matrix)

  • X0 – [num_atom, num_patch] initial sparse code (real/complex, should be numpy (sparse) matrix)

  • lambd – the sparsity weight λ in the objective above

  • numiter – number of iterations

  • rnd – when the atom update is non-unique, whether to use a random column instead of the first column

  • only_sp – update only the sparse code (the dictionary is kept fixed)

Returns:
  • D – learned dictionary

  • X – sparse code

  • DX – estimated results
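
A usage sketch with synthetic data; the import path and the three-tuple return are assumed from this page, not verified against the package:

    import numpy as np
    import scipy.sparse as sp
    from mirtorch.dic import soup  # import path assumed from the heading above

    len_atom, num_atom, num_patch = 64, 128, 1000

    # Columns of Y are (vectorized) training patches.
    Y = np.random.randn(len_atom, num_patch)

    # Initial dictionary with unit-norm atoms; all-zero initial sparse code.
    D0 = np.random.randn(len_atom, num_atom)
    D0 /= np.linalg.norm(D0, axis=0, keepdims=True)
    X0 = sp.csc_matrix((num_atom, num_patch))

    D, X, DX = soup(Y, D0, X0, lambd=0.1, numiter=50)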

Since torch.sparse is still very basic, we chose SciPy as the ad-hoc backend. Use Tensor.cpu().numpy() and torch.from_numpy to move data between torch and SciPy without memory relocation. TODO(guanhuaw@umich.edu): migrate back to torch when its CSR/CSC support matures. Because the algorithm involves frequent updates of sparse data, a GPU may not necessarily accelerate it.

2021-06, Guanhua Wang, University of Michigan
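
The zero-copy hand-off mentioned above looks like this (patches is a hypothetical CPU tensor standing in for your patch matrix):

    import numpy as np
    import torch

    patches = torch.randn(64, 1000)  # hypothetical patch matrix on CPU

    # For a CPU tensor, .cpu() is a no-op and .numpy() shares the same memory,
    # so handing Y to the SciPy-based solver involves no copy.
    Y = patches.cpu().numpy()

    # torch.from_numpy likewise wraps the array without copying, so results
    # can be moved back into the torch world for free.
    D_np = np.random.randn(64, 128)  # stand-in for a learned dictionary
    D_torch = torch.from_numpy(D_np)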