# Iterative Solver

The iterative solver minimizes an objective function that can be represented as a sum of functions in composite form:

$\theta_{*} = \text{argmin}_{\theta \in R^p} K(\theta) = \text{argmin}_{\theta \in R^p} F(\theta) + M(\theta)$

where:

• $$F(\theta) = \sum_{i=1}^{n} F_i(\theta)$$, $$\theta \in R^p$$, where each $$F_i(\theta): R^p \to R$$, $$i = 1, \ldots, n$$, is a convex, continuously differentiable $$F_i(\theta) \in C^{l \geq 1}$$ (smooth) function

• $$M(\theta): R^p \to R$$ is a convex, non-differentiable (non-smooth) function
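
As a concrete instance of this composite form, the following sketch takes least squares as the smooth part $$F$$ and an L1 penalty as the non-smooth part $$M$$. The data, the penalty weight `lam`, and all function names are illustrative assumptions, not library code:

```python
import numpy as np

# Illustrative composite objective: least squares (smooth F) plus an
# L1 penalty (non-smooth M). Data and names are assumptions for the sketch.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))   # n = 8 samples, p = 3 parameters
y = rng.standard_normal(8)
lam = 0.1                         # regularization strength (assumption)

def F_i(theta, i):
    """Smooth, convex term for sample i: half the squared residual."""
    return 0.5 * (X[i] @ theta - y[i]) ** 2

def F(theta):
    """Smooth part: sum of the per-sample terms F_i."""
    return sum(F_i(theta, i) for i in range(len(y)))

def M(theta):
    """Non-smooth part: L1 penalty, convex but not differentiable at 0."""
    return lam * np.abs(theta).sum()

def K(theta):
    """Full composite objective K = F + M."""
    return F(theta) + M(theta)
```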

## The Algorithmic Framework of an Iterative Solver

All solvers presented in the library follow a common algorithmic framework. Let $$S_t$$ be the set of intrinsic parameters of the iterative solver used to update the argument of the objective function. This set is algorithm-specific and can be empty. The solver determines the choice of $$S_0$$.

To do the computations, iterate $$t$$ from $$1$$ to $$\text{nIterations}$$:

1. Choose a set of indices without replacement $$I = \{ i_1, \ldots, i_b \}$$, $$1 \leq i_j \leq n$$, $$j = 1, \ldots, b$$, where $$b$$ is the batch size.

2. Compute the gradient $$g(\theta_{t-1}) = \nabla F_I (\theta_{t-1})$$, where $$F_I (\theta_{t-1}) = \sum_{i \in I} F_i (\theta_{t-1})$$.

3. Convergence check:

Stop if $$\frac {{\| U \|}_d} {\max (1, {\| \theta_{t-1} \|}_d )} < \epsilon$$, where $$U$$ is an algorithm-specific vector (the argument or the gradient) and $$d$$ is the algorithm-specific degree of the Lebesgue space norm.

4. Compute $$\theta_t$$ using the algorithm-specific transformation $$T$$ that updates the function’s argument:

$\theta_t = T(\theta_{t-1}, g(\theta_{t-1}), S_{t-1})$
5. Update $$S_t: S_t = U(S_{t-1})$$ where $$U$$ is an algorithm-specific update of the set of intrinsic parameters.

The result of the solver is the argument $$\theta_{.}$$ and the set of intrinsic parameters $$S_{.}$$ at the exit from the loop.
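
The five steps above can be sketched in Python with plain mini-batch SGD standing in for the algorithm-specific transformation $$T$$. The data, step size, convergence choices ($$U = g$$, $$d = 2$$), and the trivial intrinsic state are illustrative assumptions, not the library's implementation:

```python
import numpy as np

# Minimal sketch of the framework: mini-batch SGD on a noiseless
# least-squares problem. All data and hyperparameters are assumptions.
rng = np.random.default_rng(1)
X = rng.standard_normal((32, 4))
true_theta = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ true_theta

def grad_F_I(theta, I):
    """Gradient of F_I = sum_{i in I} F_i for squared-loss terms F_i."""
    return X[I].T @ (X[I] @ theta - y[I])

def solve(theta0, n_iterations=2000, batch_size=8, eps=1e-6, lr=0.02):
    theta = theta0.copy()
    for t in range(1, n_iterations + 1):
        # 1. Choose a batch of indices without replacement.
        I = rng.choice(len(y), size=batch_size, replace=False)
        # 2. Compute the mini-batch gradient g(theta_{t-1}).
        g = grad_F_I(theta, I)
        # 3. Convergence check (here U is the gradient and d = 2).
        if np.linalg.norm(g) / max(1.0, np.linalg.norm(theta)) < eps:
            break
        # 4. Transformation T: a plain gradient step.
        theta = theta - lr * g
        # 5. The intrinsic-state update is trivial for constant-step SGD.
    return theta

theta_star = solve(np.zeros(4))
```

Because the toy problem is noiseless and consistent, the iterate converges to `true_theta`; a real solver from the library would supply its own $$T$$ and intrinsic-state update.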

**Note**

You can resume the computations to get a more precise estimate of the objective function minimum. To do this, pass the results $$\theta_{.}$$ and $$S_{.}$$ of the previous run to the algorithm. By default, the solver does not return the set of intrinsic parameters. If you need it, set the `optionalResultRequired` flag for the algorithm.
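
A minimal sketch of such a warm start, assuming a toy quadratic objective and a step-count intrinsic state (both are illustrative assumptions, not the library API): the second call resumes from the argument and state returned by the first.

```python
import numpy as np

def solve(theta, S, n_iterations=50):
    """Minimize f(theta) = ||theta - target||^2 / 2 by gradient descent.
    S holds the intrinsic state, here just a step counter driving a
    decaying step size. Objective and state are assumptions for the sketch."""
    target = np.array([3.0, -1.0])
    for _ in range(n_iterations):
        S["t"] += 1
        g = theta - target                         # gradient of f
        theta = theta - g / (1.0 + 0.01 * S["t"])  # decaying step
    return theta, S

theta, S = solve(np.zeros(2), {"t": 0})       # first run
theta, S = solve(theta, S, n_iterations=50)   # resumed run, more precise
```

Passing both `theta` and `S` back in lets the resumed run continue the step-size schedule where the first run stopped, instead of restarting it.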