# Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm

The limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm [Byrd2015] follows the algorithmic framework of an iterative solver with the algorithm-specific transformation $$T$$ and the set of intrinsic parameters $$S_t$$ defined for the memory parameter $$m$$, the frequency $$L$$ of curvature estimate calculations, and the step-length sequence $$\alpha_t > 0$$. The algorithm-specific vector $$U$$ and the power $$d$$ of the Lebesgue space are defined as follows:

## Transformation

$T(\theta_{t-1},g(\theta_{t-1}), S_{t-1})$
$\begin{split}\theta_{t} = \begin{cases} \theta_{t-1}-\alpha^{t}g(\theta_{t-1}), & t \leq 2 \\ \theta_{t-1}-\alpha^{t}Hg(\theta_{t-1}), & \mathrm{otherwise} \end{cases}\end{split}$

where $$H$$ is an approximation of the inverse Hessian matrix computed from $$m$$ correction pairs by the Hessian Update Algorithm.

Convergence check: $$U=g\left(\theta_{t-1}\right), d=2$$
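The transformation $$T$$ and the convergence check above can be sketched as follows. This is an illustrative NumPy sketch under simplifying assumptions, not the oneDAL implementation; the function names are hypothetical.

```python
# Illustrative sketch of the LBFGS transformation T and the convergence
# check (not the oneDAL implementation).
import numpy as np

def lbfgs_step(theta, grad, H, alpha, t):
    """One application of T: a plain gradient step for t <= 2, and a
    quasi-Newton step with the inverse-Hessian approximation H otherwise."""
    if t <= 2:
        return theta - alpha * grad
    return theta - alpha * (H @ grad)

def converged(grad, accuracy_threshold):
    """Convergence check with U = g(theta_{t-1}) and d = 2 (L2 norm)."""
    return np.linalg.norm(grad, ord=2) < accuracy_threshold
```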

## Intrinsic Parameters

For the LBFGS algorithm, the set of intrinsic parameters $$S_t$$ includes the following:

• Correction pairs $$(s_j , y_j)$$

• Correction index $$k$$ in the buffer that stores correction pairs

• Index $$t$$ of the last iteration of the main loop from the previous run

• Average value of arguments for the previous L iterations $$\overline{\theta_{k-1}}$$

• Average value of arguments for the last L iterations $$\overline{\theta_k}$$

Below is the definition and update flow of the correction pairs $$(s_j , y_j)$$. The correction index $$k$$ is set to zero and remains zero for the first $$2L-1$$ iterations of the main loop. Starting with iteration $$2L$$, the algorithm executes the following steps once every $$L$$ iterations of the main loop:

1. $$k:=k+1$$

2. Choose a set of indices without replacement: $$I_H = \{i_1, i_2, \ldots, i_{b_H}\}$$, $$1 \leq i_l < n$$, $$l \in \{1, \ldots, b_H\}$$, $$|I_H| = b_H = \mathrm{correctionPairBatchSize}$$.

3. Compute the sub-sampled Hessian

${\nabla }^{2}F\left(\overline{{\theta }_{k}}\right)=\frac{1}{{b}_{H}}\sum _{i\in {I}_{H}}{\nabla }^{2}{F}_{i}\left(\overline{{\theta }_{k}}\right)$

at the point $$\overline{{\theta }_{k}}=\frac{1}{L}\sum _{i=Lk}^{L\left(k+1\right)}{\theta }_{i}$$ for the objective function using Hessians of its terms

$\begin{split}{\nabla }^{2}{F}_{i}=\left[\begin{array}{ccc}\frac{{\partial }^{2}{F}_{i}}{\partial {\theta }_{0}\partial {\theta }_{0}}& \cdots & \frac{{\partial }^{2}{F}_{i}}{\partial {\theta }_{0}\partial {\theta }_{p}}\\ \vdots & \ddots & \vdots \\ \frac{{\partial }^{2}{F}_{i}}{\partial {\theta }_{p}\partial {\theta }_{0}}& \cdots & \frac{{\partial }^{2}{F}_{i}}{\partial {\theta }_{p}\partial {\theta }_{p}}\end{array}\right]\end{split}$

4. Compute the correction pairs $$(s_k , y_k)$$:

$${s}_{k}=\overline{{\theta }_{k}}-\overline{{\theta }_{k-1}}$$

$${y}_{k}={\nabla }^{2}F\left(\overline{{\theta }_{k}}\right){s}_{k}$$
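Steps 1 through 4 above can be illustrated with a small NumPy sketch for a simple quadratic objective. The helper name and the choice of objective are assumptions for illustration only, not the oneDAL API.

```python
# Sketch of one curvature update (steps 1-4 above) for the quadratic
# objective F(theta) = (1/n) * sum_i 0.5 * (x_i^T theta)^2, whose
# per-term Hessians are x_i x_i^T. Illustrative only, not oneDAL code.
import numpy as np

def compute_correction_pair(theta_prev_avg, theta_avg, X, b_H, rng):
    """Compute (s_k, y_k) from the averaged iterates and a sub-sampled
    Hessian built from b_H randomly chosen terms (rows of X)."""
    n = X.shape[0]
    idx = rng.choice(n, size=b_H, replace=False)  # I_H, chosen without replacement
    # Sub-sampled Hessian: (1/b_H) * sum_{i in I_H} x_i x_i^T
    H_sub = X[idx].T @ X[idx] / b_H
    s_k = theta_avg - theta_prev_avg              # step 4: s_k
    y_k = H_sub @ s_k                             # step 4: y_k
    return s_k, y_k
```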

Note

• The set $$S_t$$ of intrinsic parameters is updated once per $$L$$ iterations of the main loop and remains unchanged between iterations with numbers that are multiples of $$L$$.

• A cyclic buffer stores correction pairs. The algorithm fills the buffer with pairs one-by-one. Once the buffer is full, it returns to the beginning and overwrites the previous correction pairs.
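The cyclic buffer behavior described above can be sketched minimally as follows; this is an illustrative data structure, not the oneDAL data layout.

```python
# Minimal sketch of the cyclic correction-pair buffer: pairs are appended
# one by one, and once m pairs are stored the oldest pair is overwritten.
class CorrectionBuffer:
    def __init__(self, m):
        self.m = m          # memory parameter: at most m pairs are kept
        self.pairs = []     # stored (s, y) pairs, oldest first

    def push(self, s, y):
        if len(self.pairs) == self.m:
            self.pairs.pop(0)        # overwrite the oldest pair
        self.pairs.append((s, y))
```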

## Hessian Update Algorithm

This algorithm computes the approximation of the inverse Hessian matrix from the set of correction pairs [Byrd2015].

For a given set of correction pairs $$(s_j, y_j)$$, $$j = k - \min(k, m) + 1, \ldots, k$$:

1. Set $$H=\left({s}_{k}^{T}{y}_{k}/{y}_{k}^{T}{y}_{k}\right)I$$

2. Iterate $$j$$ from $$k - \min(k, m) + 1$$ to $$k$$:

1. $${\rho }_{j}=1/{y}_{j}^{T}{s}_{j}$$

2. $$H:=\left(I-{\rho }_{j}{s}_{j}{y}_{j}^{T}\right)H\left(I-{\rho }_{j}{y}_{j}{s}_{j}^{T}\right)+{\rho }_{j}{s}_{j}{s}_{j}^{T}.$$

3. Return $$H$$
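The three steps above can be sketched in NumPy as follows. This is an illustrative implementation of the recursive inverse-Hessian update, not the oneDAL code; a useful sanity check is the secant condition $$H y_k = s_k$$, which holds after the update with the most recent pair.

```python
# Sketch of the Hessian Update Algorithm: builds the inverse-Hessian
# approximation H from the stored correction pairs. Illustrative only.
import numpy as np

def hessian_update(pairs):
    """pairs: list of (s_j, y_j) correction pairs, oldest first,
    holding at most m = memory-parameter entries."""
    s_k, y_k = pairs[-1]
    p = len(s_k)
    # Step 1: initial scaling H = (s_k^T y_k / y_k^T y_k) * I
    H = (s_k @ y_k) / (y_k @ y_k) * np.eye(p)
    # Step 2: recursive BFGS updates over the stored pairs, with
    # rho_j = 1 / (y_j^T s_j):
    # H := (I - rho_j s_j y_j^T) H (I - rho_j y_j s_j^T) + rho_j s_j s_j^T
    for s_j, y_j in pairs:
        rho = 1.0 / (y_j @ s_j)
        V = np.eye(p) - rho * np.outer(y_j, s_j)
        H = V.T @ H @ V + rho * np.outer(s_j, s_j)
    # Step 3: return H
    return H
```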

## Computation

The limited-memory BFGS algorithm is a special case of an iterative solver. For parameters, input, and output of iterative solvers, see Computation.

### Algorithm Input

In addition to the input of the iterative solver, the limited-memory BFGS algorithm accepts the following optional input:

Algorithm Input for Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Computation

| OptionalDataID | Input |
|----------------|-------|
| `correctionPairs` | A numeric table of size $$2m \times p$$ where the rows represent correction pairs $$s$$ and $$y$$. The row `correctionPairs[j]`, $$0 \leq j < m$$, is a correction vector $$s_j$$, and the row `correctionPairs[j]`, $$m \leq j < 2m$$, is a correction vector $$y_j$$. |
| `correctionIndices` | A numeric table of size $$1 \times 2$$ with 32-bit integer indices. The first value is the correction pair index $$k$$, the second value is the index $$t$$ of the last iteration from the previous run. |
| `averageArgumentLIterations` | A numeric table of size $$2 \times p$$, where row 0 represents the average arguments for the previous $$L$$ iterations and row 1 represents the average arguments for the last $$L$$ iterations. These values are required to compute the $$s$$ correction vectors in the next step. |
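The `correctionPairs` layout described above can be illustrated with a small packing sketch; the helper name is hypothetical, not part of the oneDAL API.

```python
# Sketch of the correctionPairs layout: a 2m x p table whose first m rows
# hold the s vectors and whose last m rows hold the y vectors.
import numpy as np

def pack_correction_pairs(s_list, y_list, m, p):
    table = np.zeros((2 * m, p))
    for j, (s, y) in enumerate(zip(s_list, y_list)):
        table[j] = s          # row j, 0 <= j < m: correction vector s_j
        table[m + j] = y      # row m + j, m <= m + j < 2m: correction vector y_j
    return table
```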

### Algorithm Parameters

In addition to parameters of the iterative solver, the limited-memory BFGS algorithm has the following parameters:

Algorithm Parameters for Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Computation

| Parameter | Default Value | Description |
|-----------|---------------|-------------|
| `algorithmFPType` | `float` | The floating-point type that the algorithm uses for intermediate computations. Can be `float` or `double`. |
| `method` | `defaultDense` | Performance-oriented computation method. |
| `batchIndices` | NULL | A numeric table of size $$nIterations \times batchSize$$ with 32-bit integer indices of terms in the objective function to be used in step 2 of the limited-memory BFGS algorithm. If no indices are provided, the implementation generates random indices. **Note:** This parameter can be an object of any class derived from `NumericTable`, except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. |
| `batchSize` | $$10$$ | The number of observations used to compute the stochastic gradient. The implementation ignores this parameter if the `batchIndices` numeric table is provided. If `batchSize` equals the number of terms in the objective function, no random sampling is performed and all terms are used to calculate the gradient. |
| `correctionPairBatchSize` | $$100$$ | The number of observations used to compute the sub-sampled Hessian for correction pairs computation. The implementation ignores this parameter if the `correctionPairIndices` numeric table is provided. If `correctionPairBatchSize` equals the number of terms in the objective function, no random sampling is performed and all terms are used to calculate the Hessian matrix. |
| `correctionPairIndices` | NULL | A numeric table of size $$(nIterations/L) \times correctionPairBatchSize$$ with 32-bit integer indices to be used instead of random values. If no indices are provided, the implementation generates random indices. **Note:** This parameter can be an object of any class derived from `NumericTable`, except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. If the algorithm runs with no optional input data, $$(nIterations / L - 1)$$ rows of the table are used; otherwise, it can use one more row, $$(nIterations / L)$$ in total. |
| $$m$$ | $$10$$ | The memory parameter: the maximum number of correction pairs that define the approximation of the Hessian matrix. |
| $$L$$ | $$10$$ | The number of iterations between calculations of the curvature estimates. |
| `stepLengthSequence` | A numeric table of size $$1 \times 1$$ that contains the default step length equal to $$1$$. | A numeric table of size $$1 \times nIterations$$ or $$1 \times 1$$. The contents of the table depend on its size: for $$size = 1 \times nIterations$$, the values of the step-length sequence $$\alpha^k$$ for $$k = 1, \ldots, nIterations$$; for $$size = 1 \times 1$$, the value of the step length used at each iteration, $$\alpha^1 = \ldots = \alpha^{nIterations}$$. **Note:** This parameter can be an object of any class derived from `NumericTable`, except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. The recommended data type for storing the step-length sequence is the floating-point type, either `float` or `double`, that the algorithm uses in intermediate computations. |
| `engine` | `SharedPtr<engines::mt19937::Batch>()` | Pointer to the random number generator engine used internally to randomly choose terms from the objective function. |
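The shapes of the `batchIndices` and `correctionPairIndices` tables can be illustrated with a small sketch. The helper name, the seed, and the choice of sampling with replacement for the gradient batch are assumptions for illustration, not the oneDAL behavior.

```python
# Sketch of a batchIndices-shaped table: one row of batch_size 32-bit
# term indices per iteration. Hypothetical helper, not the oneDAL API.
import numpy as np

def make_batch_indices(n_terms, n_iterations, batch_size, seed=777):
    """Generate an (n_iterations x batch_size) table of term indices."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_terms, size=(n_iterations, batch_size),
                        dtype=np.int32)
```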

### Algorithm Output

In addition to the output of the iterative solver, the limited-memory BFGS algorithm calculates the following optional results:

Algorithm Output for Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Computation

| OptionalDataID | Output |
|----------------|--------|
| `correctionPairs` | A numeric table of size $$2m \times p$$ where the rows represent correction pairs $$s$$ and $$y$$. The row `correctionPairs[j]`, $$0 \leq j < m$$, is a correction vector $$s_j$$, and the row `correctionPairs[j]`, $$m \leq j < 2m$$, is a correction vector $$y_j$$. |
| `correctionIndices` | A numeric table of size $$1 \times 2$$ with 32-bit integer indices. The first value is the correction pair index $$k$$, the second value is the index $$t$$ of the last iteration from the previous run. |
| `averageArgumentLIterations` | A numeric table of size $$2 \times p$$, where row 0 represents the average arguments for the previous $$L$$ iterations and row 1 represents the average arguments for the last $$L$$ iterations. These values are required to compute the $$s$$ correction vectors in the next step. |

### Examples

Batch Processing: