
We will begin with the simpler case of a rank-$1$ perturbation: $B = A + uv^*$, where $u$ and $v$ are $n$-vectors, and we consider first the case where $A = I$. We might expect that $(I + uv^*)^{-1} = I + \theta uv^*$ for some $\theta$ (consider a binomial expansion of the inverse). Multiplying out, we obtain

$$(I + uv^*)(I + \theta uv^*) = I + (1 + \theta + \theta v^*u)\, uv^*,$$

so the product equals the identity matrix when $\theta = -1/(1 + v^*u)$. The condition that $I + uv^*$ be nonsingular is $v^*u \ne -1$ (as can also be seen from $\det(I + uv^*) = 1 + v^*u$, derived in What Is a Block Matrix?). Hence

$$(I + uv^*)^{-1} = I - \frac{uv^*}{1 + v^*u}.$$

For a general nonsingular $A$ we can write $A + uv^* = A(I + A^{-1}u\, v^*)$. Inverting this equation and applying the previous result gives

$$(A + uv^*)^{-1} = A^{-1} - \frac{A^{-1}uv^*A^{-1}}{1 + v^*A^{-1}u},$$

valid when $v^*A^{-1}u \ne -1$. This is known as the Sherman–Morrison formula. It explicitly identifies the rank-$1$ change to the inverse.
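
The Sherman–Morrison formula is easy to check numerically. Here is a minimal sketch, assuming NumPy, with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to be safely nonsingular
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

Ainv = np.linalg.inv(A)
# Sherman-Morrison: subtract a rank-1 correction from A^{-1}.
SM = Ainv - (Ainv @ u) @ (v.T @ Ainv) / (1.0 + v.T @ Ainv @ u)
print(np.linalg.norm(SM - np.linalg.inv(A + u @ v.T)))  # small: roundoff level
```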

As an example, if we take $u = te_i$ and $v = e_j$ (where $e_i$ is the $i$th column of the identity matrix) then, writing $A^{-1} = (\alpha_{ij})$, we have

$$(A + te_ie_j^*)^{-1} = A^{-1} - \frac{t\, A^{-1}e_i e_j^* A^{-1}}{1 + t\alpha_{ji}},$$

which shows how the inverse changes when $t$ is added to the $(i,j)$ element of $A$. The norm of the change to the inverse is $|t|\, \|A^{-1}e_i\|_2\, \|e_j^*A^{-1}\|_2 / |1 + t\alpha_{ji}|$. If $t$ is sufficiently small then this quantity is approximately maximized for $i$ and $j$ such that the product of the norms of the $i$th column and $j$th row of $A^{-1}$ is maximized. For an upper triangular matrix $i = n$ and $j = 1$ are likely to give the maximum, which means that the inverse of an upper triangular matrix is likely to be most sensitive to perturbations in the $(n,1)$ element of the matrix. To illustrate, we can perturb each element of an upper triangular matrix in turn and measure the resulting change in the inverse, as in the sketch below; as our analysis suggests, the $(n,1)$ entry is the most sensitive to perturbation.
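
Here is such an experiment, assuming NumPy; the test matrix is an illustrative choice for this sketch, not the one used in the original illustration:

```python
import numpy as np

n = 8
# Illustrative upper triangular test matrix (an assumed choice): ones on the
# diagonal, -2 above it. The entries of its inverse grow like powers of 3,
# so the first row and last column of T^{-1} have by far the largest norms.
T = np.eye(n) - 2 * np.triu(np.ones((n, n)), 1)

Tinv = np.linalg.inv(T)
t = 1e-6  # small perturbation added to one element at a time
change = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = t
        change[i, j] = np.linalg.norm(np.linalg.inv(T + E) - Tinv, 2)

i, j = np.unravel_index(np.argmax(change), change.shape)
print(i + 1, j + 1)  # prints "8 1": the (n,1) element is the most sensitive
```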

Now consider a perturbation $UV^*$, where $U$ and $V$ are $n\times k$. This perturbation has rank at most $k$, and its rank is $k$ if $U$ and $V$ are both of rank $k$. If $I + V^*A^{-1}U$ is nonsingular then $A + UV^*$ is nonsingular and

$$(A + UV^*)^{-1} = A^{-1} - A^{-1}U(I + V^*A^{-1}U)^{-1}V^*A^{-1},$$

which is the Sherman–Morrison–Woodbury formula. The significance of this formula is that $I + V^*A^{-1}U$ is $k\times k$, so if $k \ll n$ and $A^{-1}$ is known then it is much cheaper to evaluate the right-hand side than to invert $A + UV^*$ directly. In practice, of course, we rarely invert matrices, but rather exploit factorizations of them. If we have an LU factorization of $A$ then we can use it in conjunction with the Sherman–Morrison–Woodbury formula to solve $(A + UV^*)x = b$ in $O(kn^2)$ flops, as opposed to the $O(n^3)$ flops required to factorize $A + UV^*$ from scratch.

The Sherman–Morrison–Woodbury formula is straightforward to verify, by showing that the product of the two sides is the identity matrix. How can the formula be derived in the first place? Consider any two matrices $F$ and $G$ such that $FG$ and $GF$ are both defined. The associative law for matrix multiplication gives $F(GF) = (FG)F$, or $(I + FG)F = F(I + GF)$, which can be written as $F(I + GF)^{-1} = (I + FG)^{-1}F$ when $I + GF$ (and hence $I + FG$) is nonsingular. Therefore

$$(I + FG)^{-1} = I - (I + FG)^{-1}FG = I - F(I + GF)^{-1}G.$$

Setting $F = U$ and $G = V^*$ gives the special case of the Sherman–Morrison–Woodbury formula with $A = I$, and the general formula follows from $A + UV^* = A(I + A^{-1}U\, V^*)$.
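
To make this solution process concrete, here is a sketch assuming NumPy and SciPy; the function name `solve_low_rank_update` is introduced here for illustration and is not from either library:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_low_rank_update(lu_piv, U, V, b):
    """Solve (A + U V^*) x = b given lu_piv = lu_factor(A).

    Reusing the LU factors of A costs O(k n^2 + k^3) flops,
    versus O(n^3) to refactorize A + U V^* from scratch.
    """
    k = U.shape[1]
    Ainv_b = lu_solve(lu_piv, b)         # A^{-1} b
    Ainv_U = lu_solve(lu_piv, U)         # A^{-1} U: k pairs of triangular solves
    C = np.eye(k) + V.conj().T @ Ainv_U  # k-by-k matrix I + V^* A^{-1} U
    return Ainv_b - Ainv_U @ np.linalg.solve(C, V.conj().T @ Ainv_b)

# Example use on random data.
rng = np.random.default_rng(1)
n, k = 100, 3
A = rng.standard_normal((n, n))
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

x = solve_low_rank_update(lu_factor(A), U, V, b)
print(np.linalg.norm((A + U @ V.T) @ x - b))  # small residual
```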

We will give a different derivation of an even more general formula using block matrices. Consider the block matrix

$$X = \begin{bmatrix} A & U \\ V^* & -W^{-1} \end{bmatrix},$$

where $U$ and $V$ are $n\times k$ and $W$ is a nonsingular $k\times k$ matrix. We will obtain a formula for $(A + UWV^*)^{-1}$ by looking at $X^{-1}$. Writing $S = -W^{-1} - V^*A^{-1}U$ for the Schur complement of $A$ in $X$, we have the block LU factorization

$$X = \begin{bmatrix} I & 0 \\ V^*A^{-1} & I \end{bmatrix} \begin{bmatrix} A & U \\ 0 & S \end{bmatrix},$$

and inverting this factorization (inverting each factor and multiplying) gives, with $\times$ denoting a block whose value does not matter,

$$X^{-1} = \begin{bmatrix} A^{-1} - A^{-1}U(W^{-1} + V^*A^{-1}U)^{-1}V^*A^{-1} & \times \\ \times & \times \end{bmatrix}.$$

In the (1,1) block we see the right-hand side of a Sherman–Morrison–Woodbury-like formula, but it is not immediately clear how this relates to $(A + UWV^*)^{-1}$. The connection becomes clear if we instead factorize $X$ with the (2,2) block as the pivot:

$$X = \begin{bmatrix} I & -UW \\ 0 & I \end{bmatrix} \begin{bmatrix} A + UWV^* & 0 \\ V^* & -W^{-1} \end{bmatrix}.$$

Inverting this factorization shows that the (1,1) block of $X^{-1}$ is $(A + UWV^*)^{-1}$. Equating the two expressions for the (1,1) block gives

$$(A + UWV^*)^{-1} = A^{-1} - A^{-1}U(W^{-1} + V^*A^{-1}U)^{-1}V^*A^{-1},$$

which reduces to the Sherman–Morrison–Woodbury formula when $W = I$.

To see one reason why this formula is useful, suppose that the matrix $A$ and its perturbation are symmetric and we wish to preserve symmetry in our formulas. The Sherman–Morrison–Woodbury formula requires us to write the perturbation as $UU^*$, so the perturbation must be positive semidefinite. In the formula just derived, however, we can write an arbitrary symmetric perturbation as $UWU^*$, with $W$ symmetric but possibly indefinite, and obtain a symmetric formula.

The matrix $A + UWV^*$ is the Schur complement of $-W^{-1}$ in $X$, just as $-(W^{-1} + V^*A^{-1}U)$ is the Schur complement of $A$ in $X$. Consequently the inversion formula is intimately connected with the theory of Schur complements.
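
As a numerical check of the general formula, and of the symmetry remark, here is a sketch assuming NumPy, with arbitrary illustrative data and a symmetric but indefinite $W$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 7, 2
A = rng.standard_normal((n, n))
A = A + A.T + 3 * n * np.eye(n)  # symmetric and safely nonsingular
U = rng.standard_normal((n, k))
W = np.diag([1.0, -1.0])         # symmetric but indefinite weight matrix

Ainv = np.linalg.inv(A)
# (A + U W U^*)^{-1} = A^{-1} - A^{-1} U (W^{-1} + U^* A^{-1} U)^{-1} U^* A^{-1}
C = np.linalg.inv(W) + U.T @ Ainv @ U
B = Ainv - Ainv @ U @ np.linalg.solve(C, U.T @ Ainv)

print(np.linalg.norm(B - np.linalg.inv(A + U @ W @ U.T)))  # roundoff level
print(np.linalg.norm(B - B.T))  # the update is symmetric (up to roundoff)
```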
