KrylovMethods.bicgstb – Method

x,flag,err,iter,resvec = bicgstb(A,b,tol=1e-6,maxIter=100,M1=1.0,M2=1.0,x=[],out=0)

BiConjugate Gradient Stabilized Method applied to the linear system Ax=b.

Input:

A       - matrix or function computing A*x
b       - right hand side vector
tol     - error tolerance
maxIter - maximum number of iterations
M1,M2   - preconditioners, either matrices or functions computing M1\x or M2\x
x       - starting guess
out     - flag for output (-1: no output, 0: only errors, 1: final status, 2: relres at each iteration)

Output:

x      - solution
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -2 : rho equal to zero,
                     -3 : norm(s)/bnrm2 < tol,
                     -4 : omega < 1e-16,
                     -9 : right hand side was zero)
err    - error, i.e., norm(A*x-b)/norm(b)
iter   - number of iterations
resvec - error at each iteration
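
A minimal usage sketch based on the signature above; the shifted random test matrix and the tolerances are illustrative assumptions, and A is passed in operator form:

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 100
    A = sprandn(n, n, 0.1) + 10.0*I              # shifted to be comfortably nonsingular
    b = randn(n)
    x, flag, err, iter, resvec = bicgstb(v -> A*v, b, tol=1e-8, maxIter=200, out=1)
    # flag == 0 indicates that norm(A*x-b)/norm(b) <= tol was reached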

KrylovMethods.blockBiCGSTB – Method

x,flag,err,iter,resvec = blockBiCGSTB(A,b,tol=1e-6,maxIter=100,M1=identity,M2=identity,x=[],out=0)

Block version of BiConjugate Gradient Stabilized Method applied to the linear system Ax=b with multiple right hand sides.

Based on the paper: El Guennouni, A., K. Jbilou, and H. Sadok. "A block version of BiCGSTAB for linear systems with multiple right-hand sides." Electronic Transactions on Numerical Analysis 16 (2003): 129-142.

Input:

A       - matrix or function computing A*x
b       - right hand side matrix (multiple right hand sides)
tol     - error tolerance
maxIter - maximum number of iterations
M1,M2   - preconditioners, either matrices or functions computing M1\x or M2\x;
          if memory is reused in M1 and M2, make sure each has its own copy of the return vector
x       - starting guess
out     - flag for output (-1: no output, 0: only errors, 1: final status, 2: relres at each iteration)

Output:

x      - solution
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -2 : ||rho||_F equal to zero,
                     -3 : norm(s)/bnrm2 < tol,
                     -4 : omega < 1e-16,
                     -9 : right hand side was zero)
err    - error, i.e., norm(A*x-b)/norm(b)
iter   - number of iterations
resvec - error at each iteration
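
A sketch with several right hand sides, assuming the signature above (the test problem is an illustrative assumption):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 200
    A = sprandn(n, n, 0.05) + 10.0*I             # diagonally dominant test matrix
    B = randn(n, 4)                              # four right hand sides
    X, flag, err, iter, resvec = blockBiCGSTB(V -> A*V, B, tol=1e-8, maxIter=200, out=1)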

KrylovMethods.blockCG – Method

blockCG(A,B,X)

Preconditioned Conjugate Gradient Method for solving

A * X = B

Input:

A           - matrix or function computing A*x
B           - array of right hand sides
tol         - error tolerance
maxIter     - maximum number of iterations
M           - preconditioner, a function that computes M\x
X           - array of starting guesses (will be overwritten)
out         - flag for output (-1: no output, 0: only errors, 1: final status, 2: residual norm at each iteration)
ortho       - flag for re-orthogonalization (default: false)
pinvTol     - tolerance for pseudoinverse (default: eps(T)*size(B,1))
storeInterm - flag for storing iterates (default: false)

Output:

X      - approximate solutions (2D array if storeInterm==false, 3D array otherwise)
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -9 : right hand side was zero)
err    - norm of relative residual, i.e., norm(A*X-B)/norm(B)
iter   - number of iterations
resvec - norm of relative residual at each iteration

Reference:

O'Leary, D. P. (1980). The block conjugate gradient algorithm and related methods.
Linear Algebra and Its Applications, 29, 293–322. http://doi.org/10.1016/0024-3795(80)90247-5
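
A sketch of a call, assuming X and the remaining options are keyword arguments as in the package's other solvers (the SPD test matrix is an illustrative assumption):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 300
    A = sprandn(n, n, 0.02); A = A'*A + 1.0*I    # symmetric positive definite
    B = randn(n, 4)
    X, flag, err, iter, resvec = blockCG(A, B, X=zeros(n, 4), tol=1e-8, maxIter=200, out=1)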

KrylovMethods.blockFGMRES – Method

x,flag,err,iter,resvec = blockFGMRES(A,b,restrt,tol=1e-2,maxIter=100,M=1,x=[],out=0)

Block (flexible) Generalized Minimal Residual ((F)GMRES(m)) method with restarts applied to A*x = b.

This is the "right preconditioning" version of blockGMRES: A*M^{-1}Mx = b.

Input:

A       - function computing A*x
b       - right hand side matrix (set of vectors)
restrt  - number of iterations between restarts
tol     - error tolerance
maxIter - maximum number of iterations
M       - preconditioner, function computing M\x
x       - starting guess
out     - flag for output (0: only errors, 1: final status, 2: error at each iteration)

Output:

x      - approximate solution
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -9 : right hand side was zero)
err    - norm of relative residual, i.e., norm(A*x-b)/norm(b)
iter   - number of iterations
resvec - norm of relative residual at each iteration

Note: the preconditioner M(r) must return a copy of the matrix; it cannot reuse the memory of r.
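
A sketch illustrating the copy requirement above with a Jacobi-type preconditioner (an illustrative choice; `./` allocates a fresh matrix, so the note is satisfied):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 400
    A = sprandn(n, n, 0.01) + 10.0*I
    B = randn(n, 3)                              # three right hand sides
    d = diag(A)
    M = R -> R ./ d                              # returns a new matrix, as required
    X, flag, err, iter, resvec = blockFGMRES(V -> A*V, B, 20, tol=1e-6, maxIter=10, M=M, out=1)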

KrylovMethods.cg – Method

x,flag,err,iter,resvec = cg(A,b,tol=1e-2,maxIter=100,M=1,x=[],out=0)

(Preconditioned) Conjugate Gradient method applied to the linear system A*x = b, where A is assumed to be symmetric positive semi-definite.

Input:

A       - matrix or function computing A*x
b       - right hand side vector
tol     - error tolerance
maxIter - maximum number of iterations
M       - preconditioner, a function that computes M\x
x       - starting guess
out     - flag for output (-1: no output, 0: only errors, 1: final status, 2: residual norm at each iteration)

Output:

x      - approximate solution
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -2 : matrix A is not positive definite,
                     -3 : cg stalled, i.e., two consecutive residuals have the same norm,
                     -9 : right hand side was zero)
err    - norm of relative residual, i.e., norm(A*x-b)/norm(b)
iter   - number of iterations
resvec - norm of relative residual at each iteration
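
A usage sketch with a Jacobi preconditioner (the SPD test matrix and the tolerances are illustrative assumptions):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 1000
    A = sprandn(n, n, 0.01); A = A'*A + 1.0*I    # symmetric positive definite
    b = randn(n)
    d = diag(A)
    M = r -> r ./ d                              # Jacobi preconditioner: computes M\r with M = diag(A)
    x, flag, err, iter, resvec = cg(A, b, tol=1e-10, maxIter=500, M=M, out=1)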

KrylovMethods.cgls – Method

x,flag,err,iter,resvec = cgls(A,b,tol=1e-2,maxIter=100,x=[],storeInterm=false,out=0)

CGLS: conjugate gradient algorithm applied implicitly to the normal equations

	(A'*A) x = A'*b.

Input:

A           - function computing A*x = A(x,'F') and A'*x = A(x,'T')
b           - right hand side vector
tol         - error tolerance (default: 1e-2)
maxIter     - maximum number of iterations (default: 100)
x           - starting guess
storeInterm - flag for returning intermediate solutions (useful in inverse problems)
out         - flag for output (-1: no output, 0: only errors, 1: final status, 2: error at each iteration)

Output:

x    - approximate solution (if storeInterm==false) or history of approximate solutions (if storeInterm==true)
flag - exit flag (  0 : desired tolerance achieved,
                   -1 : maxIter reached without converging,
                   -2 : matrix A is not positive definite,
                   -9 : right hand side was zero)
eta  - residual norm: norm(A*x-b)
rho  - norm of current iterate: norm(x)
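
A sketch showing how to wrap a matrix into the two-argument operator A(x,'F') / A(x,'T') described above (the least-squares test problem is illustrative):

    using LinearAlgebra, KrylovMethods

    A = randn(200, 50)                           # overdetermined least-squares problem
    b = randn(200)
    Afun(x, flag) = flag == 'F' ? A*x : A'*x     # forward and transpose products
    x = cgls(Afun, b, tol=1e-8, maxIter=100, out=1)[1]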

KrylovMethods.cgs! – Method

function KrylovMethods.cgs!

In-place classical Gram-Schmidt orthogonalization.

Reference: page 254 in Golub and Van Loan, Matrix Computations, 4th edition.

Input:

V::Array  - m by n matrix of full rank m<=n

Output:

V::Array  - m-by-m unitary matrix
R::Array  - m-by-n upper triangular matrix
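
A small sketch of the factorization property, assuming cgs! returns the factors listed above; a square input keeps the dimensions unambiguous. mgs! below has the same interface:

    using LinearAlgebra
    import KrylovMethods

    V = randn(5, 5)                              # square, full rank (with probability 1)
    Q, R = KrylovMethods.cgs!(copy(V))
    norm(Q'*Q - I)                               # ≈ 0: columns of Q are orthonormal
    norm(Q*R - V)                                # ≈ 0: Q*R reproduces V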

KrylovMethods.cgs – Method

KrylovMethods.cgs(V) = KrylovMethods.cgs!(copy(V))

Classical Gram-Schmidt orthogonalization.

KrylovMethods.fgmres – Method

x,flag,err,iter,resvec = fgmres(A,b,restrt,tol=1e-2,maxIter=100,M=1,x=[],out=0)

(Flexible) Generalized Minimal Residual ((F)GMRES(m)) method with restarts applied to A*x = b.

This is the "right preconditioning" version of GMRES: A*M^{-1}Mx = b. See gmres.jl for "left preconditioning".

Input:

A       - function computing A*x
b       - right hand side vector
restrt  - number of iterations between restarts
tol     - error tolerance
maxIter - maximum number of iterations
M       - preconditioner, function computing M\x
x       - starting guess
out     - flag for output (0: only errors, 1: final status, 2: error at each iteration)

Output:

x      - approximate solution
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -9 : right hand side was zero)
err    - norm of relative residual, i.e., norm(A*x-b)/norm(b)
iter   - number of iterations
resvec - norm of relative residual at each iteration

Note: the preconditioner M(r) must return a copy of the vector; it cannot reuse the memory of r.
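
A sketch illustrating the copy requirement with a Jacobi-type preconditioner (an illustrative choice; `./` allocates a fresh vector, so the note above is satisfied):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 500
    A = sprandn(n, n, 0.01) + 10.0*I
    b = randn(n)
    d = diag(A)
    M = r -> r ./ d                              # returns a new vector, as required
    x, flag, err, iter, resvec = fgmres(v -> A*v, b, 20, tol=1e-8, maxIter=20, M=M, out=1)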

KrylovMethods.gmres – Method

x,flag,err,iter,resvec = gmres(A,b,restrt,tol=1e-2,maxIter=100,M=1,x=[],out=0)

Generalized Minimal Residual (GMRES(m)) method with restarts applied to A*x = b.

Input:

A       - function computing A*x
b       - right hand side vector
restrt  - number of iterations between restarts
tol     - error tolerance
maxIter - maximum number of iterations
M       - preconditioner, function computing M\x
x       - starting guess
out     - flag for output (0: only errors, 1: final status, 2: error at each iteration)

Output:

x      - approximate solution
flag   - exit flag (  0 : desired tolerance achieved,
                     -1 : maxIter reached without converging,
                     -9 : right hand side was zero)
err    - norm of relative residual, i.e., norm(A*x-b)/norm(b)
iter   - number of iterations
resvec - norm of relative residual at each iteration
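
A minimal usage sketch based on the signature above (the test problem is illustrative); restrt is the third positional argument:

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 500
    A = sprandn(n, n, 0.01) + 10.0*I             # nonsymmetric test matrix
    b = randn(n)
    x, flag, err, iter, resvec = gmres(v -> A*v, b, 20, tol=1e-8, maxIter=20, out=1)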

KrylovMethods.lanczosBidiag – Method

U, B, V = lanczosBidiag(A,p::Vector,k::Int)

Lanczos bidiagonalization of matrix A.

Input:

A - function computing A*x = A(x,'F') and A'*x = A(x,'T')
p - starting vector
k - dimension of subspace

Output:

U,V - matrices whose columns are the left and right Lanczos vectors
B   - bidiagonal matrix
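
A sketch wrapping a dense matrix into the A(x,'F') / A(x,'T') operator form required above (the dimensions are illustrative):

    using LinearAlgebra, KrylovMethods

    A = randn(100, 40)
    Afun(x, flag) = flag == 'F' ? A*x : A'*x     # forward and transpose products
    U, B, V = lanczosBidiag(Afun, randn(100), 10)
    # U and V hold the left and right Lanczos vectors; B is bidiagonal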

KrylovMethods.lanczosTridiag – Method

T,V = lanczosTridiag(A,b,k;tol,doReorth)

Lanczos method for getting a factorization of

A = Vk Tk Vk'

where A is a real symmetric n by n matrix, Tk is a tridiagonal k by k matrix and the columns of the n by k matrix Vk are orthogonal.

Implementation follows:

Paige, C. C. (1972). Computational variants of the Lanczos method for the eigenproblem. IMA Journal of Applied Mathematics.

Required input:

A - function computing A*x, e.g., x -> A*x
b - right hand side vector
k - dimension of Krylov subspace

Optional input:

tol      - stopping tolerance
doReorth - (default=false) set to true to perform full reorthogonalization

Output:

Tk - sparse tridiagonal matrix
Vk - basis vectors
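
A usage sketch for a symmetric operator (the test matrix and the subspace size are illustrative assumptions):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 500
    A = sprandn(n, n, 0.01); A = A + A'          # real symmetric
    b = randn(n)
    Tk, Vk = lanczosTridiag(v -> A*v, b, 30, doReorth=true)
    # the columns of Vk span a Krylov subspace of A; Tk is its tridiagonal projection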

KrylovMethods.lsqr – Method

x,flag,err,iter,resvec = lsqr(A,b;x=[],maxIter=20,atol=1e-10,btol=1e-10,condlim=1e6,out=1,doBidiag=false,storeInterm::Bool=false)

Least Squares QR method (LSQR) for solving

min_x || A x - b ||,

where the system may be either over- or underdetermined.

Implementation follows and comments refer to Algorithm LSQR in:

Paige, C. C., & Saunders, M. A. (1982). LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares. ACM Transactions on Mathematical Software (TOMS), 8(1), 43–71. doi:10.1145/355984.355989

Required input:

A - function of (flag,v,alpha,x):
    if flag=='F',     x <- A*v + alpha*x  (overwrites x)
    elseif flag=='T', x <- A'*v + alpha*x (overwrites x)
b - right hand side vector

Optional input:

x           - starting guess (optional)
maxIter     - maximum number of iterations
atol        - error tolerance for (estimated) residual norm (for compatible systems)
btol        - error tolerance for (estimated) gradient norm (for incompatible systems)
condlim     - for stopping based on (estimated) condition number
out         - flag for output (-1: no output, 0: only errors, 1: final status, 2: residual norm at each iteration)
doBidiag    - returns the bidiagonalization of A at the final iteration
storeInterm - store and return intermediate iterates

Output:

x     - approximate solution
flag  - exit flag (  0 : stopping based on atol or btol,
                    -1 : maxIter reached without converging,
                    -2 : condition number became too large,
                    -9 : right hand side was zero)
his   - status at each iteration
U,S,V - bidiagonalization (only built if doBidiag==true)
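
A sketch of an in-place operator matching the (flag,v,alpha,x) convention above (the test problem is illustrative):

    using LinearAlgebra, KrylovMethods

    A = randn(200, 50)
    b = randn(200)
    function Afun(flag, v, alpha, x)
        if flag == 'F'
            x .= A*v .+ alpha.*x                 # x <- A*v + alpha*x
        else
            x .= A'*v .+ alpha.*x                # x <- A'*v + alpha*x
        end
        return x
    end
    x = lsqr(Afun, b, maxIter=50, out=1)[1]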

KrylovMethods.mgs! – Method

function KrylovMethods.mgs!

In-place modified Gram-Schmidt orthogonalization.

Reference: page 255 in Golub and Van Loan, Matrix Computations, 4th edition.

Input:

V::Array  - m by n matrix of full rank m<=n

Output:

V::Array  - m-by-m unitary matrix
R::Array  - m-by-n upper triangular matrix

KrylovMethods.mgs – Method

KrylovMethods.mgs(V) = KrylovMethods.mgs!(copy(V))

Modified Gram-Schmidt orthogonalization.

KrylovMethods.minres – Method

x,flag,err,iter,resvec = minres(A,b;x=[],maxIter=20,tol=1e-10,out=1)

Minimal Residual Method (MINRES) for solving

A x = b,

where A is symmetric and possibly indefinite.

Implementation follows page 26 of

Choi, S.-C. T. (2006). Iterative Methods for Singular Linear Equations and Least-squares Problems. PhD thesis, Stanford University.

See also:

Paige, C. C., & Saunders, M. A. (1975). Solution of sparse indefinite systems of linear equations. SIAM Journal on Numerical Analysis.

Required input:

A - function that performs the matrix-vector product, e.g., x -> A*x
b - right hand side vector

Optional input:

x       - starting guess (optional)
maxIter - maximum number of iterations
btol    - error tolerance for the Lanczos step
rtol    - error tolerance for the estimated relative residual
gtol    - error tolerance for the estimated gradient norm
condlim - limit on the condition number estimate of A
out     - flag for output (-1: no output, 0: only errors, 1: final status, 2: residual norm at each iteration)

Output:

x      - approximate solution
flag   - exit flag (0: converged, -1: maxIter, -2: Lanczos, -3: condition)
relres - estimated relative residual at the final iteration
nrmG   - estimated gradient norm at the final iteration
nrmA   - estimated Frobenius norm of A
conA   - condition number estimate
phi    - relres at each iteration
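
A minimal sketch for a symmetric, possibly indefinite system, based on the signature above (the test matrix is illustrative):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 400
    A = sprandn(n, n, 0.01); A = A + A'          # symmetric, possibly indefinite
    b = randn(n)
    x = minres(v -> A*v, b, maxIter=100, tol=1e-8, out=1)[1]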

KrylovMethods.ssor – Method

x,flag,err,iter,resvec = ssor(A::SparseMatrixCSC, b::Vector; omega::Real=2/3, tol::Real=1e-2, maxIter::Int=1,out::Int=0)

Symmetric successive over-relaxation applied to the linear system A*x = b.

!! Note that the upper/lower triangular matrices are never built, in order to save memory !!

Input:

A       - sparse matrix CSC
b       - right hand side vector
omega   - relaxation parameter (omega=1.0: Gauss-Seidel)
tol     - error tolerance (default=1e-2)
maxIter - maximum number of iterations (default=1)
out     - flag for output (-1 : no output, 0 : only errors, 1 : final status, 2: error at each iteration)

Output:

x       - solution
flag    - exit flag (  0 : desired tolerance achieved,
                      -1 : maxIter reached without converging)
err     - error, i.e., norm(A*x-b)/norm(b)
iter    - number of iterations
resvec  - error at each iteration
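
A usage sketch based on the signature above (the SPD test matrix is an illustrative assumption):

    using SparseArrays, LinearAlgebra, KrylovMethods

    n = 100
    A = sprandn(n, n, 0.05); A = A'*A + 1.0*I    # symmetric positive definite
    b = randn(n)
    x, flag, err, iter, resvec = ssor(A, b, omega=1.0, tol=1e-6, maxIter=100, out=1)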

KrylovMethods.ssorPrecTrans! – Function

x = ssorPrecTrans!(A::SparseMatrixCSC,X::Array,B::Array,OmInvD::Vector=1 ./diag(A))

Applies SSOR steps to the linear systems A*X = B, for symmetric A. For efficiency, it works on the transpose of the matrix A.

Input:

A      - sparse matrix (CSC), assumed to be symmetric
X      - current iterates
B      - residuals
OmInvD - weighted diagonal, that is, omega ./ diag(A)

Output: This method changes X on the fly. The array B is unchanged.

This version is well-suited as a preconditioner, for example, for CG.

Example for solving A*X=B when A is sparse, symmetric, and positive definite:

    omega = 1.2
    d     = omega ./ diag(A)
    x     = zeros(length(d))                     # preallocated result vector for the preconditioner
    PC(r) = (x .= 0.0; ssorPrecTrans!(A,x,r,d))
    y     = KrylovMethods.cg(A,b,tol=1e-12,maxIter=200,M=PC,out=1)[1]

KrylovMethods.LanczosStep! – Method

alpha,beta,vk,vkm1 = LanczosStep!(A::Function,vk,vkm1,beta;sigma=0.0,tol=1e-10)

Performs one Lanczos step.

Implementation is based on Table 2.1 in

Choi, S.-C. T. (2006). Iterative Methods for Singular Linear Equations and Least-squares Problems. PhD thesis, Stanford University.

KrylovMethods.ssorPrec! – Function

ssorPrec!(A::SparseMatrixCSC,x::Vector,r::Vector,OmInvD::Vector=1 ./ diag(A))

Applies one SSOR step to A*x = r and updates the residual.

Input:

A       - sparse matrix CSC
x       - current iterate
r       - residual 
OmInvD  - weighted diagonal, that is, omega./diag(A)

Output: This method changes x and r on the fly.

See also sor, which uses ssorPrec! in each iteration. WARNING: DO NOT USE AS A PRECONDITIONER, SINCE THIS METHOD OVERWRITES THE VECTOR r.

KrylovMethods.symOrtho – Method

c,s,r = symOrtho(a,b)

Computes a Givens rotation.

Implementation is based on Table 2.9 in

Choi, S.-C. T. (2006). Iterative Methods for Singular Linear Equations and Least-squares Problems. PhD thesis, Stanford University.
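
A small numeric check; the comments assume the convention of the cited table, where the rotation maps [a; b] to [r; 0]:

    import KrylovMethods

    c, s, r = KrylovMethods.symOrtho(3.0, 4.0)   # c = 0.6, s = 0.8, r = 5.0
    # with the symmetric reflector [c s; s -c]:
    #   c*3 + s*4 ≈ r   and   s*3 - c*4 ≈ 0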