These functions are mainly available for dense matrices. When a function is also available for sparse matrices, this is specified and its use is detailed. For dense matrices, Lapack subroutines are called, so you need to define SELDON_WITH_CBLAS and SELDON_WITH_LAPACK.
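For instance, a program relying on the dense Lapack interface would typically begin as follows (a minimal sketch; the exact set of defines to use is an assumption and depends on your installation):

// enable the interfaces to Blas and Lapack for dense matrices
#define SELDON_WITH_CBLAS
#define SELDON_WITH_LAPACK

#include "Seldon.hxx"
using namespace Seldon;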
GetLU | performs an LU (or LDL^t) factorization
SolveLU | solves a linear system using an LU factorization
RefineSolutionLU | improves the solution computed by SolveLU
ReciprocalConditionNumber | computes the reciprocal of the matrix condition number
GetScalingFactors | computes row and column scalings to equilibrate a matrix
GetInverse | computes the matrix inverse
GetQR | QR factorization of a matrix
GetLQ | LQ factorization of a matrix
GetQ_FromQR | forms Q explicitly from a QR factorization
MltQ_FromQR | multiplies a matrix by Q
SolveQR | solves least-squares problems using a QR factorization
SolveLQ | solves least-squares problems using an LQ factorization
GetEigenvalues | computes eigenvalues
GetEigenvaluesEigenvectors | computes eigenvalues and eigenvectors
GetSVD | performs a singular value decomposition (SVD)
void GetLU(Matrix&, Vector<int>& pivot);
void SolveLU(const Matrix&, const Vector<int>& pivot, Vector&);
void GetLU(Matrix&, MatrixMumps&);
void SolveLU(MatrixMumps&, Vector&);
void GetLU(Matrix&, MatrixSuperLU&);
void SolveLU(MatrixSuperLU&, Vector&);
void GetLU(Matrix&, MatrixUmfPack&);
void SolveLU(MatrixUmfPack&, Vector&);
GetLU performs an LU factorization (or an LDL^t factorization for symmetric matrices) of the provided matrix. This function is implemented both for dense and sparse matrices. In the case of sparse matrices, Seldon is interfaced with external libraries, i.e. MUMPS, UMFPACK or SUPERLU. You need to define SELDON_WITH_MUMPS, SELDON_WITH_SUPERLU and/or SELDON_WITH_UMFPACK if you want to factorize a sparse matrix. After a call to GetLU, you can call SolveLU to solve a linear system by using the computed factorization.
// lapack for dense matrices
#define SELDON_WITH_LAPACK
// external libraries for sparse matrices
#define SELDON_WITH_MUMPS
#define SELDON_WITH_SUPERLU
#define SELDON_WITH_UMFPACK

#include "Seldon.hxx"
#include "Solver.hxx"

int main()
{
  // dense matrices
  Matrix<double> A(3, 3);
  Vector<int> pivot;
  // factorization
  GetLU(A, pivot);

  // now you can solve the linear system A x = b
  Vector<double> X(3), B(3);
  B.Fill();
  X = B;
  SolveLU(A, pivot, X);

  // sparse matrices, use of Mumps for example
  MatrixMumps<double> mat_lu;
  int n = 3;
  Matrix<double, General, ArrayRowSparse> Asp(n, n);
  // you add all non-zero entries to matrix Asp
  // then you call GetLU to factorize the matrix
  GetLU(Asp, mat_lu);
  // Asp is empty after GetLU
  // you can solve Asp x = b
  X = B;
  SolveLU(mat_lu, X);

  return 0;
}
Related topics:
Direct solvers for sparse linear systems

Location:
Lapack_LinearEquations.cxx
Mumps.cxx
SuperLU.cxx
UmfPack.cxx
T ReciprocalConditionNumber(const Matrix&, const Vector<int>&, SeldonNorm1, T&);
T ReciprocalConditionNumber(const Matrix&, const Vector<int>&, SeldonNormInf, T&);
This function estimates the reciprocal of the condition number of a matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by GetLU.
Matrix<double> A(3, 3);
// initialization of A
A.Fill();
// computation of 1-norm and infinity-norm of the matrix
double anorm_one = Norm1(A);
double anorm_inf = NormInf(A);
// factorization of A
Vector<int> pivot;
GetLU(A, pivot);
// computation of the reciprocal of the condition number in 1-norm
double rcond = ReciprocalConditionNumber(A, pivot, SeldonNorm1, anorm_one);
// or infinity-norm
rcond = ReciprocalConditionNumber(A, pivot, SeldonNormInf, anorm_inf);
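As an illustrative follow-up (the threshold below is an arbitrary assumption, not part of Seldon), the returned value is usually interpreted as follows: rcond close to 1 indicates a well-conditioned matrix, while rcond close to machine precision indicates that A is nearly singular.

// illustrative interpretation of rcond (the 1e-12 threshold is arbitrary)
double cond_estimate = 1.0 / rcond;
if (rcond < 1e-12)
  cout << "Warning: A is ill-conditioned, cond is about " << cond_estimate << endl;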
void RefineSolutionLU(const Matrix&, const Matrix&, Vector<int>&, Vector&, const Vector&, T&, T&);
RefineSolutionLU
improves the computed solution to a system of linear
equations and provides error bounds and backward error estimates for
the solution. This function should be called after GetLU and
SolveLU.
Matrix<double> A(3, 3);
Matrix<double> mat_lu(3, 3);
// initialization of A
A.Fill();
// factorization of A
mat_lu = A;
Vector<int> pivot;
GetLU(mat_lu, pivot);
// solution of the linear system
Vector<double> X(3), B(3);
B.Fill();
X = B;
SolveLU(mat_lu, pivot, X);
// now we can call RefineSolutionLU
// backward error
double berr;
// forward error
double ferr;
RefineSolutionLU(A, mat_lu, pivot, X, B, ferr, berr);
void GetInverse(Matrix&);
This function overwrites a dense matrix with its inverse.
Matrix<double> A(3, 3);
// initialization of A
A.Fill();
// computation of the inverse
GetInverse(A);
void GetScalingFactors(const Matrix&, Vector&, Vector&, T&, T&, T&);
This function computes row and column scalings that reduce the condition number of the matrix. It is only defined for storages RowMajor and ColMajor (unsymmetric dense matrices).
Matrix<double> A(5, 3);
// initialization of A
A.Fill();
// computation of the scaling
int m = A.GetM(), n = A.GetN();
Vector<double> row_scale(m), col_scale(n);
double row_condition_number, col_condition_number, amax;
GetScalingFactors(A, row_scale, col_scale, row_condition_number,
                  col_condition_number, amax);
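As a sketch of how the computed factors can be used (this step is an illustrative assumption, not taken from Seldon's interface), each entry of A is multiplied by the corresponding row and column factors to obtain the equilibrated matrix:

// apply the scalings, assuming the convention
// A_scaled(i, j) = row_scale(i) * A(i, j) * col_scale(j)
for (int i = 0; i < m; i++)
  for (int j = 0; j < n; j++)
    A(i, j) *= row_scale(i) * col_scale(j);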
void GetQR(Matrix&, Vector&);
void SolveQR(Matrix&, Vector&, Vector&);
GetQR
computes the QR factorization of a rectangular
matrix, while SolveQR
exploits this factorization to
solve a least-squares problem. This is only defined for
storages RowMajor and ColMajor (unsymmetric dense matrices).
Matrix<double> A(5, 3);
// initialization of A
A.Fill();
// QR factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetQR(A, tau);
// solving least-squares problem QR X = B (m ≥ n)
Vector<double> X, B(m);
B.Fill();
X = B;
// on exit, X contains the least-squares solution
SolveQR(A, tau, X);
void GetLQ(Matrix<T>&, Vector<T>&);
void SolveLQ(Matrix<T>&, Vector<T>&, Vector<T>&);
GetLQ
computes the LQ factorization of a rectangular
matrix, while SolveLQ
exploits this factorization to
solve a least-squares problem. This is only defined for
storages RowMajor and ColMajor (unsymmetric dense matrices).
Matrix<double> A(3, 5);
// initialization of A
A.Fill();
// LQ factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetLQ(A, tau);
// solving least-squares problem LQ X = B (m ≤ n)
Vector<double> X, B(m);
B.Fill();
X = B;
// on exit, X contains the solution
SolveLQ(A, tau, X);
void MltQ_FromQR(const Side&, const Trans&, const Matrix&, const Vector&, Matrix&);
This function multiplies a matrix by Q, where Q is the orthogonal matrix computed during QR factorization. This is only defined for storages RowMajor and ColMajor (unsymmetric dense matrices).
Matrix<double> A(5, 3);
// initialization of A
A.Fill();
// QR factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetQR(A, tau);
// computation of Q*C
Matrix<double> C(m, m);
MltQ_FromQR(SeldonLeft, SeldonNoTrans, A, tau, C);
// you can also compute C*transpose(Q)
MltQ_FromQR(SeldonRight, SeldonTrans, A, tau, C);
// for complex numbers, you have SeldonConjTrans
void GetQ_FromQR(Matrix&, Vector&);
This function overwrites the QR factorization with the matrix Q. This is only defined for storages RowMajor and ColMajor (unsymmetric dense matrices).
Matrix<double> A(5, 3);
// initialization of A
A.Fill();
// QR factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetQR(A, tau);
// Q is formed explicitly from the factorization
Matrix<double> Q = A;
GetQ_FromQR(Q, tau);
void GetEigenvalues(Matrix&, Vector&);
void GetEigenvalues(Matrix&, Vector&, Vector&);
void GetEigenvalues(Matrix&, Matrix&, Vector&);
void GetEigenvalues(Matrix&, Matrix&, Vector&, Vector&);
void GetEigenvalues(Matrix&, Matrix&, Vector&, Vector&, Vector&);
This function computes the eigenvalues of a matrix. Note that the matrix is overwritten by the call to this function.
Matrix<double> A(5, 5);
Vector<double> lambda_real, lambda_imag;
// initialization of A
A.Fill();
// computing eigenvalues (real part and imaginary part)
GetEigenvalues(A, lambda_real, lambda_imag);

// for symmetric matrices, eigenvalues are real
Matrix<double, Symmetric, RowSymPacked> B(5, 5);
Vector<double> lambda;
// initialization of B
B.Fill();
GetEigenvalues(B, lambda);

// for hermitian matrices too
Matrix<complex<double>, General, RowHermPacked> C(5, 5);
// initialization of C
C.Fill();
GetEigenvalues(C, lambda);

// other complex matrices -> complex eigenvalues
Matrix<complex<double>, Symmetric, RowSymPacked> D(5, 5);
Vector<complex<double> > lambda_cpx;
// initialization of D
D.Fill();
GetEigenvalues(D, lambda_cpx);
The function can also solve a generalized eigenvalue problem, as detailed in the following example.
// symmetric matrices A, B
Matrix<double, Symmetric, RowSymPacked> A(5, 5), B(5, 5);
Vector<double> lambda;
// initialization of A and B as you want
// B has to be positive definite
A.FillRand();
B.SetIdentity();
// we solve the generalized eigenvalue problem,
// i.e. we seek lambda so that A x = lambda B x
// the function assumes that B is POSITIVE DEFINITE
GetEigenvalues(A, B, lambda);

// same use for hermitian matrices
Matrix<complex<double>, General, RowHermPacked> Ah(5, 5), Bh(5, 5);
// initialize Ah and Bh as you want
// Bh has to be positive definite
// as a result, eigenvalues are real and you compute them with
GetEigenvalues(Ah, Bh, lambda);

// other complex matrices provide complex eigenvalues,
// potentially infinite if B is indefinite
Matrix<complex<double> > C(5, 5), D(5, 5);
Vector<complex<double> > alphac, betac;
// eigenvalues are written in the form lambda = alphac/betac
GetEigenvalues(C, D, alphac, betac);

// for unsymmetric real matrices, real and imaginary parts are stored
// in different vectors
Matrix<double> Ar(5, 5), Br(5, 5);
Vector<double> alpha_real, alpha_imag, beta;
// lambda are written in the form lambda = (alpha_real, alpha_imag)/beta
GetEigenvalues(Ar, Br, alpha_real, alpha_imag, beta);
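For the general complex case, the eigenvalues themselves can be recovered from alphac and betac; the loop below is only an illustrative sketch (the zero test and the vector lambda_gen are assumptions, not part of Seldon's interface):

// recover lambda(i) = alphac(i) / betac(i); a zero betac(i)
// corresponds to an infinite eigenvalue
Vector<complex<double> > lambda_gen(alphac.GetM());
for (int i = 0; i < alphac.GetM(); i++)
  if (betac(i) != complex<double>(0, 0))
    lambda_gen(i) = alphac(i) / betac(i);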
void GetEigenvaluesEigenvectors(Matrix&, Vector&, Matrix&);
void GetEigenvaluesEigenvectors(Matrix&, Vector&, Vector&, Matrix&);
void GetEigenvaluesEigenvectors(Matrix&, Matrix&, Vector&, Matrix&);
void GetEigenvaluesEigenvectors(Matrix&, Matrix&, Vector&, Vector&, Matrix&);
This function computes the eigenvalues and eigenvectors of a matrix. The matrix is overwritten by the call to this function. Each eigenvector is stored in a column of the eigenvector matrix. For real unsymmetric matrices, the real and imaginary parts of the eigenvalues are computed separately; if the j-th eigenvalue is real, the j-th column contains the associated eigenvector. If the j-th and (j+1)-th eigenvalues are complex conjugates, the j-th and (j+1)-th columns contain vectors A and B such that A + iB and A - iB are the complex conjugate eigenvectors of the initial matrix.
Matrix<double> A(5, 5);
Vector<double> lambda_real, lambda_imag;
Matrix<double> eigen_vectors;
// initialization of A
A.Fill();
// computing eigenvalues and eigenvectors
GetEigenvaluesEigenvectors(A, lambda_real, lambda_imag, eigen_vectors);

// for symmetric matrices, eigenvalues are real
Matrix<double, Symmetric, RowSymPacked> B(5, 5);
Vector<double> lambda;
// initialization of B
B.Fill();
GetEigenvaluesEigenvectors(B, lambda, eigen_vectors);

// for hermitian matrices too
Matrix<complex<double>, General, RowHermPacked> C(5, 5);
Matrix<complex<double> > eigen_vec_cpx;
// initialization of C
C.Fill();
GetEigenvaluesEigenvectors(C, lambda, eigen_vec_cpx);

// other complex matrices -> complex eigenvalues
Matrix<complex<double>, Symmetric, RowSymPacked> D(5, 5);
Vector<complex<double> > lambda_cpx;
// initialization of D
D.Fill();
GetEigenvaluesEigenvectors(D, lambda_cpx, eigen_vec_cpx);
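To illustrate the storage convention for complex conjugate pairs described above, here is a sketch (the loop, the test on lambda_imag and the vectors v_plus/v_minus are assumptions for illustration) that rebuilds the complex eigenvectors from two consecutive real columns:

// rebuild the complex conjugate eigenvectors associated with a pair (j, j+1),
// assuming columns j and j+1 of eigen_vectors hold the real and imaginary parts
int n = A.GetM();
Vector<complex<double> > v_plus(n), v_minus(n);
for (int j = 0; j < n - 1; j++)
  if (lambda_imag(j) != 0.0)
    {
      for (int i = 0; i < n; i++)
        {
          v_plus(i) = complex<double>(eigen_vectors(i, j), eigen_vectors(i, j + 1));
          v_minus(i) = complex<double>(eigen_vectors(i, j), -eigen_vectors(i, j + 1));
        }
      // v_plus goes with lambda_real(j) + i lambda_imag(j), v_minus with its
      // conjugate; skip the second column of the pair
      j++;
    }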
As for GetEigenvalues, this function can be used to solve generalized eigenvalue problems, as detailed in the following example.
// symmetric matrices A, B
Matrix<double, Symmetric, RowSymPacked> A(5, 5), B(5, 5);
Vector<double> lambda;
Matrix<double> eigen_vectors;
// initialization of A and B as you want
// B has to be positive definite
A.FillRand();
B.SetIdentity();
// we solve the generalized eigenvalue problem,
// i.e. we seek lambda so that A x = lambda B x
// the function assumes that B is POSITIVE DEFINITE
GetEigenvaluesEigenvectors(A, B, lambda, eigen_vectors);

// same use for hermitian matrices
Matrix<complex<double>, General, RowHermPacked> Ah(5, 5), Bh(5, 5);
// initialize Ah and Bh as you want
// Bh has to be positive definite
// as a result, eigenvalues are real and you compute them with
Matrix<complex<double> > eigen_vec_cpx;
GetEigenvaluesEigenvectors(Ah, Bh, lambda, eigen_vec_cpx);

// other complex matrices provide complex eigenvalues,
// potentially infinite if B is indefinite
Matrix<complex<double> > C(5, 5), D(5, 5);
Vector<complex<double> > alphac, betac;
// eigenvalues are written in the form lambda = alphac/betac
GetEigenvaluesEigenvectors(C, D, alphac, betac, eigen_vec_cpx);

// for unsymmetric real matrices, real and imaginary parts are stored
// in different vectors
Matrix<double> Ar(5, 5), Br(5, 5);
Vector<double> alpha_real, alpha_imag, beta;
// lambda are written in the form lambda = (alpha_real, alpha_imag)/beta
GetEigenvaluesEigenvectors(Ar, Br, alpha_real, alpha_imag, beta, eigen_vectors);
void GetSVD(Matrix&, Vector&, Matrix&, Matrix&);
This function computes the singular value decomposition of a rectangular matrix. As a result, this function is defined only for storages RowMajor and ColMajor.
Matrix<double> A(10, 5);
Vector<double> lambda;
Matrix<double> U, V;
// initialization of A
A.Fill();
// computing the singular value decomposition
// A = U diag(lambda) V
GetSVD(A, lambda, U, V);
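As a quick sanity check under the convention written above (A = U diag(lambda) V), the matrix can be rebuilt entry by entry. The loop below is only an illustrative sketch using element access, and it assumes the dimensions returned by GetSVD match that convention:

// rebuild A_rec(i, j) = sum_k U(i, k) * lambda(k) * V(k, j)
int m = 10, n = 5, nb_sv = lambda.GetM();
Matrix<double> A_rec(m, n);
A_rec.Zero();
for (int i = 0; i < m; i++)
  for (int j = 0; j < n; j++)
    for (int k = 0; k < nb_sv; k++)
      A_rec(i, j) += U(i, k) * lambda(k) * V(k, j);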