Seldon is interfaced with libraries performing the direct solution of very large sparse linear systems: MUMPS, SuperLU, UmfPack and Pastix. An example file is located in test/program/direct_test.cpp. If you want to test the interface with MUMPS, assuming that MUMPS has been compiled in the directory MumpsDir, you can compile this file by typing:
g++ -DSELDON_WITH_MUMPS direct_test.cpp -IMumpsDir/include -IMumpsDir/libseq \
    -IMetisDir/Lib -LMumpsDir/lib -ldmumps -lzmumps -lmumps_common -lpord \
    -LMumpsDir/libseq -lmpiseq -LScotchDir/lib -lesmumps -lscotch \
    -lscotcherr -lgfortran -lm -lpthread -LMetisDir -lmetis -llapack -lblas
You can simplify this command if you did not install Metis and Scotch and did not compile MUMPS with those libraries. For linking with MUMPS in a parallel compilation, the command line would be:
g++ -DSELDON_WITH_MUMPS -DSELDON_WITH_MPI direct_test.cpp -IMumpsDir/include -IMumpsDir/libseq \
    -IMetisDir/Lib -LMumpsDir/lib -ldmumps -lzmumps -lmumps_common -lpord \
    -lscalapack -lblacs -LScotchDir/lib -lesmumps -lscotch \
    -lscotcherr -lgfortran -lm -lpthread -LMetisDir -lmetis -llapack -lblas
For UmfPack, if UmfPackDir denotes the directory where UmfPack has been installed, you have to type:
g++ -DSELDON_WITH_UMFPACK direct_test.cpp -IUmfPackDir/AMD/Include -IUmfPackDir/UMFPACK/Include \
    -IUmfPackDir/UMFPACK/UFconfig -LUmfPackDir/UMFPACK/Lib \
    -lumfpack -LUmfPackDir/AMD/Lib -lamd -llapack -lblas
For SuperLU, the compilation line reads (SuperLUdir is the directory where SuperLU is located):
g++ -DSELDON_WITH_SUPERLU direct_test.cpp -ISuperLUdir/SRC -LSuperLUdir -lsuperlu
The interface with Pastix can only be compiled in parallel. When compiling Pastix, you can select ordinary integers (32 bits) or long integers (64 bits). It is advised to compile it with PT-Scotch (and the flag -DDISTRIBUTED), so that you can use it in parallel with a distributed matrix. The compilation line reads (PastiXdir is the directory where Pastix is located):
g++ -DSELDON_WITH_PASTIX -DSELDON_WITH_MPI direct_test.cpp -IPastiXdir/install \
    -lpastix -LScotchDir -lptscotch -lptscotcherr -lptscotchparmetis -lrt
All in all, MUMPS seems more efficient and includes more functionalities than the other libraries.
The syntax of all direct solvers is similar:
void GetLU(Matrix&, MatrixMumps&);
void SolveLU(MatrixMumps&, Vector&);
The interface has been written only for double precision (real or complex numbers), since single precision is not accurate enough when very large sparse linear systems are solved.
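For instance, a complex matrix is handled by instantiating the solver with complex<double>. A minimal sketch (the zmumps library linked above provides the complex double-precision routines; the calling conventions follow the examples below):

// complex double-precision matrix solved with Mumps
Matrix<complex<double>, General, ArrayRowSparse> A(n, n);
MatrixMumps<complex<double> > mat_lu;
GetLU(A, mat_lu);
SolveLU(mat_lu, x); // x is a Vector<complex<double> >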
We provide an example of a direct solution using SuperLU.
// first we construct a sparse matrix
int n = 1000; // size of linear system
// we assume that you know how to fill arrays values, ptr, ind
Matrix<double, General, RowSparse> A(n, n, nnz, values, ptr, ind);
// then we declare vectors
Vector<double> b(n), x(n);
// you fill the right-hand side
b.Fill();
// you perform the factorization (real matrix)
MatrixSuperLU<double> mat_lu;
GetLU(A, mat_lu);
// then you can solve as many linear systems as you want
x = b;
SolveLU(mat_lu, x);
If you hesitate about which direct solver to use, or if you prefer to choose the direct solver in an input file for example, a class SparseDirectSolver has been implemented. This class regroups all the direct solvers interfaced by Seldon; it also provides a default sparse solver (but a slow one). Here is an example of how to use this class:
// first we construct a sparse matrix
int n = 1000; // size of linear system
// we assume that you know how to fill arrays values, ptr, ind
Matrix<double, General, RowSparse> A(n, n, nnz, values, ptr, ind);
// declaring the sparse solver
// this solver will try to use, in order of preference,
// Pastix, then Mumps, then UmfPack, then SuperLU,
// and finally the default sparse solver if none of the previous
// libraries is available
SparseDirectSolver<double> mat_lu;
// displaying messages if you want
mat_lu.ShowMessages();
// completing the factorization of the linear system
mat_lu.Factorize(A);
// then we declare vectors
Vector<double> b(n), x(n);
// you fill the right-hand side
b.Fill();
// then you can solve as many linear systems as you want
x = b;
mat_lu.Solve(x);
// you can also force a direct solver:
mat_lu.SelectDirectSolver(mat_lu.SUPERLU);
// and an ordering
mat_lu.SelectOrdering(SparseMatrixOrdering::SCOTCH);
The methods of SparseDirectSolver are summarized below:
HideMessages | Hides all messages of the direct solver |
ShowMessages | Shows a reasonable amount of the messages of the direct solver |
ShowFullHistory | Shows all the messages that the direct solver can display |
GetN | Returns the number of columns of the factorized linear system |
GetM | Returns the number of rows of the factorized linear system |
GetTypeOrdering | Returns the ordering to use when the matrix will be reordered |
SelectOrdering | Sets the ordering to use when the matrix will be reordered |
SetPermutation | Provides manually the permutation array used to reorder the matrix |
SetNbThreadPerNode | Sets the number of threads per node (relevant for Pastix only) |
SelectDirectSolver | Sets the direct solver to use (e.g. Mumps, Pastix, SuperLU) |
GetDirectSolver | Returns the direct solver that will be used during the factorization |
GetInfoFactorization | Returns an error code associated with the factorization (0 if successful) |
Factorize | Performs the factorization of a sparse matrix |
Solve | Solves A x = b or A^T x = b, assuming that Factorize has been called |
FactorizeDistributed | Performs the factorization of a distributed matrix (parallel execution) |
SolveDistributed | Solves A x = b or A^T x = b, assuming that FactorizeDistributed has been called |
Clear | Releases memory used by factorization |
The methods common to the individual solver classes (MatrixMumps, MatrixUmfPack, MatrixSuperLU, ...) are summarized below:
Clear | Releases memory used by factorization |
SelectOrdering | Sets the ordering to use during factorization |
SetPermutation | Provides manually the permutation array to use when reordering the matrix |
HideMessages | Hides messages of direct solver |
ShowMessages | Shows messages of direct solver |
ShowFullHistory | Shows all possible messages of direct solver |
EnableOutOfCore | Enables out-of-core computations |
DisableOutOfCore | Disable out-of-core computations |
RefineSolution | Refines the solution when calling solve |
DoNotRefineSolution | Does not refine the solution when calling solve |
GetInfoFactorization | Returns the error code generated by the factorization |
SetNbThreadPerNode | Sets the number of threads per node (relevant for Pastix only) |
FindOrdering | Computes a new ordering of rows and columns |
FactorizeMatrix | Performs an LU factorization |
PerformAnalysis | Performs an analysis of linear system to factorize |
PerformFactorization | Performs a factorization of linear system, assuming that PerformAnalysis has been called |
GetSchurMatrix | Forms the Schur complement |
Solve | Uses the LU factorization to solve a linear system |
FactorizeDistributedMatrix | Performs the factorization of a distributed matrix (parallel execution) |
SolveDistributed | Solves A x = b or A^T x = b, assuming that FactorizeDistributedMatrix has been called |
The related functions are:
GetLU | Performs an LU factorization |
SolveLU | Uses the LU factorization to solve a linear system |
GetSchurMatrix | Forms the Schur complement |
void ShowMessages();
void HideMessages();
void ShowFullHistory();
ShowMessages allows the direct solver to display information about the factorization and resolution phases, while HideMessages forbids any message to be displayed. ShowFullHistory displays all the possible messages the direct solver is able to give. By default, no messages are displayed.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A;
// you declare a Mumps structure
MatrixMumps<double> mat_lu;
// you can display messages for the factorization:
mat_lu.ShowMessages();
GetLU(A, mat_lu);
// then hide messages for each resolution
mat_lu.HideMessages();
SolveLU(mat_lu, x);
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
int GetM();
int GetN();
This method returns the number of rows (or columns) of the matrix. For direct solvers, we consider only square matrices.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A;
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you factorize the matrix
mat_lu.Factorize(A);
// then you can use GetM, even though A has been cleared
int n = mat_lu.GetM();
cout << "Number of rows : " << n << endl;
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
int GetTypeOrdering();
This method returns the ordering algorithm used by the direct solver. It is only available in class SparseDirectSolver.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A;
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// which ordering will be used?
int type_ordering = mat_lu.GetTypeOrdering();
void SelectOrdering(int type);
This method sets the ordering algorithm used by the direct solver. For Mumps, Pastix, SuperLU and UmfPack, the available orderings (and their numbers) are detailed in the documentation of each direct solver. For the class SparseDirectSolver, the available orderings are listed in SparseMatrixOrdering:
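The exact content of this enumeration depends on the Seldon version; the sketch below gathers the orderings used on this page (SCOTCH, COLAMD, QAMD, USER and AUTO appear in the surrounding text and examples), the remaining names being assumptions:

// assumed content of SparseMatrixOrdering (names other than
// SCOTCH, COLAMD, QAMD, USER and AUTO are not confirmed by this page)
class SparseMatrixOrdering
{
public:
  enum {IDENTITY, REVERSE_CUTHILL_MCKEE, PORD, SCOTCH,
        METIS, AMD, COLAMD, QAMD, USER, AUTO};
};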
AUTO is the default ordering; it means that the code will select the most "natural" ordering for the specified direct solver (e.g. SCOTCH with Pastix, COLAMD with UmfPack). USER means that the code assumes that the user provides the permutation array manually through the SetPermutation method.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A;
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you specify an ordering:
mat_lu.SelectOrdering(SparseMatrixOrdering::QAMD);
// then you can factorize with this ordering algorithm
mat_lu.Factorize(A);
void SetPermutation(IVect&);
This method sets the ordering permutation array used by the direct solver. It is up to the user to check that this array is valid.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you specify how A will be reordered by giving new row numbers directly
IVect permutation(n);
permutation.Fill();
mat_lu.SetPermutation(permutation);
// then you can factorize with this ordering
mat_lu.Factorize(A);
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
void SetNbThreadPerNode(int n);
This method sets the number of threads for each node. It is only relevant for Pastix.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// 8 threads per node if each node contains 8 cores
mat_lu.SetNbThreadPerNode(8);
// then you can factorize the matrix
mat_lu.Factorize(A);
void SelectDirectSolver(int type);
This method sets the direct solver to use; you can choose between:
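The identifiers are attributes of SparseDirectSolver; MUMPS, SUPERLU and SELDON_SOLVER appear in the examples of this page, the other names below are assumptions deduced from the interfaced libraries:

// assumed solver identifiers (attributes of SparseDirectSolver)
mat_lu.SELDON_SOLVER  // default sparse solver provided by Seldon (slow)
mat_lu.UMFPACK        // UmfPack (assumed name)
mat_lu.SUPERLU        // SuperLU
mat_lu.MUMPS          // Mumps
mat_lu.PASTIX         // Pastix (assumed name)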
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you select which solver you want to use
mat_lu.SelectDirectSolver(mat_lu.MUMPS);
// then you can factorize with this solver
mat_lu.Factorize(A);
int GetDirectSolver();
This method returns the direct solver used.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// which solver is used by default?
int type_solver = mat_lu.GetDirectSolver();
if (type_solver == mat_lu.SELDON_SOLVER)
  cout << "Warning: this solver is slow" << endl;
int GetInfoFactorization();
This method returns the error code provided by the direct solver that was used. For SparseDirectSolver, the error codes are listed in an enum attribute and can be:
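Apart from OUT_OF_MEMORY, which appears in the example below, the names in this sketch are assumptions and may differ between Seldon versions:

// assumed error codes returned by GetInfoFactorization
// (only OUT_OF_MEMORY is confirmed by the example below)
// FACTO_OK                     : successful factorization
// STRUCTURALLY_SINGULAR_MATRIX : the matrix is structurally singular
// NUMERICALLY_SINGULAR_MATRIX  : the matrix is numerically singular
// OUT_OF_MEMORY                : not enough memory to complete the factorization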
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A;
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you factorize the matrix
mat_lu.Factorize(A);
// to know if there was a problem
int info = mat_lu.GetInfoFactorization();
if (info == mat_lu.OUT_OF_MEMORY)
  {
    cout << "The matrix is too large, not enough memory" << endl;
  }
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
void Factorize(Matrix&);
void Factorize(Matrix&, bool);
void FactorizeMatrix(Matrix&);
void FactorizeMatrix(Matrix&, bool);
This method factorizes the given matrix (it is called Factorize in SparseDirectSolver and FactorizeMatrix in the individual solver classes). In a parallel execution, this method considers that the linear system to solve is restricted to the current processor; for example, you can use it to factorize independent linear systems. If the matrix is distributed over several processors, you should call FactorizeDistributed instead.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you factorize the matrix
mat_lu.Factorize(A);
// to know if there was a problem
int info = mat_lu.GetInfoFactorization();
if (info == mat_lu.OUT_OF_MEMORY)
  {
    cout << "The matrix is too large, not enough memory" << endl;
  }
// once the matrix is factorized, you can solve systems
Vector<double> x(n);
mat_lu.Solve(x);
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
void Solve(Vector&);
void Solve(SeldonTrans, Vector&);
This method computes the solution of A x = b or A^T x = b, assuming that GetLU (or Factorize/FactorizeMatrix) has been called before. It is equivalent to using the function SolveLU.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a sparse solver
SparseDirectSolver<double> mat_lu;
// you factorize the matrix
mat_lu.Factorize(A);
// once the matrix is factorized, you can solve systems
Vector<double> x(n), b(n);
b.Fill();
x = b;
mat_lu.Solve(x);
// to solve A^T x = b:
x = b;
mat_lu.Solve(SeldonTrans, x);
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
void FactorizeDistributed(MPI::Comm& comm, Vector& Ptr, Vector& Ind, Vector& Val, IVect& glob_num, bool sym, bool keep_matrix);
void FactorizeDistributed(MPI::Comm& comm, Vector& Ptr, Vector& Ind, Vector& Val, IVect& glob_num, bool sym);
This method factorizes a distributed matrix, given through the arrays Ptr, Ind, Val (CSC format). glob_num is a local-to-global array (glob_num(i) is the global row number of local row i). Each column of the global system is assumed to be distributed to one processor and only one, and each processor is assumed to have at least one column assigned to it. The arrays Ptr and Ind may consist of 64-bit integers (in order to be compatible with 64-bit versions of Pastix). If sym is true, the matrix is symmetric, and Ptr, Ind, Val are assumed to represent the lower part of the matrix; if sym is false, the matrix is considered non-symmetric. The communicator given as argument groups all the processors involved in the factorization. This method works only if you have selected Pastix or Mumps.
// the initialization MPI_Init_thread is needed if Pastix has been compiled
// with threads, otherwise you can use MPI_Init
int required = MPI_THREAD_MULTIPLE;
int provided = -1;
MPI_Init_thread(&argc, &argv, required, &provided);

// declaration of the sparse solver
SparseDirectSolver<double> mat_lu;

// considered global matrix
// A = | 1.5  1    0    0   -0.3 |
//     | 1    2.0  0   -1.0  0   |
//     | 0    0    2.0  0    0.8 |
//     | 0   -1.0  0    3.0  1.2 |
//     |-0.3  0.0  0.8  1.2  3.0 |
// global right-hand side (solution is 1, 2, 3, 4, 5)
// B = |2  1  10  16  21.9|
if (MPI::COMM_WORLD.Get_rank() == 0)
  {
    // on the first processor, we provide columns 1, 2 and 5
    // only the lower part, since the matrix is symmetric
    int n = 3;
    Matrix<double, General, ArrayColSparse> A(5, n);
    A(0, 0) = 1.5; A(1, 0) = 1.0; A(4, 0) = -0.3;
    A(1, 1) = 2.0; A(3, 1) = -1.0;
    A(4, 2) = 3.0;

    Vector<double> b(n);
    b(0) = 2; b(1) = 1; b(2) = 21.9;

    IVect num_loc(n);
    num_loc(0) = 0; num_loc(1) = 1; num_loc(2) = 4;

    // converting to CSC format
    IVect Ptr, IndRow;
    Vector<double> Val;
    General prop;
    ConvertToCSC(A, prop, Ptr, IndRow, Val);

    // factorization
    mat_lu.FactorizeDistributed(MPI::COMM_WORLD, Ptr, IndRow, Val, num_loc, true);

    // then resolution
    mat_lu.SolveDistributed(MPI::COMM_WORLD, SeldonNoTrans, b, num_loc);
    DISP(b);
  }
else
  {
    // on the second processor, we provide columns 3 and 4
    // only the lower part, since the matrix is symmetric
    int n = 2;
    Matrix<double, General, ArrayColSparse> A(5, n);
    A(2, 0) = 2.0; A(4, 0) = 0.8;
    A(3, 1) = 3.0; A(4, 1) = 1.2;

    Vector<double> b(n);
    b(0) = 10; b(1) = 16;

    IVect num_loc(n);
    num_loc(0) = 2; num_loc(1) = 3;

    // converting to CSC format
    IVect Ptr, IndRow;
    Vector<double> Val;
    General prop;
    ConvertToCSC(A, prop, Ptr, IndRow, Val);

    // factorization
    mat_lu.FactorizeDistributed(MPI::COMM_WORLD, Ptr, IndRow, Val, num_loc, true);

    // then resolution
    mat_lu.SolveDistributed(MPI::COMM_WORLD, SeldonNoTrans, b, num_loc);
    DISP(b);
  }

MPI_Finalize();
Mumps.cxx
Pastix.cxx
SparseSolver.cxx
void SolveDistributed(MPI::Comm& comm, Vector& x, IVect& glob_num);
void SolveDistributed(SeldonTrans, MPI::Comm& comm, Vector& x, IVect& glob_num);
This method solves a distributed linear system (or its transpose), once FactorizeDistributed has been called. It works only if you have selected Pastix or Mumps.
The usage is illustrated in the example given for FactorizeDistributed above; the resolution phase corresponds to the calls to SolveDistributed.
Mumps.cxx
Pastix.cxx
SparseSolver.cxx
void Clear();
This method releases the memory used by the factorization. It is available for every direct solver.
Matrix<double, General, ArrayRowSparse> A;
MatrixUmfPack<double> mat_lu;
// you fill A as you want
// then a first factorization is achieved
GetLU(A, mat_lu);
// then solve needed linear systems
SolveLU(mat_lu, x);
// and if you need to spare memory, you can clear the factorization
mat_lu.Clear();
Mumps.cxx
UmfPack.cxx
SuperLU.cxx
Pastix.cxx
SparseSolver.cxx
void EnableOutOfCore();
void DisableOutOfCore();
These methods allow (or forbid) the direct solver to write part of the matrix on the disk. This option is implemented only for Mumps.
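As an illustration, here is a minimal sketch (out-of-core being a Mumps-only feature, a MatrixMumps structure is used, following the other examples of this page):

// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a Mumps structure
MatrixMumps<double> mat_lu;
// allow Mumps to write part of the factors on disk
mat_lu.EnableOutOfCore();
GetLU(A, mat_lu);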
void RefineSolution();
void DoNotRefineSolution();
These methods are available for Pastix only; they include (or skip) an additional refinement step in the resolution phase. The obtained solution should be more accurate, but the resolution cost is roughly twice as high.
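A minimal sketch (the class name MatrixPastix is an assumption, by analogy with MatrixMumps and MatrixSuperLU used elsewhere on this page):

// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// assumed Pastix structure, by analogy with MatrixMumps
MatrixPastix<double> mat_lu;
// request an additional refinement step during each resolution
mat_lu.RefineSolution();
GetLU(A, mat_lu);
SolveLU(mat_lu, x);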
void PerformAnalysis(Matrix& A);
void PerformFactorization(Matrix& A);
The method "PerformAnalysis" should reorder matrix, and perform a "symbolic" factorization of the matrix, whereas the method "PerformFactorization" should perform only numerical factorization. The aim here is to reduce the amount of work when we consider a family of linear systems which have the same pattern. In that configuration, the input matrix is not cleared.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a Mumps solver
MatrixMumps<double> mat_lu;
// symbolic factorization
mat_lu.PerformAnalysis(A);
// then numerical factorization
mat_lu.PerformFactorization(A);
// you can solve a system
mat_lu.Solve(x);
// then construct another matrix A with the same
// pattern; for example FillRand will modify the values
A.FillRand();
// and solve the new system
mat_lu.PerformFactorization(A);
mat_lu.Solve(x);
void FindOrdering(Matrix&, Vector<int>&);
void FindOrdering(Matrix&, Vector<int>&, bool);
This method computes new row numbers of the matrix by using the algorithms available in Mumps/Pastix. It is currently not implemented for UmfPack/SuperLU.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A;
// you declare a Mumps structure
MatrixMumps<double> mat_lu;
IVect permutation;
// we find the best numbering of A
// by default, the matrix A is erased
mat_lu.FindOrdering(A, permutation);
// if the last argument is true, A is not modified
mat_lu.FindOrdering(A, permutation, true);
void GetSchurMatrix(Matrix&, Vector<int>&, Matrix&);
void GetSchurMatrix(Matrix&, Vector<int>&, Matrix&, bool);
This method computes the Schur complement when a matrix and the row numbers of the Schur matrix are provided. It is equivalent to calling the function GetSchurMatrix.
// you fill a sparse matrix
Matrix<double, General, ArrayRowSparse> A(n, n);
// you declare a Mumps structure
MatrixMumps<double> mat_lu;
// then you set some row numbers num that will be associated
// with a sub-matrix A22 : A = [A11 A12; A21 A22]
IVect num(3);
num(0) = 4; num(1) = 10; num(2) = 12;
// computation of the Schur complement : A22 - A21 A11^-1 A12
Matrix<double> schur_cplt;
mat_lu.GetSchurMatrix(A, num, schur_cplt);
// the size of matrix schur_cplt should be the same as the size of num
void GetLU(Matrix&, MatrixMumps&);
void GetLU(Matrix&, MatrixUmfPack&);
void GetLU(Matrix&, MatrixSuperLU&);
void GetLU(Matrix&, MatrixMumps&, bool);
void GetLU(Matrix&, MatrixUmfPack&, bool);
void GetLU(Matrix&, MatrixSuperLU&, bool);
This function performs an LU factorization. The last argument is optional; when omitted, the initial matrix is erased, whereas when it is equal to true, the initial matrix is not modified. This function is not defined for SparseDirectSolver; you have to use the method Factorize instead.
// sparse matrices, use of Mumps for example
MatrixMumps<double> mat_lu;
Matrix<double, General, ArrayRowSparse> Asp(n, n);
// you add all non-zero entries to matrix Asp
// then you call GetLU to factorize the matrix
GetLU(Asp, mat_lu);
// Asp is empty after GetLU
// you can solve Asp x = b
X = B;
SolveLU(mat_lu, X);
// if you want to solve Asp^T x = b
X = B;
SolveLU(SeldonTrans, mat_lu, X);
// if you want to keep the initial matrix
GetLU(Asp, mat_lu, true);
Mumps.cxx
UmfPack.cxx
Pastix.cxx
SuperLU.cxx
void SolveLU(MatrixMumps&, Vector&);
void SolveLU(MatrixUmfPack&, Vector&);
void SolveLU(MatrixSuperLU&, Vector&);
void SolveLU(SeldonTrans, MatrixSuperLU&, Vector&);
This function uses the LU factorization computed by GetLU in order to solve a linear system or its transpose. The right-hand side is overwritten by the result. This function is not defined for SparseDirectSolver; you have to use the method Solve instead.
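As an illustration, a minimal sketch based on the signatures above (the setup mirrors the GetLU example):

// sparse matrix factorized with SuperLU
Matrix<double, General, ArrayRowSparse> A(n, n);
MatrixSuperLU<double> mat_lu;
// you add all non-zero entries to A, then factorize
GetLU(A, mat_lu);
// solve A x = b, the right-hand side is overwritten by the solution
x = b;
SolveLU(mat_lu, x);
// solve the transposed system A^T x = b
x = b;
SolveLU(SeldonTrans, mat_lu, x);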
Mumps.cxx
UmfPack.cxx
Pastix.cxx
SuperLU.cxx
void GetSchurMatrix(Matrix&, MatrixMumps&, Vector<int>&, Matrix&);
This function computes the so-called Schur matrix (or Schur complement) from a given matrix.
MatrixMumps<double> mat_lu;
Matrix<double, General, ArrayRowSparse> A(n, n);
// you add all non-zero entries to matrix A
// then you specify the row numbers of the Schur matrix
IVect num(5);
num.Fill();
// somehow, A can be written in the form A = [A11 A12; A21 A22]
// A22 is related to the row numbers of the Schur matrix
// the Schur matrix is dense
Matrix<double> mat_schur(5, 5);
GetSchurMatrix(A, mat_lu, num, mat_schur);
// you then obtain A22 - A21*inv(A11)*A12 -> mat_schur