This library provides functions operating on matrices. Below, the functions are ordered according to categories and a typical call of the respective function is shown. Most functions are solely an interface to the external LAPACK library.
Note: A' is a shorthand notation for transpose(A).
Basic Information
Linear Equations
Matrix Factorizations
Matrix Properties
Matrix Exponentials
Matrix Equations
Matrix Manipulation
Name | Description |
---|---|
Examples | Examples demonstrating the usage of the Math.Matrices functions |
toString | Convert a matrix into its string representation |
isEqual | Compare whether two Real matrices are identical |
solve | Solve real system of linear equations A*x=b with a b vector (Gaussian elimination with partial pivoting) |
solve2 | Solve real system of linear equations A*X=B with a B matrix (Gaussian elimination with partial pivoting) |
leastSquares | Solve linear equation A*x = b (exactly if possible, or otherwise in a least square sense; A may be non-square and may be rank deficient) |
leastSquares2 | Solve linear equation A*X = B (exactly if possible, or otherwise in a least square sense; A may be non-square and may be rank deficient) |
equalityLeastSquares | Solve a linear equality constrained least squares problem |
LU | LU decomposition of square or rectangular matrix |
LU_solve | Solve real system of linear equations P*L*U*x=b with a b vector and an LU decomposition (from LU(..)) |
LU_solve2 | Solve real system of linear equations P*L*U*X=B with a B matrix and an LU decomposition (from LU(..)) |
eigenValues | Return eigenvalues and eigenvectors for a real, nonsymmetric matrix in a Real representation |
eigenValueMatrix | Return real valued block diagonal matrix J of eigenvalues of matrix A (A=V*J*Vinv) |
singularValues | Return singular values and left and right singular vectors |
QR | Return the QR decomposition of a square matrix with optional column pivoting (A(:,p) = Q*R) |
hessenberg | Return upper Hessenberg form of a matrix |
realSchur | Return the real Schur form (rsf) S of a square matrix A, A=QZ*S*QZ' |
cholesky | Return the Cholesky factorization of a symmetric positive definite matrix |
balance | Return a balanced form of matrix A to improve the condition of A |
trace | Return the trace of matrix A, i.e., the sum of the diagonal elements |
det | Return determinant of a matrix (computed by LU decomposition; try to avoid det(..)) |
inv | Return inverse of a matrix (try to avoid inv(..)) |
rank | Return rank of a rectangular matrix (computed with singular values) |
conditionNumber | Return the condition number norm(A)*norm(inv(A)) of a matrix A |
rcond | Return the reciprocal condition number of a matrix |
norm | Return the p-norm of a matrix |
frobeniusNorm | Return the Frobenius norm of a matrix |
nullSpace | Return the orthonormal nullspace of a matrix |
exp | Return the exponential of a matrix by adaptive Taylor series expansion with scaling and balancing |
integralExp | Return the exponential and the integral of the exponential of a matrix |
integralExpT | Return the exponential, the integral of the exponential, and time-weighted integral of the exponential of a matrix |
continuousLyapunov | Return solution X of the continuous-time Lyapunov equation X*A + A'*X = C |
continuousSylvester | Return solution X of the continuous-time Sylvester equation A*X + X*B = C |
continuousRiccati | Return solution X of the continuous-time algebraic Riccati equation A'*X + X*A - X*B*inv(R)*B'*X + Q = 0 (care) |
discreteLyapunov | Return solution X of the discrete-time Lyapunov equation A'*X*A + sgn*X = C |
discreteSylvester | Return solution of the discrete-time Sylvester equation A*X*B + sgn*X = C |
discreteRiccati | Return solution of discrete-time algebraic Riccati equation A'*X*A - X - A'*X*B*inv(R + B'*X*B)*B'*X*A + Q = 0 (dare) |
sort | Sort the rows or columns of a matrix in ascending or descending order |
flipLeftRight | Flip the columns of a matrix in left/right direction |
flipUpDown | Flip the rows of a matrix in up/down direction |
LAPACK | Interface to LAPACK library (should usually not directly be used but only indirectly via Modelica.Math.Matrices) |
Utilities | Utility functions that should not be directly utilized by the user |
Matrices.toString(A); Matrices.toString(A, name="", significantDigits=6);
The function call "Matrices.toString(A)
" returns the
string representation of matrix A.
With the optional arguments "name" and "significantDigits", a name and the number of significant digits are defined. The default values of "name" and "significantDigits" are "" and 6, respectively. If name=="", the prefix "<name> =" is left out.
A = [2.12, -4.34; -2.56, -1.67];

toString(A);
// = "
//      2.12  -4.34
//     -2.56  -1.67"

toString(A, "A", 1);
// = "A =
//      2  -4
//     -3  -2"
Name | Description |
---|---|
M[:, :] | Real matrix |
name | Independent variable name used for printing |
significantDigits | Number of significant digits that are shown |
Name | Description |
---|---|
s | String expression of matrix M |
Matrices.isEqual(M1, M2); Matrices.isEqual(M1, M2, eps=0);
The function call "Matrices.isEqual(M1, M2)
" returns true,
if the two Real matrices M1 and M2 have the same dimensions and
the same elements. Otherwise the function
returns false. Two elements e1 and e2 of the two matrices
are checked on equality by the test "abs(e1-e2) ≤ eps", where "eps"
can be provided as third argument of the function. Default is "eps = 0".
Real A1[2,2] = [1,2; 3,4];
Real A2[3,2] = [1,2; 3,4; 5,6];
Real A3[2,2] = [1,2; 3,4.0001];
Boolean result;
algorithm
  result := Matrices.isEqual(A1,A2);     // = false
  result := Matrices.isEqual(A1,A3);     // = false
  result := Matrices.isEqual(A1,A1);     // = true
  result := Matrices.isEqual(A1,A3,0.1); // = true
Name | Description |
---|---|
M1[:, :] | First matrix |
M2[:, :] | Second matrix (may have different size as M1) |
eps | Two elements e1 and e2 of the two matrices are identical if abs(e1-e2) <= eps |
Name | Description |
---|---|
result | = true, if matrices have the same size and the same elements |
Matrices.solve(A,b);
This function call returns the solution x of the linear system of equations
A*x = b
If a unique solution x does not exist (since A is singular), an assertion is triggered. If this is not desired, use instead Matrices.leastSquares and inquire the singularity of the solution with the return argument rank (a unique solution is computed if rank = size(A,1)).
Note, the solution is computed with the LAPACK function "dgesv", i.e., by Gaussian elimination with partial pivoting.
Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real b[3] = {10,22,12};
Real x[3];
algorithm
  x := Matrices.solve(A,b);  // x = {3,2,1}
Name | Description |
---|---|
A[:, size(A, 1)] | Matrix A of A*x = b |
b[size(A, 1)] | Vector b of A*x = b |
Name | Description |
---|---|
x[size(b, 1)] | Vector x such that A*x = b |
Matrices.solve2(A,B);
This function call returns the solution X of the linear system of equations
A*X = B
If a unique solution X does not exist (since A is singular), an assertion is triggered. If this is not desired, use instead Matrices.leastSquares2 and inquire the singularity of the solution with the return argument rank (a unique solution is computed if rank = size(A,1)).
Note, the solution is computed with the LAPACK function "dgesv", i.e., by Gaussian elimination with partial pivoting.
Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real B[3,2] = [10, 20;
               22, 44;
               12, 24];
Real X[3,2];
algorithm
  X := Matrices.solve2(A, B);  /* X = [3, 6;
                                       2, 4;
                                       1, 2] */
See also: Matrices.LU, Matrices.LU_solve2, Matrices.leastSquares2.
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | Matrix A of A*X = B |
B[size(A, 1), :] | Matrix B of A*X = B |
Name | Description |
---|---|
X[size(B, 1), size(B, 2)] | Matrix X such that A*X = B |
x = Matrices.leastSquares(A,b);
Returns a solution of equation A*x = b in a least square sense (A may be rank deficient):
minimize |A*x - b|
Several different cases can be distinguished (note, rank is an output argument of this function):
size(A,1) = size(A,2): A solution is returned for a regular, as well as a singular matrix A.

size(A,1) > size(A,2): The equation A*x = b has no unique solution. The solution x is selected such that |A*x - b| is as small as possible. If rank = size(A,2), this minimum-norm solution is unique. If rank < size(A,2), there are infinitely many solutions leading to the same minimum value of |A*x - b|; among these, the one with the minimum norm |x| is selected. This gives a unique solution that minimizes both |A*x - b| and |x|.

size(A,1) < size(A,2): The equation A*x = b has infinitely many solutions; among them, the solution x with the minimum norm |x| is returned (see the description of output x below).
Note, the solution is computed with the LAPACK function "dgelsx", i.e., QR or LQ factorization of A with column pivoting.
The function first computes a QR factorization with column pivoting:
A * P = Q * [R11, R12;
             0,   R22]
with R11 defined as the largest leading submatrix whose estimated condition number is less than 1/rcond. The order of R11, rank, is the effective rank of A.
Then, R22 is considered to be negligible, and R12 is annihilated by orthogonal transformations from the right, arriving at the complete orthogonal factorization:
A * P = Q * [T11, 0;
             0,   0] * Z
The minimum-norm solution is then
x = P * Z' * [inv(T11)*Q1'*b;
              0]
where Q1 consists of the first "rank" columns of Q.
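A minimal usage sketch with illustrative values (an overdetermined system, so x minimizes |A*x - b|):

Real A[3,2] = [1, 2;
               3, 4;
               5, 6];
Real b[3] = {1, 1, 1};
Real x[2];
Integer rank;
algorithm
  (x, rank) := Matrices.leastSquares(A, b);
  // x minimizes |A*x - b|; rank is the effective rank of A (here rank = 2)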
See also: Matrices.leastSquares2 (same as leastSquares, but with a right hand side matrix) and Matrices.solve (for square, regular matrices A).
Name | Description |
---|---|
A[:, :] | Matrix A |
b[size(A, 1)] | Vector b |
rcond | Reciprocal condition number to estimate the rank of A |
Name | Description |
---|---|
x[size(A, 2)] | Vector x such that min|A*x-b|^2 if size(A,1) >= size(A,2) or min|x|^2 and A*x=b, if size(A,1) < size(A,2) |
rank | Rank of A |
X = Matrices.leastSquares2(A,B);
Returns a solution of equation A*X = B in a least square sense (A may be rank deficient):
minimize |A*X - B|
Several different cases can be distinguished (note, rank is an output argument of this function):
size(A,1) = size(A,2): A solution is returned for a regular, as well as a singular matrix A.

size(A,1) > size(A,2): The equation A*X = B has no unique solution. The solution X is selected such that |A*X - B| is as small as possible. If rank = size(A,2), this minimum-norm solution is unique. If rank < size(A,2), there are infinitely many solutions leading to the same minimum value of |A*X - B|; among these, the one with the minimum norm |X| is selected. This gives a unique solution that minimizes both |A*X - B| and |X|.

size(A,1) < size(A,2): The equation A*X = B has infinitely many solutions; among them, the solution X with the minimum norm |X| is returned (see the description of output X below).
Note, the solution is computed with the LAPACK function "dgelsx", i.e., QR or LQ factorization of A with column pivoting.
The function first computes a QR factorization with column pivoting:
A * P = Q * [R11, R12;
             0,   R22]
with R11 defined as the largest leading submatrix whose estimated condition number is less than 1/rcond. The order of R11, rank, is the effective rank of A.
Then, R22 is considered to be negligible, and R12 is annihilated by orthogonal transformations from the right, arriving at the complete orthogonal factorization:
A * P = Q * [T11, 0;
             0,   0] * Z
The minimum-norm solution is then
X = P * Z' * [inv(T11)*Q1'*B;
              0]
where Q1 consists of the first "rank" columns of Q.
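The analogous sketch for a right hand side matrix (illustrative values):

Real A[3,2] = [1, 2;
               3, 4;
               5, 6];
Real B[3,2] = [1, 0;
               1, 1;
               1, 2];
Real X[2,2];
Integer rank;
algorithm
  (X, rank) := Matrices.leastSquares2(A, B);
  // each column X[:,j] minimizes |A*X[:,j] - B[:,j]|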
See also: Matrices.leastSquares (same as leastSquares2, but with a right hand side vector) and Matrices.solve2 (for square, regular matrices A).
Name | Description |
---|---|
A[:, :] | Matrix A |
B[size(A, 1), :] | Matrix B |
rcond | Reciprocal condition number to estimate rank of A |
Name | Description |
---|---|
X[size(A, 2), size(B, 2)] | Matrix X such that min|A*X-B|^2 if size(A,1) >= size(A,2) or min|X|^2 and A*X=B, if size(A,1) < size(A,2) |
rank | Rank of A |
x = Matrices.equalityLeastSquares(A,a,B,b);
This function returns the solution x of the linear equality-constrained least squares problem:
min|A*x - a|^2 over x, subject to B*x = b
It is required that the dimensions of A and B fulfill the following relationship:
size(B,1) ≤ size(A,2) ≤ size(A,1) + size(B,1)
Note, the solution is computed with the LAPACK function "dgglse" using the generalized RQ factorization under the assumptions that B has full row rank (= size(B,1)) and the matrix [A;B] has full column rank (= size(A,2)). In this case, the problem has a unique solution.
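A small usage sketch with illustrative values satisfying the dimension relation above (B has full row rank and [A; B] has full column rank, so the problem has a unique solution):

Real A[3,2] = [1, 0;
               0, 1;
               1, 1];
Real a[3] = {1, 2, 3};
Real B[1,2] = [1, 1];
Real b[1] = {1};
Real x[2];
algorithm
  x := Matrices.equalityLeastSquares(A, a, B, b);
  // x minimizes |A*x - a|^2 among all x with B*x = b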
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, :] | Minimize |A*x - a|^2 |
a[size(A, 1)] | |
B[:, size(A, 2)] | subject to B*x=b |
b[size(B, 1)] |
Name | Description |
---|---|
x[size(A, 2)] | solution vector |
(LU, pivots) = Matrices.LU(A); (LU, pivots, info) = Matrices.LU(A);
This function call returns the LU decomposition of a "Real[m,n]" matrix A, i.e.,

  P*L*U = A

where P is a permutation matrix (implicitly defined by vector pivots), L is a lower triangular matrix with unit diagonal elements (lower trapezoidal if m > n), and U is an upper triangular matrix (upper trapezoidal if m < n). Matrices L and U are stored in the returned matrix LU (the diagonal of L is not stored).
With the companion function Matrices.LU_solve, this decomposition can be used to solve linear systems (P*L*U)*x = b with different right hand side vectors b. If a linear system of equations with just one right hand side vector b shall be solved, it is more convenient to just use the function Matrices.solve.
The optional third (Integer) output argument has the following meaning:

info = 0: | successful exit |
info > 0: | if info = i, U[i,i] is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. |
The LU factorization is computed with the LAPACK function "dgetrf", i.e., by Gaussian elimination using partial pivoting with row interchanges. The vector "pivots" contains the pivot indices, i.e., for 1 ≤ i ≤ min(m,n), row i of matrix A was interchanged with row pivots[i].
Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real b1[3] = {10,22,12};
Real b2[3] = { 7,13,10};
Real    LU[3,3];
Integer pivots[3];
Real    x1[3];
Real    x2[3];
algorithm
  (LU, pivots) := Matrices.LU(A);
  x1 := Matrices.LU_solve(LU, pivots, b1);  // x1 = {3,2,1}
  x2 := Matrices.LU_solve(LU, pivots, b2);  // x2 = {1,0,2}
See also: Matrices.LU_solve, Matrices.solve.
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, :] | Square or rectangular matrix |
Name | Description |
---|---|
LU[size(A, 1), size(A, 2)] | L,U factors (used with LU_solve(..)) |
pivots[min(size(A, 1), size(A, 2))] | pivot indices (used with LU_solve(..)) |
info | Information |
Matrices.LU_solve(LU, pivots, b);
This function call returns the solution x of the linear system of equations

  P*L*U*x = b

where P is a permutation matrix (implicitly defined by vector pivots), L is a lower triangular matrix with unit diagonal elements (lower trapezoidal if m > n), and U is an upper triangular matrix (upper trapezoidal if m < n). The matrices of this decomposition are computed with function Matrices.LU, which returns the arguments LU and pivots that are used as input arguments of Matrices.LU_solve.
With Matrices.LU and Matrices.LU_solve it is possible to efficiently solve linear systems with different right hand side vectors. If a linear system of equations with just one right hand side vector shall be solved, it is more convenient to just use the function Matrices.solve.
If a unique solution x does not exist (since the LU decomposition is singular), an exception is raised.
The LU factorization is computed with the LAPACK function "dgetrf", i.e., by Gaussian elimination using partial pivoting with row interchanges. The vector "pivots" contains the pivot indices, i.e., for 1 ≤ i ≤ min(m,n), row i of matrix A was interchanged with row pivots[i].
Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real b1[3] = {10,22,12};
Real b2[3] = { 7,13,10};
Real    LU[3,3];
Integer pivots[3];
Real    x1[3];
Real    x2[3];
algorithm
  (LU, pivots) := Matrices.LU(A);
  x1 := Matrices.LU_solve(LU, pivots, b1);  // x1 = {3,2,1}
  x2 := Matrices.LU_solve(LU, pivots, b2);  // x2 = {1,0,2}
Name | Description |
---|---|
LU[:, size(LU, 1)] | L,U factors of Matrices.LU(..) for a square matrix |
pivots[size(LU, 1)] | Pivot indices of Matrices.LU(..) |
b[size(LU, 1)] | Right hand side vector of P*L*U*x=b |
Name | Description |
---|---|
x[size(b, 1)] | Solution vector such that P*L*U*x = b |
Matrices.LU_solve2(LU, pivots, B);
This function call returns the solution X of the linear system of equations

  P*L*U*X = B

where P is a permutation matrix (implicitly defined by vector pivots), L is a lower triangular matrix with unit diagonal elements (lower trapezoidal if m > n), and U is an upper triangular matrix (upper trapezoidal if m < n). The matrices of this decomposition are computed with function Matrices.LU, which returns the arguments LU and pivots that are used as input arguments of Matrices.LU_solve2.
With Matrices.LU and Matrices.LU_solve2 it is possible to efficiently solve linear systems with different right hand side matrices. If a linear system of equations with just one right hand side matrix shall be solved, it is more convenient to just use the function Matrices.solve2.
If a unique solution X does not exist (since the LU decomposition is singular), an exception is raised.
The LU factorization is computed with the LAPACK function "dgetrf", i.e., by Gaussian elimination using partial pivoting with row interchanges. The vector "pivots" contains the pivot indices, i.e., for 1 ≤ i ≤ min(m,n), row i of matrix A was interchanged with row pivots[i].
Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real B1[3,2] = [10, 20;
                22, 44;
                12, 24];
Real B2[3,2] = [ 7, 14;
                13, 26;
                10, 20];
Real    LU[3,3];
Integer pivots[3];
Real    X1[3,2];
Real    X2[3,2];
algorithm
  (LU, pivots) := Matrices.LU(A);
  X1 := Matrices.LU_solve2(LU, pivots, B1);  /* X1 = [3, 6;
                                                      2, 4;
                                                      1, 2] */
  X2 := Matrices.LU_solve2(LU, pivots, B2);  /* X2 = [1, 2;
                                                      0, 0;
                                                      2, 4] */
Name | Description |
---|---|
LU[:, size(LU, 1)] | L,U factors of Matrices.LU(..) for a square matrix |
pivots[size(LU, 1)] | Pivot indices of Matrices.LU(..) |
B[size(LU, 1), :] | Right hand side matrix of P*L*U*X=B |
Name | Description |
---|---|
X[size(B, 1), size(B, 2)] | Solution matrix such that P*L*U*X = B |
eigenvalues = Matrices.eigenValues(A); (eigenvalues, eigenvectors) = Matrices.eigenValues(A);
This function call returns the eigenvalues and optionally the (right) eigenvectors of a square matrix A. The first column of "eigenvalues" contains the real and the second column contains the imaginary part of the eigenvalues. If the i-th eigenvalue has no imaginary part, then eigenvectors[:,i] is the corresponding real eigenvector. If the i-th eigenvalue has an imaginary part, then eigenvalues[i+1,:] is the conjugate complex eigenvalue and eigenvectors[:,i] is the real and eigenvectors[:,i+1] is the imaginary part of the eigenvector of the i-th eigenvalue. With function Matrices.eigenValueMatrix, a real block diagonal matrix is constructed from the eigenvalues such that
A = eigenvectors * eigenValueMatrix(eigenvalues) * inv(eigenvectors)
provided the eigenvector matrix "eigenvectors" can be inverted (an inversion is possible, if all eigenvalues are different; in some cases, an inversion is also possible if some eigenvalues are the same).
Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real eval[3,2];
algorithm
  eval := Matrices.eigenValues(A);  // eval = [-0.618, 0;
                                    //          8.0  , 0;
                                    //          1.618, 0];
i.e., matrix A has the 3 real eigenvalues -0.618, 8, 1.618.
Name | Description |
---|---|
A[:, size(A, 1)] | Matrix |
Name | Description |
---|---|
eigenvalues[size(A, 1), 2] | Eigenvalues of matrix A (Re: first column, Im: second column) |
eigenvectors[size(A, 1), size(A, 2)] | Real-valued eigenvector matrix |
Matrices.eigenValueMatrix(eigenvalues);
The function call returns a block diagonal matrix J from the two-column matrix eigenvalues (computed by function Matrices.eigenValues). Matrix eigenvalues must have the real part of the eigenvalues in the first column and the imaginary part in the second column. If an eigenvalue i has a vanishing imaginary part, then J[i,i] = eigenvalues[i,1], i.e., the diagonal element of J is the real eigenvalue. Otherwise, eigenvalue i and conjugate complex eigenvalue i+1 are used to construct a 2 by 2 diagonal block of J:
J[i,   i]   := eigenvalues[i,1];
J[i,   i+1] := eigenvalues[i,2];
J[i+1, i]   := eigenvalues[i+1,2];
J[i+1, i+1] := eigenvalues[i+1,1];
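A short sketch combining eigenValues and eigenValueMatrix (values illustrative; the reconstruction of A only holds if the eigenvector matrix is invertible):

Real A[3,3] = [1,2,3; 3,4,5; 2,1,4];
Real ev[3,2];
Real evec[3,3];
Real J[3,3];
algorithm
  (ev, evec) := Matrices.eigenValues(A);
  J := Matrices.eigenValueMatrix(ev);
  // A*evec = evec*J, i.e., A = evec*J*inv(evec) if evec is invertible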
Name | Description |
---|---|
eigenValues[:, 2] | Eigenvalues from function eigenValues(..) (Re: first column, Im: second column) |
Name | Description |
---|---|
J[size(eigenValues, 1), size(eigenValues, 1)] | Real valued block diagonal matrix with eigenvalues (Re: 1x1 block, Im: 2x2 block) |
sigma = Matrices.singularValues(A); (sigma, U, VT) = Matrices.singularValues(A);
This function computes the singular values and optionally the singular vectors of matrix A. Basically the singular value decomposition of A is computed, i.e.,
A = U*Sigma*VT

where U and V are orthogonal matrices (U*U' = I, V*V' = I) and VT = transpose(V). Sigma = diag(s_i) has the same size as matrix A, with nonnegative diagonal elements in decreasing order and all other elements zero (s_1 is the largest element). The function returns the singular values s_i in vector sigma and the orthogonal matrices in U and VT.
A = [ 1, 2,  3,  4;
      3, 4,  5, -2;
     -1, 2, -3,  5];

(sigma, U, VT) = singularValues(A);

results in:

  sigma = {8.33, 6.94, 2.31};

i.e.

  Sigma = [8.33, 0,    0,    0;
           0,    6.94, 0,    0;
           0,    0,    2.31, 0]
Name | Description |
---|---|
A[:, :] | Matrix |
Name | Description |
---|---|
sigma[min(size(A, 1), size(A, 2))] | Singular values |
U[size(A, 1), size(A, 1)] | Left orthogonal matrix |
VT[size(A, 2), size(A, 2)] | Transposed right orthogonal matrix |
(Q,R,p) = Matrices.QR(A);
This function returns the QR decomposition of a rectangular matrix A (the number of columns of A must be less than or equal to the number of rows):
Q*R = A[:,p]
where Q is a rectangular matrix that has orthonormal columns and has the same size as A (QTQ=I), R is a square, upper triangular matrix and p is a permutation vector. Matrix R has the following important properties:
- The absolute value of a diagonal element of R is the largest value in this row, i.e., abs(R[i,i]) ≥ abs(R[i,j]).
- The diagonal elements of R are sorted according to size, such that the largest absolute value is abs(R[1,1]) and abs(R[i,i]) ≥ abs(R[j,j]) for i < j.

This means that if abs(R[i,i]) ≤ ε then abs(R[j,k]) ≤ ε for j ≥ i, i.e., the i-th row up to the last row of R have small elements and can be treated as being zero. This allows to, e.g., estimate the row-rank of R (which is the same row-rank as A). Furthermore, R can be partitioned in two parts

  A[:,p] = Q * [R1, R2;
                0,  0]

where R1 is a regular, upper triangular matrix.
Note, the solution is computed with the LAPACK functions "dgeqpf" and "dorgqr", i.e., by Householder transformations with column pivoting. If Q is not needed, the function may be called as: (,R,p) = QR(A).
Extends from Modelica.Icons.Function (Icon for functions).

Real A[3,3] = [1,2,3;
               3,4,5;
               2,1,4];
Real R[3,3];
algorithm
  (,R) := Matrices.QR(A);  // R = [-7.07.., -4.24.., -3.67..;
                           //       0     , -1.73.., -0.23..;
                           //       0     ,  0     ,  0.65..];
Name | Description |
---|---|
A[:, :] | Rectangular matrix with size(A,1) >= size(A,2) |
pivoting | True if column pivoting is performed. True is default |
Name | Description |
---|---|
Q[size(A, 1), size(A, 2)] | Rectangular matrix with orthonormal columns such that Q*R=A[:,p] |
R[size(A, 2), size(A, 2)] | Square upper triangular matrix |
p[size(A, 2)] | Column permutation vector |
H = Matrices.hessenberg(A); (H, U) = Matrices.hessenberg(A);
Function hessenberg computes the Hessenberg matrix H of matrix A as well as the orthogonal transformation matrix U such that H = U'*A*U. The Hessenberg form of a matrix is computed by repeated Householder similarity transformations. The elementary reflectors and the corresponding scalar factors are provided by function "Utilities.toUpperHessenberg()". The transformation matrix U is then computed by LAPACK.dorghr.
A = [1, 2, 3;
     6, 5, 4;
     1, 0, 0];

(H, U) = hessenberg(A);

results in:

  H = [ 1.0,   -2.466,  2.630;
       -6.083,  5.514, -3.081;
        0.0,    0.919, -0.514]

  U = [1.0,  0.0,     0.0;
       0.0, -0.9864, -0.1644;
       0.0, -0.1644,  0.9864]

and therefore,

  U*H*transpose(U) = [1.0, 2.0, 3.0;
                      6.0, 5.0, 4.0;
                      1.0, 0.0, 0.0]
See also: Matrices.Utilities.toUpperHessenberg
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A |
Name | Description |
---|---|
H[size(A, 1), size(A, 2)] | Hessenberg form of A |
U[size(A, 1), size(A, 2)] | Transformation matrix |
S = Matrices.realSchur(A); (S, QZ, alphaReal, alphaImag) = Matrices.realSchur(A);
Function realSchur calculates the real Schur form of a real square matrix A, i.e.
A = QZ*S*transpose(QZ)
with the real nxn matrices S and QZ. S is a block upper triangular matrix with 1x1 and 2x2 blocks on the diagonal, and QZ is an orthogonal matrix. The 1x1 blocks contain the real eigenvalues of A. The 2x2 blocks [s11, s12; s21, s11] represent conjugate complex pairs of eigenvalues, whose real parts are the diagonal elements s11 and whose imaginary parts are ±sqrt(-s12*s21) (for such a block, s12*s21 < 0).
The calculation is performed by the LAPACK routine dgees, using its internal methods of balancing and scaling.
Real A[3,3] = [1, 2, 3;
               4, 5, 6;
               7, 8, 9];
Real T[3,3];
Real Z[3,3];
Real alphaReal[3];
Real alphaImag[3];
algorithm
  (T, Z, alphaReal, alphaImag) := Modelica.Math.Matrices.realSchur(A);
  // T = [16.12,  4.9,   1.59E-015;
  //       0,    -1.12, -1.12E-015;
  //       0,     0,    -1.30E-015]
  // Z = [-0.23, -0.88,  0.41;
  //      -0.52, -0.24, -0.82;
  //      -0.82,  0.4,   0.41]
  // alphaReal = {16.12, -1.12, -1.32E-015}
  // alphaImag = {0, 0, 0}
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix |
Name | Description |
---|---|
S[size(A, 1), size(A, 2)] | Real Schur form of A |
QZ[size(A, 1), size(A, 2)] | Schur vector Matrix |
alphaReal[size(A, 1)] | Real part of eigenvalue=alphaReal+i*alphaImag |
alphaImag[size(A, 1)] | Imaginary part of eigenvalue=alphaReal+i*alphaImag |
H = Matrices.cholesky(A); H = Matrices.cholesky(A, upper=true);
Function cholesky computes the Cholesky factorization of a real symmetric positive definite matrix A. The optional Boolean input "upper" specifies whether the upper or the lower triangular matrix is returned, i.e.
A = H'*H   if upper is true  (H is upper triangular)
A = H*H'   if upper is false (H is lower triangular)
The computation is performed by LAPACK.dpotrf.
Extends from Modelica.Icons.Function (Icon for functions).

A = [1,  0, 0;
     6,  5, 0;
     1, -2, 2];
S = A*transpose(A);
H = Matrices.cholesky(S);

results in:

  H = [1.0, 6.0,  1.0;
       0.0, 5.0, -2.0;
       0.0, 0.0,  2.0]

with

  transpose(H)*H = [1.0,  6.0,  1.0;
                    6.0, 61.0, -4.0;
                    1.0, -4.0,  9.0]  // = S
Name | Description |
---|---|
A[:, size(A, 1)] | Symmetric positive definite matrix |
upper | True if the right cholesky factor (upper triangle) should be returned |
Name | Description |
---|---|
H[size(A, 1), size(A, 2)] | Cholesky factor U (upper=true) or L (upper=false) for A = U'*U or A = L*L' |
(D,B) = Matrices.balance(A);
This function returns a vector D, such that B=inv(diagonal(D))*A*diagonal(D) has a better condition than matrix A, i.e., conditionNumber(B) ≤ conditionNumber(A). The elements of D are multiples of 2, which means that this function does not introduce round-off errors. Balancing attempts to make the norm of each row of B equal to the norm of the respective column.
Balancing is used to minimize roundoff errors induced through large matrix calculations like Taylor-series approximation or computation of eigenvalues.
A = [1,     10,   1000;
     0.01,  0,    10;
     0.005, 0.01, 10];
Matrices.norm(A, 1);  // = 1020.0
(D, B) = Matrices.balance(A);
// D = {256, 16, 0.5}
// B = [1,    0.625, 1.953125;
//      0.16, 0,     0.3125;
//      2.56, 0.32,  10.0]
Matrices.norm(B, 1);  // = 12.265625
The algorithm is based on the balance function from EISPACK.
Name | Description |
---|---|
A[:, size(A, 1)] |
Name | Description |
---|---|
D[size(A, 1)] | diagonal(D)=T is transformation matrix, such that B = inv(T)*A*T has smaller condition as A |
B[size(A, 1), size(A, 1)] | Balanced matrix (= inv(diagonal(D))*A*diagonal(D) ) |
r = Matrices.trace(A);
This function computes the trace, i.e., the sum of the elements in the diagonal of matrix A.
Extends from Modelica.Icons.Function (Icon for functions).

A = [1, 3;
     2, 1];
r = trace(A);  // r = 2.0
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A |
Name | Description |
---|---|
result | Trace of A |
result = Matrices.det(A);
This function returns the determinant "result" of matrix A, computed by an LU decomposition with row pivoting. For details about determinants, see http://en.wikipedia.org/wiki/Determinant. Usually, this function should never be used, because there are nearly always better numerical algorithms than computing the determinant. For example, use Matrices.rank to compute the rank of a matrix, or Matrices.rcond to check whether a square matrix is nearly singular, instead of testing whether det(A) is (close to) zero.
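A minimal sketch of a call (values illustrative):

Real A[2,2] = [1, 2;
               3, 4];
Real d;
algorithm
  d := Matrices.det(A);  // d = 1*4 - 2*3 = -2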
Name | Description |
---|---|
A[:, size(A, 1)] |
Name | Description |
---|---|
result | Determinant of matrix A |
invA = Matrices.inv(A);
This function returns the inverse of matrix A, i.e., A*inv(A) = identity(size(A,1)), computed by an LU decomposition with row pivoting. Usually, this function should not be used, because there are nearly always better numerical algorithms than computing the inverse directly. For example, use x = Matrices.solve(A,b) to solve the linear equation A*x = b, instead of computing the solution by x = inv(A)*b, because this is much more efficient and much more reliable.
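For illustration only (prefer solve/solve2 as noted above; values illustrative):

Real A[2,2] = [1, 2;
               3, 4];
Real invA[2,2];
algorithm
  invA := Matrices.inv(A);  // invA = [-2.0, 1.0; 1.5, -0.5]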
Name | Description |
---|---|
A[:, size(A, 1)] |
Name | Description |
---|---|
invA[size(A, 1), size(A, 2)] | Inverse of matrix A |
result = Matrices.rank(A); result = Matrices.rank(A,eps=0);
This function returns the rank of a square or rectangular matrix A, computed by singular value decomposition. For details about the rank of a matrix, see http://en.wikipedia.org/wiki/Matrix_rank. To be more precise, the rank is the number of singular values of A that are larger than a threshold: if the input eps > 0, this value is used directly; otherwise the default threshold max(size(A))*norm(A)*Modelica.Constants.eps is used.
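A minimal sketch; the third row of this illustrative matrix equals 2*row2 - row1, so the rank is 2:

Real A[3,3] = [1, 2, 3;
               4, 5, 6;
               7, 8, 9];
Integer r;
algorithm
  r := Matrices.rank(A);  // r = 2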
Name | Description |
---|---|
A[:, :] | Matrix |
eps | If eps > 0, the singular values are checked against eps; otherwise eps=max(size(A))*norm(A)*Modelica.Constants.eps is used |
Name | Description |
---|---|
result | Rank of matrix A |
r = Matrices.conditionNumber(A);
This function calculates the condition number (norm(A) * norm(inv(A))) of a general real matrix A, in either the 1-norm, 2-norm or the infinity-norm. In the case of 2-norm the result is the ratio of the largest to the smallest singular value of A. For more details, see http://en.wikipedia.org/wiki/Condition_number.
A = [1, 2;
     2, 1];
r = conditionNumber(A);  // r = 3.0
Name | Description |
---|---|
A[:, :] | Input matrix |
p | Type of p-norm (only allowed: 1, 2 or Modelica.Constants.inf) |
Name | Description |
---|---|
result | Condition number of matrix A |
r = Matrices.rcond(A);
This function estimates the reciprocal of the condition number (norm(A)*norm(inv(A))) of a general real matrix A, in either the 1-norm or the infinity-norm, using the LAPACK function DGECON. A is well conditioned if rcond(A) is near 1.0 and ill conditioned if rcond(A) is near zero.
A = [1, 2;
     2, 1];
r = rcond(A);  // r = 0.3333
Name | Description |
---|---|
A[:, size(A, 1)] | Square real matrix |
inf | Is true if infinity norm is used and false for 1-norm |
Name | Description |
---|---|
rcond | Reciprocal condition number of A |
info | Information |
Matrices.norm(A); Matrices.norm(A, p=2);
The function call "Matrices.norm(A)
" returns the
2-norm of matrix A, i.e., the largest singular value of A.
The function call "Matrices.norm(A, p)
" returns the
p-norm of matrix A. The only allowed values for p are
Note, for any matrices A1, A2 the following inequality holds:
Matrices.norm(A1+A2,p) ≤ Matrices.norm(A1,p) + Matrices.norm(A2,p)
Note, for any matrix A and vector v the following inequality holds:
Vectors.norm(A*v,p) ≤ Matrices.norm(A,p)*Vectors.norm(v,p)
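A short sketch evaluating the three allowed norms (values illustrative):

Real A[2,2] = [1, 2;
               3, 4];
Real n1, n2, ninf;
algorithm
  n1   := Matrices.norm(A, 1);                       // largest column sum = 6
  n2   := Matrices.norm(A);                          // largest singular value, approx. 5.46
  ninf := Matrices.norm(A, Modelica.Constants.inf);  // largest row sum = 7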
Name | Description |
---|---|
A[:, :] | Input matrix |
p | Type of p-norm (only allowed: 1, 2 or Modelica.Constants.inf) |
Name | Description |
---|---|
result | p-norm of matrix A |
r = Matrices.frobeniusNorm(A);
This function computes the Frobenius norm of a general real matrix A, i.e., the square root of the sum of the squares of all elements.
A = [1, 2;
     2, 1];
r = frobeniusNorm(A);  // r = 3.162
Name | Description |
---|---|
A[:, :] | Input matrix |
Name | Description |
---|---|
result | Frobenius norm of matrix A |
Z = Matrices.nullspace(A); (Z, nullity) = Matrices.nullspace(A);
This function calculates an orthonormal basis Z=[z_1, z_2, ...] of the nullspace of a matrix A, i.e., A*z_i=0.
The nullspace is obtained by the SVD method, i.e., matrix A is decomposed into the matrices U, S, V:
A = U*S*transpose(V)
with the orthonormal matrices U and V and the matrix S with
S = [S1, 0]
S1 = [diag(s); 0]
and the singular values s={s1, s2, ..., sr} of A and r=rank(A). Note, that S has the same size as A. Since U and V are orthonormal we may write
transpose(U)*A*V = [S1, 0].
Matrix S1 obviously has full column rank, and therefore the last n-r columns of matrix V (n is the number of columns of A or S) span the nullspace of A.
The nullity of matrix A is the dimension of the nullspace of A. In view of the above, the nullity is

  nullity = n - r

with

  n = number of columns of matrix A
  r = rank(A)
A = [ 1, 2,  3, 1;
      3, 4,  5, 2;
     -1, 2, -3, 3];

(Z, nullity) = nullspace(A);

results in:

  Z = [ 0.1715;
       -0.686;
        0.1715;
        0.686]

  nullity = 1
Name | Description |
---|---|
A[:, :] | Input matrix |
Name | Description |
---|---|
Z[size(A, 2), :] | Orthonormal nullspace of matrix A |
nullity | Nullity, i.e., the dimension of the nullspace |
phi = Matrices.exp(A); phi = Matrices.exp(A,T=1);
This function computes the exponential eAT of matrix A, i.e.
Φ = e^(A*T) = I + A*T + (A*T)^2/2! + (A*T)^3/3! + ...
where e=2.71828..., A is an n x n matrix with real elements and T is a real number, e.g., the sampling time. A may be singular. With the exponential of a matrix it is, e.g., possible to compute the solution of a linear system of differential equations
der(x) = A*x -> x(t0 + T) = e^(AT)*x(t0)
The following steps are performed to calculate the exponential of A (cf. the analogous description of integralExp below):

1. Matrix A is balanced to reduce its condition (see balance).
2. The time T is halved k times until norm(A*T) is less than 0.5, which keeps the rounding errors of the series expansion small.
3. The Taylor series of the scaled matrix is summed up adaptively.
4. The result is re-scaled by repeated squaring, using exp(A*2T) = exp(A*T)^2.
5. The result is re-balanced.
In several sources it is not recommended to use Taylor series expansion to calculate the exponential of a matrix, such as in 'C.B. Moler and C.F. Van Loan: Nineteen dubious ways to compute the exponential of a matrix. SIAM Review 20, pp. 801-836, 1979' or in the documentation of m-file expm2 in Matlab version 6 (http://www.MathWorks.com) where it is stated that 'As a practical numerical method, this is often slow and inaccurate'. These statements are valid for a direct implementation of the Taylor series expansion, but not for the implementation variant used in this function.
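A minimal sketch with a nilpotent matrix whose exponential is known in closed form (exp(A*T) = [1, T; 0, 1]):

Real A[2,2] = [0, 1;
               0, 0];
Real phi[2,2];
algorithm
  phi := Matrices.exp(A, 2);  // phi = [1, 2; 0, 1]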
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | |
T |
Name | Description |
---|---|
phi[size(A, 1), size(A, 1)] | = exp(A*T) |
(phi,gamma) = Matrices.integralExp(A,B); (phi,gamma) = Matrices.integralExp(A,B,T=1);
This function computes the exponential phi = e^(AT) of matrix A and the integral gamma = integral(phi*dt)*B.
The function uses a Taylor series expansion with balancing and scaling/squaring to approximate the integral Ψ of the matrix exponential Φ = e^(A*T):

  Ψ = integral(e^(A*s)*ds) = I*T + A*T^2/2! + A^2*T^3/3! + ... + A^k*T^(k+1)/(k+1)!

Φ is calculated through Φ = I + A*Ψ, so A may be singular. Γ is simply Ψ*B.
The algorithm runs in the following steps:
Balancing moves the bad condition of a square matrix A into a diagonal transformation matrix D. This reduces the effort of the following calculations. Afterwards the result has to be re-balanced by the transformation D*Atransf*inv(D).
Scaling halves T k times until the norm of A*T is less than 0.5. This guarantees minimal rounding errors in the following series expansion. The re-scaling is based on the equation exp(A*2T) = exp(A*T)^2.
The needed re-scaling formula for Ψ thus becomes:

  Φ = Φ'*Φ'
  I + A*Ψ = I + 2*A*Ψ' + A^2*Ψ'*Ψ'
        Ψ = A*Ψ'*Ψ' + 2*Ψ'
where Ψ' is the scaled result from the series expansion while Ψ is the re-scaled matrix.
The function is normally used to discretize a state-space system as the zero-order-hold equivalent:
x(k+1) = Φ*x(k) + Γ*u(k)
y(k)   = C*x(k) + D*u(k)
The zero-order-hold sampling, also known as the step-invariant method, gives exact values of the state variables under the assumption that the control signal u is constant between the sampling instants.
Syntax:
  (phi, gamma) = Matrices.integralExp(A, B, T);
    A, phi:   [n,n] square matrices
    B, gamma: [n,m] matrices
    T:        scalar, e.g., sampling time
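A discretization sketch for a double integrator (illustrative; the closed-form results below follow from exp([0, 1; 0, 0]*t) = [1, t; 0, 1]):

Real A[2,2] = [0, 1;
               0, 0];
Real B[2,1] = [0;
               1];
Real Ts = 0.1 "Sample time";
Real phi[2,2];
Real gamma[2,1];
algorithm
  (phi, gamma) := Matrices.integralExp(A, B, Ts);
  // x(k+1) = phi*x(k) + gamma*u(k)
  // here: phi = [1, Ts; 0, 1], gamma = [Ts^2/2; Ts]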
Name | Description |
---|---|
A[:, size(A, 1)] | |
B[size(A, 1), :] | |
T |
Name | Description |
---|---|
phi[size(A, 1), size(A, 1)] | = exp(A*T) |
gamma[size(A, 1), size(B, 2)] | = integral(phi)*B |
(phi,gamma,gamma1) = Matrices.integralExpT(A,B); (phi,gamma,gamma1) = Matrices.integralExpT(A,B,T=1);
This function computes the exponential phi = e^(AT) of matrix A, the integral gamma = integral(phi*dt)*B, and the time-weighted integral gamma1 = integral((T-t)*exp(A*t)*dt)*B, where A is a square (n,n) matrix and B, gamma, and gamma1 are (n,m) matrices.
The function calculates the matrices phi, gamma, and gamma1 through the equation:

  [phi, gamma, gamma1] = [I, 0, 0] * exp([A, B, 0;
                                          0, 0, I;
                                          0, 0, 0]*T)
The matrices define the discretized first-order-hold equivalent of a state-space system:
x(k+1) = phi*x(k) + gamma*u(k) + gamma1/T*(u(k+1) - u(k))
The first-order-hold sampling, also known as the ramp-invariant method, gives smoother control signals than the ZOH equivalent.
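Continuing the double-integrator sketch from integralExp above (here gamma1 evaluates to [Ts^3/6; Ts^2/2]):

Real A[2,2] = [0, 1;
               0, 0];
Real B[2,1] = [0;
               1];
Real Ts = 0.1 "Sample time";
Real phi[2,2];
Real gamma[2,1];
Real gamma1[2,1];
algorithm
  (phi, gamma, gamma1) := Matrices.integralExpT(A, B, Ts);
  // x(k+1) = phi*x(k) + gamma*u(k) + gamma1/Ts*(u(k+1) - u(k))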
Name | Description |
---|---|
A[:, size(A, 1)] | |
B[size(A, 1), :] | |
T |
Name | Description |
---|---|
phi[size(A, 1), size(A, 1)] | = exp(A*T) |
gamma[size(A, 1), size(B, 2)] | = integral(phi)*B |
gamma1[size(A, 1), size(B, 2)] | = integral((T-t)*exp(A*t))*B |
X = Matrices.continuousLyapunov(A, C); X = Matrices.continuousLyapunov(A, C, ATisSchur, eps);
This function computes the solution X of the continuous-time Lyapunov equation
X*A + A'*X = C
using the Schur method for Lyapunov equations proposed by Bartels and Stewart [1].
In a nutshell, the problem is reduced to the corresponding problem

  Y*R' + R*Y = D

where R=U'*A'*U is the real Schur form of A', and D=U'*C*U and Y=U'*X*U are the corresponding transformations of C and X. This problem is solved sequentially for the 1x1 or 2x2 Schur blocks by exploiting the block triangular form of R. Finally the solution of the original problem is recovered as X=U*Y*U'.
The Boolean input "ATisSchur" indicates to omit the transformation to Schur in the case that A' has already Schur form.
[1] Bartels, R.H. and Stewart G.W. Algorithm 432: Solution of the matrix equation AX + XB = C. Comm. ACM., Vol. 15, pp. 820-826, 1972.
A = [ 1, 2,  3,  4;
      3, 4,  5, -2;
     -1, 2, -3, -5;
      0, 2,  0,  6];

C = [-2,  3, 1, 0;
     -6,  8, 0, 1;
      2,  3, 4, 5;
      0, -2, 0, 0];

X = continuousLyapunov(A, C);

results in:

  X = [ 1.633, -0.761,  0.575, -0.656;
       -1.158,  1.216,  0.047,  0.343;
       -1.066, -0.052, -0.916,  1.61;
       -2.473,  0.717, -0.986,  1.48]
See also: Matrices.continuousSylvester, Matrices.discreteLyapunov
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A in X*A + A'*X = C |
C[size(A, 1), size(A, 2)] | Square matrix C in X*A + A'*X = C |
ATisSchur | True if transpose(A) has already real Schur form |
eps | Tolerance eps |
Name | Description |
---|---|
X[size(A, 1), size(A, 2)] | Solution X of the Lyapunov equation X*A + A'*X = C |
X = Matrices.continuousSylvester(A, B, C); X = Matrices.continuousSylvester(A, B, C, AisSchur, BisSchur);
Function continuousSylvester computes the solution X of the continuous-time Sylvester equation
A*X + X*B = C.
using the Schur method for Sylvester equations proposed by Bartels and Stewart [1].
In a nutshell, the problem is reduced to the corresponding problem

  S*Y + Y*T = D

where S=U'*A*U is the real Schur form of A, T=V'*B*V is the real Schur form of B, and D=U'*C*V and Y=U'*X*V are the corresponding transformations of C and X. This problem is solved sequentially by exploiting the block triangular form of S and T. Finally the solution of the original problem is recovered as X=U*Y*V'.
The Boolean inputs "AisSchur" and "BisSchur" can be set to true to omit one or both of the transformations to Schur form when A and/or B are already in Schur form.
The function applies LAPACK-routine DTRSYL. See LAPACK.dtrsyl for more information.
[1] Bartels, R.H. and Stewart G.W. Algorithm 432: Solution of the matrix equation AX + XB = C. Comm. ACM., Vol. 15, pp. 820-826, 1972.
A = [17.0, 24.0,  1.0,  8.0, 15.0;
     23.0,  5.0,  7.0, 14.0, 16.0;
      0.0,  6.0, 13.0, 20.0, 22.0;
      0.0,  0.0, 19.0, 21.0,  3.0;
      0.0,  0.0,  0.0,  2.0,  9.0];

B = [8.0, 1.0, 6.0;
     0.0, 5.0, 7.0;
     0.0, 9.0, 2.0];

C = [62.0, -12.0, 26.0;
     59.0, -10.0, 31.0;
     70.0,  -6.0,  9.0;
     35.0,  31.0, -7.0;
     36.0, -15.0,  7.0];

X = continuousSylvester(A, B, C);

results in:

  X = [0.0,  0.0,  1.0;
       1.0,  0.0,  0.0;
       0.0,  1.0,  0.0;
       1.0,  1.0, -1.0;
       2.0, -2.0,  1.0];
See also: Matrices.discreteSylvester, Matrices.continuousLyapunov
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, :] | Square matrix A |
B[:, :] | Square matrix B |
C[size(A, 1), size(B, 2)] | Matrix C |
AisSchur | True if A has already real Schur form |
BisSchur | True if B has already real Schur form |
Name | Description |
---|---|
X[size(A, 1), size(B, 2)] | Solution of the continuous Sylvester equation |
X = Matrices.continuousRiccati(A, B, R, Q); (X, alphaReal, alphaImag) = Matrices.continuousRiccati(A, B, R, Q, true);
Function continuousRiccati computes the solution X of the continuous-time algebraic Riccati equation
A'*X + X*A - X*G*X + Q = 0
with G = B*inv(R)*B'
using the Schur vector approach proposed by Laub [1].
It is assumed that Q is symmetric and positive semidefinite and R is symmetric, nonsingular and positive definite, (A,B) is stabilizable and (A,Q) is detectable.
These assumptions are not checked in this function!
The assumptions guarantee that the Hamiltonian matrix
H = [A, -G; -Q, -A']
has no pure imaginary eigenvalue and can be put to an ordered real Schur form
U'*H*U = S = [S11, S12; 0, S22]
with orthogonal similarity transformation U. S is ordered in such a way, that S11 contains the n stable eigenvalues of the closed loop system with system matrix A - B*inv(R)*B'*X. If U is partitioned to
U = [U11, U12; U21, U22]
with dimensions according to S, the solution X is calculated by
X*U11 = U21.
With the optional input refine=true, a subsequent iterative refinement based on Newton's method with exact line search is applied. See Matrices.Utilities.continuousRiccatiIterative for more information.
[1] Laub, A.J. A Schur Method for Solving Algebraic Riccati equations. IEEE Trans. Auto. Contr., AC-24, pp. 913-921, 1979.
A = [0.0, 1.0;
     0.0, 0.0];
B = [0.0;
     1.0];
R = [1];
Q = [1.0, 0.0;
     0.0, 2.0];

X = continuousRiccati(A, B, R, Q);

results in:

  X = [2.0, 1.0;
       1.0, 2.0];
See also: Matrices.Utilities.continuousRiccatiIterative, Matrices.discreteRiccati
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A in CARE |
B[size(A, 1), :] | Matrix B in CARE |
R[size(B, 2), size(B, 2)] | Matrix R in CARE |
Q[size(A, 1), size(A, 1)] | Matrix Q in CARE |
refine | True for subsequent refinement |
Name | Description |
---|---|
X[size(A, 1), size(A, 2)] | stabilizing solution of CARE |
alphaReal[size(H, 1)] | Real parts of eigenvalue=alphaReal+i*alphaImag |
alphaImag[size(H, 1)] | Imaginary parts of eigenvalue=alphaReal+i*alphaImag |
X = Matrices.discreteLyapunov(A, C); X = Matrices.discreteLyapunov(A, C, ATisSchur, sgn, eps);
This function computes the solution X of the discrete-time Lyapunov equation
A'*X*A + sgn*X = C
where sgn = 1 or sgn = -1. For sgn = -1, the discrete Lyapunov equation is a special case of the Stein equation:
A*X*B - X + Q = 0.
The algorithm uses the Schur method for Lyapunov equations proposed by Bartels and Stewart [1].
In a nutshell, the problem is reduced to the corresponding problem

  R*Y*R' + sgn*Y = D

where R=U'*A'*U is the real Schur form of A', and D=U'*C*U and Y=U'*X*U are the corresponding transformations of C and X. This problem is solved sequentially by exploiting the block triangular form of R. Finally the solution of the original problem is recovered as X=U*Y*U'.
The Boolean input "ATisSchur" indicates to omit the transformation to Schur in the case that A' has already Schur form.
[1] Bartels, R.H. and Stewart G.W. Algorithm 432: Solution of the matrix equation AX + XB = C. Comm. ACM., Vol. 15, pp. 820-826, 1972.
A = [ 1, 2,  3,  4;
      3, 4,  5, -2;
     -1, 2, -3, -5;
      0, 2,  0,  6];

C = [-2,  3, 1, 0;
     -6,  8, 0, 1;
      2,  3, 4, 5;
      0, -2, 0, 0];

X = discreteLyapunov(A, C, sgn=-1);

results in:

  X = [ 7.5735, -3.1426,  2.7205, -2.5958;
       -2.6105,  1.2384, -0.9232,  0.9632;
        6.6090, -2.6775,  2.6415, -2.6928;
       -0.3572,  0.2298,  0.0533, -0.27410];
See also: Matrices.discreteSylvester, Matrices.continuousLyapunov
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A in A'*X*A + sgn*X = C |
C[size(A, 1), size(A, 2)] | Square matrix C in A'*X*A + sgn*X = C |
ATisSchur | True if transpose(A) has already real Schur form |
sgn | Specifies the sign in A'*X*A + sgn*X = C |
eps | Tolerance eps |
Name | Description |
---|---|
X[size(A, 1), size(A, 2)] | Solution X of the Lyapunov equation A'*X*A + sgn*X = C |
X = Matrices.discreteSylvester(A, B, C); X = Matrices.discreteSylvester(A, B, C, AisHess, BTisSchur, sgn, eps);
Function discreteSylvester computes the solution X of the discrete-time Sylvester equation
A*X*B + sgn*X = C.
where sgn = 1 or sgn = -1. The algorithm applies the Hessenberg-Schur method proposed by Golub et al. [1]. For sgn = -1, the discrete Sylvester equation is also known as Stein equation:
A*X*B - X + Q = 0.
In a nutshell, the problem is reduced to the corresponding problem

  H*Y*S' + sgn*Y = F

where H=U'*A*U is the Hessenberg form of A and S=V'*B'*V is the real Schur form of B', and F=U'*C*V and Y=U'*X*V are appropriate transformations of C and X. This problem is solved sequentially by exploiting the specific forms of S and H. Finally the solution of the original problem is recovered as X=U*Y*V'.
The Boolean inputs "AisHess" and "BTisSchur" can be set to true to omit the transformation of A to Hessenberg form and/or the transformation of B' to Schur form, respectively, when these forms are already available.
[1] Golub, G.H., Nash, S. and Van Loan, C.F. A Hessenberg-Schur method for the problem AX + XB = C. IEEE Transaction on Automatic Control, AC-24, no. 6, pp. 909-913, 1979.
A = [1.0, 2.0, 3.0;
     6.0, 7.0, 8.0;
     9.0, 2.0, 3.0];

B = [7.0, 2.0, 3.0;
     2.0, 1.0, 2.0;
     3.0, 4.0, 1.0];

C = [271.0, 135.0, 147.0;
     923.0, 494.0, 482.0;
     578.0, 383.0, 287.0];

X = discreteSylvester(A, B, C);

results in:

  X = [2.0, 3.0, 6.0;
       4.0, 7.0, 1.0;
       5.0, 3.0, 2.0];
See also: Matrices.continuousSylvester, Matrices.discreteLyapunov
Extends from Modelica.Icons.Function (Icon for functions).
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A in A*X*B + sgn*X = C |
B[:, size(B, 1)] | Square matrix B in A*X*B + sgn*X = C |
C[size(A, 2), size(B, 1)] | Rectangular matrix C in A*X*B + sgn*X = C |
AisHess | True if A has already Hessenberg form |
BTisSchur | True if B' has already real Schur form |
sgn | Specifies the sign in A*X*B + sgn*X = C |
eps | Tolerance |
Name | Description |
---|---|
X[size(A, 2), size(B, 1)] | solution of the discrete Sylvester equation A*X*B + sgn*X = C |
X = Matrices.discreteRiccati(A, B, R, Q); (X, alphaReal, alphaImag) = Matrices.discreteRiccati(A, B, R, Q, true);
Function discreteRiccati computes the solution X of the discrete-time algebraic Riccati equation
A'*X*A - X - A'*X*B*inv(R + B'*X*B)*B'*X*A + Q = 0
using the Schur vector approach proposed by Laub [1].
It is assumed that Q is symmetric and positive semidefinite and R is symmetric, nonsingular and positive definite, (A,B) is stabilizable and (A,Q) is detectable. With this method, A also has to be invertible. These assumptions are not checked in this function!
The assumptions guarantee that the Hamiltonian matrix

  H = [A + G*T*Q, -G*T;
           -T*Q,     T]

with

  T = inv(A')
  G = B*inv(R)*B'
has no eigenvalue on the unit circle and can be put to an ordered real Schur form
U'*H*U = S = [S11, S12; 0, S22]
with orthogonal similarity transformation U. S is ordered in such a way that S11 contains the n stable eigenvalues of the closed loop system with system matrix

  A - B*inv(R + B'*X*B)*B'*X*A
If U is partitioned to
U = [U11, U12; U21, U22]
according to S, the solution X can be calculated by
X*U11 = U21.
[1] Laub, A.J. A Schur Method for Solving Algebraic Riccati equations. IEEE Trans. Auto. Contr., AC-24, pp. 913-921, 1979.
A = [ 4.0,  3.0;
     -4.5, -3.5];
B = [ 1.0;
     -1.0];
R = [1.0];
Q = [9.0, 6.0;
     6.0, 4.0];

X = discreteRiccati(A, B, R, Q);

results in:

  X = [14.5623, 9.7082;
        9.7082, 6.4721];
Name | Description |
---|---|
A[:, size(A, 1)] | Square matrix A in DARE |
B[size(A, 1), :] | Matrix B in DARE |
R[size(B, 2), size(B, 2)] | Matrix R in DARE |
Q[size(A, 1), size(A, 1)] | Matrix Q in DARE |
refine | True for subsequent refinement |
Name | Description |
---|---|
X[size(A, 1), size(A, 2)] | Stabilizing solution X of DARE |
alphaReal[size(H, 1)] | Real part of eigenvalue=alphaReal+i*alphaImag |
alphaImag[size(H, 1)] | Imaginary part of eigenvalue=alphaReal+i*alphaImag |
sorted_M = Matrices.sort(M); (sorted_M, indices) = Matrices.sort(M, sortRows=true, ascending=true);
Function sort(..) sorts the rows of a Real matrix M in ascending order and returns the result in sorted_M. If the optional argument "sortRows" is false, the columns of the matrix are sorted. If the optional argument "ascending" is false, the rows or columns are sorted in descending order. In the optional second output argument, the indices of the sorted rows or columns with respect to the original matrix are given, such that
sorted_M = if sortRows then M[indices,:] else M[:,indices];
Extends from Modelica.Icons.Function (Icon for functions).

(M2, i2) := Matrices.sort([2, 1, 0; 2, 0, -1]);
// -> M2 = [2, 0, -1;
//          2, 1, 0];
//    i2 = {2,1};
Name | Description |
---|---|
M[:, :] | Matrix to be sorted |
sortRows | = true if rows are sorted, otherwise columns |
ascending | = true if ascending order, otherwise descending order |
Name | Description |
---|---|
sorted_M[size(M, 1), size(M, 2)] | Sorted matrix |
indices[if sortRows then size(M, 1) else size(M, 2)] | sorted_M = if sortRows then M[indices,:] else M[:,indices] |
A_flr = Matrices.flipLeftRight(A);
Function flipLeftRight computes from matrix A a matrix A_flr with flipped columns, i.e., A_flr[:,i]=A[:,n-i+1], i=1,..., n.
A = [ 1, 2,  3;
      3, 4,  5;
     -1, 2, -3];

A_flr = flipLeftRight(A);

results in:

  A_flr = [ 3, 2,  1;
            5, 4,  3;
           -3, 2, -1]
Name | Description |
---|---|
A[:, :] | Matrix to be flipped |
Name | Description |
---|---|
Aflip[size(A, 1), size(A, 2)] | Flipped matrix |
A_fud = Matrices.flipUpDown(A);
Function flipUpDown computes from matrix A a matrix A_fud with flipped rows, i.e., A_fud[i,:]=A[n-i+1,:], i=1,..., n.
A = [ 1, 2,  3;
      3, 4,  5;
     -1, 2, -3];

A_fud = flipUpDown(A);

results in:

  A_fud = [-1, 2, -3;
            3, 4,  5;
            1, 2,  3]
Name | Description |
---|---|
A[:, :] | Matrix to be flipped |
Name | Description |
---|---|
Aflip[size(A, 1), size(A, 2)] | Flipped matrix |