Matrices and systems of equations

Having introduced matrices, we can connect them with a linear system of m equations in n unknowns:

    \[a_{11}\cdot x_1 + a_{12} \cdot x_2 + a_{13} \cdot x_3 + \ldots + a_{1n} \cdot x_n = b_1\]

    \[a_{21}\cdot x_1 + a_{22} \cdot x_2 + a_{23} \cdot x_3 + \ldots + a_{2n} \cdot x_n = b_2\]

    \[a_{31}\cdot x_1 + a_{32} \cdot x_2 + a_{33} \cdot x_3 + \ldots + a_{3n} \cdot x_n = b_3\]

    \[\vdots\]

    \[a_{m1}\cdot x_1 + a_{m2} \cdot x_2 + a_{m3} \cdot x_3 + \ldots + a_{mn} \cdot x_n = b_m,\]

where a_{ij} are the coefficients of the system and b_1, \ldots , b_m are the free members (constant terms).

We usually associate the system above with the following matrices:

    \[\mathbf{A} = \left[ \begin{array} {ccccc} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & a_{m3} & \ldots & a_{mn} \\ \end{array} \right] , \quad  \mathbf{X}= \left[ \begin{array} {c} x_1 \\ x_2\\ x_3\\ \vdots \\ x_n\\ \end{array} \right] , \quad \mathbf{B}= \left[ \begin{array} {c} b_1 \\ b_2\\ b_3\\ \vdots \\ b_m\\ \end{array} \right],\]

where the matrix \mathbf{A} is called the matrix of the system, \mathbf{X} is the matrix of unknowns and \mathbf{B} is the matrix of free members.

An equivalent record for the system above is:

    \[\mathbf{A} \mathbf{X} = \mathbf{B}.\]

A solution of the matrix equation above is a column matrix

    \[\mathbf{C} = \left[\begin{array} {c} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \\ \gamma_{n} \\ \end{array} \right],\]

for which the substitution \mathbf{X} = \mathbf{C} gives the equality

    \[\mathbf{A} \mathbf{C} = \mathbf{B}.\]

The question of the existence of the inverse matrix often arises when solving systems of linear equations. If \mathbf{A} is a regular (invertible) matrix, then multiplying the equation \mathbf{A}\mathbf{X}=\mathbf{B} on the left by \mathbf{A^{-1}} yields:

    \[\mathbf{X} = \mathbf{A^{-1}} \mathbf{B},\]

which gives a unique solution of the system.

Therefore, if we know the inverse \mathbf{A^{-1}} of the matrix \mathbf{A}, then the solution of a system of n equations in n unknowns can be written in the form \mathbf{X} = \mathbf{A^{-1}} \mathbf{B}; this makes sense only if the matrix of the system is invertible.
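The formula \mathbf{X} = \mathbf{A^{-1}} \mathbf{B} can be checked numerically. A minimal NumPy sketch; the matrix entries here are illustrative assumptions, not taken from the text:

```python
import numpy as np

# A hypothetical 3x3 system with an invertible (regular) matrix A
A = np.array([[3.0, 2.0, 7.0],
              [0.0, 5.0, 2.0],
              [0.0, 0.0, 6.0]])
B = np.array([1.0, 2.0, 3.0])

X = np.linalg.inv(A) @ B       # X = A^{-1} B
assert np.allclose(A @ X, B)   # the equality A X = B holds

# In numerical practice, solve() is preferred to forming A^{-1} explicitly
assert np.allclose(X, np.linalg.solve(A, B))
```

Note that `np.linalg.solve` factors the matrix instead of inverting it, which is both faster and numerically more stable.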

An augmented matrix \mathbf{\hat{A}} for the matrix system \mathbf{A} \mathbf{X} = \mathbf{B}  is defined as

    \[\mathbf{\hat{A}} = [\mathbf{A} | \mathbf{B}] = \left[ \begin{array} {ccccc|c} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} & b_{1}\\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n} & b_{2} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n}  & b_{3}\\ \vdots & \vdots & \vdots & \ddots &  \vdots  & \vdots\\ a_{m1} & a_{m2} & a_{m3} & \ldots & a_{mn}  & b_{m} \\ \end{array} \right].\]


A standard algorithm for solving systems of linear equations is Gaussian elimination. This algorithm can also be used to calculate the determinant of a matrix and to find the inverse of a given square matrix.

Gaussian elimination uses elementary operations to transform the augmented matrix into upper triangular (row echelon) form.

The elementary operations are:

1.) interchanging two rows (or columns),

2.) multiplying a row (or column) by a non-zero scalar,

3.) adding one row (or column), multiplied by a scalar, to another.
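These three operations suffice to carry any matrix to row echelon form. A minimal NumPy sketch of Gaussian elimination; the helper name `row_echelon` and the sample matrix are assumptions for illustration:

```python
import numpy as np

def row_echelon(M):
    """Reduce M to row echelon form using the elementary row operations.
    A minimal sketch with partial pivoting; returns a new matrix."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # operation 1: interchange rows to bring the largest pivot candidate up
        p = r + np.argmax(np.abs(M[r:, c]))
        if np.isclose(M[p, c], 0.0):
            continue
        M[[r, p]] = M[[p, r]]
        # operation 3: add multiples of the pivot row to zero out entries below
        for i in range(r + 1, rows):
            M[i] -= (M[i, c] / M[r, c]) * M[r]
        r += 1
    return M

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
E = row_echelon(A)
# everything below the diagonal is (numerically) zero
assert np.allclose(np.tril(E, -1), 0.0)
```

Operation 2 (scaling a row) is not needed to reach echelon form; it becomes necessary when continuing to reduced row echelon form, where each pivot must be 1.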


A matrix is in echelon form, or row echelon form, if it satisfies the following conditions:

1.) all non-zero rows are located above any rows of all zeros,

2.) the leading element of each row (the pivot) is in a column to the right of the pivot of the row above it,

3.) all elements in a column below a pivot are zeros.

A matrix is in reduced row echelon form if it is in row echelon form and the pivot of every non-zero row is 1 and is the only non-zero entry in its column.

For example, a matrix

    \[\mathbf{A} =\left [ \begin{array}{ccc} 3 & 2 & 7 \\ 0 & 5 &  2 \\ 0 &  0 & 6 \\ \end{array}\right]\]

is in row echelon form, and the matrix

    \[\mathbf{B} =\left [ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 &  0 \\ 0 &  0 & 0 \\ \end{array}\right]\]

is in reduced row echelon form.

To find the inverse of a square matrix \mathbf{A} of order n by the method of elementary transformations, we row-reduce the matrix \mathbf{[A|I]}, where \mathbf{I} is the identity matrix, also of order n: when the left block becomes \mathbf{I}, the right block is \mathbf{A^{-1}}.
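This Gauss-Jordan procedure on \mathbf{[A|I]} can be sketched directly in NumPy; the helper name `inverse_via_elimination` and the sample matrix are illustrative assumptions, and the sketch assumes \mathbf{A} is regular:

```python
import numpy as np

def inverse_via_elimination(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^{-1}].
    A minimal sketch; assumes A is regular (invertible)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # the augmented block [A | I]
    for c in range(n):
        p = c + np.argmax(np.abs(M[c:, c]))      # partial pivoting
        M[[c, p]] = M[[p, c]]
        M[c] /= M[c, c]                          # scale so the pivot is 1
        for i in range(n):
            if i != c:
                M[i] -= M[i, c] * M[c]           # zero the rest of the column
    return M[:, n:]                              # the right block is A^{-1}

A = np.array([[3.0, 2.0, 7.0],
              [0.0, 5.0, 2.0],
              [0.0, 0.0, 6.0]])
Ainv = inverse_via_elimination(A)
assert np.allclose(A @ Ainv, np.eye(3))
assert np.allclose(Ainv, np.linalg.inv(A))
```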


Rank of matrices


The rank of a matrix is another concept useful for recognizing regular matrices and calculating their inverses.

Let \mathbf{A} = [a_{ij}] \in \mathbb{R}^{m \times n} and let

    \[\mathbf{C_{1}} = \left[\begin{array}{c} a_{11} \\ a_{21} \\ a_{31} \\ \vdots \\ a_{m1} \\ \end{array}\right], \quad \mathbf{C_{2}} = \left[\begin{array}{c} a_{12} \\ a_{22} \\ a_{32} \\ \vdots \\ a_{m2} \\ \end{array}\right], \quad \ldots, \quad \mathbf{C_{n}} = \left[\begin{array}{c} a_{1n} \\ a_{2n} \\ a_{3n} \\ \vdots \\ a_{mn} \\ \end{array}\right]\]

be the columns of the matrix \mathbf{A}. The rank of \mathbf{A}, denoted r(\mathbf{A}), is defined by the formula:


    \[r(\mathbf{A}) = \dim \left[ \left\{ \mathbf{C_{1}}, \mathbf{C_{2}}, \ldots, \mathbf{C_{n}} \right\} \right],\]

where the square brackets denote the linear span.


This means that the rank is the number of linearly independent columns of the matrix \mathbf{A}, which equals the number of non-zero rows in its row echelon form, obtained by applying elementary operations to \mathbf{A}.

A zero matrix is the only matrix of rank 0. An identity matrix of order n has full rank, that is, r(\mathbf{I}) = n.

A matrix \mathbf{A} \in \mathbb{R}^{m \times n} and its transpose matrix \mathbf{A}^T have the same rank, that is r(\mathbf{A}) = r(\mathbf{A}^T).

Let \mathbf{A'} be a matrix obtained from a matrix \mathbf{A} \in \mathbb{R}^{m \times n} by applying one of the elementary transformations. Then r(\mathbf{A'}) = r(\mathbf{A}).

For instance, the rank of matrix \mathbf{A} =\left [ \begin{array}{ccc}  3 & 2 & 7 \\  0 & 5 &  2 \\  0 &  0 & 6 \\  \end{array}\right] is 3, and the rank of matrix \mathbf{B} =\left [ \begin{array}{ccc}  1 & 0 & 0 \\  0 & 1 &  0 \\  0 &  0 & 0 \\  \end{array}\right] is 2.
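The ranks of the two example matrices can be confirmed with NumPy's built-in rank computation (which works via the singular value decomposition):

```python
import numpy as np

A = np.array([[3, 2, 7],
              [0, 5, 2],
              [0, 0, 6]])
B = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

assert np.linalg.matrix_rank(A) == 3   # full rank: three non-zero rows
assert np.linalg.matrix_rank(B) == 2   # one zero row, so rank 2
```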

Solutions to equations

Linear systems of equations have either no solution, exactly one solution, or infinitely many solutions.

The Kronecker–Capelli theorem

The system \mathbf{A}\mathbf{X} = \mathbf{B} is solvable if and only if r(\mathbf{A}) = r(\mathbf{\hat{A}}).
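The criterion is easy to test numerically by comparing the two ranks. A minimal sketch; the helper name `is_solvable` and the sample systems are illustrative assumptions:

```python
import numpy as np

def is_solvable(A, B):
    """Kronecker-Capelli criterion: A X = B is solvable iff
    rank(A) equals the rank of the augmented matrix [A | B]."""
    aug = np.hstack([A, B.reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                        # rank 1
assert is_solvable(A, np.array([3.0, 6.0]))       # consistent right-hand side
assert not is_solvable(A, np.array([3.0, 7.0]))   # inconsistent: rank jumps to 2
```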

The system \mathbf{A}\mathbf{X} = \mathbf{B} is homogeneous if b_{1} = b_{2} = \ldots = b_{m} = 0, that is

    \[\mathbf{A}\mathbf{X} = 0.\]

A homogeneous system is always solvable. If r(\mathbf{A}) = n, then the system \mathbf{A}\mathbf{X} = 0 has only the trivial solution \mathbf{X} = 0.
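The full-rank homogeneous case can be verified directly; the matrix below is an illustrative assumption:

```python
import numpy as np

# A square homogeneous system A X = 0 with a full-rank matrix A
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.linalg.matrix_rank(A) == A.shape[1]   # r(A) = n

X = np.linalg.solve(A, np.zeros(2))
assert np.allclose(X, 0.0)                      # only the trivial solution
```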

Cramer’s rule

If the matrix \mathbf{A} of the system \mathbf{A} \mathbf{X} = \mathbf{B} is invertible, then the solution of the system is unique and given by the formula:

    \[x_{i}=\frac{D_i}{D}, \quad i = 1, 2, \ldots, n ,\]

where D = \det \mathbf{A} and D_{i} is the determinant of the matrix obtained by replacing the i-th column of \mathbf{A} with the column matrix \mathbf{B}.
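Cramer's rule translates directly into code; the helper name `cramer` and the 2x2 system are illustrative assumptions:

```python
import numpy as np

def cramer(A, B):
    """Cramer's rule: x_i = D_i / D, where D = det(A) and D_i is the
    determinant of A with its i-th column replaced by B."""
    D = np.linalg.det(A)
    n = A.shape[0]
    X = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = B                      # replace the i-th column by B
        X[i] = np.linalg.det(Ai) / D
    return X

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([3.0, 5.0])
X = cramer(A, B)
assert np.allclose(A @ X, B)                       # A X = B holds
assert np.allclose(X, np.linalg.solve(A, B))       # agrees with direct solve
```

Cramer's rule is mainly of theoretical interest: computing n + 1 determinants is far more expensive than one elimination, so it is rarely used for systems beyond small n.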

Consider one more special case: a system \mathbf{A}\mathbf{X} = \mathbf{B} in which the number of equations m equals the number of unknowns n, so that \mathbf{A} is a square matrix. If r(\mathbf{A}) < n, then n - r(\mathbf{A}) > 0, and if the system is solvable it has infinitely many solutions.