# Definition and types of matrices

Let $n$ and $m$ be natural numbers. Every rectangular array $\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\ \vdots &\vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & a_{m3} & \ldots & a_{mn} \end{bmatrix}$

is called a matrix.

Its elements $a_{ij}$ are arranged in $m$ rows and $n$ columns, where $i=1, \ldots , m$ and $j=1, \ldots ,n$. The index $i$ indicates the row in which an element is located, and $j$ the column. The dimension of a matrix $\mathbf{A}$ is $m \times n$ (read: $m$ by $n$), because it contains $m$ rows and $n$ columns.

Elements of a matrix are usually real numbers; however, they can also be complex numbers or vectors.

A matrix $\mathbf{A}$ with $m$ rows and $n$ columns will be denoted briefly by $[a_{ij}]$.

For example, let us look at the following matrix:
$$\mathbf{A} = \begin{bmatrix} 2 & 4 & -7 & 1 \\ 3 & 1 & 6 & -2 \\ 11 & -9 & -1 & -4 \end{bmatrix}.$$

The dimension of the matrix above is $3 \times 4$ (three by four), because it has $3$ rows and $4$ columns. For instance, the element of $\mathbf{A}$ located in the first row and first column is $2$, which we write as $a_{11}=2$.
We will work with real matrices, that is, matrices in which all elements are real numbers. The set of all real matrices with $m$ rows and $n$ columns will be denoted by $\mathbb{R}^{m \times n}$.
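The notions of dimension and element indexing can be checked numerically. Below is a minimal sketch using NumPy (the choice of library is an assumption, not part of the text; note that NumPy indexes from $0$, so the mathematical element $a_{11}$ is `A[0, 0]` in code):

```python
import numpy as np

# The example matrix A from above, stored as a NumPy array
A = np.array([[ 2,  4, -7,  1],
              [ 3,  1,  6, -2],
              [11, -9, -1, -4]])

print(A.shape)   # dimension (m, n): prints (3, 4)
print(A[0, 0])   # the element a_11: prints 2
```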

Two matrices $\mathbf{A}$ and $\mathbf{B}$ are equal if they have the same dimension and $a_{ij} = b_{ij}$ for all $i= 1, \ldots , m$ and $j = 1, \ldots, n$.
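Both conditions of this definition (same dimension, equal elements) can be tested at once; a small sketch, again assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 2], [3, 4]])
C = np.array([[1, 2, 0], [3, 4, 0]])

# Same dimension and equal elements everywhere
print(np.array_equal(A, B))  # prints True
# Different dimensions, so the matrices are not equal
print(np.array_equal(A, C))  # prints False
```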

If $m=n$, then the matrix has an equal number of rows and columns and is called a square matrix:

$$\mathbf{A} = \left[ \begin{array} {ccccc} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nn} \\ \end{array} \right] .$$

The set of all real square matrices of order $n$ will be denoted by $\mathbb{R}^{n \times n}$.

A zero matrix is a matrix in which all elements are equal to zero.

A diagonal matrix is a square matrix whose only nonzero elements are those with equal indices, that is, the elements $a_{ii}$ located on the main diagonal; all other elements are equal to zero. Only square matrices can be diagonal matrices.

$$\mathbf{D} = \left[ \begin{array} {ccccc} a_{11} & 0 & 0 & \ldots & 0 \\ 0 & a_{22} & 0 & \ldots & 0 \\ 0 & 0 & a_{33} & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \ldots & a_{nn} \\ \end{array} \right] .$$

A diagonal matrix is usually denoted as $\mathbf{D}= \operatorname{diag}(a_{11}, a_{22}, a_{33}, \ldots , a_{nn})$.
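Building a diagonal matrix from its diagonal entries, and reading those entries back, can be sketched as follows (NumPy is an assumption; the entries $3, 1, 4$ are an arbitrary illustration):

```python
import numpy as np

# Build D = diag(3, 1, 4) from its diagonal entries
D = np.diag([3, 1, 4])
print(D)
# Recover the diagonal entries from the square matrix
print(np.diag(D))  # prints [3 1 4]
```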

The trace of a square matrix $\mathbf {A}$ is the sum of the elements on the diagonal, and we denote it by $\operatorname{tr}(\mathbf{A})$:

$$\operatorname{tr}(\mathbf{A})= a_{11} + a_{22} + a_{33} + \ldots + a_{nn}.$$
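As a concrete check of this formula (a sketch assuming NumPy; the matrix is an arbitrary $3 \times 3$ example):

```python
import numpy as np

A = np.array([[ 2,  4, -7],
              [ 3,  1,  6],
              [11, -9, -1]])
# tr(A) = a_11 + a_22 + a_33 = 2 + 1 + (-1) = 2
print(np.trace(A))  # prints 2
```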

An identity matrix is a diagonal matrix in which all elements on the diagonal are equal to $1$. An identity matrix is denoted by $\mathbf{I}$:

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \ldots & 1 \\ \end{bmatrix}.$$
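The defining property of the identity matrix is that multiplying by it leaves any matrix unchanged; a minimal sketch assuming NumPy:

```python
import numpy as np

I = np.eye(3)  # the 3x3 identity matrix
A = np.array([[ 2.0,  4.0, -7.0],
              [ 3.0,  1.0,  6.0],
              [11.0, -9.0, -1.0]])
# A I = A: multiplication by the identity changes nothing
print(np.array_equal(A @ I, A))  # prints True
```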

An upper triangular matrix is a square matrix for which $a_{ij} = 0$ whenever $i > j$. This means that all elements below the diagonal are equal to zero:

$$\left[ \begin{array} {ccccc} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\ 0 & a_{22} & a_{23} & \ldots & a_{2n} \\ 0 & 0 & a_{33} & \ldots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \ldots & a_{nn} \\ \end{array} \right] .$$

A lower triangular matrix is a square matrix for which $a_{ij} = 0$ whenever $i < j$. This means that all elements above the diagonal are equal to zero:

$$\left[ \begin{array} {ccccc} a_{11} & 0 & 0 & \ldots & 0 \\ a_{21} & a_{22} & 0 & \ldots & 0 \\ a_{31} & a_{32} & a_{33} & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nn} \\ \end{array} \right].$$
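Both triangular forms can be extracted from an arbitrary square matrix by zeroing the appropriate part; a sketch assuming NumPy, where `np.triu` keeps the upper triangle and `np.tril` the lower:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
U = np.triu(A)  # upper triangular: zeros below the diagonal (i > j)
L = np.tril(A)  # lower triangular: zeros above the diagonal (i < j)
print(U)
print(L)
```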

The transpose of a matrix $\mathbf{A}$ is the matrix $\mathbf{A}^T$ whose elements satisfy $(\mathbf{A}^T)_{ij} = a_{ji}$, $\forall i =1, \ldots , m , \forall j = 1, \ldots , n$. This means that the rows of the given matrix $\mathbf{A}$ become the columns of $\mathbf{A}^T$. The mapping described in this way is called transposition.

For instance, if the matrix $\mathbf{A} = \begin{bmatrix} 1 & -1 & 5 & 2 \\ 4 & -3 & 6 & 9 \end{bmatrix}$  is the given matrix, then its transpose matrix is matrix $\mathbf{A}^T = \begin{bmatrix} 1 & 4 \\ -1 & -3 \\ 5 & 6 \\ 2 & 9 \end{bmatrix}$.
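The example above can be reproduced directly; a sketch assuming NumPy, where `.T` gives the transpose:

```python
import numpy as np

A = np.array([[1, -1, 5, 2],
              [4, -3, 6, 9]])
# Rows of A become columns of A^T
print(A.T)
print(A.shape, A.T.shape)  # prints (2, 4) (4, 2)
```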

A square matrix $\mathbf{A}$ is an orthogonal matrix if $\mathbf{A} \mathbf{A^T} = \mathbf{A^T} \mathbf{A} = \mathbf{I}$.
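A standard example of an orthogonal matrix is a rotation matrix; the check below is a sketch assuming NumPy (the angle $\pi/6$ is arbitrary, and `np.allclose` is used because floating-point products are only approximately exact):

```python
import numpy as np

theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# A A^T = A^T A = I, up to floating-point rounding
print(np.allclose(A @ A.T, np.eye(2)))  # prints True
print(np.allclose(A.T @ A, np.eye(2)))  # prints True
```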

A square matrix $\mathbf{A}$ is a symmetric matrix if $\mathbf{A} = \mathbf{A} ^T$. For instance, the following matrix is a symmetric matrix:

$$\mathbf{A} =\begin{bmatrix} 1 & -1 & 0 \\ -1 & 1 & 6 \\ 0 & 6 & 1 \end{bmatrix} = \mathbf{A}^ T.$$

A square matrix $\mathbf{A}$ is a skew-symmetric matrix if $\mathbf{A} = -\mathbf{A} ^T$. Note that this condition forces every element on the diagonal to be zero, since it requires $a_{ii} = -a_{ii}$. For example:

$$\mathbf{A} =\begin{bmatrix} 0& -1 & 1 & 2 \\ 1 & 0 & 5 & -7 \\ -1 & -5 & 0 & -3 \\ -2 & 7 & 3 & 0 \end{bmatrix}= -\mathbf{A}^ T.$$
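Both example matrices above can be verified against their definitions; a sketch assuming NumPy:

```python
import numpy as np

# The symmetric example from the text
S = np.array([[ 1, -1,  0],
              [-1,  1,  6],
              [ 0,  6,  1]])
# The skew-symmetric example from the text
K = np.array([[ 0, -1,  1,  2],
              [ 1,  0,  5, -7],
              [-1, -5,  0, -3],
              [-2,  7,  3,  0]])
print(np.array_equal(S, S.T))   # symmetric: prints True
print(np.array_equal(K, -K.T))  # skew-symmetric: prints True
```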

Matrices which have only one row or only one column are vectors, so their dimension is $1 \times n$ or $n \times 1$, respectively. We will denote vectors by lowercase bold letters, for example $\mathbf{a} , \mathbf{b}, \mathbf{x}, \ldots$

If $\mathbf{a}$ is a row vector, then its transpose vector $\mathbf{a}^T$ is a column vector, i.e.:

$$\begin{bmatrix} a_{1} & a_{2} & a_{3} & \ldots & a_{n} \end{bmatrix}^T = \begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \vdots \\ a_{n} \end{bmatrix}.$$

The reverse is also true: the transpose of a column vector is a row vector.
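The relationship between row and column vectors can be sketched as follows (assuming NumPy; the vector is kept as a two-dimensional $1 \times n$ array so that the transpose has dimension $n \times 1$):

```python
import numpy as np

a = np.array([[1, 2, 3, 4]])  # a row vector, dimension 1 x 4
print(a.shape)    # prints (1, 4)
print(a.T.shape)  # the transpose is a column vector: prints (4, 1)
# Transposing twice returns the original row vector
print(np.array_equal(a.T.T, a))  # prints True
```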

Accordingly, a matrix can be viewed as a collection of vectors, namely its rows and its columns.