**Definition**. Let $V$ and $W$ be two vector spaces over the same field $\mathbb{F}$. A transformation $A: V \to W$ is called a **linear operator** if

$$A ( \alpha x + \beta y) = \alpha A x + \beta A y, \quad \forall x, y \in V, \quad \forall \alpha, \beta \in \mathbb{F}.$$

The definition allows the possibility $V = W$; in that case the condition that $V$ and $W$ are vector spaces over the same field is automatically satisfied. We then say that $A$ is a linear operator on $V$.

If $V \neq W$, the definition still requires that the two spaces be over the same field. Namely, the scalars $\alpha$ and $\beta$ on the left-hand side of the equality $A (\alpha x + \beta y) = \alpha A x + \beta A y$ multiply vectors in $V$, while the same scalars multiply the images $Ax$ and $Ay$ of these vectors in $W$ on the right-hand side.
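As a concrete sketch (not from the text): a matrix acting on column vectors by multiplication is the prototypical linear operator, and the defining identity can be checked numerically. The matrix and vectors below are arbitrary choices.

```python
import numpy as np

# A hypothetical operator A: R^3 -> R^2, represented by a matrix;
# matrix multiplication realizes the map x |-> Ax.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

x = np.array([1.0, 0.0, 2.0])
y = np.array([-1.0, 4.0, 1.0])
alpha, beta = 2.0, -3.0

# The two sides of A(alpha x + beta y) = alpha Ax + beta Ay.
lhs = A @ (alpha * x + beta * y)
rhs = alpha * (A @ x) + beta * (A @ y)
print(np.allclose(lhs, rhs))  # the defining identity holds
```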

**Note**. A linear operator is also called a **linear transformation**; the defining condition is the same:

$$A( \alpha x + \beta y) = \alpha A x + \beta A y, \quad \forall x, y \in V, \quad \forall \alpha, \beta \in \mathbb{F}.$$

From this it follows immediately that

(i) $A (x + y) = Ax + Ay, \quad \forall x, y \in V$, if we take $\alpha = \beta = 1$;

(ii) $A ( \alpha x) = \alpha A x , \quad \forall x \in V, \forall \alpha \in \mathbb{F}$, if we take $\beta = 0$.

The property (i) is called **additivity**, and the property (ii) is called **homogeneity**. Therefore, every linear operator is additive and homogeneous. The converse also holds: if $A$ is additive and homogeneous, then $A$ is linear.

Every linear operator maps the zero vector to the zero vector: $A0 = 0$ (take $\alpha = 0$ in (ii)).

If $A: V \to W$ is a linear operator, then

$$A ( \alpha_1 x_1 + \alpha_2 x_2 + \ldots + \alpha_n x_n) = \alpha_1 A x_1 + \alpha_2 A x_2 + \ldots + \alpha_n A x_n, \quad \forall x_i \in V, \forall \alpha_i \in \mathbb{F},$$

that is, linear operators respect linear combinations.

Suppose now that $A: V \to W$ is a linear operator and $\{b_1, b_2, \ldots, b_n \}$, $n \in \mathbb{N}$, is a basis for $V$. Choose a vector $x \in V$ and write it in the form

$$x = \alpha_1 b_1 + \alpha_2 b_2 + \ldots + \alpha_n b_n.$$

Then $Ax$ can be expressed as

$$Ax = \alpha_1 A b_1+ \alpha_2 A b_2 + \ldots + \alpha_n A b_n.$$

From this we conclude that if we know the vectors $Ab_1, \ldots, Ab_n$, then we implicitly know $Ax$ for every vector $x \in V$; in other words, a linear operator is completely determined by its values on a basis.
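The steps above can be sketched numerically (a NumPy illustration with an arbitrarily chosen matrix and the standard basis of $\mathbb{R}^3$): the images $Ab_i$ alone suffice to recover $Ax$.

```python
import numpy as np

# An arbitrary operator A: R^3 -> R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

# Standard basis of R^3 and the images A b_i.
basis = np.eye(3)
images = [A @ b for b in basis.T]

# Coordinates of x in that basis (here just its entries).
alphas = np.array([2.0, -1.0, 4.0])
x = basis @ alphas

# Ax reconstructed from the basis images alone.
Ax = sum(a * w for a, w in zip(alphas, images))
print(np.allclose(Ax, A @ x))
```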

Looking from another angle, this observation leads us to the following proposition.

**Proposition 1**. Let $V$ and $W$ be two vector spaces over the same field $\mathbb{F}$, let $\{b_1, \ldots, b_n \}$ be any basis for $V$, and let $(w_1, \ldots, w_n)$ be any ordered $n$-tuple of vectors in $W$. Then there is a unique linear operator $A: V \to W$ such that $Ab_i = w_i, \quad \forall i = 1, \ldots, n$.

Since $(w_1, \ldots, w_n)$ is an ordered $n$-tuple rather than a set, some of the vectors $w_i$ may be repeated.
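In coordinates, Proposition 1 has a concrete realization (a sketch, with $V = \mathbb{R}^3$, $W = \mathbb{R}^2$, the standard basis, and arbitrarily chosen $w_i$): the unique operator is the matrix whose $i$-th column is $w_i$.

```python
import numpy as np

# Target vectors w_1, w_2, w_3 in R^2; repetition is allowed.
w = [np.array([1.0, 0.0]),
     np.array([1.0, 0.0]),   # w_2 = w_1: repeats are fine
     np.array([0.0, 5.0])]

# The unique linear operator with A e_i = w_i has the w_i as columns.
A = np.column_stack(w)

e = np.eye(3)
print(all(np.allclose(A @ e[:, i], w[i]) for i in range(3)))
```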

We can now explore some basic properties of linear operators.

First, linear operators preserve subspaces.

**Proposition 2**. Let $A: V \to W$ be a linear operator.

(1) If $K$ is a subspace of $V$, then $A(K)$ is a subspace of $W$.

(2) If $L$ is a subspace of $W$, then $A^{-1}(L)$ is a subspace of $V$.

Consider two special cases: $K = V \le V$ and $L = \{0 \} \le W$. These lead us to the definitions of the rank and the nullity, the central concepts in the study of linear operators.

**Definition**. Let $V$ and $W$ be finite dimensional vector spaces and let $A: V \to W$ be a linear operator. The **range** (or **image**) of the linear operator $A$ is the set of all vectors $w \in W$ such that $w = Av$ for some $v \in V$, that is,

$$Im (A) = A(V) = \{ Av : v \in V\}.$$

The **kernel** of the linear operator $A$ is the set of all vectors $v \in V$ such that $Av = 0$, that is,

$$\ker (A) = A^{-1}\{0 \} = \{ v \in V: Av = 0 \}.$$

The number $\dim (\ker (A))$ is called the **nullity** of the operator $A$ and is denoted by $null (A)$; the number $\dim (Im(A))$ is called the **rank** of the operator $A$ and is denoted by $r(A)$.

It is routine to show that $\ker (A)$ is a subspace of $V$ and $Im (A)$ is a subspace of $W$. Moreover, we have the following proposition.

**Proposition 3**. Let $A: V \to W$ be a linear operator.

(i) $A$ is surjective if and only if $Im (A) = W$.

(ii) $A$ is injective if and only if $\ker (A) = \{0\}$.

**Proposition 4**. Let $V$ and $W$ be finite dimensional vector spaces and let $A: V \to W$ be a linear operator. Then $A$ is injective if and only if for each linearly independent set $S$ in $V$ the set $A(S) = \{Av: v \in S \}$ is linearly independent in $W$.

Now we can state the **rank–nullity theorem**.

**Theorem 1**. If $V$ and $W$ are finite dimensional vector spaces and $A: V \to W$ is a linear operator, then

$$r(A) + null (A) = \dim V.$$
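A numerical illustration of the theorem (a sketch; the matrix is an arbitrary choice whose third row is the sum of the first two, so its rank is 2):

```python
import numpy as np

# An arbitrary operator A: R^4 -> R^3; row 3 = row 1 + row 2, so r(A) = 2.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

r = np.linalg.matrix_rank(A)  # r(A) = dim Im(A)

# null(A): dimension of the kernel, counted via the singular values
# (number of coordinate directions annihilated up to tolerance).
s = np.linalg.svd(A, compute_uv=False)
null = A.shape[1] - np.count_nonzero(s > 1e-10)

print(r + null == A.shape[1])  # r(A) + null(A) = dim V
```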

**Definition**. A linear operator $A: V \to W$ is called

(i) a **monomorphism** if $A$ is injective;

(ii) an **epimorphism** if $A$ is surjective;

(iii) an **isomorphism** if $A$ is bijective.

A consequence of the rank–nullity theorem is the following corollary.

**Corollary 1.** Let $A: V \to W$ be a linear operator and let $\dim V = \dim W < \infty$. The following conditions are equivalent to each other:

(1) $A$ is a monomorphism.

(2) $A$ is an epimorphism.

(3) $A$ is an isomorphism.

Two vector spaces $V$ and $W$ are called **isomorphic** if there is an isomorphism $A: V \to W$.

**Proposition 5**. Let $A: V \to W$ and $B: W \to Z$ be linear operators. Then the composition $B \circ A: V \to Z$ is also a linear operator. In particular, the composition of two isomorphisms is again an isomorphism.
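In matrix form (a sketch with arbitrarily chosen matrices), composition of linear operators corresponds to matrix multiplication, so $B \circ A$ is represented by the product $BA$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])        # A: R^2 -> R^3
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0]])   # B: R^3 -> R^2

x = np.array([1.0, -2.0])

# (B o A)x computed step by step agrees with the single matrix B A.
step_by_step = B @ (A @ x)
composed = (B @ A) @ x
print(np.allclose(step_by_step, composed))
```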

The following proposition characterizes the isomorphisms of the finite dimensional vector spaces.

**Proposition 6**. Let $A: V \to W$ be a linear operator and $\dim V = n$. Then the following claims are equivalent to each other:

(1) $A$ is an isomorphism.

(2) For each basis $\{b_1, b_2, \ldots, b_n \}$ of $V$, the set $\{Ab_1, Ab_2, \ldots, Ab_n \}$ is a basis for $W$.

(3) There is a basis $\{e_1, e_2, \ldots, e_n \}$ of $V$ such that $\{Ae_1, Ae_2, \ldots, Ae_n \}$ is a basis for $W$.
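In coordinates (a sketch with an arbitrary invertible matrix): an isomorphism sends the standard basis vectors to the columns of its matrix, and those columns are linearly independent, hence a basis, exactly as claim (2) states.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det(A) = 1, so A is an isomorphism of R^2

# Images of the standard basis e_1, e_2: the columns of A.
images = np.column_stack([A @ e for e in np.eye(2)])
print(np.linalg.matrix_rank(images) == 2)  # the images form a basis of R^2
```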

**Proposition 7**. Let $V$ and $W$ be finite dimensional vector spaces over the same field. Then $V$ and $W$ are isomorphic if and only if $\dim V = \dim W$.

**The space of linear operators**

When $V$ and $W$ are vector spaces over the same field, we can consider the set of all linear operators from $V$ into $W$, denoted by $L (V, W)$. This set is never empty: it always contains the zero operator $x \mapsto 0$.

We now want to introduce the structure of a vector space on $L (V, W)$.

**Definition**. Let $V$ and $W$ be two vector spaces over the same field $\mathbb{F}$.

- For $A, B \in L(V, W)$ we define $A + B: V \to W$ as $(A + B) x = Ax + Bx$.
- For $A \in L (V, W)$ and $\alpha \in \mathbb{F}$ we define $\alpha A: V \to W$ as $(\alpha A) x = \alpha A x$.

This means that linear operators are added pointwise, and they are multiplied by scalars pointwise as well.
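For matrix operators (a quick sketch with arbitrary matrices), these pointwise operations coincide with ordinary matrix addition and scalar multiplication:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0]])
B = np.array([[0.0, 3.0],
              [1.0, -1.0]])
alpha = 2.5
x = np.array([1.0, 2.0])

# (A + B)x = Ax + Bx and (alpha A)x = alpha (Ax): the pointwise
# definitions agree with matrix addition and scalar multiplication.
add_ok = np.allclose((A + B) @ x, A @ x + B @ x)
scale_ok = np.allclose((alpha * A) @ x, alpha * (A @ x))
print(add_ok and scale_ok)
```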

**Theorem 2**. If $V$ and $W$ are vector spaces over the same field $\mathbb{F}$, then $L (V, W)$, with the operations just defined, is also a vector space over the field $\mathbb{F}$.

**Theorem 3**. Suppose $\dim V = n$ and $\dim W = m$. Then $ L (V, W)$ has finite dimension $n \cdot m$, that is,

$$\dim L (V, W) = \dim V \cdot \dim W.$$
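A sketch of why the dimension is $n \cdot m$: fix a basis $\{b_1, \ldots, b_n \}$ of $V$ and a basis $\{c_1, \ldots, c_m \}$ of $W$ (the basis of $W$ is notation added here), and for each pair $(i, j)$ define an operator $E_{ij} \in L(V, W)$ by

$$E_{ij} b_k = \delta_{jk} c_i, \quad i = 1, \ldots, m, \quad j, k = 1, \ldots, n.$$

By Proposition 1 each $E_{ij}$ exists and is unique, and one checks that these $n \cdot m$ operators form a basis for $L(V, W)$.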

Consider yet another special case, $V = W$. We write $L(V)$ instead of $L(V, V)$.

Of course, if $\dim V = n$, then $\dim L(V) = n^2$.

**Proposition 8**. Let $V$ be a vector space. For $L(V)$ we have:

(1) $L(V)$ is a vector space;

(2) $AB := A \circ B \in L(V), \quad \forall A, B \in L(V)$;

(3) $A(BC) = (AB)C, \quad \forall A, B, C \in L(V)$;

(4) $A(B + C) = AB + AC$ and $(A + B)C = AC + BC, \quad \forall A, B, C \in L(V)$;

(5) $(\alpha A)B = \alpha(AB) = A(\alpha B), \quad \forall A, B \in L(V), \forall \alpha \in \mathbb{F}$;

(6) $\exists! \, I \in L(V)$ such that $AI = IA = A, \quad \forall A \in L(V)$.
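Properties (3)–(6) can be spot-checked numerically for matrix operators (a sketch; `numpy.random.default_rng` supplies arbitrary matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
alpha = 1.5
I = np.eye(3)

assoc = np.allclose(A @ (B @ C), (A @ B) @ C)            # (3) associativity
distrib = np.allclose(A @ (B + C), A @ B + A @ C)        # (4) distributivity
scalars = np.allclose((alpha * A) @ B, alpha * (A @ B))  # (5)
identity = np.allclose(A @ I, A) and np.allclose(I @ A, A)  # (6) identity
print(assoc and distrib and scalars and identity)
```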