Source: /cirosantilli/linear-map

= Linear map
{title2=linear operator}
{wiki}

A linear map is a function $f : V_1(F) \to V_2(F)$ where $V_1(F)$ and $V_2(F)$ are two vector spaces over the same <underlying field of a vector space>[underlying field] $F$ such that:
$$
\forall v_{1}, v_{2} \in V_1, c_{1}, c_{2} \in F \\
f(c_{1} v_{1} + c_{2} v_{2}) = c_{1} f(v_{1}) + c_{2} f(v_{2})
$$

A common case is $F = \R$, $V_1 = \R^m$ and $V_2 = \R^n$.
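For instance (the coefficients 3, 1 and 2 below are an arbitrary illustrative choice), the map $f : \R^2 \to \R^2$ defined by $f(x, y) = (3x + y, 2y)$ is linear, since:
$$
f(c_1 (x_1, y_1) + c_2 (x_2, y_2)) = (3(c_1 x_1 + c_2 x_2) + (c_1 y_1 + c_2 y_2), 2(c_1 y_1 + c_2 y_2)) \\
= c_1 (3 x_1 + y_1, 2 y_1) + c_2 (3 x_2 + y_2, 2 y_2) = c_1 f(x_1, y_1) + c_2 f(x_2, y_2)
$$
By contrast, $g(x, y) = (x^2, y)$ is not linear, since $g(2x, 2y) = (4x^2, 2y) \neq 2 g(x, y)$ in general.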

One thing that makes such functions particularly simple is that they can be fully specified by how they act on the basis vectors of the input space: by linearity, this determines their value on every linear combination of those basis vectors, and so in <finite dimension> they are specified by only a finite number of elements of $F$.
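Concretely, if $f : \R^m \to \R^n$ and we know the images $f(e_1), \dots, f(e_m)$ of the basis vectors of the input space, then for any input:
$$
f(x_1 e_1 + \dots + x_m e_m) = x_1 f(e_1) + \dots + x_m f(e_m)
$$
so the $m$ output vectors $f(e_j) \in \R^n$, i.e. $n \times m$ scalars in total, pin down $f$ completely. They are exactly the columns of the matrix mentioned next.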

Every linear map in <finite dimension> can be represented by a <matrix>, the points of the <domain (function)> being represented as <vectors>.
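As a minimal numerical sketch of that matrix representation (the matrix, vectors and coefficients below are arbitrary choices, and NumPy is just one convenient way to do the check), we can verify the defining property for $f(v) = Av$:

``
import numpy as np

# Arbitrary example: a linear map from R^3 to R^2 given by a 2x3 matrix.
# Column j of A is the image of the j-th basis vector of R^3.
A = np.array([
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
])

def f(v):
    # The linear map: matrix-vector multiplication.
    return A @ v

# Arbitrary input vectors and scalars.
v1 = np.array([1.0, -1.0, 2.0])
v2 = np.array([0.5, 3.0, -2.0])
c1, c2 = 2.0, -3.0

# f(c1 v1 + c2 v2) == c1 f(v1) + c2 f(v2), up to floating point error.
assert np.allclose(f(c1 * v1 + c2 * v2), c1 * f(v1) + c2 * f(v2))
``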

As such, when we say "linear map", we can think of a generalization of <matrix multiplication> that also makes sense in <infinite dimensional> spaces like <Hilbert spaces>: calling such infinite dimensional maps "matrices" would be stretching it a bit, since we would need to specify infinitely many rows and columns.

The prototypical building block of <infinite dimensional> linear maps is the <derivative>. In that case, the vectors being operated upon are <functions>, which therefore cannot be specified by a finite number of parameters.
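The derivative is linear exactly in the sense of the definition above: for differentiable functions $g_1, g_2$ and scalars $c_1, c_2 \in \R$:
$$
\frac{d}{dx} \left( c_1 g_1(x) + c_2 g_2(x) \right) = c_1 \frac{d g_1}{dx}(x) + c_2 \frac{d g_2}{dx}(x)
$$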

For example, the left side of the <time-independent Schrödinger equation> is a linear map. And the <time-independent Schrödinger equation> can be seen as an <eigenvalue> problem.
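Written out for a single particle in one dimension (a standard form, shown here only to make the structure visible):
$$
\left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) \right) \psi(x) = E \psi(x)
$$
The operator in parentheses is a linear map acting on the wavefunction $\psi$, built from the second derivative and from multiplication by $V(x)$, and the equation asks for the eigenvectors $\psi$ and eigenvalues $E$ of that map.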