This section is about functions that operate on numbers such as the integers or real numbers.

We can define it via a functional equation that is a bit like Cauchy's functional equation, but with multiplication instead of addition:

$f(x + y) = f(x) f(y)$

The exponential function solves the differential equation:

$y'(x) = y(x)$

with initial condition:

$y(0) = 1$
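A minimal numerical sketch of this definition: forward Euler integration of $y' = y$ starting from $y(0) = 1$ (the helper name `euler_exp` and the step count are just illustrative choices):

```python
import math

# Forward Euler integration of y'(x) = y(x), y(0) = 1,
# from 0 to x in n steps (illustrative sketch only).
def euler_exp(x, n):
    y = 1.0            # initial condition y(0) = 1
    h = x / n          # step size
    for _ in range(n):
        y += h * y     # y' = y, so each step multiplies y by (1 + h)
    return y

print(euler_exp(1.0, 1000))  # approaches e = 2.71828... as n grows
print(math.e)
```

Note that each Euler step multiplies by $(1 + x/n)$, so this is literally computing $(1 + x/n)^n$, foreshadowing the limit formula below.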

TODO find better name for it; "linear homogeneous differential equation of degree one" almost fully constrains it, except for the exponent constant and initial value.

The Taylor series expansion is the most direct definition of the exponential, as it obviously satisfies the exponential function differential equation:

- the first constant term dies
- each other term gets converted to the one before
- because we have infinitely many terms, we get back what we started with!

$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1} + \frac{x^2}{2} + \frac{x^3}{2 \times 3} + \frac{x^4}{2 \times 3 \times 4} + \ldots$

$e^x = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$
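Both formulas are easy to check numerically; here is a small sketch (the function names are illustrative):

```python
import math

# Partial sums of the Taylor series: sum of x^n / n! for n = 0 .. terms-1.
def exp_taylor(x, terms):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # turn x^n/n! into x^(n+1)/(n+1)!
    return total

# The limit definition: (1 + x/n)^n for a large finite n.
def exp_limit(x, n):
    return (1 + x / n) ** n

print(exp_taylor(1.0, 20))    # converges very fast
print(exp_limit(1.0, 10**6))  # converges much more slowly
print(math.exp(1.0))
```

The Taylor series converges far faster: 20 terms already beat a million-step limit evaluation.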

The basic intuition for this is to start from the origin and make small changes to the function based on its known derivative at the origin.

More precisely, we know that for any base $b$, exponentiation satisfies:

- $b^{x + y} = b^x b^y$.
- $b^0 = 1$.

And we also know that for $b = e$ in particular, the exponential function differential equation is satisfied, and so:

$\frac{de^x}{dx}(0) = 1$

One interesting fact is that the only thing we use from the exponential function differential equation is the value around $x = 0$, which is quite little information! This idea is basically what is behind the importance of the Lie group-Lie algebra correspondence via the exponential map. In the more general setting of groups and manifolds, restricting ourselves to be near the origin is a huge advantage.
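As a quick numerical sanity check of this derivative condition, the difference quotient of $e^x$ at $x = 0$ tends to $1$ as the step shrinks:

```python
import math

# Difference quotient (e^h - e^0) / h at x = 0, which tends to 1 as h -> 0.
for h in [1e-1, 1e-3, 1e-5]:
    print(h, (math.exp(h) - 1) / h)
```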

Now suppose that we want to calculate $e^1$. The idea is to start from $e^0$ and then use the first order of the Taylor series to extend the known value of $e^0$ to $e^1$.

E.g., if we split into 2 parts, we know that:

$e^1 = e^{1/2} e^{1/2}$

or in three parts:

$e^1 = e^{1/3} e^{1/3} e^{1/3}$

so we can just use arbitrarily many parts $e^{1/n}$ that are arbitrarily close to $x = 0$:

$e^1 = (e^{1/n})^n$

and more generally, for any $x$, we have:

$e^x = (e^{x/n})^n$
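This splitting identity holds exactly, which a few lines of Python confirm up to floating point:

```python
import math

# e^x = (e^(x/n))^n exactly, up to floating point rounding.
x = 2.0
for n in [2, 3, 10]:
    print(n, math.exp(x / n) ** n, math.exp(x))
```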

Let's see what happens with the Taylor series. Near $y = 0$, in little-o notation, we have:

$e^y = 1 + y + o(y)$

Therefore, for $y = x/n$, which is near $y = 0$ for any fixed $x$:

$e^{x/n} = 1 + x/n + o(1/n)$

and therefore:

$e^x = (e^{x/n})^n = (1 + x/n + o(1/n))^n$

which is basically the formula that we wanted. We just have to convince ourselves that in the limit $n \to \infty$, the $o(1/n)$ disappears, i.e.:

$\lim_{n \to \infty} (1 + x/n + o(1/n))^n = \lim_{n \to \infty} (1 + x/n)^n$

To do that, let's multiply $e^y$ by itself once:

$e^y e^y = (1 + y + o(y))(1 + y + o(y)) = 1 + 2y + o(y)$

and multiplying a third time:

$e^y e^y e^y = (1 + 2y + o(y))(1 + y + o(y)) = 1 + 3y + o(y)$

TODO conclude.
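To suggest where this pattern is going, here is a numerical check that for a fixed number of factors $n$, we indeed have $(1 + y)^n = 1 + ny + o(y)$: the remainder divided by $y$ vanishes as $y \to 0$ (using $n = 3$ as in the last expansion above):

```python
# Check that (1 + y)^n - (1 + n*y) is o(y) for fixed n:
# the remainder divided by y tends to 0 as y -> 0.
n = 3
for y in [1e-1, 1e-3, 1e-5]:
    remainder = (1 + y) ** n - (1 + n * y)
    print(y, remainder / y)   # tends to 0
```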

The matrix exponential is the solution to a system of linear ordinary differential equations; the scalar exponential function is just the 1-dimensional subcase.

Note that more generally, the matrix exponential can be defined on any ring.

The matrix exponential is of particular interest in the study of Lie groups, because in the case of the Lie algebra of a matrix Lie group, it provides the correct exponential map.

en.wikipedia.org/wiki/Logarithm_of_a_matrix#Existence mentions that the matrix logarithm always exists for all invertible complex matrices. But the condition for real matrices is more complicated. Notable counterexample: $-1$ cannot be reached by any real $e^{tk}$.

The Lie algebra exponential covering problem can be seen as a generalized version of this problem, because:

- the Lie algebra of $GL(n)$ is just the entire $M_n$
- we can immediately exclude non-invertible matrices from being the result of the exponential, because $e^{tM}$ has inverse $e^{-tM}$, so we already know that non-invertible matrices are not reachable
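A minimal sketch of this invertibility property, computing a 2x2 matrix exponential via the truncated Taylor series $\sum_k (tM)^k / k!$ (pure Python for illustration; real code would use a library routine such as `scipy.linalg.expm`):

```python
# Truncated Taylor series matrix exponential for 2x2 matrices
# (illustrative sketch only, not a production algorithm).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def expm2(M, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity: the k = 0 term
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(1.0 / k, mat_mul(term, M))  # M^k / k!
        result = mat_add(result, term)
    return result

M = [[0.0, 1.0], [-1.0, 0.0]]  # an arbitrary example matrix
t = 0.7
A = expm2(mat_scale(t, M))
B = expm2(mat_scale(-t, M))
print(mat_mul(A, B))  # e^{tM} e^{-tM} = identity, up to floating point
```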
