This section is about functions that operate on numbers such as the integers or real numbers.
We define this as the functional equation:
$$f(x + y) = f(x) f(y)$$
It is a bit like Cauchy's functional equation, but with multiplication instead of addition.
TODO find a better name for it; "first-order linear homogeneous differential equation" almost fully constrains it, except for the exponent constant and the initial value.
The Taylor series expansion:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
is the most direct definition of the exponential, as it obviously satisfies the exponential function differential equation:
- the first constant term dies
- each other term gets converted to the one before
- because we have infinitely many terms, we get back exactly what we started with! (see the small numerical check after this list)
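Here is a minimal numerical sanity check of this term-shifting argument, in plain Python (the helper names are just illustrative): it sums a truncated Taylor series and its term-by-term derivative and checks that they agree.

```python
# Minimal sketch: the truncated Taylor series of e^x and its term-by-term
# derivative. Differentiating x^k / k! gives x^(k-1) / (k-1)!, i.e. the
# previous term, so the derivative of the truncated series is the same
# series minus its last term.
from math import factorial, exp

def taylor_exp(x, terms=20):
    return sum(x**k / factorial(k) for k in range(terms))

def taylor_exp_derivative(x, terms=20):
    # d/dx of x^k / k! is x^(k-1) / (k-1)!: each term becomes the one before it.
    return sum(k * x**(k - 1) / factorial(k) for k in range(1, terms))

x = 1.5
print(taylor_exp(x), exp(x))        # both approximately e^1.5
print(taylor_exp_derivative(x))     # approximately the same value again
```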
The basic intuition for this is to start from the origin and make small changes to the function based on its known derivative at the origin.
More precisely, we know that for any base $b$, exponentiation satisfies:
$$b^{x + y} = b^x b^y$$
And we also know that for $e$ in particular we satisfy the exponential function differential equation, and so:
$$\frac{d(e^x)}{dx} = e^x$$
One interesting fact is that the only thing we use from the exponential function differential equation is its value around $x = 0$, which is quite little information! This idea is basically what is behind the importance of the Lie group-Lie algebra correspondence via the exponential map. In the more general setting of groups and manifolds, restricting ourselves to be near the origin is a huge advantage.
- $e^0 = 1$.
- $\left.\dfrac{d(e^x)}{dx}\right|_{x=0} = 1$.
Now suppose that we want to calculate $e^x$ for some $x$. The idea is to start from $e^0 = 1$ and then to use the first order of the Taylor series to extend the known value of $e^0$ to $e^x$.
E.g., if we split $x$ into 2 parts, we know that:
$$e^x = e^{x/2} e^{x/2} = \left(e^{x/2}\right)^2$$
or in three parts:
$$e^x = e^{x/3} e^{x/3} e^{x/3} = \left(e^{x/3}\right)^3$$
so we can just use arbitrarily many parts that are arbitrarily close to $x = 0$, and more generally for any $n$ we have:
$$e^x = \left(e^{x/n}\right)^n$$
Let's see what happens with the Taylor series. We have, near $x = 0$ and in little-o notation:
$$e^x = 1 + x + o(x)$$
Therefore, for $x/n$, which is near $0$ for any fixed $x$:
$$e^{x/n} = 1 + \frac{x}{n} + o\!\left(\frac{1}{n}\right)$$
and therefore:
$$e^x = \left(e^{x/n}\right)^n = \left(1 + \frac{x}{n} + o\!\left(\frac{1}{n}\right)\right)^n$$
which is basically the formula that we wanted. We just have to convince ourselves that in the limit $n \to \infty$, the $o\!\left(\frac{1}{n}\right)$ disappears, i.e.:
$$\lim_{n \to \infty} \left(1 + \frac{x}{n} + o\!\left(\frac{1}{n}\right)\right)^n = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$$
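Here is a minimal numerical sketch in plain Python of the limit we just derived, showing $(1 + x/n)^n$ approaching $e^x$ as $n$ grows:

```python
# Minimal sketch: (1 + x/n)^n approaches e^x as n grows.
from math import exp

x = 2.0
for n in (1, 10, 100, 1000, 10000):
    approx = (1 + x / n) ** n
    print(n, approx, exp(x) - approx)  # the error shrinks roughly like 1/n
```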
The matrix exponential is the solution to a system of linear ordinary differential equations; the scalar exponential function is just the 1-dimensional special case.
Note that more generally, the matrix exponential can be defined on any ring.
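The matrix exponential can be computed with the same power series $\sum_k A^k / k!$ as the scalar case. Here is a minimal sketch (assuming NumPy and SciPy are installed; `expm_series` is just an illustrative helper, not a library function) comparing a truncated series against `scipy.linalg.expm`:

```python
# Minimal sketch: the matrix exponential via its truncated power series,
# compared against SciPy's expm.
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Approximate exp(A) by summing A^k / k! for k = 0 .. terms - 1."""
    result = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])
    for k in range(terms):
        result += term
        term = term @ A / (k + 1)  # next term: previous term times A / (k+1)
    return result

# exp of this matrix is a 2D rotation by 1 radian
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(expm_series(A))
print(expm(A))  # the two should agree to many decimal places
```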
The matrix exponential is of particular interest in the study of Lie groups, because in the case of the Lie algebra of a matrix Lie group, it provides the correct exponential map.
en.wikipedia.org/wiki/Logarithm_of_a_matrix#Existence mentions that it always exists for all invertible complex matrices. But the condition for real matrices is more complicated. Notable counterexample: $-1$ cannot be reached by $e^x$ for any real $x$.
The Lie algebra exponential covering problem can be seen as a generalized version of this problem, because
- the Lie algebra of $GL(n)$ is just the entire $M(n)$, the space of all $n \times n$ matrices
- we can immediately exclude non-invertible matrices from being the result of the exponential, because $e^M$ has inverse $e^{-M}$, so we already know that non-invertible matrices are not reachable (see the numerical sketch after this list)
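Here is a small numerical sketch of that last point (assuming NumPy and SciPy): since $\det(e^M) = e^{\operatorname{tr}(M)} > 0$ for real $M$, a real matrix exponential always has a strictly positive determinant, so non-invertible matrices (and, e.g., matrices with negative determinant) are never reached:

```python
# Minimal sketch: det(exp(M)) = exp(trace(M)) > 0 for real M, so any real
# matrix exponential is invertible and has strictly positive determinant.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
for _ in range(5):
    M = rng.standard_normal((3, 3))
    d = np.linalg.det(expm(M))
    print(d, np.exp(np.trace(M)))  # equal up to rounding, and always > 0
```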
In this section we collect results about algebraic equations over more "exotic" fields.
The set of all algebraic numbers forms a field.
This field contains all of the rational numbers, but unlike the rationals it is a quadratically closed field.
Like the rationals, this field also has the same cardinality as the natural numbers, because we can specify and enumerate each of its members by a finite number of integers taken from the polynomial equation that defines them. So it is a bit like the rationals, but we use a potentially arbitrary (though always finite) number of integers to specify each number (the polynomial coefficients plus the index of which root we are talking about), instead of always exactly two as for the rationals.
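To make this encoding concrete, here is a minimal sketch (assuming NumPy; the helper `algebraic_number` is just illustrative) that specifies an algebraic number by the integer coefficients of a polynomial plus a root index:

```python
# Minimal sketch: specify an algebraic number by finitely many integers:
# the integer coefficients of a polynomial plus the index of one of its roots.
import numpy as np

def algebraic_number(coefficients, root_index):
    """coefficients: integers, highest degree first; returns one root of the polynomial."""
    roots = sorted(np.roots(coefficients), key=lambda z: (z.real, z.imag))
    return roots[root_index]

# sqrt(2) is specified by the integers (1, 0, -2) and root index 1: x^2 - 2 = 0
print(algebraic_number([1, 0, -2], 1))  # approximately 1.41421356
```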
Each algebraic number also has a degree associated to it, i.e. the degree of the minimal polynomial used to define it.
TODO understand.
Sometimes mathematicians go a little overboard with their naming.
Open as of 2020:
Sometimes systems of Diophantine equations are considered.
Problems generally involve finding integer solutions to the equations, notably determining if any solution exists, and if infinitely many solutions exist.
The general problem is known to be undecidable: Hilbert's tenth problem.
The Pythagorean triples, and their generalization Fermat's Last Theorem, are the quintessential examples.
Direct consequence of Euclid's formula.
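For concreteness, here is a minimal sketch in Python of Euclid's formula $(a, b, c) = (m^2 - n^2,\ 2mn,\ m^2 + n^2)$ for $m > n > 0$, which already produces infinitely many triples:

```python
# Minimal sketch: Euclid's formula produces a Pythagorean triple for every
# pair m > n > 0, which already shows there are infinitely many triples.
def euclid_triple(m, n):
    assert m > n > 0
    return m*m - n*n, 2*m*n, m*m + n*n

for m in range(2, 6):
    for n in range(1, m):
        a, b, c = euclid_triple(m, n)
        assert a*a + b*b == c*c
        print(a, b, c)
```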
A generalization of the Pythagorean triple infinity question.
Once you hear about the undecidability of such problems, it makes you see that all Diophantine equation questions risk being undecidable, though in some simpler cases we manage to come up with answers. The feeling is similar to watching people try to solve specific instances of the halting problem, e.g. in the effort to determine BB(5).
www.jstor.org/stable/1970438 says that for a prime modulus there is an algorithm.
Question for non-prime moduli: math.stackexchange.com/questions/4944623/are-diophantine-equations-decidable-in-modular-arithmetic
TODO is it decidable or is it not:
- mathoverflow.net/questions/207482/algorithmic-un-solvability-of-diophantine-equations-of-given-degree-with-given
- math.stackexchange.com/questions/181380/second-degree-diophantine-equations
- mathoverflow.net/questions/142938/is-there-an-algorithm-to-solve-quadratic-diophantine-equations
- math.stackexchange.com/questions/798609/is-there-any-solution-to-this-quadratic-diophantine-equation
mathoverflow.net/questions/11540/what-are-the-most-attractive-turing-undecidable-problems-in-mathematics/103415#103415 provides a specific single undecidable Diophantine equation.
mathoverflow.net/questions/11540/what-are-the-most-attractive-turing-undecidable-problems-in-mathematics/11557#11557 contains a good overview of the decidability status of variants over rings other than the integers.
And when it can't, attempt to classify which subset of the integers can be reached. E.g. Legendre's three-square theorem.
4 squares are sufficient by Lagrange's four-square theorem.
3 is not enough by Legendre's three-square theorem.
The subsets reachable with 2 and 3 squares are fully characterized by the sum of two squares theorem and Legendre's three-square theorem, respectively.
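Here is a minimal brute-force sketch in plain Python that checks Legendre's characterization, i.e. that $n$ is a sum of three squares if and only if $n$ is not of the form $4^a(8b + 7)$, for small $n$:

```python
# Minimal sketch: brute-force check of Legendre's three-square theorem:
# n is a sum of three squares iff n is NOT of the form 4^a * (8b + 7).
import math

def is_sum_of_three_squares(n):
    for a in range(math.isqrt(n) + 1):
        for b in range(math.isqrt(n - a * a) + 1):
            c2 = n - a * a - b * b
            c = math.isqrt(c2)
            if c * c == c2:
                return True
    return False

def excluded_by_legendre(n):
    """True iff n = 4^a * (8b + 7) for some a, b >= 0."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

for n in range(1, 200):
    assert is_sum_of_three_squares(n) == (not excluded_by_legendre(n))
print("matches Legendre's characterization up to 199")
```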
Compared to Waring's problem, this is potentially much harder, as we can go infinitely negative in our attempts: there isn't a bound on how many tries we can have for each number.
In other words, it is unlikely to have a Conjecture reduction to a halting problem.
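To illustrate why the unbounded negative range hurts, here is a minimal brute-force sketch, using the classic sum of three cubes $x^3 + y^3 + z^3 = n$ as an assumed example of this kind of problem. The search bound is arbitrary: not finding a solution within it proves nothing, which is exactly the issue.

```python
# Minimal sketch: search for x^3 + y^3 + z^3 == n with x, y, z allowed to be
# negative. The bound B is arbitrary: not finding a solution below B tells us
# nothing, since solutions can involve enormous positive and negative
# integers (famously so for n = 33 and n = 42).
def three_cubes(n, B=50):
    for x in range(-B, B + 1):
        for y in range(x, B + 1):
            for z in range(y, B + 1):
                if x**3 + y**3 + z**3 == n:
                    return x, y, z
    return None  # inconclusive: maybe no solution, maybe B is too small

print(three_cubes(29))  # prints some small (x, y, z) with x^3 + y^3 + z^3 = 29
print(three_cubes(33))  # prints None, even though a huge solution is known to exist
```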
It is exactly what you'd expect from the name: Waring was watching Netflix with Goldbach when they suddenly came up with this.
A polynomial of degree 1, i.e. of the form $ax + b$.
A polynomial with multiple input arguments, e.g. with two inputs $x$ and $y$:
$$P(x, y) = x^2 + 2xy + y^3$$
as opposed to a polynomial with a single argument, e.g. one with just $x$:
$$P(x) = x^2 + 2x + 3$$
By default, we think of polynomials over the real numbers or complex numbers.
However, a polynomial can be defined over any other field just as well, the most notable example being that of a polynomial over a finite field.
For example, given the finite field of order 9, $\mathrm{GF}(9)$, with elements that we can label $0, 1, \ldots, 8$, we can denote polynomials over that field as $\mathrm{GF}(9)[x]$, where $x$ is the variable name.
For example, one such polynomial could be:
$$P_1(x) = 4x^3 + 7x + 3$$
and another one:
$$P_2(x) = x^5 + 8x^2$$
Note how all the coefficients are members of the finite field we chose.
Given this, we could evaluate the polynomial for any element of the field, e.g. $P_1(2)$, carrying out all additions and multiplications with the operations of the field, and so on.
We can also add polynomials as usual over the field, by adding the coefficients of matching powers of $x$ with the field's addition:
$$P_1(x) + P_2(x) = x^5 + 4x^3 + 8x^2 + 7x + 3$$
and multiplication works analogously.
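Since arithmetic in a prime-order field is just arithmetic mod $p$ (the order-9 field above needs a fancier representation, because 9 is a prime power rather than a prime), here is a minimal sketch of polynomial addition, multiplication and evaluation over $\mathrm{GF}(7)$, with polynomials stored as coefficient lists:

```python
# Minimal sketch: polynomials over the prime field GF(7), represented as
# lists of coefficients, lowest degree first. All coefficient arithmetic
# is done mod 7.
P = 7

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [(x + y) % P for x, y in zip(a, b)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_eval(a, x):
    return sum(c * x**k for k, c in enumerate(a)) % P

p1 = [3, 0, 4]            # 3 + 4x^2
p2 = [1, 5]               # 1 + 5x
print(poly_add(p1, p2))   # [4, 5, 4]     i.e. 4 + 5x + 4x^2
print(poly_mul(p1, p2))   # [3, 1, 4, 6]  i.e. 3 + x + 4x^2 + 6x^3
print(poly_eval(p1, 2))   # (3 + 4*4) % 7 = 5
```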
However, there is nothing in the immediate definition that prevents us from having a ring instead, i.e. a field but without the requirement of commutative multiplication and multiplicative inverses.
The only thing is that then we would need to differentiate between different orderings of the terms of a multivariate polynomial, e.g. the following would all be potentially different terms:
$$xyx, \quad xxy, \quad yxx$$
while for a field they would all go into a single term:
$$x^2 y$$
so when considering a polynomial over a ring we end up with a lot more possible terms.
If the ring is a commutative ring however, polynomials do look like proper polynomials: Section "Polynomial over a commutative ring".
Unlike over non-commutative rings, polynomials over a commutative ring do look like proper polynomials.
In particular, Hilbert's tenth problem is about polynomials over the integers, which form a commutative ring, and it therefore brings mindshare to this definition.
The polynomials together with polynomial addition and multiplication form a commutative ring.
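Here is a tiny sketch (assuming SymPy is installed) spot-checking this ring structure, namely that polynomial addition and multiplication commute and that multiplication distributes over addition:

```python
# Minimal sketch: spot-check commutativity and distributivity of polynomial
# addition and multiplication with symbolic polynomials.
from sympy import symbols, expand

x, y = symbols('x y')
p = 2*x**2 + 3*x*y + 1
q = x*y**2 - 5*y + 4
r = y**3 + x

assert expand(p * q - q * p) == 0              # multiplication commutes
assert expand(p + q - (q + p)) == 0            # addition commutes
assert expand(p * (q + r) - (p*q + p*r)) == 0  # multiplication distributes over addition
print("polynomial ring axioms spot-checked")
```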