A linear map is a function $f : V \to W$ where $V$ and $W$ are two vector spaces over the same underlying field, such that:
$$f(\alpha u + \beta v) = \alpha f(u) + \beta f(v)$$
A common case is $f : \mathbb{R}^n \to \mathbb{R}^m$, with $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$.
One thing that makes such functions particularly simple is that they are fully specified by how they act on the input basis vectors: in finite dimension they are therefore specified by only a finite number of elements of $W$.
Every linear map in finite dimension can be represented by a matrix, the points of the domain being represented as vectors.
As such, when we say "linear map", we can think of it as a generalization of matrix multiplication that also makes sense in infinite dimensional spaces like Hilbert spaces: calling such infinite dimensional maps "matrices" would be stretching it a bit, since we would need to specify infinitely many rows and columns.
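As a concrete sketch of the finite dimensional case (a NumPy illustration of mine, not from any particular library API): the matrix of a linear map just has the images of the basis vectors as its columns.

    import numpy as np

    # Hypothetical linear map f : R^2 -> R^2, defined only by its
    # action on the two basis vectors.
    f_e1 = np.array([2.0, 1.0])   # f(e1)
    f_e2 = np.array([0.0, 3.0])   # f(e2)

    # The matrix of f has those images as its columns.
    M = np.column_stack([f_e1, f_e2])

    # Applying f to any vector is then just matrix multiplication,
    # which by linearity agrees with expanding over the basis.
    v = np.array([5.0, -1.0])
    assert np.allclose(M @ v, 5.0 * f_e1 - 1.0 * f_e2)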
The prototypical building block of infinite dimensional linear maps is the derivative. In that case, the vectors being operated upon are functions, which cannot in general be specified by a finite number of parameters, e.g. a function $f : \mathbb{R} \to \mathbb{R}$ is specified by its value at every single real point.
For example, the left side of the time-independent Schrödinger equation is a linear map. And the time-independent Schrödinger equation can be seen as an eigenvalue problem.
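A minimal numerical sketch of this viewpoint (my own, using a standard finite difference discretization): once functions are truncated to their values on a grid, the second derivative becomes an ordinary matrix, and the eigenvalue problem becomes a plain matrix eigenvalue problem.

    import numpy as np

    # Discretize functions on (0, 1) with zero boundary conditions
    # into n samples, so d^2/dx^2 becomes an ordinary n x n matrix.
    n = 100
    h = 1.0 / (n + 1)
    D2 = (np.diag(-2.0 * np.ones(n))
          + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2

    # Eigenvalue problem for -d^2/dx^2 (a particle in a box, up to
    # constants): the smallest eigenvalues approach (k * pi)^2.
    eigvals = np.sort(np.linalg.eigvalsh(-D2))
    print(eigvals[:3])                          # numerical
    print([(k * np.pi)**2 for k in (1, 2, 3)])  # exact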
A form is a function from a vector space to elements of the underlying field of the vector space.
A Linear map where the image is the underlying field of the vector space, e.g. $f : \mathbb{R}^n \to \mathbb{R}$.
The set of all linear forms over a vector space forms another vector space called the dual space.
For the typical case of a linear form over $\mathbb{R}^n$, the form can be seen just as a row vector with $n$ elements, the full form being specified by its value on each of the basis vectors.
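Quick NumPy sketch (my own example numbers): applying the form is then just a row times column product.

    import numpy as np

    # A linear form f : R^3 -> R is fully specified by its value
    # on each basis vector; those three scalars form a row vector.
    f = np.array([2.0, -1.0, 0.5])

    v = np.array([1.0, 4.0, 2.0])
    # Applying the form is just the dot product:
    assert np.isclose(f @ v, 2.0 * 1.0 - 1.0 * 4.0 + 0.5 * 2.0)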
The dual space of a vector space $V$, sometimes denoted $V^*$, is the vector space of all linear forms over $V$, with the obvious addition and scalar multiplication operations defined.
Since a linear form is completely determined by how it acts on a basis, and since for each basis element it is specified by a single scalar, at least in finite dimension the dimension of the dual space is the same as that of $V$ itself. The two are therefore isomorphic, because all vector spaces of the same dimension over a given field are isomorphic, and so the dual is quite a boring concept in the context of finite dimension.
One place where duals do differ from non-duals however is when dealing with tensors, because dual vectors transform differently than vectors from the base space $V$.
Dual vectors are the members of a dual space.
In the context of tensors, we use raised indices to refer to members of the dual basis vs the underlying basis:
$$e^1, e^2, e^3 \quad \text{vs} \quad e_1, e_2, e_3$$
The dual basis vectors are defined to "pick the corresponding coordinate" out of elements of $V$. E.g., in $\mathbb{R}^3$:
$$e^1(x, y, z) = x \quad e^2(x, y, z) = y \quad e^3(x, y, z) = z$$
By expanding into the basis, we can put this more succinctly with the Kronecker delta as:
$$e^i(e_j) = \delta^i_j$$
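One concrete way to compute a dual basis (a sketch of mine, not from the original text): put the basis vectors as the columns of a matrix; the rows of its inverse are then the dual basis, because the defining property of the inverse is exactly the Kronecker delta relation above.

    import numpy as np

    # Columns of B are a (non-orthonormal) basis e_1, e_2 of R^2.
    B = np.array([[1.0, 1.0],
                  [0.0, 2.0]])

    # Rows of inv(B) are the dual basis e^1, e^2:
    # inv(B) @ B = I is exactly e^i(e_j) = delta^i_j.
    B_dual = np.linalg.inv(B)
    assert np.allclose(B_dual @ B, np.eye(2))

    # e^1 "picks out" the first coordinate of a vector expanded in B:
    v = 3.0 * B[:, 0] + 5.0 * B[:, 1]
    assert np.isclose(B_dual[0] @ v, 3.0)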
Note that in Einstein notation, the components of a dual vector have lower indices. This works well with the upper indices of the dual basis vectors, allowing us to write a dual vector $f$ as:
$$f = f_i e^i$$
In the context of quantum mechanics, the bra notation $\langle \psi |$ is also used for dual vectors.
We define it as a linear map where the domain is the same as the codomain, i.e. an endofunction.
Examples: any square matrix acting on $\mathbb{R}^n$; the derivative, over a suitable space of functions such as the infinitely differentiable ones.
Given a linear operator $A$ over a space $S$ that has an inner product defined, we define the adjoint operator $A^\dagger$ (the $\dagger$ symbol is called "dagger") as the unique operator that satisfies:
$$\langle Av, w \rangle = \langle v, A^\dagger w \rangle \quad \forall v, w \in S$$
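In finite dimension with the standard inner product, the adjoint is just the conjugate transpose, which we can sanity check numerically (my sketch; note that np.vdot conjugates its first argument):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

    # For the standard inner product on C^n, the adjoint is the
    # conjugate transpose:
    A_dag = A.conj().T

    # <A v, w> == <v, A^dagger w>
    assert np.isclose(np.vdot(A @ v, w), np.vdot(v, A_dag @ w))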
Linear map of two variables.
More formally, given 3 vector spaces $X$, $Y$, $Z$ over a single field, a bilinear map is a function:
$$f : X \times Y \to Z$$
that is linear in each of the two arguments from $X$ and $Y$ separately, i.e.:
$$f(\alpha x_1 + \beta x_2, y) = \alpha f(x_1, y) + \beta f(x_2, y)$$
$$f(x, \alpha y_1 + \beta y_2) = \alpha f(x, y_1) + \beta f(x, y_2)$$
Note that the definition only makes sense if all three vector spaces are over the same field, because the linearity conditions mix scalars with vectors from each of them.
The most important example by far is the dot product on $\mathbb{R}^n$, which is more specifically also a symmetric bilinear form.
Analogous to a linear form, a bilinear form is a Bilinear map where the image is the underlying field of the vector space, e.g. $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$.
Some definitions require both of the input spaces to be the same, e.g. $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, but it doesn't make much difference in general.
The most important example of a bilinear form is the dot product. It is only defined if both the input spaces are the same.
As usual, it is useful to think about what a bilinear form looks like in terms of vectors and matrices.
Unlike a linear form, which was represented by a single vector, the bilinear form has two inputs and is therefore represented by a matrix $M$, which encodes the value of the form for each possible pair of basis vectors.
In terms of that matrix, the form is then given by:
$$B(x, y) = x^T M y$$
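A quick NumPy sketch (example values of mine):

    import numpy as np

    # M encodes the bilinear form: M[i, j] = B(e_i, e_j).
    M = np.array([[1.0, 2.0],
                  [0.0, 3.0]])

    x = np.array([1.0, 2.0])
    y = np.array([3.0, 4.0])

    # B(x, y) = x^T M y:
    B_xy = x @ M @ y
    # Same value expanded over all pairs of basis vectors, by bilinearity:
    expanded = sum(x[i] * M[i, j] * y[j]
                   for i in range(2) for j in range(2))
    assert np.isclose(B_xy, expanded)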
If $C$ is the change of basis matrix, and in the old basis the bilinear form looked like:
$$B(x, y) = x^T M y$$
then the matrix in the new basis is:
$$M' = C^T M C$$
Sylvester's law of inertia then tells us that the number of positive, negative and 0 eigenvalues of both of those matrices is the same.
Proof: the value of a given bilinear form cannot change due to a change of basis, since the bilinear form is just a function, and does not depend on the choice of basis. The only thing that changes is the matrix representation of the form. Therefore, we must have:
$$x^T M y = x'^T M' y'$$
where the old coordinates are expressed in terms of the new ones as $x = C x'$ and $y = C y'$, and so since:
$$x^T M y = (C x')^T M (C y') = x'^T (C^T M C) y'$$
we conclude that $M' = C^T M C$.
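We can sanity check both the transformation rule and the law numerically (my sketch, with arbitrary example matrices):

    import numpy as np

    rng = np.random.default_rng(42)

    # A symmetric matrix with one positive, one negative and one
    # zero eigenvalue.
    M = np.diag([2.0, -1.0, 0.0])

    # Random invertible change of basis C; the new matrix is C^T M C.
    C = rng.standard_normal((3, 3))
    M_new = C.T @ M @ C

    def signature(A):
        # Signs of the eigenvalues, rounded to kill numerical noise.
        return sorted(np.sign(np.round(np.linalg.eigvalsh(A), 10)))

    # Sylvester's law of inertia: the eigenvalue signs are preserved,
    # even though the eigenvalues themselves change.
    assert signature(M) == signature(M_new)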
See form.
Analogous to a linear form, a multilinear form is a Multilinear map where the image is the underlying field of the vector space, e.g. $f : \mathbb{R}^{m_1} \times \mathbb{R}^{m_2} \times \ldots \times \mathbb{R}^{m_n} \to \mathbb{R}$.
Subcase of symmetric multilinear map:
$$B(x, y) = B(y, x)$$
Requires the two inputs $x$ and $y$ to be in the same vector space of course.
The most important example is the dot product, which is also a positive definite symmetric bilinear form.
Like the matrix representation of a bilinear form, it is a matrix, but now the matrix has to be a symmetric matrix.
We can then immediately see that if the matrix is symmetric, then so is the form. We have:
$$B(x, y) = x^T M y$$
But because $x^T M y$ is a scalar, it equals its own transpose:
$$x^T M y = (x^T M y)^T = y^T M^T x$$
and so, since $M = M^T$:
$$B(x, y) = y^T M x = B(y, x)$$
The prototypical example of it is the complex dot product.
Note that this form is neither strictly symmetric, since it satisfies:
$$\langle x, y \rangle = \overline{\langle y, x \rangle}$$
where the overbar indicates the complex conjugate, nor linear for complex scalar multiplication on the second argument, since:
$$\langle x, \alpha y \rangle = \overline{\alpha} \langle x, y \rangle$$
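Both properties are easy to check numerically (my sketch, defining the dot product with the conjugate on the second argument as above):

    import numpy as np

    # Complex dot product, conjugating the second argument
    # (the convention used in the text above).
    def cdot(x, y):
        return np.sum(x * np.conj(y))

    x = np.array([1 + 2j, 3 - 1j])
    y = np.array([2 - 1j, 0 + 1j])
    alpha = 2 + 3j

    # Conjugate symmetry rather than plain symmetry:
    assert np.isclose(cdot(x, y), np.conj(cdot(y, x)))

    # Conjugate-linear rather than linear in the second argument:
    assert np.isclose(cdot(x, alpha * y), np.conj(alpha) * cdot(x, y))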
Multivariate polynomial where each term has degree 2, e.g.:
$$x^2 + 4xy + y^2$$
is a quadratic form because each term has degree 2: $x^2$, $4xy$ and $y^2$; but e.g.:
$$x^2 + x y^2 + y^2$$
is not, because the term $x y^2$ has degree 3.
More generally, for any number of variables it can be written as:
$$\sum_{i,j} a_{ij} x_i x_j$$
There is a 1-to-1 relationship between quadratic forms and symmetric bilinear forms. In matrix representation, this can be written as:
$$q(\vec{x}) = \vec{x}^T M \vec{x}$$
where $\vec{x}$ contains each of the variables of the form, e.g. for 2 variables:
$$\vec{x} = \begin{pmatrix} x \\ y \end{pmatrix}$$
Strictly speaking, the associated bilinear form would not need to be a symmetric bilinear form, at least for the real numbers or complex numbers, which are commutative. E.g.:
$$\begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} 0 & 4 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 4xy$$
But that same quadratic form could also be written with a symmetric matrix as:
$$\begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 4xy$$
so why not I guess, it's simpler/more restricted.
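A numerical check of that equivalence (my sketch, using the $4xy$ example above):

    import numpy as np

    x = np.array([1.5, -2.0])

    # Non-symmetric and symmetrized matrices for the quadratic form 4xy:
    M_asym = np.array([[0.0, 4.0],
                       [0.0, 0.0]])
    M_sym = (M_asym + M_asym.T) / 2.0   # [[0, 2], [2, 0]]

    # Both matrices give the same quadratic form x^T M x:
    assert np.isclose(x @ M_asym @ x, x @ M_sym @ x)
    assert np.isclose(x @ M_sym @ x, 4.0 * x[0] * x[1])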
Symmetric bilinear form that is also positive definite, i.e.:
$$B(x, x) > 0 \quad \forall x \neq 0$$
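One practical way to test positive definiteness (my sketch): check that all eigenvalues are positive, or equivalently that a Cholesky decomposition succeeds.

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3

    # All eigenvalues positive <=> x^T M x > 0 for all x != 0:
    assert np.all(np.linalg.eigvalsh(M) > 0)

    # Equivalent test: Cholesky only succeeds for (Hermitian)
    # positive definite matrices.
    L = np.linalg.cholesky(M)
    assert np.allclose(L @ L.T, M)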
Subcase of antisymmetric multilinear map:
$$B(x, y) = -B(y, x)$$
Same value if you swap any two input arguments.
Changes sign if you swap any two input arguments.