Name origin: likely because it "determines" whether a matrix is invertible or not, as a matrix is invertible if and only if its determinant is not zero.
The inverse does not exist for every matrix, only for invertible matrices. When it does exist, it is denoted:
$$A^{-1}$$
The set of all invertible matrices forms a group: the general linear group, with matrix multiplication as the group operation. Non-invertible matrices don't form a group because they lack inverses.
The inverse distributes over a matrix product, but it reverses the order of the matrix multiplication:
$$(AB)^{-1} = B^{-1} A^{-1}$$
The transpose and the matrix inverse commute:
$$(A^T)^{-1} = (A^{-1})^T$$
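Both identities are easy to check numerically; a minimal NumPy sketch with arbitrary (almost surely invertible) random matrices:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

# (AB)^-1 == B^-1 A^-1: the inverse reverses the order of the product.
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# (A^T)^-1 == (A^-1)^T: transpose and inverse commute.
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)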
Since a matrix $M$ can be seen as a linear map $x \mapsto Mx$, the product of two matrices can be seen as the composition of two linear maps:
$$(AB)x = A(Bx)$$
One cool thing about linear functions is that we can pre-calculate the product $AB$ only once to obtain a new matrix, and then we don't have to do both multiplications separately each time we apply the composed map.
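For instance, a small NumPy sketch (sizes and values are arbitrary illustrations): the composed matrix $AB$ is computed once and can then be reused for every input vector:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((1000, 1000))
B = rng.random((1000, 1000))
x = rng.random(1000)

AB = A @ B  # pre-calculate the composition only once

# Applying B then A gives the same result as applying the single matrix AB.
assert np.allclose(A @ (B @ x), AB @ x)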
No 2x2 examples please. I'm talking about large matrices that would be used in supercomputers.
TODO application.
TODO speedup over algorithm for general matrices.
www.studentclustercompetition.us/ comments:
The HPCG benchmark uses a preconditioned conjugate gradient (PCG) algorithm to measure the performance of HPC platforms with respect to frequently observed but challenging patterns of computing, communication, and memory access. While HPL provides an optimistic performance target for applications, HPCG can be considered as a lower bound on performance. Many of the top 500 supercomputers also provide their HPCG performance as a reference.
The term GEMM (GEneral Matrix Multiply) comes from BLAS, and it has pretty much stuck.
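As a sketch of what calling a GEMM routine looks like from Python through SciPy's BLAS bindings (assuming SciPy is available; dgemm computes alpha * a @ b, plus beta * c when c is given):

import numpy as np
from scipy.linalg.blas import dgemm  # double precision GEneral Matrix Multiply

a = np.random.rand(512, 512)
b = np.random.rand(512, 512)

# c = alpha * a @ b, dispatched to the underlying optimized BLAS library.
c = dgemm(alpha=1.0, a=a, b=b)
assert np.allclose(c, a @ b)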
Unlike the simple case of a matrix, in infinite dimensional vector spaces, the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.
Every diagonalizable matrix $M$ can be written as:
$$M = P D P^{-1}$$
where:
  • $D$ is a diagonal matrix whose entries are the eigenvalues of $M$
  • the columns of $P$ are the corresponding eigenvectors
Note therefore that this decomposition is unique up to reordering the eigenvectors. We could fix a canonical form by sorting the eigenvalues from smallest to largest when they are real numbers.
Intuitively, note that this is just the change of basis formula, and so:
  • $P^{-1}$ changes basis to align with the eigenvectors
  • $D$ simply multiplies each eigenvector coordinate by its eigenvalue
  • $P$ changes back to the original basis
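A minimal NumPy sketch of the decomposition, on an arbitrary diagonalizable matrix:

import numpy as np

M = np.array([[2.0, 1.0], [0.0, 3.0]])

# Columns of P are the eigenvectors, D has the eigenvalues on its diagonal.
eigenvalues, P = np.linalg.eig(M)
D = np.diag(eigenvalues)

# M == P D P^-1
assert np.allclose(M, P @ D @ np.linalg.inv(P))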
The general result from the eigendecomposition of a matrix:
$$M = P D P^{-1}$$
becomes:
$$M = O D O^T$$
where $O$ is an orthogonal matrix, and therefore has $O^{-1} = O^T$.
The theorem states that the number of 0s, 1s and -1s in the metric signature is the same for any two symmetric matrices that are congruent.
For example, consider:
$$A = \begin{bmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{bmatrix}$$
The eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 4$, and the associated eigenvectors are:
$$v_1 = \begin{bmatrix} -\sqrt{2} \\ 1 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ \sqrt{2} \end{bmatrix}$$
SymPy code:
from sympy import Matrix, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
print(A.eigenvects())
and from the eigendecomposition of a real symmetric matrix we know that:
$$A = O D O^T$$
where $D = \mathrm{diag}(1, 4)$ and the columns of $O$ are the normalized eigenvectors.
Now, instead of $O$, we could use $OE$, where $E$ is an arbitrary diagonal matrix with non-zero entries, of type:
$$E = \begin{bmatrix} e_1 & 0 \\ 0 & e_2 \end{bmatrix}$$
With this, we reach a new matrix $B$:
$$B = (OE) D (OE)^T = O (E D E) O^T = O \begin{bmatrix} e_1^2 \lambda_1 & 0 \\ 0 & e_2^2 \lambda_2 \end{bmatrix} O^T$$
using $E^T = E$ since $E$ is diagonal.
Therefore, with this congruence, we are able to multiply the eigenvalues of $A$ by the arbitrary positive numbers $e_1^2$ and $e_2^2$. Since we are multiplying by two arbitrary positive numbers, we cannot change the signs of the original eigenvalues, and so the metric signature is maintained; apart from that constraint, the eigenvalues can be rescaled to any positive value.
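We can verify this numerically; a NumPy sketch, where $e_1 = 0.5$ and $e_2 = 10$ are arbitrary choices:

import numpy as np

A = np.array([[2.0, np.sqrt(2.0)], [np.sqrt(2.0), 3.0]])

# Orthogonal eigendecomposition A = O D O^T (eigenvalues 1 and 4).
eigenvalues, O = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# Arbitrary non-zero diagonal scaling E.
E = np.diag([0.5, 10.0])
B = (O @ E) @ D @ (O @ E).T

# The eigenvalues of A got multiplied by e1^2 and e2^2: still all positive,
# so the metric signature is unchanged.
print(np.linalg.eigvalsh(B))   # approximately [0.25, 400.0]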
Note that the matrix congruence relation looks a bit like the eigendecomposition of a matrix:
$$B = S A S^T \quad \text{vs} \quad M = P D P^{-1}$$
but note that $A$ does not have to contain eigenvalues, unlike $D$ in the eigendecomposition of a matrix. This is because $S$ here is not fixed to having eigenvectors in its columns.
But because the matrix is symmetric, we could always choose $S$ to actually diagonalize it, as mentioned at the eigendecomposition of a real symmetric matrix. Therefore, the metric signature can be read directly from the eigenvalues.
Also, because $D$ is a diagonal matrix, and thus symmetric, it must be that:
$$(S D S^T)^T = S D^T S^T = S D S^T$$
What this represents is a general change of basis that keeps the matrix a symmetric matrix.
Two symmetric matrices $A$ and $B$ are defined to be congruent if there exists an $S$ in $GL(n)$ such that:
$$B = S A S^T$$
So, by taking $x = S^T x'$ and $y = S^T y'$ in the bilinear form $x^T A y$, we get $x'^T B y'$, and we understand that two matrices being congruent means that they can both correspond to the same bilinear form in different bases.
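A quick numerical sketch of that statement (the matrices and vectors are arbitrary illustrations):

import numpy as np

A = np.array([[2.0, np.sqrt(2.0)], [np.sqrt(2.0), 3.0]])
S = np.array([[1.0, 2.0], [0.0, 1.0]])   # any invertible S
B = S @ A @ S.T                          # B is congruent to A

x_new = np.array([1.0, -2.0])
y_new = np.array([3.0, 0.5])

# The form of B in the new coordinates equals the form of A
# in the old coordinates x = S^T x_new, y = S^T y_new.
assert np.isclose(x_new @ B @ y_new, (S.T @ x_new) @ A @ (S.T @ y_new))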
This is the possibly infinite dimensional version of a Hermitian matrix, since linear operators are the possibly infinite dimensional version of matrices.
There's a catch though: we no longer have explicit matrix indices here in general. The generalized definition is given at: en.wikipedia.org/w/index.php?title=Hermitian_adjoint&oldid=1032475701#Definition_for_bounded_operators_between_Hilbert_spaces
A good definition is that a sparse matrix has a number of non-zero entries proportional to its number of rows, i.e. $O(n)$ non-zero entries, which in Big O notation is less than the $O(n^2)$ entries of a general matrix. Of course, this only makes sense when generalizing to larger and larger matrices; otherwise we could take the constant of proportionality very high for one specific matrix.
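For example, a tridiagonal matrix has about $3n$ non-zero entries out of $n^2$, and sparse matrix libraries store only the non-zeros; a sketch assuming SciPy:

import numpy as np
from scipy.sparse import diags

n = 10000
# Tridiagonal matrix: roughly 3n stored non-zeros instead of n^2 dense entries.
A = diags(
    [np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
    offsets=[-1, 0, 1],
    format='csr',
)
print(A.nnz)   # 29998, i.e. O(n)
print(n * n)   # 100000000 entries a dense representation would need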
The matrix ring of degree n is the set of all n-by-n square matrices together with the usual vector space and matrix multiplication operations.
This set forms a ring.
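For instance, the ring operations are just the usual matrix operations, and the ring is not commutative for n greater than 1; a tiny NumPy check:

import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

# Addition and multiplication of n-by-n matrices stay n-by-n (closure),
# but multiplication does not commute: A @ B != B @ A.
print(A + B)   # [[0, 1], [1, 0]]
print(A @ B)   # [[1, 0], [0, 0]]
print(B @ A)   # [[0, 0], [0, 1]]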
Members of the orthogonal group.
Applications:
A matrix that equals its own transpose:
$$M = M^T$$
The definition implies that this is also a symmetric matrix.
The dot product corresponds to a positive definite matrix (the identity matrix), and so we see that positive definite matrices will have an important link to familiar geometry.
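A small numeric sketch of that correspondence, using NumPy:

import numpy as np

x = np.array([3.0, -1.0, 2.0])
I = np.eye(3)

# The bilinear form of the identity matrix is exactly the dot product,
# and x . x = |x|^2 > 0 for every non-zero x, i.e. positive definiteness.
assert np.isclose(x @ I @ x, np.dot(x, x))
assert x @ I @ x > 0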
WTF is a skew? "Antisymmetric" is just such a better name! And it also appears in other definitions such as antisymmetric multilinear map.