A matrix is a two-dimensional rectangular array of expressions arranged in rows and columns
- Binary Operations
- Unary Operations
- Row Echelon Form
- Vector Spaces
- Similarity Transformations
- Special Matrices
An $m \times n$ matrix has $m$ rows and $n$ columns. Given a matrix $A$, the notation $A_{ij}$ refers to the element of $A$ in the $i$th row and the $j$th column, counting from the top and from the left, respectively.
Scaling (Scalar Multiplication)
Given constant $\alpha$, every entry is scaled: $(\alpha A)_{ij} = \alpha A_{ij}$
Vector Dot Product (or Inner Product)
Denoted $\mathbf{u} \cdot \mathbf{v}$ or $\langle \mathbf{u}, \mathbf{v} \rangle$ given two vectors $\mathbf{u}$ and $\mathbf{v}$: $\mathbf{u} \cdot \mathbf{v} = \sum_{i} u_i v_i$.
Note that the vectors must have the same number of rows, and that the result of a dot product is a scalar.
Two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if $\mathbf{u} \cdot \mathbf{v} = 0$.
The vector dot product of $\mathbf{u}$ and $\mathbf{v}$ is equivalent to the matrix product of $\mathbf{u}^\top$ and $\mathbf{v}$: $\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^\top \mathbf{v}$
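The equivalence can be checked with a quick NumPy sketch (the vector values here are arbitrary examples, not from the text):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

# Dot product as a sum of element-wise products: 1*4 + 2*5 + 3*6 = 32
dot = np.dot(u, v)

# Equivalent to the matrix product of u transposed (a 1x3 row) with v (a 3x1 column)
dot_as_matmul = u.reshape(1, 3) @ v.reshape(3, 1)
```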
Vector Outer Product
Given two vectors $\mathbf{u}$ and $\mathbf{v}$ with the same number of elements, the outer product between them is $\mathbf{u} \mathbf{v}^\top$, where the result is always a square matrix: $(\mathbf{u} \mathbf{v}^\top)_{ij} = u_i v_j$
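A minimal NumPy sketch of the outer product (example values are my own):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

# Outer product u v^T: entry (i, j) is u[i] * v[j], so two 3-vectors give a 3x3 matrix
outer = np.outer(u, v)
```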
Vector Product (or Cross Product)
Note that both vectors must be three-dimensional, and that the result of a cross product is another three-dimensional vector.
The cross product is not commutative; it is anticommutative: $\mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u})$.
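The anticommutativity is easy to verify in NumPy (using the standard basis vectors as an example):

```python
import numpy as np

u = np.array([1, 0, 0])
v = np.array([0, 1, 0])

uv = np.cross(u, v)  # cross product of the x and y axes gives the z axis
vu = np.cross(v, u)  # reversing the operands flips the sign
```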
Given matrix $A$ and vector $\mathbf{x}$, the number of columns in $A$ must equal the number of rows in $\mathbf{x}$:
The resulting matrix has the same number of rows as $A$, but only 1 column.
Note that the product $A\mathbf{x}$ is a linear combination of the columns of $A$: $A\mathbf{x} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \dots + x_n \mathbf{a}_n$
Notice that given matrices $A$ and $B$ and vector $\mathbf{x}$, $(AB)\mathbf{x}$ is equivalent to $A(B\mathbf{x})$.
Given matrices $A$ and $B$, the number of columns in $A$ must match the number of rows in $B$.
Note that matrix multiplication is associative: $(AB)C = A(BC)$, but it's not commutative: $AB \neq BA$ in general.
Multiplying a $2 \times 3$ matrix with a $3 \times 2$ matrix looks like this: $\begin{pmatrix} a & b & c \\ d & e & f \end{pmatrix} \begin{pmatrix} g & h \\ i & j \\ k & l \end{pmatrix} = \begin{pmatrix} ag + bi + ck & ah + bj + cl \\ dg + ei + fk & dh + ej + fl \end{pmatrix}$
Given constant $\alpha$: $(\alpha A)_{ij} = \alpha A_{ij}$
Dividing $A$ by $\alpha$ is the same as multiplying $A$ by the inverse of $\alpha$: $\frac{A}{\alpha} = \frac{1}{\alpha} A$.
The trace of a square matrix $A$ is the sum of its diagonal, and is defined as $\operatorname{tr}(A) = \sum_{i} A_{ii}$. For example: $\operatorname{tr}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = 1 + 4 = 5$
Given constant $\alpha$, then $\operatorname{tr}(\alpha A) = \alpha \operatorname{tr}(A)$. The trace distributes over addition and is invariant under cyclic permutations of a product: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$, and $\operatorname{tr}(AB) = \operatorname{tr}(BA)$. Also $\operatorname{tr}(A) = \operatorname{tr}(A^\top)$.
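These trace identities can be spot-checked in NumPy (the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# tr(A) = 1 + 4 = 5; the identities below hold for any square A and B
tr_A = np.trace(A)
```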
Vector Norm (length)
Given vector $\mathbf{v}$, the norm of $\mathbf{v}$ is its length: $\|\mathbf{v}\| = \sqrt{\sum_i v_i^2}$, which is also equal to the square root of the dot product of $\mathbf{v}$ with itself: $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}$.
The unit vector of vector $\mathbf{v}$ is $\mathbf{v}$ divided by its norm: $\hat{\mathbf{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}$.
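A small NumPy check of both definitions (using the classic 3-4-5 example):

```python
import numpy as np

v = np.array([3.0, 4.0])

norm = np.linalg.norm(v)       # sqrt(3^2 + 4^2) = 5.0
unit = v / norm                # unit vector in the direction of v

# The norm is also the square root of the dot product of v with itself
norm_via_dot = np.sqrt(v @ v)
```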
The minor of an entry $A_{ij}$ of a square matrix $A$ is the determinant of the square submatrix of $A$ when the $i$th row and $j$th column (indexed from 1) are removed, and is denoted $M_{ij}$. For example, given $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, the minor of $A_{11}$ is $M_{11} = \det(4) = 4$.
The cofactor of an entry $A_{ij}$ of a square matrix $A$ is denoted $C_{ij}$ or $\operatorname{cof}(A_{ij})$, and is defined as the entry’s minor with alternating sign depending on the indexes: $C_{ij} = (-1)^{i+j} M_{ij}$.
The adjugate matrix of matrix $A$, denoted $\operatorname{adj}(A)$, is the transpose of the matrix where every entry of $A$ is replaced by its cofactor: $\operatorname{adj}(A) = C^\top$.
For example, for a $2 \times 2$ matrix: $\operatorname{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$
The determinant of a square matrix $A$ is a scalar denoted $\det(A)$ or $|A|$.
The determinant of a $1 \times 1$ matrix is the element itself: $\det(a) = a$. Given a $2 \times 2$ matrix: $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$. For $3 \times 3$ and larger matrices, the determinant is defined recursively as a cofactor expansion along the first row: $\det(A) = \sum_{j=1}^{n} (-1)^{1+j} A_{1j} M_{1j}$, where $n$ is the number of columns in $A$.
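The recursive definition can be sketched directly in Python; this is a teaching sketch of cofactor expansion, not how determinants are computed in practice (libraries use factorizations, which are far faster):

```python
import numpy as np

def det_recursive(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                # base case: 1x1 determinant
    total = 0.0
    for j in range(n):
        # Minor M_{1,j+1}: remove the first row and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # (-1)**j matches the (-1)^(1+j) sign with 1-based indices
        total += (-1) ** j * A[0, j] * det_recursive(minor)
    return total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
M = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 1.0]])
```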
The following laws hold given two square matrices $A$ and $B$:
- $\det(AB) = \det(A) \det(B)$
- $\det(A^\top) = \det(A)$
- $\det(\alpha A) = \alpha^n \det(A)$, where $n$ is the number of rows in $A$
The rows of a matrix $A$ are linearly independent if $\det(A) \neq 0$. We can say $\det(A) = 0$ if any of the rows of $A$ is all zeroes. Also, matrix $A$ is not invertible if $\det(A) = 0$. If $\det(A) = 0$ then $A$ is rank deficient, and full rank otherwise.
Row operations affect the determinant as follows:
- Adding a multiple of one row to another row doesn’t change the determinant of the matrix
- Swapping rows changes the sign of the determinant
- Multiplying a row by a constant is equal to multiplying the determinant by the same constant
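These three rules can be verified numerically (the matrix and constants below are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
d = np.linalg.det(A)                  # det(A) = -2

B = A.copy()
B[1] += 10 * B[0]                     # add a multiple of row 1 to row 2: det unchanged

C = A[[1, 0]]                         # swap the two rows: det changes sign

D = A.copy()
D[0] *= 5                             # scale row 1 by 5: det is scaled by 5
```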
Considering the RREF of a square matrix $A$: $\operatorname{rref}(A) = I$ implies that $\det(A) \neq 0$. Also, if $\det(A) \neq 0$, then $\det(\operatorname{rref}(A)) \neq 0$, and conversely, if $\det(A) = 0$, then $\det(\operatorname{rref}(A)) = 0$.
A matrix $B$ is the inverse of a square matrix $A$, denoted $A^{-1}$, if either $AB = I$ or $BA = I$ (for square matrices, either one implies the other).
The Invertible Matrix Theorem states that for any $n \times n$ square matrix $A$, the following statements are either all true or all false:
- $A$ is invertible
- $A^\top$ is invertible
- $A\mathbf{x} = \mathbf{b}$ has exactly one solution for any $n$-dimensional vector $\mathbf{b}$
- The null space of $A$ only contains the zero vector: $\mathcal{N}(A) = \{\mathbf{0}\}$
- $A\mathbf{x} = \mathbf{0}$ only has the solution $\mathbf{x} = \mathbf{0}$
- The rank of $A$ is $n$: $\operatorname{rank}(A) = n$
- The determinant of $A$ is non-zero: $\det(A) \neq 0$
- The RREF of $A$ is the $n$-dimensional identity matrix $I_n$
- The columns of are linearly independent
- The rows of are linearly independent
The following laws hold, given two invertible matrices $A$ and $B$:
- $(AB)^{-1} = B^{-1} A^{-1}$
- $(A^{-1})^{-1} = A$
- $(A^\top)^{-1} = (A^{-1})^\top$
We can calculate the inverse of a square matrix $A$ using its adjugate and determinant as follows: $A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$
For example, given $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, we know its adjugate is $\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}$ and its determinant is $-2$, so $A^{-1} = -\frac{1}{2} \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$.
Which we can check as: $A A^{-1} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
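The adjugate formula can be sketched in NumPy for the $2 \times 2$ case (the matrix values are an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# For a 2x2 matrix, the adjugate swaps the diagonal entries
# and negates the off-diagonal entries
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])
det = np.linalg.det(A)       # -2
A_inv = adj / det            # inverse via adjugate and determinant
```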
Using Gauss-Jordan Elimination
We can calculate the inverse of an $n \times n$ square matrix $A$ by creating an $n \times 2n$ matrix that contains $A$ at the left and $I$ at the right: $[\,A \mid I\,]$
Given $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, the matrix is then $\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right)$.
Calculate the RREF of the matrix: $\left(\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & 3/2 & -1/2 \end{array}\right)$
The left side of the RREF should be the identity matrix (otherwise the matrix is not invertible) and the right side contains the inverse: $A^{-1} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$
Which we can check as: $A A^{-1} = I$
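The whole procedure can be sketched as a small function; this is a minimal teaching implementation with partial pivoting, not production code:

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert a square matrix by row-reducing the augmented matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # build [A | I]
    for i in range(n):
        # Partial pivot: bring the largest entry of column i into the pivot row
        pivot = i + np.argmax(np.abs(M[i:, i]))
        if np.isclose(M[pivot, i], 0.0):
            raise ValueError("matrix is not invertible")
        M[[i, pivot]] = M[[pivot, i]]             # swap rows
        M[i] /= M[i, i]                           # scale pivot row to a leading 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]            # zero out the rest of column i
    return M[:, n:]                               # right half now holds A^-1

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = inverse_gauss_jordan(A)
```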
The matrix transpose flips a matrix over its diagonal, and is denoted $A^\top$ for a matrix $A$.
- Given a matrix:
- Given a matrix:
- Given a matrix:
- Given a square matrix:
- Given a matrix:
The following laws hold, given matrices $A$ and $B$:
- $(A^\top)^\top = A$
- $(A + B)^\top = A^\top + B^\top$
- $(AB)^\top = B^\top A^\top$
The rank of a matrix $A$, denoted $\operatorname{rank}(A)$, is a scalar that equals the number of pivots in the RREF of $A$. More formally, $\operatorname{rank}(A)$ is the dimension of either the row or column space of $A$: $\operatorname{rank}(A) = \dim(\mathcal{R}(A)) = \dim(\mathcal{C}(A))$. Basically, the rank describes the number of linearly independent rows or columns in a matrix.
The nullity of a matrix $A$, denoted $\operatorname{nullity}(A)$, is the number of linearly independent vectors in the null space of $A$: $\operatorname{nullity}(A) = \dim(\mathcal{N}(A))$.
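Rank and nullity are linked by the rank-nullity theorem ($\operatorname{rank}(A) + \operatorname{nullity}(A) = n$ for an $m \times n$ matrix), which gives an easy way to compute the nullity in NumPy (the matrix is an arbitrary rank-deficient example):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])   # third row = 2 * second row - first row, so rank is 2

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank-nullity: nullity = n - rank
```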
Row Echelon Form
The first non-zero element of a matrix row is the leading coefficient or pivot of the row. A matrix is in row echelon form (REF) if:
- The leading coefficients of all rows are at the right of the leading coefficients of the rows above
- All rows containing all zeroes are below the rows with leading coefficients
For example, $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$ is in row echelon form.
The process of bringing a matrix to row echelon form is called Gaussian Elimination. Starting with the first row:
- Obtain a leading coefficient 1 in the row by either:
- Swapping the current row with any of the rows below
- Dividing or multiplying the row vector by a constant
- Subtract or add the row one or more times to the rows below to zero out the leading coefficient column in all the rows below
- Repeat the process with the row below
For example, given $\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$, the leading coefficient of the first row is already 1, so we can move on. The value below the first leading coefficient is 4, so we can multiply the first row by 4 and subtract it from the second row: $(4, 5, 6) - 4(1, 2, 3) = (0, -3, -6)$, so the matrix is now $\begin{pmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 7 & 8 & 9 \end{pmatrix}$. The leading coefficient of the third row is 7, so we can multiply the first row by 7 and subtract it from the third row: $(7, 8, 9) - 7(1, 2, 3) = (0, -6, -12)$, so the matrix is now: $\begin{pmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 0 & -6 & -12 \end{pmatrix}$. The entries below the first row’s leading coefficient are zero, so we can move on to the second row, which we can divide by -3 to make its leading coefficient 1: $(0, -3, -6) / (-3) = (0, 1, 2)$, so the matrix is now: $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & -6 & -12 \end{pmatrix}$. The coefficient below the second row’s leading coefficient is -6, so we can add the second row multiplied by 6 to it: $(0, -6, -12) + 6(0, 1, 2) = (0, 0, 0)$, so the matrix is now: $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$ and is in row echelon form as the third row is all zeroes.
Reduced Row Echelon Form
A matrix is in reduced row echelon form (RREF) if:
- It is in row echelon form (REF)
- The leading coefficients of all non-zero rows are 1
- All the entries above and below a pivot are zero for that column
The process of bringing a matrix to reduced row echelon form is called Gauss-Jordan Elimination. Starting with the last row with a pivot:
- Add or subtract the row one or more times to the rows above it to zero out the entries above the pivot in that column
- Repeat the process with the row above
For example, given $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$, the last row with a pivot is the second row. The entry above the leading coefficient is 2, so we can multiply the second row by 2 and subtract it from the first row: $(1, 2, 3) - 2(0, 1, 2) = (1, 0, -1)$, so the matrix is now: $\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$ and is in reduced row echelon form. There is no pivot in the third column, so the last elements of the first and second rows don’t need to be zeroed out.
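Both elimination passes can be combined into one small RREF routine; this is a minimal teaching sketch, not a numerically robust implementation:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduced row echelon form via Gauss-Jordan elimination."""
    M = A.astype(float).copy()
    rows, cols = M.shape
    r = 0                              # next pivot row
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[pivot, c]) < tol:
            continue                   # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]  # swap the pivot row into place
        M[r] /= M[r, c]                # make the leading coefficient 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r] # zero out the rest of the column
        r += 1
    return M

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
R = rref(A)
```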
The following vector spaces are the fundamental vector spaces of a matrix. Assume an $m \times n$ matrix $A$.
The set of all vectors that can multiply $A$ from the left. Basically the vectors $\mathbf{y}$ where $\mathbf{y}^\top A$ is a valid operation. Given an $m \times n$ matrix $A$, its left space is $m$-dimensional: $\mathbb{R}^m$.
Any element from the left space can be written as the sum of a vector from the column space and a vector from the left null space: $\mathbb{R}^m = \mathcal{C}(A) \oplus \mathcal{N}(A^\top)$
The set of all vectors that can multiply $A$ from the right. Basically the vectors $\mathbf{x}$ where $A\mathbf{x}$ is a valid operation. Given an $m \times n$ matrix $A$, its right space is $n$-dimensional: $\mathbb{R}^n$.
Any element from the right space can be written as the sum of a vector from the row space and a vector from the null space: $\mathbb{R}^n = \mathcal{R}(A) \oplus \mathcal{N}(A)$
The span of the rows of matrix $A$: the set of all linear combinations of its rows. Note that $\mathcal{R}(A) = \mathcal{C}(A^\top)$. Defined as $\mathcal{R}(A) = \{A^\top \mathbf{y} : \mathbf{y} \in \mathbb{R}^m\}$.
The span of the columns of matrix $A$: the set of all linear combinations of its columns. Defined as $\mathcal{C}(A) = \{A\mathbf{x} : \mathbf{x} \in \mathbb{R}^n\}$.
(Right) Null Space
The set of vectors $\mathbf{x}$ where $A\mathbf{x}$ is the zero vector: $\mathcal{N}(A) = \{\mathbf{x} : A\mathbf{x} = \mathbf{0}\}$. It always contains the zero vector. Sometimes called the kernel of the matrix.
Given matrix $A$ with a null space containing more than the zero vector, the equation $A\mathbf{x} = \mathbf{b}$ has infinitely many solutions, as the rows in $A$ would not be linearly independent, and given a solution $\mathbf{x}$, we can add any member of the null space to it and it would still be a valid solution.
For example, consider $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$. Its null space consists of $\mathbf{n} = (2, -1)^\top$ and any linear combination of such vector, including the zero vector. Then consider the equation $A\mathbf{x} = (3, 6)^\top$. A valid solution is $\mathbf{x} = (1, 1)^\top$ as $1 + 2 = 3$ and $2 + 4 = 6$. But then another valid solution is $\mathbf{x} + \mathbf{n} = (3, 0)^\top$ as $3 + 0 = 3$ and $6 + 0 = 6$. The same holds for $\mathbf{x} + \alpha\mathbf{n}$ given any constant $\alpha$.
If the null space of $A$ only contains the zero vector, then $A\mathbf{x} = \mathbf{b}$ has exactly one solution, as every solution is $\mathbf{x}$ plus some member of the null space, which is only the zero vector, and $\mathbf{x}$ plus the zero vector is just $\mathbf{x}$.
Left Null Space
The set of vectors $\mathbf{y}$ where $\mathbf{y}^\top A$ is the zero vector. It is denoted as the (right) null space of the transpose of the matrix: $\mathcal{N}(A^\top)$, or similarly: $\{\mathbf{y} : A^\top \mathbf{y} = \mathbf{0}\}$.
We say that matrices $A$ and $B$ are related by a similarity transformation if there exists an invertible matrix $P$ such that: $B = P^{-1} A P$
If the above holds, then the following statements hold as well:
- $\det(A) = \det(B)$
- $\operatorname{tr}(A) = \operatorname{tr}(B)$
- $\operatorname{rank}(A) = \operatorname{rank}(B)$
- $A$ and $B$ have the same eigenvalues
The identity matrix $I$ is a square matrix with 1’s in the diagonal and 0’s elsewhere. The $3 \times 3$ identity matrix is: $I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Given a square and invertible matrix $A$, then $A A^{-1} = A^{-1} A = I$. The identity matrix is symmetric and positive semidefinite.
Multiplying the identity matrix with an $n$-dimensional vector is equal to the same vector. Basically $I\mathbf{v} = \mathbf{v}$ for any vector $\mathbf{v}$.
Every row or column operation that can be performed on a matrix, such as a row swap, can be expressed as left multiplication by special matrices called elementary matrices.
For example, given a $2 \times 2$ matrix $A$, the elementary matrix to swap the first and second rows is $E = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ as: $EA = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} A_{21} & A_{22} \\ A_{11} & A_{12} \end{pmatrix}$
In order to find elementary matrices, we can perform the desired operation on the identity matrix. In the above case, we can build a $2 \times 2$ identity matrix and then swap the rows: $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
Some more elementary matrices examples:
- Add $\alpha$ times the second row to the first row: $\begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix}$
- Multiply the first row by $\alpha$: $\begin{pmatrix} \alpha & 0 \\ 0 & 1 \end{pmatrix}$
A diagonal matrix is a square matrix with values on the diagonal and zeroes everywhere else, such as: $D = \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}$. The values on the diagonal are the eigenvalues of $D$: $\lambda_1 = a$, $\lambda_2 = b$, $\lambda_3 = c$.
An $n \times n$ matrix is only diagonalizable if it has $n$ linearly independent eigenvectors. All normal matrices are diagonalizable.
A matrix $A$ is normal if $A A^\top = A^\top A$.
A matrix $A$ is orthogonal if $A^\top A = A A^\top = I$, which means that $A^\top = A^{-1}$. All orthogonal matrices are normal. The determinant of an orthogonal matrix is always -1 or 1.
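A rotation matrix is a classic example of an orthogonal matrix, which makes these properties easy to verify (the angle is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 3
# 2D rotation matrices are orthogonal: their columns are orthonormal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
```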
A matrix $A$ is symmetric if $A = A^\top$. All symmetric matrices are normal. Notice that given any matrix $B$, the matrix $B^\top B$ is always symmetric.
A matrix is upper triangular if it contains only zeroes below the diagonal, such as $\begin{pmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{pmatrix}$.
An $m \times n$ matrix is a square matrix if $m = n$. A trick to convert a non-square matrix $A$ into a square matrix is to multiply it by its transpose, producing either $A A^\top$ (an $m \times m$ matrix) or $A^\top A$ (an $n \times n$ matrix):
- $A A^\top$ has the same column space as $A$: $\mathcal{C}(A A^\top) = \mathcal{C}(A)$
- $A^\top A$ has the same row space as $A$: $\mathcal{R}(A^\top A) = \mathcal{R}(A)$
A matrix $A$ is positive semidefinite if $\mathbf{x}^\top A \mathbf{x} \geq 0$ for all vectors $\mathbf{x}$.
For example, consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ and let $\mathbf{x} = (x_1, x_2)^\top$, then: $\mathbf{x}^\top A \mathbf{x} = 2x_1^2 + 3x_2^2$
Both $2x_1^2 \geq 0$ and $3x_2^2 \geq 0$, so $A$ is positive semidefinite.
A matrix $A$ is positive definite if $\mathbf{x}^\top A \mathbf{x} > 0$ for all vectors $\mathbf{x} \neq \mathbf{0}$.
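For symmetric matrices, definiteness can be tested through the eigenvalues: all eigenvalues $\geq 0$ means positive semidefinite, all $> 0$ means positive definite. A small NumPy sketch with an example diagonal matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# eigvalsh is for symmetric (Hermitian) matrices and returns real eigenvalues
eigenvalues = np.linalg.eigvalsh(A)

is_psd = bool((eigenvalues >= 0).all())   # positive semidefinite
is_pd = bool((eigenvalues > 0).all())     # positive definite
```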