## Linear algebra summary

On the heels of how well it went for me to work through *Advanced Calculus Demystified*, I picked up *Linear Algebra Demystified* from the library and worked through it from April 11 to April 27. This book was similarly effective for me: a cursory, wide review with lots of worked examples. David McMahon does a good job, and has apparently written a number of these books.

**Chapter 1 – Systems of linear equations** I learned how to carefully and correctly perform Gauss-Jordan elimination on a matrix. For me the trick again is just to be very, very organized, and not to let lack of space make me compromise the steps. Everything must be double-checked, and this is only easy with well-organized work.
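As a quick check of the procedure, here is a minimal numpy sketch of Gauss-Jordan elimination to reduced row-echelon form (the function name, the pivoting strategy, and the example system are mine, not the book's):

```python
import numpy as np

def gauss_jordan(A, tol=1e-12):
    """Reduce a matrix to reduced row-echelon form (RREF)."""
    R = A.astype(float)
    rows, cols = R.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Pick the largest entry in this column as the pivot (partial pivoting).
        p = pivot_row + np.argmax(np.abs(R[pivot_row:, col]))
        if abs(R[p, col]) < tol:
            continue                               # no pivot in this column
        R[[pivot_row, p]] = R[[p, pivot_row]]      # swap rows
        R[pivot_row] /= R[pivot_row, col]          # scale the pivot to 1
        for r in range(rows):                      # clear the column elsewhere
            if r != pivot_row:
                R[r] -= R[r, col] * R[pivot_row]
        pivot_row += 1
    return R

# Augmented matrix for the system x + 2y = 5, 3x + 4y = 6.
aug = np.array([[1., 2., 5.],
                [3., 4., 6.]])
print(gauss_jordan(aug))   # [[1. 0. -4.], [0. 1. 4.5]]: x = -4, y = 4.5
```

Being this systematic on paper is exactly the "well-organized work" point: each column is finished completely before moving on.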

The *Hermitian conjugate* of $A$ is denoted $A^\dagger$ and is the transpose and complex conjugate, $A^\dagger = (A^T)^*$. A matrix with an inverse is called *nonsingular* (note, no information is lost when $A$ operates on a vector).

**Chapter 2 – Matrix algebra** *Cramer's rule* uses determinants to solve systems of linear equations: for $A\mathbf{x} = \mathbf{b}$, $x_i = \det(A_i)/\det(A)$, where $A_i$ is $A$ with column $i$ replaced by $\mathbf{b}$.
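Cramer's rule is easy to state in code. A minimal sketch (the function name and example are mine):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("Cramer's rule requires a nonsingular A")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                  # replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])
print(cramer_solve(A, b))              # [0.8 1.4], matching np.linalg.solve(A, b)
```

It is mostly of theoretical interest: computing $n + 1$ determinants is far more expensive than elimination for large systems.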

Calculating the inverse of a matrix (of any size) can be accomplished by calculating the determinant and the adjugate of the matrix. The *minor* $M_{mn}$ is the determinant of the submatrix created by eliminating row $m$ and column $n$. The *cofactor* is the *signed minor*, $C_{mn} = (-1)^{m+n} M_{mn}$. The *adjugate* of $A$ is the transpose of the matrix of cofactors, so $A^{-1} = \mathrm{adj}(A)/\det(A)$.
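The minor/cofactor/adjugate recipe translates directly into a short sketch (names and example mine; `np.delete` does the row/column elimination):

```python
import numpy as np

def adjugate_inverse(A):
    """Invert A via cofactors: A^{-1} = adj(A) / det(A)."""
    n = A.shape[0]
    C = np.empty((n, n))               # cofactor matrix
    for m in range(n):
        for k in range(n):
            # Minor: delete row m and column k, then take the determinant.
            minor = np.delete(np.delete(A, m, axis=0), k, axis=1)
            C[m, k] = (-1) ** (m + k) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)      # adjugate = transposed cofactor matrix

A = np.array([[1., 2.], [3., 4.]])
print(adjugate_inverse(A))             # agrees with np.linalg.inv(A)
```

Like Cramer's rule, this is a conceptual tool rather than a practical algorithm for large matrices.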

**Chapter 3 – Determinants**

**Chapter 4 – Vectors**

**Chapter 5 – Vector spaces** A vector space is a set of elements that is closed under addition and scalar multiplication. Addition is associative and commutative, and there exist an identity element and inverses under addition. Scalar multiplication is associative and distributive, and there is an identity element for scalar multiplication. Given a vector space $V$, a subset $W$ is a *subspace* if $W$ is also a vector space; this is verifiable by checking only that the zero vector is in $W$ and that $W$ is closed under addition and scalar multiplication. E.g. $\mathbb{R}^3$ has a subspace $\mathbb{R}^2$. Is it true that, generally, $\mathbb{R}^n$ has subspaces $\mathbb{R}^{n-1}, \ldots, \mathbb{R}^1$, or is there an infinite number?

For a matrix $A$ in row-echelon form, the nonzero rows span the *row space* of $A$ and the pivot columns identify a basis for the *column space* of $A$. The *null space* of a matrix is found from $A\mathbf{x} = 0$, and $\mathrm{rank}(A) + \mathrm{nullity}(A) = n$. The *closure relation* or *completeness* means that we can write the identity in terms of outer products of a set of basis vectors: $\sum_i \mathbf{e}_i \mathbf{e}_i^T = I$.
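The rank-nullity theorem can be checked numerically. A small sketch (the example matrix is mine; the null-space basis comes from the SVD, a tool from a later chapter):

```python
import numpy as np

# A 3x4 matrix whose third row is the sum of the first two,
# so rank = 2 and, by rank + nullity = n, nullity = 4 - 2 = 2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity)                       # 2 2

# A basis for the null space: right-singular vectors past the rank.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]                     # rows spanning the null space
print(np.allclose(A @ null_basis.T, 0))    # True: A annihilates each basis vector
```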

**Chapter 6 – Inner product spaces** The *inner product* is given by $\langle \mathbf{u}, \mathbf{v} \rangle = \sum_i u_i v_i$ for real vectors, and by $\sum_i u_i^* v_i$ in the complex case.
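The complex conjugation is easy to forget in code. In numpy, `np.vdot` conjugates its first argument, matching the definition above (the example vectors are mine):

```python
import numpy as np

u = np.array([1 + 1j, 2 - 1j])
v = np.array([3 + 0j, 1j])

# np.vdot(u, v) computes sum(conj(u_i) * v_i); plain np.dot would not conjugate.
print(np.vdot(u, v))                       # (2-1j)
print(np.vdot(u, v) == np.sum(np.conj(u) * v))   # True
```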

*Inner products* on function spaces can be used to check for orthogonality of functions! The Gram-Schmidt process can generate an orthonormal basis from an arbitrary basis.
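The Gram-Schmidt process mentioned above can be sketched in a few lines (classical variant; the function name and input vectors are mine):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for e in basis:
            w = w - np.dot(e, v) * e       # subtract the component along each earlier e
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

E = gram_schmidt([np.array([1., 1., 0.]),
                  np.array([1., 0., 1.])])
print(np.allclose(E @ E.T, np.eye(2)))     # True: rows are orthonormal
```

The same projection-and-subtraction idea works for functions once the inner product is an integral rather than a dot product.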

**Chapter 7 – Linear transformations**

**Chapter 8 – Eigenvalues** The *characteristic polynomial* of a matrix $A$ is given by $\det(A - \lambda I)$, and by setting this to zero you get the *characteristic equation*. Two matrices $A$ and $B$ are *similar* if $B = S^{-1} A S$ for some invertible $S$. Similar matrices have the same eigenvalues. The eigenvectors of a symmetric or Hermitian matrix form an orthonormal basis. A unitary matrix has the property that $U^\dagger U = I$. When two or more eigenvectors share the same eigenvalue, the eigenvalue is called *degenerate*, and the number of eigenvectors that share it is the *degree of degeneracy*.
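The claim about symmetric matrices is easy to verify numerically; `np.linalg.eigh` is designed for exactly this case (the example matrix is mine):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

# For a symmetric (or Hermitian) matrix, eigh returns real eigenvalues
# and an orthonormal set of eigenvectors (the columns of V).
vals, V = np.linalg.eigh(A)
print(vals)                                # [1. 3.]
print(np.allclose(V.T @ V, np.eye(2)))     # True: eigenvectors are orthonormal
print(np.allclose(A @ V, V * vals))        # True: A v = lambda v, column by column
```

Here $\det(A - \lambda I) = (2-\lambda)^2 - 1$, whose roots are indeed 1 and 3.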

**Chapter 9** – **Special matrices** A matrix $A$ is *symmetric* if $A = A^T$. Also note that $(AB)^T = B^T A^T$. A symmetric matrix can be written as $S = \frac{1}{2}(A + A^T)$, so any $A$ can be used to construct a symmetric matrix. A *skew-symmetric* matrix $K$ has the properties that $K^T = -K$ and that its diagonal elements are zero. The product of two symmetric matrices is symmetric if they commute. The product of two skew-symmetric matrices is skew-symmetric if they anti-commute. The *Hermitian conjugate* is defined by $A^\dagger = (A^T)^*$.
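A consequence worth seeing concretely: every square matrix splits into a symmetric plus a skew-symmetric part. A quick numpy check (random example mine):

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(3, 3))

S = 0.5 * (A + A.T)    # symmetric part:      S.T == S
K = 0.5 * (A - A.T)    # skew-symmetric part: K.T == -K, zero diagonal
print(np.allclose(S, S.T),
      np.allclose(K, -K.T),
      np.allclose(S + K, A))    # True True True
```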

Note that $(A^\dagger)^\dagger = A$, $(A + B)^\dagger = A^\dagger + B^\dagger$, and $(AB)^\dagger = B^\dagger A^\dagger$. A *Hermitian matrix* $A$ is one for which $A = A^\dagger$. A Hermitian matrix has the properties that its diagonal elements (and hence its trace) are real, that all the eigenvalues are real, and that the eigenvectors are orthogonal and form a basis. For an *anti-Hermitian matrix* $A$, the diagonal elements and the eigenvalues are all imaginary. For an *orthogonal matrix* $P$, $P^T P = I$.

**Chapter 10 – Matrix decomposition** In the $A = LU$ decomposition, $A$ is expressed as the product of a lower-triangular matrix $L$ (zeros above the diagonal) and an upper-triangular matrix $U$ (zeros below). The matrix $L$ can be formed from $L = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$, where the $E_i$ are the simple row operations that take $A$ to $U$; $A$ must not be singular. The SVD (singular value decomposition) applies even when $A$ is singular or nearly singular. QR decomposition is for $A$ non-square and non-singular, where $R$ is a square upper-triangular matrix and $Q$ is an orthogonal matrix.
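All three decompositions are one call away in numpy/scipy, which makes a nice sanity check of the chapter (the example matrix is mine; scipy's `lu` also returns a permutation matrix from its pivoting):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4., 3.],
              [6., 3.]])

# LU with a permutation from partial pivoting: A = P L U
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))           # True

# QR: Q orthogonal, R upper triangular
Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A),
      np.allclose(Q.T @ Q, np.eye(2)))     # True True

# SVD works for any matrix, even a singular one: A = W diag(s) V^T
W, s, Vt = np.linalg.svd(A)
print(np.allclose(W @ np.diag(s) @ Vt, A)) # True
```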
