Linear algebra summary

On the heels of my success working through Advanced Calculus Demystified, I picked up Linear Algebra Demystified from the library and worked through it from April 11 to April 27.  This book was similarly effective for me: a cursory, wide review with lots of worked examples.  David McMahon does a good job, and has apparently written a number of these books.

Chapter 1 – Systems of linear equations

I learned how to carefully and correctly perform Gauss-Jordan reduction of a matrix (a numerical sanity check of the procedure appears at the end of these chapter notes).  For me the trick, again, is just to be very, very organized, and not to let a lack of space force me to compromise the steps.  Everything must be double checked, and that is only easy with well organized work.  Also,

(AB)^T = B^T A^T

(AB)^{-1} = B^{-1} A^{-1}

The Hermitian conjugate of A is denoted A^{\dagger} and is the complex conjugate of the transpose.  A matrix with an inverse is called nonsingular (note that no information is lost when A operates on a vector).
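
Since the whole game is staying organized, it helps to cross-check hand reductions programmatically.  Here is a minimal numpy sketch (the matrices are my own examples, not from the book) of Gauss-Jordan elimination with partial pivoting, which also verifies the transpose and inverse identities above:

import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A|b] to reduced row-echelon form
    and return the solution of Ax = b (A square and nonsingular)."""
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for i in range(n):
        # Partial pivoting: bring up the row with the largest pivot candidate.
        p = i + np.argmax(np.abs(M[i:, i]))
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]                      # scale so the pivot is 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]       # clear the rest of the column
    return M[:, -1]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_jordan(A, b))                    # [0.8 1.4]
print(np.linalg.solve(A, b))                 # same answer, as a cross-check

B = np.array([[1.0, 2.0], [0.0, 1.0]])
assert np.allclose((A @ B).T, B.T @ A.T)     # (AB)^T = B^T A^T
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))   # (AB)^{-1} = B^{-1} A^{-1}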

Chapter 2 – Matrix algebra

Cramer's rule uses determinants to solve systems of linear equations.

\det|AB| = \det|A| \det|B|

\det|A| \neq 0 \Rightarrow \exists A^{-1}

A=\left( \begin{matrix} a_{11} & \cdots & a_{1n} \\ 0 & \ddots & \vdots \\ 0 & 0 & a_{nn} \end{matrix} \right) \Rightarrow \det |A|= \prod \limits_{i=1}^{n} a_{ii}

Calculating the inverse of a matrix (of any size) can be accomplished by calculating the determinant and the adjugate of the matrix.  The minor A_{mn} is the submatrix of A created by eliminating row m and column n.  The cofactor is the signed determinant of the minor, a_{mn} \equiv (-1)^{m+n} \det |A_{mn}|.  The adjugate of A is the transpose of the matrix of cofactors, adj(A) = \left( \begin{matrix} a_{11} & \cdots \\ \vdots & \ddots \end{matrix} \right)^T, so

A^{-1} = \frac{1}{\det|A|} adj(A)
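
This cofactor route gets expensive for large matrices, but it is a nice check for small ones.  A direct sketch of the formula (my own example matrix; numpy is used only for the determinants of the minors):

import numpy as np

def cofactor_inverse(A):
    """Invert A via the adjugate: A^{-1} = adj(A) / det(A)."""
    n = A.shape[0]
    cof = np.empty_like(A, dtype=float)
    for m in range(n):
        for k in range(n):
            # Minor: delete row m and column k, then take the determinant.
            minor = np.delete(np.delete(A, m, axis=0), k, axis=1)
            cof[m, k] = (-1) ** (m + k) * np.linalg.det(minor)
    return cof.T / np.linalg.det(A)          # adjugate = transpose of cofactors

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(cofactor_inverse(A))                   # [[-2.   1. ] [ 1.5 -0.5]]
print(np.linalg.inv(A))                      # agrees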

Chapter 3 – Determinants

Chapter 4 – Vectors

Chapter 5 – Vector spaces

A vector space is a set of elements that is closed under addition and scalar multiplication.  Addition is associative and commutative, and there exists an identity element and inverses under addition.  Scalar multiplication is associative and distributive, and there exists an identity element for scalar multiplication.  Given a vector space V, a subset W is a subspace if W is also a vector space.  This is verifiable by checking only that W contains the zero vector and that W is closed under addition and scalar multiplication.  E.g. C^3 has a subspace C^2.  Is it generally true that C^n has n subspaces of the form C^{n-1}, or is there an infinite number?

For a matrix A in row-echelon form, the nonzero rows span the row space of A and the pivot columns identify a basis for the column space of A.  The null space of a matrix is found by solving Ax=0, and rank(A) + nullity(A) = n.  The closure relation, or completeness, means that we can write the identity in terms of outer products of a set of basis vectors.
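
Both facts are easy to illustrate numerically; a quick sketch (the example matrix is mine):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],               # twice row 1, so the rank drops
              [1.0, 0.0, 1.0]])
n = A.shape[1]
rank = np.linalg.matrix_rank(A)
print(rank, n - rank)                        # rank(A) = 2, nullity(A) = 1, and 2 + 1 = n

# Completeness: for an orthonormal basis {e_i}, the sum of the outer
# products e_i e_i^T is the identity.
Q, _ = np.linalg.qr(np.random.rand(3, 3))    # columns of Q: an orthonormal basis
I = sum(np.outer(Q[:, i], Q[:, i]) for i in range(3))
print(np.allclose(I, np.eye(3)))             # True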

Chapter 6 – Inner product spaces

The inner product of two real functions on [a,b] is given by

(f,g) = \int_a^b f(x) g(x) dx

Inner products on function spaces can be used to check for orthogonality of functions!   The Gram-Schmidt process can generate an orthonormal basis from an arbitrary basis.
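
Both points are easy to poke at numerically.  A minimal sketch (the examples are mine; scipy's quadrature stands in for the integral):

import numpy as np
from scipy.integrate import quad

# sin and cos are orthogonal under (f,g) = int_{-pi}^{pi} f(x) g(x) dx.
print(quad(lambda x: np.sin(x) * np.cos(x), -np.pi, np.pi)[0])   # ~0

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize linearly independent
    vectors under the standard dot product."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)   # subtract projections
        basis.append(w / np.linalg.norm(w))            # normalize
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
E = gram_schmidt(vs)
print(np.round([[np.dot(a, b) for b in E] for a in E], 10))      # identity matrix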

Chapter 7 – Linear transformations

Chapter 8 – Eigenvalues

The characteristic polynomial of a matrix A is given by \det|A-\lambda I|, and setting it to zero gives the characteristic equation.  Two matrices A and B are similar if B = S^{-1} A S.  Similar matrices have the same eigenvalues.  The eigenvectors of a symmetric or Hermitian matrix form an orthonormal basis.  A unitary matrix has the property that U^\dagger = U^{-1}.  When two or more eigenvectors share the same eigenvalue, the eigenvalue is called degenerate.  The number of eigenvectors that have the same eigenvalue is the degree of degeneracy.

tr(A) = \sum \lambda_i \quad \det |A| = \prod \lambda_i
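
A quick numerical confirmation of the trace and determinant relations, and of the invariance of the spectrum under similarity (example matrices are mine):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # symmetric; eigenvalues are 3 and 1
vals = np.linalg.eigvals(A)
print(np.isclose(np.trace(A), vals.sum()))           # tr(A) = sum of eigenvalues
print(np.isclose(np.linalg.det(A), vals.prod()))     # det(A) = product of eigenvalues

# Similar matrices B = S^{-1} A S share the same spectrum.
S = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.linalg.inv(S) @ A @ S
print(np.sort(np.linalg.eigvals(B)))                 # [1. 3.]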

Chapter 9 – Special matrices

A matrix A is symmetric if A=A^T.  Also note that (A+B)^T = A^T + B^T.  Any matrix A can be used to construct a symmetric matrix S = \frac{1}{2}(A+A^T).  A skew-symmetric matrix K has the properties that K= -K^T and k_{ii}=0.  The product of two symmetric matrices is symmetric if they commute.  The product of two skew-symmetric matrices is skew-symmetric if they anti-commute.  The Hermitian conjugate is defined by

A^\dagger = (A^*)^T

Note that (A^\dagger)^\dagger = A, (A+B)^\dagger = A^\dagger + B^\dagger, and (AB)^\dagger = B^\dagger A^\dagger.  A Hermitian matrix is one for which A^\dagger = A.  A Hermitian matrix has the properties that its diagonal elements (and hence its trace) are real, all of its eigenvalues are real, and its eigenvectors are orthogonal and form a basis.  For an anti-Hermitian matrix A, A^\dagger = -A, the diagonal elements are purely imaginary, and the eigenvalues are all imaginary.  For an orthogonal matrix P, P^T = P^{-1}.
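
A short numerical check of the symmetric/skew-symmetric split from above and of the real-eigenvalue property of a Hermitian matrix (matrices are my own examples):

import numpy as np

A = np.random.rand(3, 3)
S = (A + A.T) / 2                            # symmetric part
K = (A - A.T) / 2                            # skew-symmetric part
print(np.allclose(A, S + K))                 # every A splits this way
print(np.allclose(np.diag(K), 0))            # k_ii = 0

H = np.array([[2.0, 1j],
              [-1j, 3.0]])
print(np.allclose(H, H.conj().T))            # H is Hermitian: H^dagger = H
print(np.linalg.eigvalsh(H))                 # eigenvalues are real: ~[1.38, 3.62]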

Chapter 10 – Matrix decomposition

LU decomposition expresses a matrix A as the product of a lower-triangular matrix L (zeros above the diagonal) and an upper-triangular matrix U (zeros below).  The matrix L can be formed from

L = E^{-1}_1 E^{-1}_2 \cdots E^{-1}_n

where the E_i are the elementary row operations that reduce A to U.  A must not be singular.  SVD, singular value decomposition, handles A singular or nearly singular.  QR decomposition handles a non-square, non-singular A, where R is a square upper triangular matrix and Q is an orthogonal matrix.
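
A minimal Doolittle-style sketch of the LU idea (my own example; no pivoting, so it assumes every pivot it meets is nonzero):

import numpy as np

def lu_nopivot(A):
    """Record each row-operation multiplier in L while reducing a copy
    of A to the upper-triangular U, so that A = LU."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            L[j, i] = U[j, i] / U[i, i]      # multiplier from the row operation E_i
            U[j] -= L[j, i] * U[i]           # apply E_i to move A toward U
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_nopivot(A)
print(L)                                     # [[1.  0. ] [1.5 1. ]]
print(U)                                     # [[ 4.   3. ] [ 0.  -1.5]]
print(np.allclose(L @ U, A))                 # True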
