Archive for April 2009

Linear algebra demystified

28 April 2009

Read through Linear Algebra Demystified, and worked through the problems.  It was very helpful for me as a self-teaching source.


Linear algebra summary

27 April 2009

On the heels of how successful it was for me to work through Advanced Calculus Demystified, I picked up Linear Algebra Demystified from the library.  I worked through it from April 11 to April 27.  This book was similarly effective for me, being a cursory, wide review with lots of worked examples.  David McMahon does a good job, and has apparently written a number of these books.

Chapter 1 – Systems of linear equations I learned how to carefully and correctly perform Gauss-Jordan elimination of a matrix.  For me the trick again is just to be very organized, and not to let a lack of space on the page compromise the steps.  Everything must be double checked, and this is only easy with well organized work.  Also,

(AB)^T = B^T A^T

(AB)^{-1} = B^{-1} A^{-1}

The Hermitian conjugate of A is denoted A^{\dagger} and is the complex conjugate of the transpose.  A matrix with an inverse is called nonsingular (note: no information is lost when A operates on a vector).
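The Gauss-Jordan procedure from chapter 1 can be sketched in a few lines of Python.  This is a minimal illustration of my own (the `gauss_jordan` name and the example system are mine, not from the book), using `Fraction` so that every elimination step is exact:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A|b] to reduced row-echelon form in place."""
    rows, cols = len(aug), len(aug[0])
    pivot_row = 0
    for col in range(cols - 1):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if aug[r][col] != 0), None)
        if pivot is None:
            continue
        aug[pivot_row], aug[pivot] = aug[pivot], aug[pivot_row]
        # Scale the pivot row so the pivot is 1.
        p = aug[pivot_row][col]
        aug[pivot_row] = [x / p for x in aug[pivot_row]]
        # Eliminate this column from every other row.
        for r in range(rows):
            if r != pivot_row and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
    return aug

# Solve x + y = 3, 2x - y = 0.
m = [[Fraction(1), Fraction(1), Fraction(3)],
     [Fraction(2), Fraction(-1), Fraction(0)]]
gauss_jordan(m)
assert [row[2] for row in m] == [1, 2]   # x = 1, y = 2
```

The exact arithmetic makes the double checking the book emphasizes much easier than it is on paper.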

Chapter 2 – Matrix algebra Cramer’s rule uses determinants to solve systems of linear equations.

\det|AB| = \det|A| \det|B|

\det|A| \neq 0 \Rightarrow \exists A^{-1}

A=\left( \begin{matrix} a_{11} & \cdots & a_{1n} \\ 0 & \ddots & \vdots \\ 0 & 0 & a_{nn} \end{matrix} \right) \Rightarrow \det |A|= \prod \limits_{i=1}^{n} a_{ii}
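Both determinant facts are easy to spot-check for 2×2 matrices.  A quick Python sketch (the helper names `det2` and `matmul2` and the example matrices are mine):

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # det = -2
B = [[0, 1], [5, 6]]   # det = -5
assert det2(matmul2(A, B)) == det2(A) * det2(B)   # det(AB) = det(A) det(B)

# For a triangular matrix the determinant is the product of the diagonal.
T = [[2, 7], [0, 3]]
assert det2(T) == 2 * 3
```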

Calculating the inverse of a matrix (of any size) can be accomplished by calculating the determinant and the adjugate of the matrix.  The minor A_{mn} of a matrix A is formed by deleting row m and column n.  The cofactor is the signed determinant of the minor, a_{mn} \equiv (-1)^{m+n} \det |A_{mn}| .  The adjugate of A is the transpose of the matrix of cofactors, adj(A) = \left( \begin{matrix} a_{11} & \cdots \\ \vdots & \ddots \end{matrix} \right)^T , so

A^{-1} = \frac{1}{\det|A|} adj(A)
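The adjugate formula turns into a small (inefficient, but direct) Python sketch.  The `minor`, `det`, and `inverse` helpers are my own names; the determinant here is just Laplace expansion along the first row:

```python
from fractions import Fraction

def minor(m, i, j):
    """The submatrix with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def inverse(m):
    """A^{-1} = adj(A) / det(A); adj(A) is the transposed cofactor matrix."""
    d = det(m)
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, j, i)) / d for j in range(n)]
            for i in range(n)]

A = [[Fraction(2), Fraction(1)], [Fraction(7), Fraction(4)]]   # det = 1
assert inverse(A) == [[4, -1], [-7, 2]]
```

Note the index swap `minor(m, j, i)` in `inverse`: that swap is the transpose that takes the cofactor matrix to the adjugate.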

Chapter 3 – Determinants

Chapter 4 – Vectors

Chapter 5 – Vector spaces A vector space is a set of elements that is closed under addition and scalar multiplication.  Addition is associative and commutative, and there exists an identity element (the zero vector) and an inverse under addition.  Scalar multiplication is associative and distributive, and there exists an identity element for multiplication.  Given a vector space V, a subset W is a subspace if W is also a vector space.   This is verifiable by checking only that W contains the zero vector and is closed under addition and scalar multiplication.       E.g. C^3 has a subspace C^2 .  Is it true generally that C^n has finitely many subspaces isomorphic to C^{n-1} , or is there an infinite number?

The row space and column space of a matrix A can be read off from its row-echelon form.  The null space of A is the solution set of Ax=0 , and rank(A) + nullity(A) = n, the number of columns.  The closure relation, or completeness, means that we can write the identity in terms of outer products of a set of basis vectors.
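The rank-nullity relation can be checked with a small forward-elimination sketch (`matrix_rank` is my own helper, not a library function, and the example matrix is mine):

```python
from fractions import Fraction

def matrix_rank(m):
    """Rank = number of pivots found by forward elimination, done exactly."""
    m = [[Fraction(x) for x in row] for row in m]
    rows, cols = len(m), len(m[0])
    r, col = 0, 0
    while r < rows and col < cols:
        pivot = next((i for i in range(r, rows) if m[i][col] != 0), None)
        if pivot is None:
            col += 1          # no pivot in this column, move on
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
        col += 1
    return r

A = [[1, 2, 3], [2, 4, 6]]   # second row is twice the first
assert matrix_rank(A) == 1   # so nullity = n - rank = 3 - 1 = 2
```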

Chapter 6 – Inner product spaces An inner product on a function space is given by

(f,g) = \int_a^b f(x) g(x) dx

Inner products on function spaces can be used to check for orthogonality of functions!   The Gram-Schmidt process can generate an orthonormal basis from an arbitrary basis.
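Here is a minimal sketch of Gram-Schmidt for ordinary vectors with the dot-product inner product; the same project-and-subtract idea applies to function spaces with the integral inner product above.  The `gram_schmidt` name and the example vectors are mine:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # Subtract the projection of v onto each basis vector found so far.
        w = list(v)
        for e in basis:
            proj = dot(w, e)
            w = [wi - proj * ei for wi, ei in zip(w, e)]
        # Normalize what remains.
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

e1, e2 = gram_schmidt([[3, 1], [2, 2]])
assert abs(dot(e1, e2)) < 1e-12        # orthogonal
assert abs(dot(e1, e1) - 1) < 1e-12    # unit length
```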

Chapter 7 – Linear transformations

Chapter 8 – Eigenvalues The characteristic polynomial of a matrix A is given by \det|A-\lambda I| , and setting this to zero gives the characteristic equation.    Two matrices A and B are similar if B = S^{-1} A S .  Similar matrices have the same eigenvalues.  The eigenvectors of a symmetric or Hermitian matrix form an orthonormal basis.  A unitary matrix has the property that U^\dagger = U^{-1} .  When two or more eigenvectors share the same eigenvalue, the eigenvalue is called degenerate.  The number of eigenvectors that share the same eigenvalue is the degree of degeneracy.

tr(A) = \sum \lambda_i \quad \det |A| = \prod \lambda_i
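For a 2×2 matrix these two identities can be checked directly from the quadratic characteristic equation.  A quick sketch (the entries are an arbitrary example of mine):

```python
import math

# For A = [[a, b], [c, d]], the characteristic equation is
# lambda^2 - (a + d) lambda + (ad - bc) = 0, solved by the quadratic formula.
a, b, c, d = 4.0, 1.0, 2.0, 3.0
trace, det = a + d, a * d - b * c
disc = math.sqrt(trace ** 2 - 4 * det)
l1, l2 = (trace + disc) / 2, (trace - disc) / 2

assert abs((l1 + l2) - trace) < 1e-12   # tr(A) = sum of eigenvalues
assert abs((l1 * l2) - det) < 1e-12     # det(A) = product of eigenvalues
```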

Chapter 9 – Special matrices A matrix A is symmetric if A=A^T . Also note that (A+B)^T = A^T + B^T .   From any matrix A, a symmetric matrix S = \frac{1}{2}(A+A^T) can be constructed.   A skew-symmetric matrix K has the properties that K= -K^T and k_{ii}=0 .  The product of two symmetric matrices is symmetric if they commute.  The product of two skew-symmetric matrices is skew-symmetric if they anti-commute.  The Hermitian conjugate is defined by

A^\dagger = (A^*)^T

Note that (A^\dagger)^\dagger = A , (A+B)^\dagger = A^\dagger + B^\dagger , and (AB)^\dagger = B^\dagger A^\dagger .  A Hermitian matrix is one for which A^\dagger= A .   A Hermitian matrix has real diagonal elements (and hence a real trace), all of its eigenvalues are real, and its eigenvectors are orthogonal and form a basis.   For an anti-Hermitian matrix A, A^\dagger = -A , and the diagonal elements and eigenvalues are all imaginary.   For an orthogonal matrix P,  P^T = P^{-1}.
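The symmetric construction above extends to a full split: any real A can be written as S + K, with symmetric part S = (A + A^T)/2 and skew-symmetric part K = (A - A^T)/2.  A tiny sketch of my own verifying this for a 2×2 example:

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

A = [[1, 2], [5, 3]]
At = transpose(A)
S = [[(A[i][j] + At[i][j]) / 2 for j in range(2)] for i in range(2)]  # symmetric part
K = [[(A[i][j] - At[i][j]) / 2 for j in range(2)] for i in range(2)]  # skew part

assert S == transpose(S)                                  # S = S^T
assert K == [[-x for x in row] for row in transpose(K)]   # K = -K^T
assert all(A[i][j] == S[i][j] + K[i][j] for i in range(2) for j in range(2))
```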

Chapter 10 – Matrix decomposition The A=LU decomposition expresses a matrix A as the product of a lower-triangular matrix L (zeros above the diagonal) and an upper-triangular matrix U (zeros below the diagonal).  The matrix L can be formed from

L = E_1^{-1} E_2^{-1} \cdots E_n^{-1}

where the E_i are the elementary row operations that take A to U, and A must not be singular.    SVD, singular value decomposition, is used when A is singular or nearly singular.   QR decomposition is for a non-square, non-singular A, where R is a square upper triangular matrix and Q is an orthogonal matrix.
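A minimal LU sketch of my own, in the Doolittle style without row pivoting (so it assumes every pivot is nonzero; the `lu` name and example matrix are mine):

```python
def lu(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for col in range(n):
        for row in range(col + 1, n):
            f = U[row][col] / U[col][col]
            L[row][col] = f                 # the elimination multiplier lands in L
            U[row] = [a - f * b for a, b in zip(U[row], U[col])]
    return L, U

A = [[2.0, 3.0], [4.0, 7.0]]
L, U = lu(A)
assert L == [[1.0, 0.0], [2.0, 1.0]]   # lower triangular, unit diagonal
assert U == [[2.0, 3.0], [0.0, 1.0]]   # upper triangular
```

Storing the multipliers as the entries of L is exactly the "inverses of the row operations" composition from the formula above.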

All the mathematics you missed

18 April 2009

All the Mathematics You Missed

I have to put in a plug for this book, All the Mathematics You Missed.  It is a terse overview of general college math, placing subjects in context and relation to one another, and summarizing results.  If you’ve taken a class in one of the subjects described, it is a good refresher; if not, it is a good introduction with leads to more complete books.  I like this book so much that, when I thought I had lost it, I went and bought another.  Now I have two for reference.

Partial derivative notation

10 April 2009

There is another notation that was new to me: \frac{\partial f}{\partial x} = f_x and \frac{\partial^2 f}{\partial x^2} = f_{xx} .  This also works for mixed partials, but note that the order of the subscripts is reversed: \frac{\partial^2 f}{\partial x \partial y} = f_{yx} .
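The subscript order in f_{yx} means "first differentiate with respect to y, then x," i.e. (f_y)_x .  This can be checked numerically with central differences; the example function and step size below are my own choices:

```python
h = 1e-5   # finite-difference step

def f(x, y):
    return x ** 2 * y ** 3

def f_y(x, y):
    """df/dy by central difference."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def f_yx(x, y):
    """f_yx = (f_y)_x: differentiate f_y with respect to x."""
    return (f_y(x + h, y) - f_y(x - h, y)) / (2 * h)

# Analytically f_y = 3 x^2 y^2, so f_yx = 6 x y^2 = 108 at (2, 3).
assert abs(f_yx(2.0, 3.0) - 108.0) < 1e-2
```

For a smooth f like this one, f_{yx} = f_{xy} (equality of mixed partials), so the reversed subscript order is a bookkeeping convention rather than a different quantity.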

Bose condensate to fermion gas

5 April 2009

The energy and spatial distribution of a Fermi gas is governed by its temperature, unlike a Bose-Einstein condensate.  However, it is possible for pairs of fermions (spin 1/2) to become bosonic (integer spin).  What happens when a “pseudo” Bose condensate comprised of paired fermions has the pairing process suddenly decay, with a large number of fermions suddenly “appearing” in a (non-physically-allowed) constrained region of phase space?

I’ve since found that this is called a fermionic condensate, and perhaps my question can be more simply stated as, “what happens when a fermionic condensate undergoes a reverse BCS transition?”

Update – I read that there is a concept of “conservation of statistics” that may bear on this.  I don’t understand it yet, however.

San Francisco trip

5 April 2009

My two day trip to San Francisco was great.

I learned about the existence of Hadoop and the MapReduce methodology from a friend.  I’m also becoming more convinced to blog publicly; a couple of friends have encouraged me in that direction.   I’m even kind of interested in Twitter.

Advanced calculus demystified

4 April 2009

Most of my time recently was spent in completing a self-review of vector calculus using the text Advanced Calculus Demystified.  This study refreshed my knowledge of the mechanics and added a couple of new tricks.  I completed nearly 75 of the problems in about 3 weeks of study.  (This book was particularly good because all the problems were not just answered but worked out in the back of the book.)

I learned a couple of meta-lessons from this.

  1. There is no substitute for doing the problems.  I picked up on tics in my approach, and it was confidence building to get the correct answer.
  2. Almost all mistakes came from hasty leaps in the math (or combining leaps), and almost all hasty leaps were dictated by space on the paper!  (Emotional issue here, not wanting to waste paper, or shortcuts because “only sluggards have to write out the details”).
  3. I actually (half) remember more than I thought – my judgment of my current ability was a mistake in perception.

A lot of these still condense down to the maxim I was holding close a year ago – no more shortcuts.