## Archive for February 2009

### Cryptic night

27 February 2009

Relaxed and did a cryptic tonight from this book.  Cryptics are fun, and they tend to require a lot less knowledge and more creative thinking than a standard crossword puzzle.

### Numerical approximation for $x+\ln(1+e^{-x})$

25 February 2009

One other mystery from the numerical solution is the function that handles the evaluation of

$x+\ln(1+e^{-x})$

when the argument is extreme.  I also copied this onto paper so it can be evaluated by hand.  I think the issue has to do with the precision of a float in C; perhaps it can be improved with a double in C#.
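I don't have the original C# in front of me here, but the usual way to keep this expression stable is to branch on the sign of the argument and lean on a `log1p`-style function. A minimal sketch in Python (the branch point and function name are my choices, not the original code's):

```python
import math

def softplus(x: float) -> float:
    """Numerically stable evaluation of x + ln(1 + e^(-x)).

    Algebraically this equals ln(1 + e^x), but each form misbehaves
    at one extreme: e^(-x) overflows for very negative x, and e^x
    overflows for very positive x.
    """
    if x > 0:
        # e^(-x) is tiny here; log1p keeps precision near ln(1) = 0.
        return x + math.log1p(math.exp(-x))
    else:
        # Rewrite as ln(1 + e^x); now e^x is the tiny, safe quantity.
        return math.log1p(math.exp(x))

print(softplus(1000.0))  # 1000.0 (naive ln(1 + e^x) would overflow)
print(softplus(0.0))     # ln(2) ≈ 0.6931
```

The same branching should carry over directly to a C# double, assuming a log1p-style helper is available or written by hand.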

### Analytic expression for electron emission code

24 February 2009

Carefully reviewed the C# code today for the analytic expressions for the TFE and extended Schottky regimes of electron emission, and transcribed it back to paper.  This is the first step in eliminating any doubt in my mind that these are the correct expressions.  The next step is to write down all the math and approximations that take us from first principles to these equations, so that everything is completely clear.

By the way, the code expressions seem to match the TED and J expressions found in the references exactly – the only issue is units.  Once I work out the units correctly, I believe I can run the C# TED expression without any normalization, which will show more clearly why the analytic and numerical expressions deviate from one another.

### Electron emission model

22 February 2009

Today I got back to work on making coherent notes on the open questions from the last paper.  I want to answer why the analytical model deviates from the numerical model.  I moved some of the work into a LaTeX document and added initial figures.  I got stuck when approaching the numerical comparison, which motivates the study of the WKB approximation.  I believe the WKB approximation is the only plausible large source of error, but I haven't shown that for all four cases (TFE-J, TFE-dE, ESE-J, ESE-dE) yet.

### MIT 18.03 lecture 13

22 February 2009

This lecture covered finding the $y_p$, a particular solution of the ODE

$y'' + Ay' + By = f(x)$

Let the solutions be complex in general, writing $\alpha = a + i\omega$ so that $e^{(a+i\omega)x} = e^{\alpha x}$.  Express the ODE as

$D^2y + A D y + B y = f(x)$

Let $p(D)$ be $(D^2 + AD + B)$, or any polynomial in the operator $D$; then

$p(D) e^{\alpha x} = p(\alpha) e^{\alpha x}$

which we will call the substitution rule.  For input $f(x) = e^{\alpha x}$, this yields the exponential input theorem

$y_p = \frac{e^{\alpha x}}{p(\alpha) }$

when $p(\alpha) \neq 0$.   If $p(\alpha) = 0$, then the exponential shift rule can be used

$p(D) e^{\alpha x} u(x) = e^{\alpha x} p(D + \alpha) u(x)$

In proving this rule, he again warns you not to hack your way forward, generating more work than necessary: make sure to use your previous result.
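As a quick sanity check of my own (not from the lecture), the exponential input theorem can be verified numerically: pick illustrative values for $A$, $B$, and $\alpha$, form $y_p = e^{\alpha x}/p(\alpha)$, and confirm it satisfies the ODE using finite differences. A minimal Python sketch:

```python
import cmath

# Illustrative coefficients (my choice): y'' + Ay' + By = e^(alpha*x)
A, B = 2.0, 5.0
alpha = 1.0 + 2.0j                  # complex exponent a + i*omega
p_alpha = alpha**2 + A * alpha + B  # p(alpha) with p(D) = D^2 + AD + B
assert p_alpha != 0                 # theorem applies only when p(alpha) != 0

def y_p(x):
    # Exponential input theorem: y_p = e^(alpha*x) / p(alpha)
    return cmath.exp(alpha * x) / p_alpha

# Verify y_p'' + A*y_p' + B*y_p = e^(alpha*x) by central differences.
x, h = 0.7, 1e-5
d1 = (y_p(x + h) - y_p(x - h)) / (2 * h)
d2 = (y_p(x + h) - 2 * y_p(x) + y_p(x - h)) / h**2
residual = d2 + A * d1 + B * y_p(x) - cmath.exp(alpha * x)
print(abs(residual))  # small; limited only by finite-difference error
```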

### MIT 18.03 lecture 12

22 February 2009

This lecture introduced the second-order inhomogeneous equation $y'' + py' + qy = f(x)$.

Theorem: consider $Ly = f(x)$ where $L$ is a linear operator.

Solution: $y = y_p + y_c$, that is, a particular solution plus the complementary solution.  The method for solving is: first find $y_c = c_1y_1 + c_2y_2$, the general solution of the homogeneous equation, and then find $y_p$, a particular solution to $Ly = f(x)$.  The general solution is then

$y = y_p + c_1y_1 + c_2y_2$

Prove: all the $y_p + c_1y_1 + c_2y_2$ are solutions.  By linearity,

$L(y_p + c_1y_1 + c_2y_2) = L(y_p) + L(c_1y_1 + c_2y_2)$

From above we know that $L(y_p) = f(x)$, and the remaining term $L(c_1y_1 + c_2y_2) = 0$ because $y_1$ and $y_2$ solve the homogeneous equation, so the whole sum satisfies $Ly = f(x)$.

Prove: there are no other solutions.  Let $u(x)$ be any solution; then

$L(u) = f(x), \quad L(y_p) = f(x), \quad L(u-y_p) = L(u) - L(y_p) = 0$

so $u - y_p$ solves the homogeneous equation and must equal $c_1y_1 + c_2y_2$ for some constants, i.e. $u = y_p + c_1y_1 + c_2y_2$.

He is so careful even in choosing the names of constants – they should suggest a system, or be intentionally neutral and suggest nothing.

Lastly, he looks at the question: can second-order linear inhomogeneous equations have transient solutions?  That is, under what conditions on $A$ and $B$ does $c_1y_1 + c_2y_2$ go to zero as $t \rightarrow \infty$?  He lists the characteristic roots, solutions, and stability conditions, and concludes that the simplest, most elegant way to say it is: the ODE is stable if all characteristic roots have a negative real part.
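That criterion is easy to check mechanically. A small Python sketch of my own (the coefficients are illustrative, not from the lecture) that computes the characteristic roots of $y'' + Ay' + By = f(x)$ and tests their real parts:

```python
import cmath

def is_stable(A, B):
    """The ODE y'' + Ay' + By = f(x) is stable iff both roots of the
    characteristic polynomial r^2 + Ar + B have negative real part."""
    disc = cmath.sqrt(A * A - 4 * B)
    r1 = (-A + disc) / 2
    r2 = (-A - disc) / 2
    return r1.real < 0 and r2.real < 0

print(is_stable(2, 5))   # roots -1 ± 2i: damped oscillation -> True
print(is_stable(0, 1))   # roots ±i: undamped, never decays -> False
print(is_stable(-1, 1))  # roots with positive real part: grows -> False
```

For the second-order case this reduces to the algebraic condition $A > 0$ and $B > 0$.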

### MIT 18.03 lecture 11

21 February 2009

This lecture covered a study of linearity in $y''$, $y'$, $y$, etc. in second-order homogeneous differential equations.  He addressed the questions

1. Why are $c_1y_1 + c_2y_2$ solutions of a second order linear ODE?
2. Are $c_1y_1 + c_2y_2$ all the solutions?

By answering these questions carefully and elegantly, the theory can be developed to also handle higher-order ODEs without any extra work.

To ensure the work is elegantly extensible, we first prove the superposition principle using operator notation.  Let $D$ be the differentiation operator that operates on (applies to) $y$, and let $L$ be a linear operator.  The definition of a linear operator is an operator that obeys the rules

$L(u_1 + u_2) = L(u_1) + L(u_2)$

$L(cu) = cL(u)$

(where $c$ is a constant).   Now we can observe that $D$ is linear because

$(u_1 + u_2)' = u_1' + u_2'$

$(cu)' = cu'$

Note that the second-order ODE $y'' + py' + qy = 0$ becomes

$D^2y + pDy + qy = 0$

$(D^2 + pD + q) y = 0$

$Ly = 0$

Imagine $L$ as a black box that takes in a function $u(x)$ and outputs a function $v(x)$.  Finding the solution to the homogeneous linear ODE is equivalent to asking: if we want $v(x)=0$, what $u(x)$ must we put into $L$?
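The black-box picture translates almost literally into code: $L$ takes a function and returns a function. A sketch of my own, with central differences standing in for exact derivatives and illustrative coefficients $p = 3$, $q = 2$:

```python
import math

def L(u, p, q):
    """Black box: take a function u(x), return v(x) = u'' + p*u' + q*u,
    with the derivatives approximated by central differences."""
    h = 1e-5
    def v(x):
        d1 = (u(x + h) - u(x - h)) / (2 * h)
        d2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
        return d2 + p * d1 + q * u(x)
    return v

# u(x) = e^(-x) solves y'' + 3y' + 2y = 0 (characteristic roots -1, -2),
# so feeding it into the box should output (approximately) zero.
v = L(lambda x: math.exp(-x), 3.0, 2.0)
print(abs(v(0.5)))  # ≈ 0, up to finite-difference error
```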

Proof of 2) starts by letting $y = c_1y_1 + c_2y_2$ and imposing initial conditions $y(x_0) = a$ and $y'(x_0) = b$; then

$c_1y_1(x_0) + c_2y_2(x_0) = a$

$c_1y_1'(x_0) + c_2y_2'(x_0) = b$

this set of two linear equations is solvable for $c_1$ and $c_2$ iff the Wronskian $W(y_1,y_2)$, evaluated at $x_0$, satisfies

$W(y_1,y_2) = \left| \begin{matrix} y_1 & y_2 \\ y_1' & y_2' \end{matrix} \right| \neq 0$

Note that if $y_2 = cy_1$ (the solutions are not linearly independent), then $W(y_1,y_2) = 0$; the reverse is not strictly true, since the Wronskian can vanish for other reasons.
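A concrete check (my example, not the lecture's): for $y'' - y = 0$ the solutions $e^x$ and $e^{-x}$ are independent, while $e^x$ and $2e^x$ are not, and the Wronskian determinant distinguishes the two cases. In Python:

```python
import math

def wronskian(y1, dy1, y2, dy2, x):
    # W(y1, y2)(x) = y1*y2' - y2*y1', the 2x2 determinant above
    return y1(x) * dy2(x) - y2(x) * dy1(x)

# Independent solutions of y'' - y = 0: e^x and e^(-x)
w_indep = wronskian(math.exp, math.exp,
                    lambda x: math.exp(-x), lambda x: -math.exp(-x), 0.3)
print(w_indep)  # -2 at every x, so never zero

# Dependent pair y2 = 2*y1: the Wronskian vanishes identically
w_dep = wronskian(math.exp, math.exp,
                  lambda x: 2 * math.exp(x), lambda x: 2 * math.exp(x), 0.3)
print(w_dep)  # 0.0
```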

Now, somehow, we are meant to find $Y_1$ and $Y_2$, the normalized solutions, which are better than other solutions because their initial values are nicer: $Y_1(x_0) = 1$, $Y_1'(x_0) = 0$ and $Y_2(x_0) = 0$, $Y_2'(x_0) = 1$.  Why are normalized solutions so good?  Because they allow us to instantly solve the initial conditions problem: with $y(x_0) = a$ and $y'(x_0) = b$, the solution can be read off as

$y = a Y_1 + b Y_2$
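For example, for $y'' + y = 0$ with $x_0 = 0$, the normalized solutions are $Y_1 = \cos x$ and $Y_2 = \sin x$, which makes the read-off easy to check numerically (a sketch of my own):

```python
import math

# Normalized solutions of y'' + y = 0 at x0 = 0:
#   Y1 = cos(x): Y1(0) = 1, Y1'(0) = 0
#   Y2 = sin(x): Y2(0) = 0, Y2'(0) = 1
a, b = 3.0, -2.0  # arbitrary initial values y(0) = a, y'(0) = b

def y(x):
    # Read the constants straight off the initial conditions: y = a*Y1 + b*Y2
    return a * math.cos(x) + b * math.sin(x)

h = 1e-6
print(y(0.0))                    # 3.0, matching y(0) = a
print((y(h) - y(-h)) / (2 * h))  # ≈ -2.0, matching y'(0) = b
```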