Why do spectrum plots look ugly?
Very often, when we compute the spectrum of a Hamiltonian over a finite grid of parameter values, we cannot resolve whether crossings are avoided or not. Furthermore, if we only compute a part of the spectrum using, e.g., a sparse diagonalization routine, we fail to find a proper sequence of levels.
Let us illustrate these two failure modes.
```python
# Just some initialization
%matplotlib inline
import numpy as np
from scipy import linalg
from scipy.optimize import linear_sum_assignment
import matplotlib
from matplotlib import pyplot

matplotlib.rcParams['figure.figsize'] = (8, 6)
```
```python
def ham(n):
    """A random matrix from a Gaussian Unitary Ensemble."""
    h = np.random.randn(n, n) + 1j*np.random.randn(n, n)
    h += h.T.conj()
    return h


def bad_ham(x, alpha1=.2, alpha2=.0001, n=10, seed=0):
    """A messy Hamiltonian with a bunch of crossings."""
    np.random.seed(seed)
    h1, h2, h3 = ham(n), ham(n), ham(n)
    a1, a2 = alpha1 * ham(2*n), alpha2 * ham(3*n) * (1 + 0.1*x)
    a2[:2*n, :2*n] += a1
    a2[:n, :n] += h1 * (1 - x)
    a2[n:2*n, n:2*n] += h2 * x
    a2[-n:, -n:] += h3 * (x - .5)
    return a2


xvals = np.linspace(0, 1)
data = [linalg.eigvalsh(bad_ham(x)) for x in xvals]
pyplot.plot(data)
pyplot.ylim(-2.5, 2.5);
```
This is mock data produced by a random Hamiltonian with a bunch of crossings. We know that some of these apparent avoided crossings are too tiny to resolve, and should instead be classified as real crossings.
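To quantify "too tiny," one can measure the smallest spacing between adjacent levels anywhere on the parameter grid and compare it to the grid resolution. Here is a minimal self-contained sketch on a fresh random interpolation (the `gue` helper and the grid size are our choices, not the `bad_ham` data above):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)

def gue(n):
    """A random matrix from the Gaussian Unitary Ensemble."""
    h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return h + h.T.conj()

# Spectrum of a linear interpolation between two random Hamiltonians,
# sampled on a coarse parameter grid.
h0, h1 = gue(10), gue(10)
spectra = np.array([linalg.eigvalsh((1 - x) * h0 + x * h1)
                    for x in np.linspace(0, 1, 51)])

# Smallest spacing between neighboring levels anywhere on the grid.
# Gaps much smaller than the typical level motion per grid step are
# indistinguishable from true crossings at this resolution.
min_gap = np.diff(spectra, axis=1).min()
print(min_gap)
```

When `min_gap` is comparable to numerical noise or far below the change of the levels between neighboring grid points, no plotting trick can tell an avoided crossing from a real one.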
Let's now simulate what would happen if we used sparse diagonalization to obtain a fixed number of eigenvalues closest to 0.
```python
truncated = [sorted(i[np.argsort(abs(i))[:13]]) for i in data]
pyplot.plot(truncated);
```
The ugly jumps are not real: they appear merely because some levels exit our window and new ones enter.
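These artifacts are easy to detect programmatically: where a level leaves the truncation window, the value changes between neighboring grid points by far more than anywhere else. A minimal sketch on synthetic data (the 10× median threshold is an arbitrary choice of ours):

```python
import numpy as np

# A synthetic "level": smooth, except for one fake jump where a
# different level enters the truncation window.
x = np.linspace(0, 1, 50)
level = np.sin(2 * np.pi * x)
level[25:] += 3.0  # spurious jump at the window boundary

# Flag points where the increment is far larger than the median step.
steps = np.abs(np.diff(level))
jumps = np.flatnonzero(steps > 10 * np.median(steps))
print(jumps)  # index where the spurious jump happens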
At this point, a desperate person who needs results right now replots the data as a scatter plot.
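Such a replot is a one-liner. A minimal sketch on stand-in data (the arrays here are placeholders with the same shape as `truncated`, not the actual spectra):

```python
import numpy as np
from matplotlib import pyplot

# Stand-in for the truncated spectra: any array of shape
# (n_x, n_levels) works here.
xvals = np.linspace(0, 1, 50)
levels = np.sort(np.random.randn(50, 13), axis=1)

# One dot per level per parameter value: with no lines drawn, there
# are no spurious connections between unrelated levels.
pyplot.plot(xvals, levels, 'k.', markersize=3)
pyplot.xlabel('x')
pyplot.ylabel('energy')
```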
This is OK, but wherever the points are dense, our eye connects them into vertical lines, making the plot harder to interpret.