Measuring the Ising Model#

Our goal in this section will be to start measuring properties of our Ising model. We will work with a larger system (\(27 \times 27\)) on this page. Energy and squared magnetization should always be reported per spin.

Minimal Background#

We can measure properties or observables of our system at a temperature \(T.\) An observable \(O(c)\) eats a configuration and spits out a number. The average value of an observable is

\[\langle O \rangle = \sum_c P(c)\, O(c)\]

Two observables we will consider are the energy \(E(c)\) (which we’ve already seen), whose average is

\[\langle E \rangle = \sum_c P(c) E(c) = \sum_c \frac{\exp[-\beta E(c)]}{Z} E(c) = \sum_E P(E) E \]

where \(Z=\sum_c \exp[-\beta E(c)]\). Notice that this means that \(P(E) = \sum_{c\,\mathrm{with\,energy}\,E} P(c)\). The second observable is the squared magnetization per spin, \(M^2(c)= \left(\frac{1}{N} \sum_i s_i\right)^2\), whose average is

\[\langle M^2 \rangle = \sum_c P(c) M^2(c) = \sum_{M^2} P(M^2) M^2\]

Notice that this means that \(P(M^2) = \sum_{c\,\mathrm{with\,squared\,magnetization}\,M^2} P(c)\).
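
For concreteness, here is a minimal sketch of both observables in python. It assumes the standard nearest-neighbor Hamiltonian from the previous pages; spins, energy_per_spin, and m2_per_spin are placeholder names, not required ones:

import numpy as np

# A minimal sketch of the two observables (assuming the standard
# nearest-neighbor Hamiltonian): `spins` is an L x L numpy array of
# +/-1 spins with periodic boundary conditions.
def energy_per_spin(spins, J=1.0):
    # np.roll pairs each spin with its neighbor below and to the right,
    # so every bond is counted exactly once.
    bonds = spins * np.roll(spins, 1, axis=0) + spins * np.roll(spins, 1, axis=1)
    return -J * np.sum(bonds) / spins.size

def m2_per_spin(spins):
    # M^2(c) = ((1/N) * sum_i s_i)^2
    return np.mean(spins) ** 2

spins = np.random.choice([-1, 1], size=(27, 27))   # one random configuration
print(energy_per_spin(spins), m2_per_spin(spins))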

Question to think about: Why do we study \(\langle M^2 \rangle\) and not \(\langle M \rangle\)?

When we measure an observable we can either compute its histogram or its expectation value.
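
A hypothetical example of the histogram route, assuming energies.txt is a placeholder file holding one energy-per-spin measurement per Monte Carlo sweep:

import numpy as np
import pylab

energies = np.loadtxt("energies.txt")          # placeholder data file
pylab.hist(energies, bins=50, density=True)    # normalized, so it estimates P(E)
pylab.xlabel("E per spin")
pylab.ylabel("P(E)")
pylab.show()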

Phases#

A phase of matter (like gas, liquid, solid) can be defined by an order parameter (i.e. an observable) which is zero in one phase and not zero in another phase. The 2D Ising model has two phases: the ferromagnet (essentially all spins aligned) and the paramagnet (essentially random spins). In the paramagnet, the squared magnetization is zero in the thermodynamic limit. In the ferromagnet, the squared magnetization is non-zero in the thermodynamic limit.

A note about errors#

You can think of your Monte Carlo code as a black box that gives correlated samples of the magnetization. To get an average, do exactly what you’re familiar with: sum the results then divide by the number of results. But further analysis involves subtleties that come with correlated data.

Consider estimating the error of the previously computed average. Given uncorrelated data, the error in the mean is typically quantified by the standard error, \(\sigma_E \equiv \sigma / \sqrt{N}\), where \(\sigma\) is the standard deviation (ask python!) and \(N\) is the number of datapoints. (Note: the standard deviation is a measure of the spread of the data and has nothing to do with the number of datapoints you have.) For correlated data, this is the wrong way to estimate the error. In the correlated case you must divide the number of datapoints by a quantity called the integrated autocorrelation time, \(\tau\), to get the effective number of uncorrelated points.

As an example, suppose you’ve generated 100 samples of the Ising model with an MCMC method which has \(\tau = 4\). A large value of \(\tau\) reflects the fact that successive configurations depend on where you started; here we need four moves to reach a configuration which is plausibly uncorrelated with the starting configuration. Thus, when analyzing the error of the mean, the number of uncorrelated samples is a mere \(100/4 = 25\).
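
A minimal sketch of this correction; data and tau are placeholders here (the stats.py script described below estimates \(\tau\) for you):

import numpy as np

data = np.loadtxt("energies.txt")      # correlated per-sweep measurements
tau = 4.0                              # integrated autocorrelation time
N = len(data)
naive_error = np.std(data) / np.sqrt(N)        # wrong for correlated data
error = np.std(data) / np.sqrt(N / tau)        # only N/tau uncorrelated points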

To compute the standard error properly scaled by the autocorrelation time \(\tau\), you can use the stats.py script as follows:

import stats
(mean, variance, error, autocorrelation) = stats.Stats(myNumpyArrayOfData)

The variable error is the standard error that properly takes into account the autocorrelation time.


Measuring#

For each \(\beta J \in \{0.0, 0.1, 0.2, \ldots, 1.0\}\) and for \(\beta=\infty\) (in your code you can just use a large \(\beta\)), compute the following (a sketch of the full measurement loop appears after this list):

  • a prototypical snapshot (you can build this with pylab.matshow())

  • a histogram of \(P(E)\)

  • the value of \(\langle E \rangle\) (and error bar)

  • a histogram of \(P(M^2)\)

  • the value of \(\langle M^2 \rangle\) (and error bar)
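
Here is a minimal sketch of the measurement loop referenced above. It assumes you already have a function sweep(spins, beta), a placeholder name for your own Metropolis update from the previous page, plus the energy_per_spin and m2_per_spin helpers sketched earlier:

import numpy as np
import pylab
import stats

L, J = 27, 1.0
for betaJ in np.arange(0.0, 1.05, 0.1):
    spins = np.random.choice([-1, 1], size=(L, L))
    for step in range(1000):               # equilibration sweeps (your choice)
        sweep(spins, betaJ / J)
    E_data, M2_data = [], []
    for step in range(10000):              # measurement sweeps
        sweep(spins, betaJ / J)
        E_data.append(energy_per_spin(spins, J))
        M2_data.append(m2_per_spin(spins))
    E_mean, E_var, E_err, E_tau = stats.Stats(np.array(E_data))
    M2_mean, M2_var, M2_err, M2_tau = stats.Stats(np.array(M2_data))
    print(betaJ, E_mean, E_err, M2_mean, M2_err)
    pylab.matshow(spins)                   # a prototypical snapshot
    pylab.savefig("snapshot_beta%.1f.png" % betaJ)

For \(\beta=\infty\), rerun the same loop with one large value such as \(\beta J = 10\); feed E_data and M2_data to pylab.hist for the \(P(E)\) and \(P(M^2)\) histograms.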

Testing

To test your results, it’s important to check limits that you are able to compute in some alternative way. In our case, it is reasonably straightforward to figure out the \(\beta=0\) and \(\beta=\infty\) results. These results can be figured out either analytically or by writing a short five-line python code (a sketch for \(\beta=0\) is below). For these two limits, you should compute the histograms and the averages \(\langle M^2\rangle\) and \(\langle E \rangle\).
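
For example, at \(\beta=0\) every configuration is equally likely, so you can generate exact samples directly with no Markov chain at all. A minimal sketch, using the energy_per_spin helper sketched earlier:

import numpy as np

# At beta = 0, uniform random configurations are exact Boltzmann samples.
L = 27
samples = [np.random.choice([-1, 1], size=(L, L)) for _ in range(10000)]
print(np.mean([energy_per_spin(s) for s in samples]))   # expect roughly 0
print(np.mean([np.mean(s) ** 2 for s in samples]))      # expect roughly 1/L**2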

Grading

Fill out the table in the document. Draw vertical lines on your respective histograms where \(\langle E \rangle\) and \(\langle M^2 \rangle\) are (use axvline()). On your \(\beta=\infty\) and \(\beta=0\) histograms, draw the ‘testing’ curves on top of your Monte Carlo curves. Also write in the table the ‘testing’ values you determined for \(\langle E \rangle\) and \(\langle M^2 \rangle\) at \(\beta = 0,\infty\).

At \(\beta=0\) (infinite temperature) every configuration is equally likely, but the values of the energy and squared magnetization are not equally likely (see your histograms). This is telling you that macroscopic observables behave differently from microscopic configurations: many configurations share the same value of \(E\) or \(M^2\), so some values are far more likely than others.

If you look at your snapshots, you should see two phases. One (at \(\beta=0\)) is the paramagnetic phase, where every configuration is equally likely. The other (at large \(\beta\)) is the ferromagnetic phase, where essentially every spin agrees.

Note about domain walls: you may see domain walls in your low-temperature snapshots. This happens because it’s very hard for the simulation to move from the all-up to the all-down configuration. This isn’t a critical problem (it happens in the real world too). You don’t have to fix it, but you could by adding smarter moves.

In between these two phases there is a phase transition; there is a critical point (one very specific temperature) where this happens.

You now want to use this information to compute the transition location. You should be able to roughly guess where the transition is just by looking at the prototypical snapshots. To do a more thorough job, we are going to use the magnetization squared. This is the order parameter for the ferromagnetic phase. When \(L=\infty\), the order parameter is non-zero in the ferromagnetic phase and zero in the paramagnetic phase.

It won’t be exactly this sharp in the finite system but it should still give you a good sense for the location of the transition.

Grading

Graph \(\langle M^2\rangle\) as a function of \(\beta\). This graph must have error bars on it. Where is the transition? Paste this graph into your document.
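
A minimal sketch of the required plot; betas, M2_means, and M2_errors are placeholders for the lists you filled during the measurement loop:

import pylab

pylab.errorbar(betas, M2_means, yerr=M2_errors, marker='o')
pylab.xlabel(r'$\beta J$')
pylab.ylabel(r'$\langle M^2 \rangle$')
pylab.show()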

Another way to identify the transition is to compute the specific heat \(C_v = \frac{\partial \langle E \rangle}{\partial T}\) and look for a peak.

It is easy for you to compute a curve of \(E\) vs. \(T\) (you probably want points at constant \(\Delta T\) and not constant \(\Delta \beta\)). We now have to think about how to get the derivative. There are two ways to go about this. One is to subtract nearest-neighbor points and divide by the spacing (something like (Energies[1:] - Energies[:-1]) / dT). This will work but requires the error bars on your energies to be pretty small.
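
A sketch of that first route, assuming Ts and Energies are numpy arrays measured on a uniform temperature grid (names are placeholders):

import numpy as np

Cv = (Energies[1:] - Energies[:-1]) / (Ts[1:] - Ts[:-1])
T_mid = 0.5 * (Ts[1:] + Ts[:-1])    # each estimate sits between grid points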

An alternative (and more robust) approach is to first interpolate your \(E\) vs. \(T\) curve. You can do this using scipy.interpolate.UnivariateSpline. Then you can use .derivative() to get the derivative.
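
A sketch of the spline route, assuming Ts, Energies, and E_errors are your measured arrays with Ts increasing (all names are placeholders):

import numpy as np
import pylab
from scipy.interpolate import UnivariateSpline

# Weighting by 1/error lets the spline smooth over the noise in the data.
spline = UnivariateSpline(Ts, Energies, w=1.0 / np.array(E_errors), k=3)
Cv = spline.derivative()                      # a new spline representing dE/dT
T_fine = np.linspace(Ts.min(), Ts.max(), 200)
pylab.plot(T_fine, Cv(T_fine))
pylab.xlabel("T")
pylab.ylabel("C_v")
pylab.show()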

Grading

Produce a curve of the specific heat vs. \(T\). Paste it into your document.

Extra Credit (5 points)

One thing that we haven’t measured is the free energy and entropy. In order to measure these, you need to do some additional thermodynamic integration. Look up how to go about this and implement it, computing the entropy and free energy of the Ising model as a function of temperature.
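
One standard route, sketched here only as a hint: since \(\langle E \rangle = -\partial \ln Z / \partial \beta\), you can integrate your measured total (not per-spin) energy down from \(\beta = 0\), where \(Z(0) = 2^N\) is known exactly. Taking \(k_B = 1\),

\[\ln Z(\beta) = N \ln 2 - \int_0^{\beta} \langle E \rangle(\beta')\, d\beta', \qquad F = -\frac{\ln Z(\beta)}{\beta}, \qquad S = \beta\left(\langle E \rangle - F\right)\]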