Please upload one PDF file to Gradescope (Entry code: ME62YG).
For this homework you will implement (on PrairieLearn) a Monte Carlo simulation to calculate the transition temperature of argon (as you did in homework 3, using molecular dynamics). You will be able to re-use many pieces of your code from earlier homeworks, so if you are concerned that they do not fully work, please make sure you get those working (by posting on Canvas and/or by attending office hours)!
When reusing the old MD code, set box length to \(L\)=4.0.
We will start by writing a Metropolis Monte Carlo code. Remember, in classical Metropolis Monte Carlo you will be sampling configurations with probability \(\exp(-\beta V)\), where \(V\) is the potential energy of the configuration and \(\beta= 1/kT\) is the inverse temperature. These probabilities should be familiar to you as the Boltzmann distribution for a canonical ensemble at thermodynamic equilibrium.
First, we should verify that we can evaluate the potential energy of a single particle in your system. You should already have code that helps you do this from your molecular dynamics simulation. Adapt it for this homework.
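Your MD energy routine probably looks something like the following minimal sketch, assuming the Lennard-Jones pair potential in reduced units and the minimum-image convention (the function and variable names here are illustrative, not required):

```python
import numpy as np

def particle_potential(positions, i, L):
    """Lennard-Jones potential energy of particle i interacting with
    all other particles, using the minimum-image convention in a
    cubic periodic box of side L (reduced units)."""
    disp = positions - positions[i]        # displacements to particle i
    disp -= L * np.round(disp / L)         # minimum-image convention
    r2 = np.sum(disp**2, axis=1)
    r2[i] = np.inf                         # exclude self-interaction
    inv_r6 = 1.0 / r2**3
    return np.sum(4.0 * (inv_r6**2 - inv_r6))
```

A quick sanity check: two particles separated by \(r = 2^{1/6}\) sit at the Lennard-Jones minimum, so the energy should be \(-1\) in reduced units.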
Metropolis Monte Carlo works by proposing particle moves at random, and probabilistically accepting or rejecting these moves according to the Metropolis criterion. We need to implement a function that updates the position of a single particle. You will update a particle position \(x\) by choosing the new position \(x'=x+\eta\) where \(\eta\) is chosen from a Gaussian distribution of mean 0 and variance \(\sigma^2\), i.e., the probability of proposing a move to position \(x'\) such that \(\eta = x' - x\) is \[\frac{1}{\sigma\sqrt{2\pi}} \exp{\left(-\eta^2/(2\sigma^2)\right)}.\]
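A minimal sketch of such an update, assuming NumPy and treating the position as a vector (the name update_position is a hypothetical stand-in for your own function):

```python
import numpy as np

def update_position(x, sigma, rng):
    """Propose a trial position x' = x + eta, where each component of
    eta is drawn from a Gaussian of mean 0 and variance sigma**2."""
    eta = rng.normal(0.0, sigma, size=np.shape(x))
    return x + eta
```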
To interpret \(T(x\rightarrow x')\) as the probability of drawing the displacement \((x' - x)\) from a Gaussian distribution centered on \(x\) with variance \(\sigma^2\), we write: \[T(x\rightarrow x') = \frac{1}{\sigma\sqrt{2\pi}}e^{-(x'-x)^2/(2\sigma^2)}.\] Note that \(T(x\rightarrow x') = T(x'\rightarrow x)\) since \((x'-x)^2 = (x-x')^2\). For many more sophisticated moves, this is not the case.
Now, let’s calculate the acceptance probability. Remember, in Metropolis Monte Carlo, the probability of accepting a move from position \(x\) to \(x'\) is \[A(x\rightarrow x') = \min\left(1, \frac{e^{-\beta V(x')}T(x'\rightarrow x)}{e^{-\beta V(x)}T(x\rightarrow x')}\right)\] where \(V(x)\) is the potential energy of the system in configuration \(x\), and \(T(x\rightarrow x')\) is the probability of proposing a move from \(x\) to \(x'\).
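For the symmetric Gaussian move the \(T\) ratio cancels, so the criterion reduces to comparing Boltzmann factors. A minimal sketch (names are illustrative):

```python
import numpy as np

def accept_move(V_old, V_new, beta, rng):
    """Metropolis acceptance for a symmetric proposal: accept with
    probability min(1, exp(-beta * (V_new - V_old)))."""
    dV = V_new - V_old
    if dV <= 0.0:
        return True                  # downhill (or equal) moves always accepted
    return bool(rng.random() < np.exp(-beta * dV))
```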
Remember the steps of a basic Metropolis Monte Carlo:

1. Pick a particle at random.
2. Propose a new position for it with UpdatePosition().
3. Compute the acceptance probability \(A(x\rightarrow x')\) from the change in potential energy.
4. Accept or reject the move with that probability; if rejected, keep the old position.

Before we set up some runs to generate data, we want to add functionality to our code that calculates acceptance ratios (i.e., measure the total number of accepted moves divided by the total number of attempted moves). Knowing the acceptance ratio of your MC simulation is particularly important for judging whether you are sampling configuration space efficiently. You can tune your simulation by adjusting the value of \(\sigma\), which sets the length scale for your attempted moves and thus determines the acceptance ratio of these moves.
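Putting the pieces together, a self-contained sketch of a Metropolis loop that tracks the acceptance ratio might look like this (a Lennard-Jones energy helper is inlined so the example runs on its own; all names are illustrative, not required):

```python
import numpy as np

def particle_energy(positions, i, L):
    """Lennard-Jones energy of particle i (minimum image, reduced units)."""
    disp = positions - positions[i]
    disp -= L * np.round(disp / L)
    r2 = np.sum(disp**2, axis=1)
    r2[i] = np.inf                       # exclude self-interaction
    inv_r6 = 1.0 / r2**3
    return np.sum(4.0 * (inv_r6**2 - inv_r6))

def run_mc(positions, L, beta, sigma, n_steps, rng):
    """Single-particle Metropolis moves; returns the acceptance ratio."""
    N = len(positions)
    accepted = 0
    for _ in range(n_steps):
        i = rng.integers(N)
        old = positions[i].copy()
        V_old = particle_energy(positions, i, L)
        positions[i] = old + rng.normal(0.0, sigma, size=3)
        dV = particle_energy(positions, i, L) - V_old
        if dV <= 0.0 or rng.random() < np.exp(-beta * dV):
            accepted += 1                # accepted: keep the new position
        else:
            positions[i] = old           # rejected: restore the old position
    return accepted / n_steps
```

A commonly quoted rule of thumb is to tune \(\sigma\) so the acceptance ratio lands somewhere around 30–50%, though the efficient range depends on the system.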
For all runs here, unless explicitly noted otherwise, use the following simulation parameters:
L = 4.0 # box length
rho = 1.0 # number density
N = 64 # particle number
M = 48.0 # mass
T = 2.0 # Temperature; beta = 0.5; for canonical ensemble.
Take the statement of detailed balance, \[P(x)T(x\rightarrow x')A(x\rightarrow x') = P(x')T(x'\rightarrow x)A(x'\rightarrow x),\] which says that the flux of particles from point \(x\) to \(x'\) is equal to the flux in the reverse direction, i.e., the net flux is zero. Here, \(P(x)\) is the probability of the system being in configuration \(x\), \(T(x\rightarrow x')\) is the probability of proposing a move from \(x\) to \(x'\), and \(A(x\rightarrow x')\) is the probability of accepting the proposed move.
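As a quick check, substituting the Metropolis choice of \(A\) into detailed balance shows it is satisfied. Suppose (without loss of generality) that \(P(x')T(x'\rightarrow x) \le P(x)T(x\rightarrow x')\); then \[A(x\rightarrow x') = \frac{P(x')T(x'\rightarrow x)}{P(x)T(x\rightarrow x')}, \qquad A(x'\rightarrow x) = 1,\] so both sides of the detailed balance equation reduce to \(P(x')T(x'\rightarrow x)\), and the net flux between \(x\) and \(x'\) vanishes as required.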
Above, we implemented a Gaussian displacement move that proposed a displacement drawn from a Gaussian distribution with mean 0 and specified variance \(\sigma^2\). In this case the forward and reverse sampling probabilities canceled out, i.e. \[\frac{T(x\rightarrow x')}{T(x'\rightarrow x)} = 1\] which you derived. Often it is non-trivial to find the reverse sampling probability, \(T(x'\rightarrow x)\), given a sampling probability \(T(x\rightarrow x')\). We will study one case here.
The motivation of a “force-bias type” move is to propose a displacement that is not random, but rather is biased in the direction of the force acting on the particle to be moved. To implement this idea, we will make the same Gaussian displacement described above, but we will shift the center of our Gaussian in the direction of the force. The center of the Gaussian will be shifted by \[x_{\text{adjust}} = \frac{F(x)}{2m}(\delta t)^2.\] Thus the new move will choose a new position \(x' = x + \eta + x_{\text{adjust}}\) where \(\eta\) is chosen from a Gaussian distribution with probability \[\frac{1}{\sigma\sqrt{2\pi}}e^{-\eta^2/(2\sigma^2)}.\] This is called “smart Monte Carlo,” and has an asymmetry in the transition probability which must be reflected in the Metropolis acceptance criterion.
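As a sketch of how the asymmetric proposal enters the acceptance criterion, the following computes the logarithm of the \(T\) ratio for the force-bias move and uses it in the Metropolis test. All names and the packaging of the forces are illustrative; here \(c = (\delta t)^2/(2m)\), and the Gaussian normalization constants cancel in the ratio so only the exponents are needed:

```python
import numpy as np

def log_T(x_from, x_to, F_from, c, sigma):
    """Log of the (unnormalized) proposal density T(x_from -> x_to) for
    the force-bias move: the Gaussian is centered at x_from + c*F(x_from),
    with c = dt**2 / (2*m)."""
    eta = x_to - x_from - c * F_from
    return -np.sum(eta**2) / (2.0 * sigma**2)

def smart_accept(V_old, V_new, x_old, x_new, F_old, F_new, beta, c, sigma, rng):
    """Metropolis criterion including the asymmetric T ratio: accept with
    probability min(1, e^{-beta dV} * T(x'->x) / T(x->x'))."""
    log_ratio = (-beta * (V_new - V_old)
                 + log_T(x_new, x_old, F_new, c, sigma)   # reverse proposal
                 - log_T(x_old, x_new, F_old, c, sigma))  # forward proposal
    return log_ratio >= 0.0 or bool(rng.random() < np.exp(log_ratio))
```

When the forces are zero the two \(\log T\) terms cancel and this reduces to the ordinary symmetric Metropolis criterion, which is a useful consistency check for your own implementation.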