How to Construct the MD or MC Potentials



Simulation results are only as good as the potentials (forces) that are used. It is not easy to obtain accurate forces because their origin is quantum mechanics, to date a computationally intractable problem, though there has been some recent progress. The most common approach is semi-empirical: a functional form is guessed and the parameters in the form are deduced from theory or fitted to data.

How do we define the potential to be used in MD?

The Born-Oppenheimer Potential

The fundamental description is the non-relativistic Schroedinger equation for both the ionic and electronic degrees of freedom. But since the electrons are much lighter, they respond more or less instantly to the ionic motion. Also, typical temperatures are very cold compared to the electronic energy scale (1 Hartree, which sets the scale of the electronic energy, corresponds to about 316,000 K), so the electrons are in or close to their ground state. In the Born-Oppenheimer (BO) approximation we fix the ions (and neglect the nuclear kinetic energy) and solve (or try to solve) for the electronic ground-state energy. Corrections to BO are usually very small because they go as the ratio of the electron to nuclear mass, which is smaller than 0.0001 for all atoms except hydrogen and helium.

Unfortunately, solving the quantum mechanical problem is not easy. We will describe some attempts when we discuss quantum Monte Carlo. Other methods include density functional theory in the local density approximation (LDA) (and improvements such as the GGA), and quantum chemistry methods such as Hartree-Fock theory and the configuration interaction method.

One breakthrough occurred in the mid-1980s, when Car and Parrinello (CP) showed how one could simultaneously solve the LDA equations for the electronic energy and the dynamical equations for the nuclei (see Phys. Rev. Lett. 55, 2471 (1985)). This means one does not have to tabulate the BO energy but can just compute it as the ions move. We will describe this method later in the course, but the CP approach is very much slower than using fitted potentials (by about 4 orders of magnitude). The largest CP studies have several hundred electrons or atoms, while the largest MD simulations have millions of particles.

Unless we use the CP approach, we have to assume some form for the potential. The assumption of a potential is a big approximation because the potential energy surface is a function of 3N variables for N ions. Symmetries such as translational or rotational degrees of freedom only get rid of a few of them. What usually has to be done is to assume the many-body potential is a sum of pair potentials, perhaps plus three-body potentials, etc. This also assumes the molecule is rigid and interacts weakly with its environment, so that its internal electronic and vibronic states do not couple to surrounding molecules. A pair potential is a function of a single variable; there is no way it can exactly represent a 3N-dimensional function.
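As a minimal illustration of this approximation, the sketch below replaces the full 3N-dimensional potential surface by a sum of pair terms. The function name and toy pair potential are ours, chosen only for illustration; they are not from any particular MD code.

```python
import numpy as np

def pair_sum_energy(positions, v2):
    """Approximate the 3N-dimensional potential energy surface by a
    sum over pairs: V(r_1, ..., r_N) ~ sum_{i<j} v2(|r_i - r_j|)."""
    n = len(positions)
    return sum(v2(np.linalg.norm(positions[i] - positions[j]))
               for i in range(n) for j in range(i + 1, n))

# Three collinear atoms and a toy pair "potential" v2(r) = r:
pos = [np.array([0.0, 0.0, 0.0]),
       np.array([1.0, 0.0, 0.0]),
       np.array([2.0, 0.0, 0.0])]
print(pair_sum_energy(pos, lambda r: r))  # pair separations 1, 1, 2 -> 4.0
```

Note that however clever the choice of v2, all environmental (many-body) information is discarded at this step.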

 

Where Do You Get Potentials?

Luckily for computer simulation, basic intermolecular structure depends overwhelmingly on two features of the interaction: the shape of the molecules and their long-range interactions due to Coulomb forces. These are not difficult to determine from experiment.

So where do we get the pair potentials from? The basic input is theoretical, as we will discuss. The other input is empirical, drawn from experimental data.

One should keep in mind the difference between interpolation and extrapolation. If experimental information is used to determine the potential, it will be good in a nearby region of phase space and for similar properties. But that potential could be quite bad for unrelated properties or outside the region that has been fitted. For example, if the potential has been determined using bulk properties, there is no reason to think it will be good for computing surface properties, because the local environment at a surface is not as symmetrical as in the bulk.
EXAMPLE: Suppose we consider a "bonding" type of atomic interaction in a solid phase. What properties must the analytic form of the potential function minimally reproduce in order to mimic "real" behavior?

A small list of properties that we might wish to reproduce could include, for example, the cohesive energy, the equilibrium lattice constant, and the elastic moduli.

Then, for example, does the simulation reproduce the melting temperature, Tm?

 

Three Common Situations: Noble Gases, Metals, Covalently-bonded Molecules

Let us now discuss some of the theoretical ideas that go into determining potentials. (See also possible Criteria, and More Sophisticated Choices, for Potentials.) Looking at the periodic table, there are three common situations.

Noble Gases

These are the non-bonding rare gases (e.g., Kr and Ar), where the electronic shells are closed. This means they are spherically symmetric, with rather weak, but long-ranged, van der Waals type interactions. They were the first systems to be simulated, and pair potentials work best for them. For this case, an oft-used pair potential is the Lennard-Jones (6-12) potential.

At large distances (r >> r0, the equilibrium distance) one can understand the interaction by considering how two oscillators would interact, giving an r^-6 attraction, where the coefficient is determined by the atomic polarizability; this attraction arises from correlations between the electron clouds surrounding the atoms and gives the van der Waals (dipole-dipole) interaction. On the other hand, at short distances (r << r0) the potentials are strongly repulsive because of the Pauli exclusion principle (electrons are fermions). Lennard-Jones assumed a stronger inverse power law, conveniently r^-12. One should not think of the Lennard-Jones (L-J) potential as anything more fundamental than this.
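A minimal sketch of the L-J form in Python (reduced units; the function and parameter names are conventional, not tied to any particular code):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """L-J 6-12 potential: 4*epsilon*[(sigma/r)**12 - (sigma/r)**6].
    The r**-6 term is the van der Waals attraction; the r**-12 term
    models the short-range Pauli repulsion."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The minimum lies at r0 = 2**(1/6) * sigma, with depth -epsilon:
r0 = 2.0 ** (1.0 / 6.0)
print(lennard_jones(r0))  # ~ -1.0, i.e., -epsilon
```

Note that epsilon and sigma are the only adjustable parameters, which is why the form cannot be expected to be accurate to better than a few percent.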

Even for argon it is only accurate to 10% or so, and three-body potentials can be significant at the 10% level (more in the case of polarizable species). An example of the difference between the simple L-J 6-12 potential and one that has been constructed using a large quantity of experimental data is on pg. 8 of A&T. Potentials that are fit to bulk data are really effective potentials and can differ somewhat from the true 2-body potential that would be determined from gas-phase data, because they include corrections for three- and higher-body interactions; in effect, a correction to the two-body potential is obtained by averaging over the third spatial coordinate:

(1/V) ∫ dr_k V_3(r_i, r_j, r_k) = V_2^(3)(r_ij),   so that   V ≈ (1/2) Σ_{i≠j} V_2^eff(r_ij).

Hence, effective potentials can depend on density and, to a lesser extent, on temperature.

Metals: Other problems with 2-body interactions

How about metals? Metals are characterized by a rare-gas core plus a number of valence electrons. Pair potentials do not work terribly well, because the valence electrons are delocalized. If one does use a pair potential, it must be softer than the Lennard-Jones form, for example a Morse potential (a sum of exponentials). The Morse potential is often fit by matching the equilibrium lattice constant, the bulk modulus, and the cohesive energy. This is the basis for the so-called Rose equation of state and is the form used in Effective Medium Theory (see below).
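A minimal sketch of the Morse form in Python (reduced units; parameter names are ours and illustrative only):

```python
import math

def morse(r, D=1.0, a=1.0, r0=1.0):
    """Morse pair potential: D*(exp(-2*a*(r - r0)) - 2*exp(-a*(r - r0))).
    D sets the well depth (a cohesive-energy-like scale), r0 the
    equilibrium separation, and a the curvature at the minimum
    (related to the bulk modulus in a solid)."""
    x = math.exp(-a * (r - r0))
    return D * (x * x - 2.0 * x)

print(morse(1.0))  # -> -1.0, the well depth -D at r = r0
```

The exponential repulsive wall is much softer than the L-J r^-12 wall, which is the point of using this form for metals.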

The major problem with pair potentials for solid metals is that they have no environmental dependence, so a bulk atom is treated too much like a surface atom. For metals, the strength of the individual bonds decreases as the local environment becomes more crowded, the behavior expected from Pauli's exclusion principle. In contrast, with 2-body potentials the "strength" of the individual bonds does not depend on environment. Thus, capturing the environmental dependence of bonding is crucial for metallic systems.

Failure: An example of how 2-body interactions like L-J fail is the relaxation of the surface layer of atoms in a metallic system, like Cu. Pair potentials predict an expansion of the surface layer, whereas most metal surfaces contract inward to increase the charge density and, therefore, the metallic bonding.

In the formula page we show the potential used in the Embedded Atom Method (EAM) for metals. The basic idea is that metallic bonding involves ions in an electron sea, so the total potential energy is the sum of direct ion-ion interactions (e.g., to keep the ion cores from overlapping) and the interaction between each ion and the electron sea. One assumes that there is a local electronic charge density contributed by the neighboring atoms and that the embedding energy is some function of this density. This latter term is then a true many-body potential. The EAM works well for spherically symmetric atoms (close-packed solids) such as Cu, Al, and Pb.
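The structure of the EAM energy can be sketched as follows. The toy functional forms at the bottom (inverse-power core repulsion, exponential atomic density, square-root embedding) are chosen only for illustration and are not fitted to any metal:

```python
import math
import numpy as np

def eam_energy(positions, pair, rho_atom, embed):
    """EAM total energy:
       E = sum_{i<j} pair(r_ij) + sum_i embed(rho_i),
    where rho_i = sum_{j != i} rho_atom(r_ij) is the host electron
    density at atom i.  The embedding term is a true many-body term:
    it depends on the whole local environment of each atom."""
    n = len(positions)
    e_pair, rho = 0.0, np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            e_pair += pair(r)         # direct ion-ion repulsion
            rho[i] += rho_atom(r)     # density atom j contributes at i
            rho[j] += rho_atom(r)
    return e_pair + sum(embed(d) for d in rho)

# Toy forms (NOT fitted to any real metal):
pair = lambda r: r ** -12
rho_atom = lambda r: math.exp(-r)
embed = lambda rho: -math.sqrt(rho)   # square-root embedding

dimer = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
print(eam_energy(dimer, pair, rho_atom, embed))  # 1 - 2*exp(-0.5)
```

Because embed() is nonlinear, the energy of a bond automatically weakens as an atom's neighborhood becomes more crowded, which is exactly the environmental dependence pair potentials lack.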

Related methods and concepts, also derived from quantum mechanics, include Effective Medium Theory and Finnis-Sinclair-type potentials.

These methods have very different parameterizations and functions. In addition, while they usually appear composed of 2-body-type potentials, all really are cleverly disguised MANY-body potentials.

Covalently-bonded Molecules

When molecules form bonds it is best to put that in directly. Silicon has been studied extensively since it is so important for the semiconductor industry. Silicon forms bonds with the nearest 4 atoms with a local tetrahedron structure. This is very unlikely to ever come out of a pair potential so it has to be put in.

In the formula page we give the well-known potential function for silicon: the Stillinger-Weber potential. It contains an explicit term that forces the angle between nearby triples of atoms to stay close to 109 degrees, and it has been adjusted to give roughly the lattice constant, the cohesive energy, the melting point, and the liquid structure. Unfortunately, it does not work well for the other polytypes found under pressure, since no changes in bond coordination or bond bending are allowed: NO TRANSFERABILITY. A review article, Balamane et al., Phys. Rev. B 46, 2250 (1992), describes how well various potentials work for silicon. The authors conclude that the potentials, even when carefully tuned, do not really describe all aspects of pure silicon terribly well. That is why the Car-Parrinello technique is viewed as an advance (even though it is slow): one gets a better (but still not perfect) potential without having to adjust lots of parameters.
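The angular piece of the Stillinger-Weber three-body term can be sketched as below. The radial cutoff factors of the full SW form are omitted for brevity, so this is a simplified illustration, not the complete published potential:

```python
import numpy as np

def sw_angular(ri, rj, rk, lam=21.0):
    """Angular part of the Stillinger-Weber three-body term:
       h ~ lam * (cos(theta_jik) + 1/3)**2,
    where theta_jik is the angle at atom i between bonds i-j and i-k.
    It vanishes at the tetrahedral angle, cos(theta) = -1/3
    (about 109.47 degrees), and penalizes every other bond angle.
    (Radial cutoff factors of the full SW form are omitted here.)"""
    u, v = rj - ri, rk - ri
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return lam * (cos_t + 1.0 / 3.0) ** 2

# Two ideal tetrahedral bond directions from a central atom:
ri = np.zeros(3)
rj = np.array([1.0, 1.0, 1.0])
rk = np.array([1.0, -1.0, -1.0])
print(sw_angular(ri, rj, rk))  # ~ 0: tetrahedral angle, no penalty
```

This is how the tetrahedral local structure of silicon is "put in by hand"; it can never emerge from a potential that depends only on pair distances.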

For other systems there are extensive codes which handle real biomolecular systems (GROMOS, CHARMM, AMBER, ...). They use an experimental database and put in specific terms for bond lengths, bond angles, torsion angles, etc. They are really quite complicated because they have to distinguish between many local bonding arrangements. Even so, one should never fully trust the accuracy of such empirical potentials. The errors can still be quite large even if a lot of data has been put in. This is because the potential surface is so highly dimensional, and there are so many different possible combinations of atoms, that experiment can only hope to sample a very few of them. (See A&T pgs 6-24.)



©1998-2001 D.D. Johnson, 2000 D. Ceperley