Chinese Journal of Chemical Physics  2019, Vol. 32 Issue (3): 277-286

Advances in Enhanced Sampling Molecular Dynamics Simulations for Biomolecules
An-hui Wang$^{a,b}$, Zhi-chao Zhang$^{b}$, Guo-hui Li$^{a}$
Dated: Received on May 10, 2019; Accepted on June 5, 2019
a. Laboratory of Molecular Modeling and Design, State Key Laboratory of Molecular Reaction Dynamics, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023, China;
b. State Key Laboratory of Fine Chemicals, School of Chemistry, Dalian University of Technology, Dalian 116024, China
Abstract: Molecular dynamics simulation has emerged as a powerful computational tool for studying biomolecules, as it can provide atomic insights into the conformational transitions involved in biological functions. However, when applied to complex biological macromolecules, the conformational sampling ability of conventional molecular dynamics is limited by rugged free energy landscapes, leading to inherent timescale gaps between molecular dynamics simulations and real biological processes. To address this issue, several advanced enhanced sampling methods have been proposed to improve the sampling efficiency of molecular dynamics. In this review, the theoretical basis, practical applications, and recent improvements of both constrained and unconstrained enhanced sampling methods are summarized. Furthermore, combined strategies that take advantage of multiple enhanced sampling methods are also briefly discussed.
Key words: Enhanced sampling    Umbrella sampling    Replica exchange    Metadynamics    Accelerated molecular dynamics    

With recent advances in graphics processing units (GPUs) and high-performance computing, molecular dynamics (MD) simulation has become a powerful computational tool for investigating biomolecules, with applications in various biological processes such as protein folding, enzyme reactions, ligand binding/unbinding, and functional conformational transitions [1-5]. These simulations provide atomic insight into the temporal evolution of molecular systems and thus complement experiments by verifying existing mechanisms, revealing missing details, and even proposing new hypotheses that can in turn be examined by further experiments [6-10].

Despite these successes, MD simulations are still limited by the efficiency of conformational sampling [11, 12]. MD has long been viewed as a general sampling method for capturing the details of conformational transitions in biomolecules [13]. However, complicated biomolecular systems typically have rugged energy landscapes, with numerous metastable states separated by high energy barriers, so that in straightforward MD simulations the system becomes trapped in an energy well that is difficult to escape (see FIG. 1 for details) [14]. In these situations, the biological processes of interest usually take place over timescales of milliseconds or beyond, making it hard for MD simulations to cross the energy barriers and visit new states, let alone explore the conformational dynamics involved in biological functions.

FIG. 1 The free energy landscape along the conformational transition coordinate defines the timescales of the biological process. Figure adapted with permission from Ref.[15], copyright 2007 Springer Nature.

To facilitate the crossing of energy barriers and extend the sampling timescales of conventional MD (cMD) simulations, various enhanced sampling algorithms have been proposed. Depending on whether explicit constraints are applied to the collective variables (namely the reaction coordinates), these sampling methods can be roughly divided into two groups: constrained enhanced sampling and unconstrained enhanced sampling. In this review, we present a brief overview of several such sampling methods, including their theoretical basis, practical applications, and recent improvements for biomolecules.


To improve the efficiency of sampling conformations of interest, a bias potential defined as a function of selected collective variables (CVs) can be added to the simulated system to restrain it within a desired region of CV space. Generally, the selected CVs provide a coarse-grained description of the system and are expected to describe the transitions between all metastable states in the full phase space. The choice of CVs ranges from simple atomic positions to more complicated quantities such as native contacts or the secondary structure of the system. More details on selecting CVs are discussed in recent contributions [16-18]. There are various ways to add a bias potential to the system to achieve efficient conformational sampling, including (but not limited to) umbrella sampling, metadynamics, and steered molecular dynamics. In this part, advances in these methods are reported.

A. Umbrella sampling

Owing to its fast convergence and accurate estimation of the potential of mean force (PMF), umbrella sampling, introduced by Torrie and co-workers [19], is one of the most historically important enhanced sampling approaches. As seen in FIG. 2, an umbrella sampling simulation is usually performed over a series of equally spaced windows that connect the two endpoints along the selected CVs. For each window $i$, a simple harmonic potential $\delta U_i(x)$, depending on the CV $\theta(x)$ and a reference point $\theta_i$ in CV space, is added to the system's Hamiltonian:

FIG. 2 Separation of the simulating windows along CVs in umbrella sampling. The endpoints A and B are defined as two minima on the two-dimensional free energy landscape. Figure adapted with permission from Ref.[24], copyright 2011 John Wiley and Sons.
$ \delta U_i (x)=\frac{k}{2}\left(\theta(x)-\theta_i\right)^2 $ (1)

Once all window simulations are finished, they can be combined using the Weighted Histogram Analysis Method [20] (WHAM), the Umbrella Integration method [21] (UI), or the multistate Bennett acceptance ratio method [22, 23] (MBAR) to recover the unbiased free energy landscape.
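As an illustration, the WHAM self-consistency equations can be sketched in a few lines of Python. This is a toy solver operating on idealized (noise-free) histograms, not a production implementation; the bin layout, window force constant, and temperature below are arbitrary choices for demonstration.

```python
import numpy as np

def wham(hists, bias, kT, n_iter=5000, tol=1e-10):
    """Minimal WHAM iteration (illustrative sketch).

    hists: (n_windows, n_bins) histogram counts from each biased window
    bias:  (n_windows, n_bins) bias energy U_i evaluated at each bin center
    Returns the unbiased bin probabilities p(x) and per-window shifts f_i.
    """
    N = hists.sum(axis=1)                 # number of samples per window
    f = np.zeros(len(hists))              # window free-energy shifts
    for _ in range(n_iter):
        # unbiased estimate: p(x) = sum_i n_i(x) / sum_i N_i exp((f_i - U_i(x))/kT)
        denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
        p = hists.sum(axis=0) / denom
        # self-consistency: f_i = -kT ln sum_x p(x) exp(-U_i(x)/kT)
        f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
        f_new -= f_new[0]
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return p / p.sum(), f
```

On histograms generated exactly from a known distribution under harmonic biases, the iteration recovers the input distribution; with real (noisy) samples it converges to the maximum-likelihood estimate.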

Although the global free energy profile can be reconstructed by combining the individual windows, this approach has an inherent drawback: the sampling of each independent window is strictly confined around its predefined window center. To address this issue, several adaptive umbrella sampling (AUS) schemes have been proposed that adjust the initial set of window centers during the simulation [25-27]. Dasgupta and co-workers [28] developed the virtual-system adaptive umbrella sampling (VAUS), in which the real system is coupled with an arbitrary virtual system composed of virtual states. This technique was applied to a system of two Ace-(Ala)$_5$-Nme peptides and successfully covered a broader conformational space including bound, strongly bound, and unbound states. Zacharias and co-workers [29] combined Hamiltonian replica-exchange with umbrella sampling to improve the convergence of PMF calculations by allowing configurations to exchange between different simulation windows. Its application to a DNA-ligand binding system showed both improved convergence and more accurate binding free energies compared to standard umbrella sampling.

Benefiting from its fast convergence and easy extensibility, umbrella sampling has achieved great success in simulations of biomolecules [30-33]. For example, in a recent contribution using umbrella sampling to restrain the motions of Na$^+$ and K$^+$ at a series of positions along the gA channel [34], the authors successfully estimated the free energy profiles of ions permeating through the channel with the AMOEBA polarizable force field and predicted conductances in excellent agreement with experiment.

B. Metadynamics

Metadynamics (MetaD) was first developed by Parrinello and co-workers [35] to improve the sampling of rare events by imposing a history-dependent bias potential on the system that discourages revisiting already explored regions of the free energy landscape (see FIG. 3 for details). The bias potential $\Delta V(\theta(x), t)$ is expressed as a sum of Gaussians deposited in the already visited CV space:

FIG. 3 Schematic representation of the metadynamics protocol. In metadynamics, the simulated system (shown as a purple dot) can jump out of the energy well following the Gaussians deposits (a). The deposits are continued until all energy wells are filled with Gaussians (b). Then the free energy profiles can be reconstructed by reversing the sum of deposited bias (c). (Figure adapted with permission from Ref.[37], copyright 2014 Royal Society of Chemistry).
$ \Delta V(\theta(x), t)=\sum\limits_{t' < t} h(t')\, \exp\left(-\frac{\left\|\theta(x(t))-\theta(x(t'))\right\|^2}{2\omega^2}\right) $ (2)

where $h(t')$ and $\omega$ are the height and width of each Gaussian, respectively. The previous time points $t'$=$\tau$, $2\tau$, $\cdots$ are defined by the time interval $\tau$ between successive Gaussian deposits. The height and width of the Gaussians set the energy deposition rate and the resolution of the reconstructed free energy surface features, respectively. Additionally, the bias potential asymptotically converges to the negative of the free energy, so the free energy profile can be reconstructed by taking the negative of the deposited bias [36]:

$ \lim\limits_{t\rightarrow \infty} \Delta V(\theta(x), t)=-F(\theta(x))+C $ (3)

where $C$ is a constant. The deposited Gaussian bias potential leads to the high sampling efficiency of MetaD in which the energy barriers between separate conformational states can be easily overcome.

Meanwhile, oscillations of the deposited bias potential can make convergence difficult to achieve. To address this problem, the well-tempered metadynamics [38, 39] (WTMetaD) was developed, in which the heights of the deposited Gaussians are rescaled on the fly. This technique ensures smoother convergence of the bias potential. Another advantage of this approach is that the slower bias growth reduces the dependence on the initial settings of the Gaussian height and width. On the basis of WTMetaD, Parrinello and co-workers [40] proposed an improved algorithm in which the widths of the Gaussians adapt to the time evolution of the CVs or the geometrical coordinates. This approach significantly improves the convergence speed, especially when the CVs are nonlinear functions of the atomic coordinates. More recently, parallel bias metadynamics [41, 42] (PBMetaD) was developed to accelerate the crossing of the high barriers that characterize multidimensional free energy profiles by simultaneously applying multiple low-dimensional bias potentials in parallel.
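The WTMetaD height-rescaling rule can be written compactly. The sketch below assumes energies in consistent units and uses the conventional bias factor $\gamma=(T+\Delta T)/T$:

```python
import numpy as np

def wt_hill_height(h0, bias_here, kT, gamma):
    """Well-tempered hill height: h = h0 * exp(-V_b(s,t) / (kB * dT)),
    where dT = (gamma - 1) * T is the fictitious temperature boost and
    V_b(s,t) is the bias already deposited at the current CV value."""
    delta_T_kB = (gamma - 1.0) * kT       # kB * dT in energy units
    return h0 * np.exp(-bias_here / delta_T_kB)
```

Hills therefore shrink exponentially wherever bias has already accumulated, so the total bias growth slows down and the bias converges (up to a known scaling factor) instead of oscillating.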

Not only thermodynamic properties but also long-timescale kinetics of the studied systems can be accurately predicted at a fixed computational cost through the newly proposed infrequent metadynamics [43] (InMetaD) and frequency-adaptive metadynamics [44] (FaMetaD) strategies. Furthermore, apart from these adaptations of the deposited bias potential, methods aimed at obtaining better convergence by selecting more suitable CVs have also been discussed in recent articles [16-18].

MetaD is one of the most attractive enhanced sampling methods, and its popularity continues to grow. It has been successfully applied to a wide range of biomolecular systems, including evaluation of drug unbinding kinetics [45-47], prediction of protein-ligand binding poses [48], refinement of protein-protein complexes [49], proton-coupled electron transfer [50], and the design of selective cyclic pentapeptides [51].

C. Steered molecular dynamics

Steered molecular dynamics [52] (SMD) is inspired by atomic force microscopy [53] (AFM) experiments. In SMD simulations, a harmonic external force $f$ with a predefined direction $\boldsymbol{n}$ is applied to guide the motion of specific atoms:

$ f=-\frac{1}{2} k\nabla\left[vt-(\boldsymbol{r}-\boldsymbol{r}_0)\cdot \boldsymbol{n}\right]^2 $ (4)

where $k$ is the harmonic (spring) constant, $v$ is the pulling velocity, and $\boldsymbol{r}$ and $\boldsymbol{r}_0$ are the current and initial positions of the pulled atoms. It should be noted that SMD is a nonequilibrium simulation method, and the two parameters, spring constant and pulling velocity, crucially affect the simulation results. During the simulation, the selected atoms move smoothly along the predefined direction, enabling enhanced sampling of the conformational transition. The movement of the pulled atoms generally causes steric hindrance when they collide with adjacent atoms. To optimize the pulling direction, the minimal steric hindrance [54] (MSH) method can be used to quantify the steric hindrance contributed by each atom for a given pulling direction; the optimal direction is then approximated as the one with the minimal steric hindrance.
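A minimal one-dimensional sketch of constant-velocity pulling follows; in 1D, Eq. (4) reduces to a harmonic spring whose anchor moves at speed $v$. The harmonic "binding site", the pulling speed, and the unit friction coefficient are illustrative assumptions.

```python
# 1D constant-velocity SMD sketch: a particle bound in a harmonic site
# (V = 0.5 * k_site * x^2) is dragged by a spring whose anchor moves at
# speed v; overdamped dynamics with unit friction coefficient.
k_pull, k_site, v, dt = 10.0, 5.0, 0.5, 1e-3
x, work = 0.0, 0.0
for step in range(8000):
    t = step * dt
    f_pull = k_pull * (v * t - x)     # spring force toward the moving anchor
    work += f_pull * v * dt           # external work: dW = f_pull * v * dt
    x += (f_pull - k_site * x) * dt   # overdamped Euler update
```

The accumulated nonequilibrium work exceeds the energy reversibly stored in the site potential; this dissipation is why PMF estimates from SMD pulling rely on work-averaging relations such as the Jarzynski equality.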

The SMD framework is widely adopted in studies of ligand dissociation from binding pockets, in which the PMF along the dissociation pathway is evaluated from the external force or work applied during the pulling process [55-59]. For example, SMD simulations have been performed to compare the binding characteristics of five structurally related flavonoids to the protein FabZ [60]. As in single-molecule pulling experiments, the force required to pull the compound out of its binding pocket was calculated. Interestingly, the force profiles derived from SMD could distinguish compounds with stronger binding affinity from those with weaker affinity. Based on these simulation results, a new compound was designed and subsequently confirmed to have biological activity by experiments. This example clearly demonstrates the role of SMD in computer-aided drug design. Apart from the binding/unbinding of ligands, SMD has also been applied extensively to identify the sequence dependence of hydrophobic interactions in peptide systems and the self-assembly of peptide amphiphile (PA) nanofibers [61, 62].


Although constrained enhanced sampling techniques can notably extend the timescales of MD simulations, they require predefined CVs that precisely describe the actual biological process, and they often suffer from hidden energy barriers. In practice, suitable CVs are usually not straightforward to determine, especially when the transition process under study is complicated. Consider FIG. 4 as an example: there is one basin along the $x$ direction and two basins along the $y$ direction. If only $x$ is chosen as the CV, convergence along $y$ cannot be guaranteed even when $x$ is well sampled. Much valuable information may therefore be missed when the transition along $y$ is crucial for the system. To address this "hidden barrier" issue, unconstrained enhanced sampling methods, including replica-exchange MD, accelerated MD, random acceleration MD, and the adaptive biasing force method, are also widely adopted for investigating complicated conformational transitions. Compared to constrained enhanced sampling, these approaches do not need predefined CVs and thus require no a priori knowledge of the biomolecules to be studied. In this part, a brief overview of these unconstrained enhanced sampling methods is provided.

FIG. 4 An example of a hidden energy barrier along the $y$ direction.
A. Replica-exchange molecular dynamics

Replica-exchange molecular dynamics [63] (REMD), also known as parallel tempering, originates from efforts to improve the Monte Carlo framework [64]. In REMD, several independent simulations at different temperatures are performed in parallel, and configurations are exchanged between neighboring replicas depending on their temperatures and potential energies. During a simulation, exchanges are attempted at a regular frequency and accepted only when the Metropolis criterion is satisfied. The exploration of conformational space is thus enhanced by transferring configurations that are only accessible at higher temperatures into the replicas at lower (e.g., room) temperature.
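The Metropolis criterion for a temperature-REMD swap can be sketched as a small helper, with $\beta=1/k_\textrm{B}T$ in consistent units:

```python
import numpy as np

def swap_probability(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance for exchanging the configurations of replicas
    i and j in temperature REMD: min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    return min(1.0, float(np.exp((beta_i - beta_j) * (E_i - E_j))))
```

A swap that hands the lower-energy configuration to the colder replica is always accepted, while the reverse move is accepted with exponentially decreasing probability, which preserves the Boltzmann distribution at every temperature.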

It should be noted that the number of replicas and the temperature range needed for REMD are often positively correlated with the size of the simulated system, implying a much higher computational cost for complicated biomolecules. Fortunately, many efforts have been made to further improve the efficiency of standard REMD since its initial application to the folding of a pentapeptide. In a more general form of REMD, the Hamiltonians instead of the configurations are exchanged, and the reversible folding/unfolding behavior of two medium-sized proteins has been observed with a limited number of replicas [65-67]. Furthermore, in a more efficient approach named replica-exchange with solute scaling [68, 69] (REST2), all replicas are run at the same temperature while the solute-solute and solute-solvent interaction energies are scaled by different factors. This method is believed to be more suitable for large protein-water systems, especially when substantial conformational energy changes are involved in the proteins. Other variants of traditional REMD include surface-tension replica-exchange [70], constant-pH replica-exchange [71], $\lambda$-REMD [72], replica-exchange umbrella sampling [73], multiplexed-REMD [74], and ab initio replica-exchange [75].

Since REMD and its adaptations enable unconstrained enhanced sampling without requiring specific CVs, these methods have found many applications, including studies of protein folding [3, 76], generation of protein structural ensembles [77, 78], protein-protein recognition mechanisms [79], peptide-nanomaterial interactions [80, 81], binding/unbinding kinetics [82], RNA structural dynamics [83], and polymer assembly [84].

B. Accelerated molecular dynamics and adaptations

Accelerated molecular dynamics (aMD), introduced by the McCammon group [85, 86], is another promising enhanced sampling method that requires no predefined CVs. Hundreds-of-nanosecond aMD simulations have been reported to capture millisecond-timescale conformational dynamics for a variety of biomolecules [87-90]. For example, in a benchmark study of bovine pancreatic trypsin inhibitor (BPTI, the first protein simulated with the MD technique) [89], all five long-lived structures sampled in a 1 ms unbiased cMD trajectory were captured by a 500 ns aMD simulation, while a 500 ns cMD simulation remained trapped in a local conformational space around the crystal structure (see FIG. 5 for details).

FIG. 5 The projection of the first two principal components (PC1, PC2) for 1 ms cMD (a), 500 ns cMD (b), and 500 ns aMD (c) simulations. The crystal structure of BPTI is labeled as red diamond, and the long-lived structures sampled in 1 ms cMD are labeled as triangles. Figure reprinted with permission from Ref.[89], copyright 2012 American Chemical Society.

aMD works by adding a boost potential to the minima of the potential energy surface to enable the crossing of energy barriers between different conformational states. The boost potential $\Delta V(x)$ is triggered when the system's potential energy drops below a threshold energy $E$, and its magnitude is determined by the following equation:

$ \Delta V(x)= \begin{cases} \dfrac{\left(E - V(x)\right)^2}{\alpha + \left(E - V(x)\right)}, & V(x) < E \\[2mm] 0, & V(x) \ge E \end{cases} $ (5)

where $V(x)$ is the potential energy of state $x$ and $\alpha$ is a predefined acceleration factor that controls the flatness of the final potential energy surface. In general, the boost potential can be applied to the torsional terms of the potential, to the whole potential, or to both [89, 90]. To facilitate the sampling of a specific conformational transition, such as the motion of flexible loops, the boost potential can also be applied only to the loop regions of the biomolecule [91].
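Eq. (5) can be implemented directly. The sketch below (with arbitrary threshold and acceleration factor) also verifies that the modified potential $V(x)+\Delta V(x)$ remains monotonic in $V(x)$, so the ordering of states is preserved while basins below the threshold are flattened:

```python
import numpy as np

def amd_boost(V, E, alpha):
    """aMD boost potential of Eq. (5): raises basins with V(x) < E while
    leaving regions with V(x) >= E untouched. Vectorized over energies."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, (E - V) ** 2 / (alpha + (E - V)), 0.0)
```

Smaller $\alpha$ gives a flatter modified surface (stronger acceleration); as $V(x)\rightarrow E$ the boost smoothly vanishes.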

After an aMD-biased simulation, the original free energy profile can be recovered by reweighting each conformational state with the Boltzmann factor $\textrm{e}^{\Delta V(x)/k_\textrm{B} T}$ of the boost potential imposed on it. Because states with lower boost potential contribute little to the Boltzmann factors, the free energy reweighting is typically dominated by a few states with high boost potential (conformational states with low potential energy). In real simulations, as seen in FIG. 6, aMD simulations with higher acceleration levels (smaller $\alpha$ values) sample only very few low-potential conformations. This leads to observable energetic fluctuations and statistical noise, especially as the number of degrees of freedom of the studied biomolecule increases [90, 92, 93].

FIG. 6 The distribution of potential energy obtained from simulations of cMD, aMD, and IaMD. Simulations of aMD with higher enhanced sampling levels (smaller $\alpha$ values) sample few low-potential conformations, while IaMD provides a uniform sampling in broader potential regions. Figure reprinted with permission from Ref.[93], copyright 2018 American Chemical Society.

To reduce the statistical noise, several enhancements to both the reweighting procedure and the boost potential have been proposed. For the reweighting procedure, the exponential term in the Boltzmann factors can be replaced by a summation of the Maclaurin series expansion of the boost potential, which is shown to produce PMF profiles with less noise than those obtained from the exponential average [89]. In a recent contribution, the reweighting accuracy was further improved by a cumulant expansion to second order [94].
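The two reweighting routes can be compared on synthetic data. The sketch below assumes Gaussian-distributed boost potentials, for which the second-order cumulant expansion of the log-average is exact:

```python
import numpy as np

def log_avg_exponential(dV, kT):
    """Direct exponential reweighting: ln< exp(dV/kT) > over sampled frames."""
    b = np.asarray(dV) / kT
    m = b.max()
    return m + np.log(np.mean(np.exp(b - m)))   # log-sum-exp for stability

def log_avg_cumulant2(dV, kT):
    """Second-order cumulant approximation: <b> + Var(b)/2 with b = dV/kT."""
    b = np.asarray(dV) / kT
    return b.mean() + b.var() / 2.0
```

In practice the cumulant estimator is far less noisy than the exponential average when the boost potential spans many $k_\textrm{B}T$, because the latter is dominated by a handful of large-$\Delta V$ frames.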

The statistical noise can also be reduced by altering the form of the boost potential. For example, in Gaussian accelerated molecular dynamics [95] (GaMD), the original boost potential is modified into a harmonic function of the difference between the threshold energy $E$ and the potential energy of the current state $V(x)$. GaMD can incorporate statistical information about the potential energy, such as its mean, minimum, maximum, and standard deviation, and thus is more efficient in identifying protein-folding and ligand-binding pathways [95, 96].
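The GaMD boost is a one-line change from Eq. (5). This is a sketch only; in the published method the force constant $k$ and the threshold $E$ are chosen from the collected potential-energy statistics, a step omitted here:

```python
import numpy as np

def gamd_boost(V, E, k):
    """GaMD boost: a harmonic function of (E - V(x)) below the threshold E."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)
```

Because the boost is harmonic in $E-V(x)$, its distribution tends to be near-Gaussian, which is what makes low-order cumulant reweighting accurate for GaMD.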

More recently, the integrated accelerated molecular dynamics (IaMD), which integrates several aMD boost potentials with varying acceleration levels into a single boost potential, was proposed by Dr. Li's group [93]. Compared to the original aMD, IaMD provides uniform sampling over broader energy regions, including the low-energy regions that are inaccessible to standard aMD (see FIG. 6 for details). This method has been demonstrated to markedly improve the sampling efficiency while maintaining the reweighting accuracy in simulations of protein conformational transitions [91, 93].

There are some other enhancements to the standard aMD scheme, including the windowed aMD [97], windowed replica-exchange aMD [98], adaptive aMD [99], accelerated adaptive integration method [100], and ab initio aMD [101]. More detailed discussions of the development of aMD can also be found in other review articles [92, 102].

Unlike REMD, aMD needs only a single replica of the system, and many successful applications have been achieved with this method, covering a broad area including the binding/unbinding of ligands [103], conformational transitions of peptides and proteins [5, 104], and the adsorption of proteins on nanomaterials [105, 106].

C. Random acceleration molecular dynamics

Random acceleration molecular dynamics [107] (RaMD) is another effective enhanced sampling scheme for characterizing ligand unbinding from buried binding pockets of proteins. In contrast to SMD, RaMD works by imposing a small expulsion force with a random direction on the center of mass of the ligand, enhancing the ligand's ability to explore the protein interior. During the simulation, the force direction is varied stochastically whenever movement along the current direction causes conspicuous steric hindrance. In general, the expulsion force acts on the timescale of nanoseconds, allowing further insight into the structure-affinity relationship (SAR) of druggable molecules.

Compared with SMD, RaMD requires no predefined ligand dissociation pathway and thus enables the detection of alternative ligand release pathways. However, careful selection of the force magnitude and of the threshold distance used to decide whether the force direction should be changed is still needed [108]. An improper force may lead to unexpected conformational changes in the protein structure, and specific parameters should be set for different biomolecules [109-111].
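The direction-update logic can be sketched as a small helper. This is illustrative only: in real RaMD runs the displacement check interval, the threshold distance, and the force magnitude are all system-specific parameters.

```python
import numpy as np

def random_unit_vector(rng):
    """Uniformly distributed direction on the unit sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def maybe_redirect(direction, displacement, r_min, rng):
    """RaMD-style direction control: keep the current expulsion direction if
    the ligand advanced at least r_min over the check interval; otherwise
    (steric hindrance assumed) draw a fresh random direction."""
    if displacement >= r_min:
        return direction
    return random_unit_vector(rng)
```

In a full RaMD run this check fires every few hundred MD steps, so the ligand performs a directed random walk that backs out of dead ends until it finds an egress route.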

Since RaMD does not require a predefined expulsion direction, the dissociation channels derived by this method remain unbiased [112]. The RaMD approach has been widely used to explore complicated ligand unbinding mechanisms, providing detailed insights into the ingress/egress routes of drug molecules [113-116].

D. Adaptive biasing force method

The adaptive biasing force (ABF) method, proposed by Pohorille and co-workers [117, 118], is based on thermodynamic integration. In ABF, insufficient sampling is viewed as the result of the impediment posed by the mean force along the CV (i.e., the negative gradient of the free energy along the CV). During an ABF simulation, the instantaneous force is calculated and accumulated on the fly, and an external biasing force is adaptively applied to counteract the running estimate of the mean force, encouraging the crossing of energy barriers. The free energy can then be derived by integrating the mean force along the CV. The estimated mean force gradually converges to the equilibrium average force, and so does the free energy. The procedure of calculating the mean force and applying the biasing force is shown in FIG. 7. To improve the feasibility of ABF when high-dimensional CVs are involved in practical applications, several extensions of this method, including the eABF [119, 120], gABF [121], and meta-eABF [122], have been proposed.
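The accumulate-and-counteract loop can be illustrated on a one-dimensional toy model. This sketch assumes the CV is the coordinate $x$ itself (so the instantaneous force along the CV is simply $-\textrm{d}V/\textrm{d}x$), overdamped dynamics, and simple clamped boundaries; the bin count and temperature are arbitrary choices.

```python
import numpy as np

# 1D ABF sketch on the double well V(x) = (x^2 - 1)^2.
rng = np.random.default_rng(2)
nbins, lo, hi = 40, -1.6, 1.6
f_sum, counts = np.zeros(nbins), np.zeros(nbins)
x, dt, kT = -1.0, 5e-4, 0.3

def dVdx(x):
    return 4.0 * x * (x ** 2 - 1.0)

for step in range(60000):
    b = min(nbins - 1, max(0, int((x - lo) / (hi - lo) * nbins)))
    f_sum[b] += -dVdx(x)              # accumulate the instantaneous force
    counts[b] += 1
    abf = -f_sum[b] / counts[b]       # bias counteracts the running mean force
    x += (-dVdx(x) + abf) * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    x = min(max(x, lo), hi)           # confine x to the binned range
```

Once the per-bin mean-force estimates converge, motion along the CV becomes nearly diffusive, and integrating the stored mean force over the bins recovers the free energy profile.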

FIG. 7 Schematic representation of the ABF method. The purple dots represent different states along the CV. The blue and red arrows represent instantaneous mean forces and external biasing forces, respectively.

Compared with many other enhanced sampling methods, ABF is physically intuitive and requires little prior information about the free energy profile. As a consequence, ABF has been widely applied to protein-ligand binding [1, 123], host-guest systems [124], conformational transitions of peptides and proteins [4, 125, 126], and the membrane permeability of small molecules [127, 128].


The discussion above highlights the features of various popular enhanced sampling schemes. For practical purposes, some of the aforementioned algorithms can be combined in a single simulation to further improve computational efficiency. For example, REMD improves the sampling of all degrees of freedom using multiple replicas, but the number of replicas grows rapidly with system size, whereas MetaD enhances sampling only along predefined CVs. To take advantage of both replica-exchange and MetaD, the two schemes are combined in bias-exchange metadynamics [129] (BE-MetaD). In BE-MetaD, multiple metadynamics simulations are performed in parallel replicas at the same temperature, with bias potentials imposed on different CVs in different replicas. Configuration exchanges between replicas are attempted at regular intervals according to the bias potentials. Ultimately, the set of one-dimensional free energy surfaces recovered from the individual replicas can be combined to construct the multi-dimensional free energy surface of the studied system [130]. Owing to the simultaneous multi-dimensional biasing acting on the same system, BE-MetaD markedly enhances the power of MD in sampling complicated conformational transitions, and its efficiency has been validated by numerous applications [131-134].

There are also some other successful approaches in which diverse enhanced sampling methods are coupled to explore broader conformational space including replica-exchange metadynamics [135], meta-eABF [122], replica-exchange US [29], replica-exchange with collective variable tempering [136], replica averaged metadynamics [137], and MetaITS [138]. More details about their theoretical backgrounds, development process, and latest applications are reviewed in other articles [30, 83, 139, 140].


With the growing popularity of MD simulations, many programs are under active development, some of which, such as Amber [141], Gromacs [142], NAMD [143], and OpenMM [144], are predominantly used for biomolecules. These well-established programs have already implemented several enhanced sampling methods. For example, Amber supports US, REMD, SMD, aMD, GaMD, RAMD, replica-exchange US, targeted MD [145], and self-guided Langevin dynamics [146]. For simulations with complicated reaction coordinates, the Colvars module in NAMD is an excellent choice for defining CVs and imposing bias potentials during the MD run. Furthermore, PLUMED [147], a flexible and modular plugin, can be incorporated into most MD programs to analyze or bias MD simulations on the fly.
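For concreteness, a minimal PLUMED input for well-tempered metadynamics on a single torsional CV might look like the following. This is illustrative only: the atom indices are placeholders that must be adapted to the actual topology, and the hill parameters are typical but arbitrary values.

```
# plumed.dat -- well-tempered metadynamics on one backbone dihedral
# (atom indices below are placeholders for the system at hand)
phi: TORSION ATOMS=5,7,9,15
metad: METAD ARG=phi SIGMA=0.35 HEIGHT=1.2 PACE=500 BIASFACTOR=10 TEMP=300
PRINT ARG=phi,metad.bias FILE=COLVAR STRIDE=100
```

The same input file works unchanged with any of the PLUMED-patched engines, which is the main appeal of such plugin-based implementations.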


In this review, we summarized the current enhanced sampling methods used to extend the timescales of standard MD simulations. Both constrained and unconstrained enhanced sampling methods were briefly overviewed, with a focus on their origins, development, and latest improvements. Although the efficiency of these methods has been repeatedly demonstrated by the numerous studies mentioned earlier in this review, challenges remain in bridging the timescale gaps between MD simulations and complicated biological processes. More and more biomolecular assemblies with millions of atoms have been resolved with the recent developments of cryo-EM [148-151], demanding longer timescales and greater computational cost in MD simulations. However, the intrinsic nature of conventional enhanced sampling methods, such as the demand for multiple replicas in REMD, further increases this computational expense. As a consequence, standard enhanced sampling methods should be coupled with other advanced techniques, such as multiple time-stepping methods [152] and coarse-grained models [153-155], to further improve their sampling efficiency for extremely large-scale biomolecular systems. On the other hand, the precision of force fields is another critical factor affecting the accuracy of MD simulations. The full advantage of enhanced sampling can only be realized when these methods are used in conjunction with high-precision force fields [156]. As reported by Shaw and co-workers, none of the state-of-the-art force fields can simultaneously provide accurate descriptions of both folded and disordered proteins [157]. This points to the urgent need to develop higher-precision force fields alongside enhanced sampling methods.
Therefore, with the help of other advanced techniques and higher-precision force fields, MD simulations with enhanced sampling methods will continue to provide more useful and reliable information for the computational modeling of biomolecules.


This work was supported by the National Natural Science Foundation of China (No.31700647, No.21625302, and No.21573217).

M. C. C. J. C. Ebert, J. G. Espinola, G. Lamoureux, and J. N. Pelletier, ACS Catal. 7, 6786(2017). DOI:10.1021/acscatal.7b02634
A. Wang, T. Song, Z. Wang, Y. Liu, Y. Fan, Y. Zhang, and Z. Zhang, Chem. Biol. Drug. Des. 87, 551(2016). DOI:10.1111/cbdd.2016.87.issue-4
M. Kamiya, and Y. Sugita, J. Chem. Phys. 149, 072304(2018). DOI:10.1063/1.5016222
C. Ng, P. N. Premnath, and O. Guvench, J. Comput. Chem. 38, 1438(2017). DOI:10.1002/jcc.v38.16
G. Palermo, Y. Miao, R. C. Walker, M. Jinek, and J. A. McCammon, Proc. Natl. Acad. Sci. USA 114, 7260(2017). DOI:10.1073/pnas.1707645114
P. Li, X. Jia, M. Wang, and Y. Mei, Chin. J. Chem. Phys. 30, 789(2017). DOI:10.1063/1674-0068/30/cjcp1711204
W. Lin, S. Tan, S. Zhou, X. Zheng, W. Wu, and K. Zheng, Chin. J. Chem. Phys. 30, 429(2017). DOI:10.1063/1674-0068/30/cjcp1704066
X. Pang, and J. Liu, Chin. J. Chem. Phys. 27, 29(2014). DOI:10.1063/1674-0068/27/01/29-38
J. Peng, W. Wang, Y. Yu, H. Gu, and X. Huang, Chin. J. Chem. Phys. 31, 404(2018). DOI:10.1063/1674-0068/31/cjcp1806147
Z. Yu, Y. Gao, X. Wang, G. Zhou, S. Zeng, and J. Chen, Chin. J. Chem. Phys. 31, 85(2018). DOI:10.1063/1674-0068/31/cjcp1707138
L. Cao, C. Zhang, D. Zhang, H. Chu, Y. Zhang, and G. Li, Acta Phys. Chim. Sin. 33, 1354(2017).
L. Cao, H. Ren, J. Miao, W. Guo, Y. Li, and G. Li, Front. Chem. Sci. Eng. 10, 203(2016). DOI:10.1007/s11705-016-1572-4
C. J. Chen, Y. Z. Huang, and Y. Xiao, J. Biomol. Struct. Dyn. 31, 206(2013). DOI:10.1080/07391102.2012.698244
J. N. Onuchic, Z. Luthey-Schulten, and P. G. Wolynes, Annu. Rev. Phys. Chem. 48, 545(1997). DOI:10.1146/annurev.physchem.48.1.545
K. Henzler-Wildman, and D. Kern, Nature 450, 964(2007). DOI:10.1038/nature06522
M. M. Sultan, and V. S. Pande, J. Chem. Theory Comput. 13, 2440(2017). DOI:10.1021/acs.jctc.7b00182
J. McCarty, and M. Parrinello, J. Chem. Phys. 147, 204109(2017). DOI:10.1063/1.4998598
M. A. Rohrdanz, W. Zheng, and C. Clementi, Annu. Rev. Phys. Chem. 64, 295(2013). DOI:10.1146/annurev-physchem-040412-110006
G. M. Torrie, and J. P. Valleau, J. Comput. Phys. 23, 187(1977).
S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, and P. A. Kollman, J. Comput. Chem. 13, 1011(1992).
J. Kastner, and W. Thiel, J. Chem. Phys. 123, 144104(2005). DOI:10.1063/1.2052648
M. R. Shirts, E. Bair, G. Hooker, and V. S. Pande, Phys. Rev. Lett. 91, 140601(2003). DOI:10.1103/PhysRevLett.91.140601
M. R. Shirts, and J. D. Chodera, J. Chem. Phys. 129, 124105(2008). DOI:10.1063/1.2978177
J. Kästner, WIRES. Comput. Mol. Sci. 1, 932(2011). DOI:10.1002/wcms.66
J. Higo, B. Dasgupta, T. Mashimo, K. Kasahara, Y. Fukunishi, and H. Nakamura, J. Comput. Chem. 36, 1489(2015). DOI:10.1002/jcc.v36.20
J. Higo, K. Kasahara, B. Dasgupta, and H. Nakamura, J. Chem. Phys. 146, 044104(2017). DOI:10.1063/1.4974087
S. Park, and W. Im, J. Chem. Theory Comput. 10, 2719(2014). DOI:10.1021/ct500504g
B. Dasgupta, H. Nakamura, and J. Higo, Chem. Phys. Lett. 662, 327(2016). DOI:10.1016/j.cplett.2016.09.059
F. Zeller, and M. Zacharias, J. Chem. Theory Comput. 10, 703(2014). DOI:10.1021/ct400689h
M. De Vivo, M. Masetti, G. Bottegoni, and A. Cavalli, J. Med. Chem. 59, 4035(2016). DOI:10.1021/acs.jmedchem.5b01684
D. Jiang, T. M. Gamal El-Din, C. Ing, P. Lu, R. Pomes, N. Zheng, and W. A. Catterall, Nature 557, 590(2018). DOI:10.1038/s41586-018-0120-4
H. Meshkin, and F. Zhu, J. Chem. Theory Comput. 13, 2086(2017). DOI:10.1021/acs.jctc.6b01171
E. Papaleo, G. Saladino, M. Lambrughi, K. Lindorff-Larsen, F. L. Gervasio, and R. Nussinov, Chem. Rev. 116, 6391(2016). DOI:10.1021/acs.chemrev.5b00623
X. Peng, Y. Zhang, H. Chu, Y. Li, D. Zhang, L. Cao, and G. Li, J. Chem. Theory Comput. 12, 2973(2016). DOI:10.1021/acs.jctc.6b00128
A. Laio, and M. Parrinello, Proc. Natl. Acad. Sci. USA 99, 12562(2002). DOI:10.1073/pnas.202427399
G. Bussi, A. Laio, and M. Parrinello, Phys. Rev. Lett. 96, 090601(2006). DOI:10.1103/PhysRevLett.96.090601
V. Van Speybroeck, K. De Wispelaere, J. Van der Mynsbrugge, M. Vandichel, K. Hemelsoet, and M. Waroquier, Chem. Soc. Rev. 43, 7326(2014). DOI:10.1039/C4CS00146J
A. Barducci, G. Bussi, and M. Parrinello, Phys. Rev. Lett. 100, 020603(2008). DOI:10.1103/PhysRevLett.100.020603
J. F. Dama, M. Parrinello, and G. A. Voth, Phys. Rev. Lett. 112, 240602(2014). DOI:10.1103/PhysRevLett.112.240602
D. Branduardi, G. Bussi, and M. Parrinello, J. Chem. Theory Comput. 8, 2247(2012). DOI:10.1021/ct3002464
A. Prakash, C. D. Fu, M. Bonomi, and J. Pfaendtner, J. Chem. Theory Comput. 14, 4985(2018). DOI:10.1021/acs.jctc.8b00448
J. Pfaendtner, and M. Bonomi, J. Chem. Theory Comput. 11, 5062(2015). DOI:10.1021/acs.jctc.5b00846
P. Tiwary, and M. Parrinello, Phys. Rev. Lett. 111, 230602(2013). DOI:10.1103/PhysRevLett.111.230602
Y. Wang, O. Valsson, P. Tiwary, M. Parrinello, and K. Lindorff-Larsen, J. Chem. Phys. 149, 072309(2018). DOI:10.1063/1.5024679
H. Sun, Y. Li, M. Shen, D. Li, Y. Kang, and T. Hou, J. Chem. Inf. Model 57, 1895(2017). DOI:10.1021/acs.jcim.7b00075
N. Saleh, P. Ibrahim, G. Saladino, F. L. Gervasio, and T. Clark, J. Chem. Inf. Model 57, 1210(2017). DOI:10.1021/acs.jcim.6b00772
R. Casasnovas, V. Limongelli, P. Tiwary, P. Carloni, and M. Parrinello, J. Am. Chem. Soc. 139, 4780(2017). DOI:10.1021/jacs.6b12950
A. J. Clark, P. Tiwary, K. Borrelli, S. Feng, E. B. Miller, R. Abel, R. A. Friesner, and B. J. Berne, J. Chem. Theory Comput. 12, 2990(2016). DOI:10.1021/acs.jctc.6b00201
E. Pfeiffenberger, and P. A. Bates, Proteins 87, 12(2019). DOI:10.1002/prot.v87.1
N. Gillet, M. Elstner, and T. Kubar, J. Chem. Phys. 149, 072328(2018). DOI:10.1063/1.5027100
F. S. Di Leva, S. Tomassi, S. Di Maro, F. Reichart, J. Notni, A. Dangi, U. K. Marelli, D. Brancaccio, F. Merlino, H. J. Wester, E. Novellino, H. Kessler, and L. Marinelli, Angew. Chem. Int. Ed. Engl. 57, 14645(2018). DOI:10.1002/anie.201803250
D. Kosztin, S. Izrailev, and K. Schulten, Biophys. J. 76, 188(1999). DOI:10.1016/S0006-3495(99)77188-2
E. L. Florin, V. T. Moy, and H. E. Gaub, Science 264, 415(1994). DOI:10.1126/science.8153628
Q. V. Vuong, T. T. Nguyen, and M. S. Li, J. Chem. Inf. Model 55, 2731(2015). DOI:10.1021/acs.jcim.5b00386
N. Okimoto, A. Suenaga, and M. Taiji, J. Biomol. Struct. Dyn. 35, 3221(2017). DOI:10.1080/07391102.2016.1251851
J. Zhu, Y. Lv, X. Han, D. Xu, and W. Han, Sci. Rep. 7, 12439(2017). DOI:10.1038/s41598-017-12031-0
D. S. Moore, C. Brines, H. Jewhurst, J. P. Dalton, and I. G. Tikhonova, PLoS Comput. Biol. 14, e1006525(2018). DOI:10.1371/journal.pcbi.1006525
S. Kalyaanamoorthy, and Y. P. Chen, Prog. Biophys. Mol. Biol. 114, 123(2014). DOI:10.1016/j.pbiomolbio.2013.06.004
X. Hu, S. Hu, J. Wang, Y. Dong, L. Zhang, and Y. Dong, J. Biomol. Struct. Dyn. 36, 3819(2018). DOI:10.1080/07391102.2017.1401002
F. Colizzi, R. Perozzo, L. Scapozza, M. Recanatini, and A. Cavalli, J. Am. Chem. Soc. 132, 7361(2010). DOI:10.1021/ja100259r
P. Stock, J. I. Monroe, T. Utzig, D. J. Smith, M. S. Shell, and M. Valtiner, ACS Nano 11, 2586(2017). DOI:10.1021/acsnano.6b06360
T. Yu, O. S. Lee, and G. C. Schatz, J. Phys. Chem. A 117, 7453(2013). DOI:10.1021/jp401508w
Y. Sugita, and Y. Okamoto, Chem. Phys. Lett. 314, 141(1999). DOI:10.1016/S0009-2614(99)01123-9
R. H. Swendsen, and J. S. Wang, Phys. Rev. Lett. 57, 2607(1986). DOI:10.1103/PhysRevLett.57.2607
D. R. Roe, C. Bergonzo, and T. E. Cheatham 3rd, J. Phys. Chem. B 118, 3543(2014).
M. Meli, and G. Colombo, Int. J. Mol. Sci. 14, 12157(2013). DOI:10.3390/ijms140612157
J. Curuksu, and M. Zacharias, J. Chem. Phys. 130, 104110(2009). DOI:10.1063/1.3086832
P. Liu, B. Kim, R. A. Friesner, and B. J. Berne, Proc. Natl. Acad. Sci. USA 102, 13749(2005). DOI:10.1073/pnas.0506346102
L. Wang, R. A. Friesner, and B. J. Berne, J. Phys. Chem. B 115, 9431(2011). DOI:10.1021/jp204407d
T. Mori, J. Jung, and Y. Sugita, J. Chem. Theory Comput. 9, 5629(2013). DOI:10.1021/ct400445k
Y. L. Meng, and A. E. Roitberg, J. Chem. Theory Comput. 6, 1401(2010). DOI:10.1021/ct900676b
W. Jiang, M. Hodoscek, and B. Roux, J. Chem. Theory Comput. 5, 2583(2009). DOI:10.1021/ct900223z
K. Murata, Y. Sugita, and Y. Okamoto, J. Theor. Comput. Chem. 4, 411(2005). DOI:10.1142/S0219633605001611
Y. M. Rhee, and V. S. Pande, Biophys. J. 84, 775(2003). DOI:10.1016/S0006-3495(03)74897-8
Y. Ishikawa, Y. Sugita, T. Nishikawa, and Y. Okamoto, Chem. Phys. Lett. 333, 199(2001). DOI:10.1016/S0009-2614(00)01342-7
L. S. Stelzl, and G. Hummer, J. Chem. Theory Comput. 13, 3927(2017). DOI:10.1021/acs.jctc.7b00372
A. Tarakanova, G. C. Yeo, C. Baldock, A. S. Weiss, and M. J. Buehler, Macromol. Biosci. 19, e1800250(2018).
A. Tarakanova, G. C. Yeo, C. Baldock, A. S. Weiss, and M. J. Buehler, Proc. Natl. Acad. Sci. USA 115, 7338(2018). DOI:10.1073/pnas.1801205115
A. Srivastava, F. Tama, D. Kohda, and O. Miyashita, Proteins 87, 81(2019). DOI:10.1002/prot.v87.1
P. Frederix, I. Patmanidis, and S. J. Marrink, Chem. Soc. Rev. 47, 3470(2018). DOI:10.1039/C8CS00040A
T. R. Walsh, and M. R. Knecht, Chem. Rev. 117, 12641(2017). DOI:10.1021/acs.chemrev.7b00139
C. T. Leahy, A. Kells, G. Hummer, N. V. Buchete, and E. Rosta, J. Chem. Phys. 147, 152725(2017). DOI:10.1063/1.5004774
J. Sponer, G. Bussi, M. Krepl, P. Banas, S. Bottaro, R. A. Cunha, A. Gil-Ley, G. Pinamonti, S. Poblete, P. Jurecka, N. G. Walter, and M. Otyepka, Chem. Rev. 118, 4177(2018). DOI:10.1021/acs.chemrev.7b00427
S. H. Ahn, J. W. Grate, and E. F. Darve, J. Chem. Phys. 149, 072330(2018). DOI:10.1063/1.5024552
Y. Wang, C. B. Harrison, K. Schulten, and J. A. McCammon, Comput. Sci. Discov. 4, 015002(2011). DOI:10.1088/1749-4699/4/1/015002
D. Hamelberg, J. Mongan, and J. A. McCammon, J. Chem. Phys. 120, 11919(2004). DOI:10.1063/1.1755656
R. Singh, N. Ahalawat, and R. K. Murarka, J. Phys. Chem. B 119, 2806(2015). DOI:10.1021/jp509814n
C. Y. Yang, J. Delproposto, K. Chinnaswamy, W. C. Brown, S. Wang, J. A. Stuckey, and X. Wang, PLoS One 11, e0146522(2016). DOI:10.1371/journal.pone.0146522
L. C. Pierce, R. Salomon-Ferrer, F. d. O. C. Augusto, J. A. McCammon, and R. C. Walker, J. Chem. Theory Comput. 8, 2997(2012). DOI:10.1021/ct300284c
Y. Miao, F. Feixas, C. Eun, and J. A. McCammon, J. Comput. Chem. 36, 1536(2015). DOI:10.1002/jcc.v36.20
P. Lan, M. Tan, Y. Zhang, S. Niu, J. Chen, S. Shi, S. Qiu, X. Wang, X. Peng, G. Cai, H. Cheng, J. Wu, G. Li, and M. Lei, Science 362, eaat6678(2018). DOI:10.1126/science.aat6678
Y. Miao, and J. A. McCammon, Mol. Simul. 42, 1046(2016). DOI:10.1080/08927022.2015.1121541
X. Peng, Y. Zhang, Y. Li, Q. Liu, H. Chu, D. Zhang, and G. Li, J. Chem. Theory Comput. 14, 1216(2018). DOI:10.1021/acs.jctc.7b01211
Y. Miao, W. Sinko, L. Pierce, D. Bucher, R. C. Walker, and J. A. McCammon, J. Chem. Theory Comput. 10, 2677(2014). DOI:10.1021/ct500090q
Y. Miao, V. A. Feher, and J. A. McCammon, J. Chem. Theory Comput. 11, 3584(2015). DOI:10.1021/acs.jctc.5b00436
Y. M. Huang, M. A. Raymundo, W. Chen, and C. A. Chang, Biochemistry 56, 1311(2017). DOI:10.1021/acs.biochem.6b01112
W. Sinko, C. A. de Oliveira, L. C. Pierce, and J. A. McCammon, J. Chem. Theory Comput. 8, 17(2012). DOI:10.1021/ct200615k
M. Arrar, C. A. de Oliveira, M. Fajer, W. Sinko, and J. A. McCammon, J. Chem. Theory Comput. 9, 18(2013). DOI:10.1021/ct300896h
P. R. Markwick, L. C. Pierce, D. B. Goodin, and J. A. McCammon, J. Phys. Chem. Lett. 2, 158(2011). DOI:10.1021/jz101462n
J. W. Kaus, M. Arrar, and J. A. McCammon, J. Phys. Chem. B 118, 5109(2014). DOI:10.1021/jp502358y
D. Bucher, L. C. Pierce, J. A. McCammon, and P. R. Markwick, J. Chem. Theory Comput. 7, 890(2011). DOI:10.1021/ct100605v
U. Doshi, and D. Hamelberg, Biochim. Biophys. Acta Gen. Subj. 1850, 878(2015). DOI:10.1016/j.bbagen.2014.08.003
K. Kappel, Y. Miao, and J. A. McCammon, Q. Rev. Biophys. 48, 479(2015). DOI:10.1017/S0033583515000153
S. Mukherjee, R. K. Kar, R. P. R. Nanga, K. H. Mroue, A. Ramamoorthy, and A. Bhunia, Phys. Chem. Chem. Phys. 19, 19289(2017). DOI:10.1039/C7CP01941F
M. A. Nejad, C. Mücksch, and H. M. Urbassek, Chem. Phys. Lett. 670, 77(2017). DOI:10.1016/j.cplett.2017.01.002
C. Mucksch, and H. M. Urbassek, Langmuir 32, 9156(2016). DOI:10.1021/acs.langmuir.6b02229
S. K. Ludemann, V. Lounnas, and R. C. Wade, J. Mol. Biol. 303, 797(2000). DOI:10.1006/jmbi.2000.4154
J. Rydzewski, and W. Nowak, J. Chem. Phys. 143, 124101(2015). DOI:10.1063/1.4931181
M. Klvana, M. Pavlova, T. Koudelakova, R. Chaloupkova, P. Dvorak, Z. Prokop, A. Stsiapanava, M. Kuty, I. Kuta-Smatanova, J. Dohnalek, P. Kulhanek, R. C. Wade, and J. Damborsky, J. Mol. Biol. 392, 1339(2009). DOI:10.1016/j.jmb.2009.06.076
M. Pavlova, M. Klvana, Z. Prokop, R. Chaloupkova, P. Banas, M. Otyepka, R. C. Wade, M. Tsuda, Y. Nagata, and J. Damborsky, Nat. Chem. Biol. 5, 727(2009). DOI:10.1038/nchembio.205
W. Li, J. Shen, G. Liu, Y. Tang, and T. Hoshino, Proteins 79, 271(2011). DOI:10.1002/prot.22880
Z. Shen, F. Cheng, Y. Xu, J. Fu, W. Xiao, J. Shen, G. Liu, W. Li, and Y. Tang, PLoS One 7, e33500(2012). DOI:10.1371/journal.pone.0033500
P. Urban, T. Lautier, D. Pompon, and G. Truan, Int. J. Mol. Sci. 19, 1617(2018). DOI:10.3390/ijms19061617
S. Kalyaanamoorthy, and Y. P. Chen, J. Chem. Inf. Model 52, 589(2012). DOI:10.1021/ci200584f
J. Sgrignani, and A. Magistrato, J. Chem. Inf. Model 52, 1595(2012). DOI:10.1021/ci300151h
G. Wells, H. Yuan, M. J. McDaniel, H. Kusumoto, J. P. Snyder, D. C. Liotta, and S. F. Traynelis, Proteins 86, 1265(2018). DOI:10.1002/prot.v86.12
E. Darve, and A. Pohorille, J. Chem. Phys. 115, 9169(2001). DOI:10.1063/1.1410978
E. Darve, D. Rodriguez-Gomez, and A. Pohorille, J. Chem. Phys. 128, 144120(2008). DOI:10.1063/1.2829861
L. Zheng, and W. Yang, J. Chem. Theory Comput. 8, 810(2012). DOI:10.1021/ct200726v
A. Lesage, T. Lelievre, G. Stoltz, and J. Henin, J. Phys. Chem. B 121, 3676(2017). DOI:10.1021/acs.jpcb.6b10055
C. Chipot, and T. Lelièvre, SIAM J. Appl. Math. 71, 1673(2011). DOI:10.1137/10080600X
H. Fu, H. Zhang, H. Chen, X. Shao, C. Chipot, and W. Cai, J. Phys. Chem. Lett. 9, 4738(2018). DOI:10.1021/acs.jpclett.8b01994
Y. Niu, D. Shi, L. Li, J. Guo, H. Liu, and X. Yao, Sci. Rep. 7, 46547(2017). DOI:10.1038/srep46547
T. Zhao, X. Shao, and W. Cai, Mol. Simul. 43, 977(2017). DOI:10.1080/08927022.2017.1297533
D. Asthagiri, D. Karandur, D. S. Tomar, and B. M. Pettitt, J. Phys. Chem. B 121, 8078(2017). DOI:10.1021/acs.jpcb.7b05469
J. Comer, J. C. Gumbart, J. Henin, T. Lelievre, A. Pohorille, and C. Chipot, J. Phys. Chem. B 119, 1129(2015). DOI:10.1021/jp506633n
S. Wang, E. A. Orabi, S. Baday, S. Berneche, and G. Lamoureux, J. Am. Chem. Soc. 134, 10419(2012). DOI:10.1021/ja300129x
C. T. Lee, J. Comer, C. Herndon, N. Leung, A. Pavlova, R. V. Swift, C. Tung, C. N. Rowley, R. E. Amaro, C. Chipot, Y. Wang, and J. C. Gumbart, J. Chem. Inf. Model 56, 721(2016). DOI:10.1021/acs.jcim.6b00022
S. Piana, and A. Laio, J. Phys. Chem. B 111, 4553(2007). DOI:10.1021/jp067873l
F. Marinelli, F. Pietrucci, A. Laio, and S. Piana, PLoS Comput. Biol. 5, e1000452(2009). DOI:10.1371/journal.pcbi.1000452
R. Singh, R. Bansal, A. S. Rathore, and G. Goel, Biophys. J. 112, 1571(2017). DOI:10.1016/j.bpj.2017.03.015
Z. Cao, Y. Bian, G. Hu, L. Zhao, Z. Kong, Y. Yang, J. Wang, and Y. Zhou, Int. J. Mol. Sci. 19, 885(2018). DOI:10.3390/ijms19030885
M. Delor, J. Dai, T. D. Roberts, J. R. Rogers, S. M. Hamed, J. B. Neaton, P. L. Geissler, M. B. Francis, and N. S. Ginsberg, J. Am. Chem. Soc. 140, 6278(2018). DOI:10.1021/jacs.7b13598
A. Z. Guo, A. M. Fluitt, and J. J. de Pablo, J. Chem. Phys. 149, 025101(2018). DOI:10.1063/1.5033458
G. Bussi, F. L. Gervasio, A. Laio, and M. Parrinello, J. Am. Chem. Soc. 128, 13435(2006). DOI:10.1021/ja062463w
A. Gil-Ley, and G. Bussi, J. Chem. Theory Comput. 11, 1077(2015). DOI:10.1021/ct5009087
C. Camilloni, A. Cavalli, and M. Vendruscolo, J. Chem. Theory Comput. 9, 5610(2013). DOI:10.1021/ct4006272
Y. I. Yang, H. Niu, and M. Parrinello, J. Phys. Chem. Lett. 9, 6426(2018). DOI:10.1021/acs.jpclett.8b03005
V. Spiwok, Z. Sucur, and P. Hosek, Biotechnol. Adv. 33, 1130(2015). DOI:10.1016/j.biotechadv.2014.11.011
C. Abrams, and G. Bussi, Entropy 16, 163(2014).
R. Salomon-Ferrer, D. A. Case, and R. C. Walker, WIRES. Comput. Mol. Sci. 3, 198(2013). DOI:10.1002/wcms.1121
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, and E. Lindahl, SoftwareX 1-2, 19(2015). DOI:10.1016/j.softx.2015.06.001
J. C. Phillips, R. Braun, W. Wang, J. Gumbart, E. Tajkhorshid, E. Villa, C. Chipot, R. D. Skeel, L. Kale, and K. Schulten, J. Comput. Chem. 26, 1781(2005). DOI:10.1002/jcc.20289
P. Eastman, J. Swails, J. D. Chodera, R. T. McGibbon, Y. Zhao, K. A. Beauchamp, L. P. Wang, A. C. Simmonett, M. P. Harrigan, C. D. Stern, R. P. Wiewiora, B. R. Brooks, and V. S. Pande, PLoS Comput. Biol. 13, e1005659(2017). DOI:10.1371/journal.pcbi.1005659
M. Compoint, F. Picaud, C. Ramseyer, and C. Girardet, J. Chem. Phys. 122, 134707(2005). DOI:10.1063/1.1869413
X. Wu, B. R. Brooks, and E. Vanden-Eijnden, J. Comput. Chem. 37, 595(2016). DOI:10.1002/jcc.24015
G. A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni, and G. Bussi, Comput. Phys. Commun. 185, 604(2014). DOI:10.1016/j.cpc.2013.09.018
X. C. Zhan, C. Y. Yan, X. F. Zhang, J. L. Lei, and Y. G. Shi, Science 359, 537(2018). DOI:10.1126/science.aar6401
C. Y. Yan, R. X. Wan, R. Bai, G. X. Y. Huang, and Y. G. Shi, Science 355, 149(2017). DOI:10.1126/science.aak9979
C. Y. Yan, J. Hang, R. X. Wan, M. Huang, C. C. L. Wong, and Y. G. Shi, Science 349, 1182(2015). DOI:10.1126/science.aac7629
R. Bai, R. X. Wan, C. Y. Yan, J. L. Lei, and Y. G. Shi, Science 360, 1423(2018). DOI:10.1126/science.aau0325
P. Y. Chen, and M. E. Tuckerman, J. Chem. Phys. 148, 024106(2018). DOI:10.1063/1.4999447
P. Cossio, F. Marinelli, A. Laio, and F. Pietrucci, J. Phys. Chem. B 114, 3259(2010). DOI:10.1021/jp907464b
K. Moritsugu, T. Terada, and A. Kidera, Chem. Phys. Lett. 616-617, 20(2014). DOI:10.1016/j.cplett.2014.10.009
W. Zhang, and J. Chen, J. Chem. Theory Comput. 10, 918(2014). DOI:10.1021/ct500031v
A. Wang, Z. Zhang, and G. Li, J. Phys. Chem. Lett. 9, 7110(2018). DOI:10.1021/acs.jpclett.8b03471
P. Robustelli, S. Piana, and D. E. Shaw, Proc. Natl. Acad. Sci. USA 115, E4758(2018). DOI:10.1073/pnas.1800690115