1. M. P. Allen and D. L. Tildesley, Computer Simulation of Liquids (Oxford University Press, 1989).
2. D. Frenkel and B. Smit, Understanding Molecular Simulation (Academic Press, 2001).
3. D. Marx and J. Hutter, Ab-Initio Molecular Dynamics (Cambridge University Press, 2009).
4. E. J. Bylaska, K. Glass, D. Baxter, S. B. Baden, and J. H. Weare, J. Phys.: Conf. Ser. 180, 012028 (2009).
5. E. Bylaska, K. Tsemekhman, S. Baden, J. Weare, and H. Jonsson, J. Comput. Chem. 32, 54 (2011).
6. D. E. Shaw, P. Maragakis, K. Lindorff-Larsen, S. Piana, R. O. Dror, M. P. Eastwood, J. A. Bank, J. M. Jumper, J. K. Salmon, Y. B. Shan et al., Science 330, 341 (2010).
7. E. H. Lee, J. Hsin, M. Sotomayor, G. Comellas, and K. Schulten, Structure (London) 17, 1295 (2009).
8. J. D. Durrant and J. A. McCammon, BMC Evol. Biol. 9, 71 (2011).
9. B. C. Garrett, G. K. Schenter, and A. Morita, Chem. Rev. 106, 1355 (2006).
10. Z. C. Kramer, K. Takahashi, and R. T. Skodje, J. Am. Chem. Soc. 132, 15154 (2010).
11. Y. Miller, G. M. Chaban, J. Zhou, K. R. Asmis, D. M. Neumark, and R. B. Gerber, “Vibrational spectroscopy of (SO4^2−)·(H2O)n clusters, n = 1–5: Harmonic and anharmonic calculations and experiment,” J. Chem. Phys. 127, 094305 (2007).
12. A. Gutberlet, G. Schwaab, O. Birer, M. Masia, and A. Kaczmarek, Science 324, 1545 (2009).
13. H. Forbert, M. Masia, A. Kaczmarek-Kedziera, N. N. Nair, and D. Marx, J. Am. Chem. Soc. 133, 4062 (2011).
14. Q. Wan, L. Spanu, and G. Galli, J. Phys. Chem. B 116, 9460 (2012).
15. M. Lundstrom, P. Cummings, and M. Alam, “Investigative Tools: Theory, Modeling, and Simulation,” in Nanotechnology Research Directions for Societal Needs in 2020 (Springer Netherlands, 2011), pp. 29–69.
16. X. L. Hu, S. Piccinin, A. Laio, and S. Fabris, ACS Nano 6, 10497 (2012).
17. B. G. Fitch, A. Rayshubskiy, M. Eleftheriou, T. J. C. Ward, M. E. Giampapa, M. C. Pitman, J. W. Pitera, W. C. Swope, and R. S. Germain, IBM J. Res. Dev. 52, 145 (2008).
18. F. Gygi, IBM J. Res. Dev. 52, 137 (2008).
19. W. A. De Jong, E. Bylaska, N. Govind, C. L. Janssen, K. Kowalski, T. Müller, I. M. Nielsen, H. J. van Dam, V. Veryazov, and R. Lindh, Phys. Chem. Chem. Phys. 12, 6896 (2010).
20. J. P. Camden and G. C. Schatz, J. Phys. Chem. A 110, 13681 (2006).
21. Y. Miller and R. B. Gerber, Phys. Chem. Chem. Phys. 10, 1091 (2008).
22. R. P. Steele, M. Head-Gordon, and J. C. Tully, J. Phys. Chem. A 114, 11853 (2010).
23. U. Lourderaj, K. Song, T. L. Windus, Y. Zhuang, and W. L. Hase, J. Chem. Phys. 126, 044105 (2007).
24. T. D. Kühne, M. Krack, F. R. Mohamed, and M. Parrinello, Phys. Rev. Lett. 98, 066401 (2007).
25. M. Cawkwell and A. M. Niklasson, J. Chem. Phys. 137, 134105 (2012).
26. J. Nievergelt, Commun. ACM 7, 731 (1964).
27. J. L. Lions, Y. Maday, and G. Turinici, C. R. Math. Acad. Sci. 332, 661 (2001).
28. Y. Maday and G. Turinici, “A parallel in time approach for quantum control: the parareal algorithm,” in Proceedings of the 41st IEEE Conference on Decision and Control, 2002 (IEEE, 2002), Vol. 1, pp. 62–66.
29. Y. Maday and G. Turinici, C. R. Math. 335, 387 (2002).
30. Y. Maday and G. Turinici, Domain Decomposition Methods in Science and Engineering, Lecture Notes in Computational Science and Engineering (Springer, 2005), Vol. 40, pp. 441–448.
31. L. Baffico, S. Bernard, Y. Maday, G. Turinici, and G. Zerah, Phys. Rev. E 66, 057701 (2002).
32. G. Bal and Y. Maday, “A ‘parareal’ time discretization for non-linear PDE's with application to the pricing of an American put,” in Recent Developments in Domain Decomposition Methods (Springer Berlin Heidelberg, 2002), pp. 189–202.
33. I. Garrido, B. Lee, G. E. Fladmark, and M. S. Espedal, Math. Comput. 75, 1403 (2006).
34. P. Fischer, in Proceedings of the 15th Domain Decomposition Conference, Berlin, July 2003 (Springer, Berlin-Heidelberg-New York, 2004).
35. P. F. Fischer, F. Hecht, and Y. Maday, “A parareal in time semi-implicit approximation of the Navier-Stokes equations,” in Domain Decomposition Methods in Science and Engineering (Springer Berlin Heidelberg, 2005), pp. 433–440.
36. C. Farhat and M. Chandesris, Int. J. Numer. Methods Eng. 58, 1397 (2003).
37. M. L. Minion and S. A. Williams, “Parareal and spectral deferred corrections,” AIP Conf. Proc. 1048, 388 (2008).
38. N. R. Nassif, N. M. Karam, and Y. Soukiassian, “A new approach for solving evolution problems in time-parallel way,” in Computational Science–ICCS 2006 (Springer Berlin Heidelberg, 2006), pp. 148–155.
39. L. A. Berry, W. Elwasif, J. M. Reynolds-Barredo, D. Samaddar, R. Sanchez, and D. E. Newman, J. Comput. Phys. 231, 5945 (2012).
40. G. Horton and R. Knirsch, Lect. Notes Comput. Sci. 591, 401 (1992).
41. M. La Scala and A. Bose, IEEE Trans. Circuits Syst., I: Fundam. Theory Appl. 40, 317 (1993).
42. B. Leimkuhler, SIAM J. Numer. Anal. 35, 31 (1998).
43. M. J. Gander and S. Vandewalle, SIAM J. Sci. Comput. 29, 556 (2007).
44. M. J. Gander and E. Hairer, “Nonlinear convergence analysis for the parareal algorithm,” in Domain Decomposition Methods in Science and Engineering XVII (Springer Berlin Heidelberg, 2008), pp. 45–56.
45. C. Audouze, M. Massot, and S. Volz, preprint, Centre pour la communication scientifique directe (CCSd), 2009.
46. L. P. He, J. Comput. Math. 28, 676 (2010).
47. G. Bal and Q. Wu, Lecture Notes in Computational Science and Engineering (Springer, 2008), Vol. 60, p. 401.
48. Y. Miller, B. J. Finlayson-Pitts, and R. B. Gerber, J. Am. Chem. Soc. 131, 12180 (2009).
49. R. L. Burden and J. D. Faires, Numerical Analysis (Brooks/Cole, 2010).
50. M. Maienschein-Cline and L. R. Scott, Technical Report 2011-01, University of Chicago, 2011.
51. J. Nocedal and S. J. Wright, Numerical Optimization (Springer-Verlag, 1999).
52. G. A. Staff and E. M. Ronquist, Lecture Notes in Computational Science and Engineering (Springer, 2003), Vol. 40, p. 449.
53. E. D. Rainville, P. E. Bedient, and R. E. Bedient, Elementary Differential Equations, 8th ed. (Prentice Hall, 1997).
54. F. H. Stillinger and T. A. Weber, Phys. Rev. B 31, 5262 (1985).
55. J. J. Dongarra, I. Foster, G. C. Fox, W. Gropp, K. Kennedy, L. Torczon, and A. White, Source Book of Parallel Computing (Morgan Kaufman, 2002).
56. J. L. Klepeis, K. Lindorff-Larsen, R. O. Dror, and D. E. Shaw, Curr. Opin. Struct. Biol. 19, 120 (2009).
57. V. S. Pande, K. Beauchamp, and G. R. Bowman, Methods 52, 99 (2010).
58. A. Szabo and N. S. Ostlund, Modern Quantum Chemistry (Dover, 1996).
59. T. Helgaker, P. Jorgensen, and J. Olsen, Molecular Electronic-Structure Theory (John Wiley and Sons, 2000).
60. K. R. Leopold, Annu. Rev. Phys. Chem. 62, 327 (2011).
61. C. Moller and M. S. Plesset, Phys. Rev. 46, 618 (1934).
62. W. J. Hehre, R. Ditchfield, and J. A. Pople, J. Chem. Phys. 56, 2257 (1972).
63. M. M. Francl, W. J. Pietro, W. J. Hehre, J. S. Binkley, M. S. Gordon, D. J. DeFrees, and J. A. Pople, J. Chem. Phys. 77, 3654 (1982).
64. R. Krishnan, J. S. Binkley, R. Seeger, and J. A. Pople, J. Chem. Phys. 72, 650 (1980).
65. A. D. McLean and G. S. Chandler, J. Chem. Phys. 72, 5639 (1980).
66. Ł. Walewski, H. Forbert, and D. Marx, J. Phys. Chem. Lett. 2, 3069 (2011).
67. M. Valiev, E. J. Bylaska, N. Govind, K. Kowalski, T. P. Straatsma, H. J. J. Van Dam, D. Wang, J. Nieplocha, E. Apra, and T. L. Windus, Comput. Phys. Commun. 181, 1477 (2010).
68. H. J. J. Van Dam, W. A. De Jong, E. Bylaska, N. Govind, K. Kowalski, T. P. Straatsma, and M. Valiev, “NWChem: scalable parallel computational chemistry,” Wiley Interdiscip. Rev.: Comput. Mol. Sci. 1, 888–894 (2011).
69. M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. Su et al., J. Comput. Chem. 14, 1347 (1993).
70. M. J. Frisch, G. W. Trucks, H. B. Schlegel et al., GAUSSIAN 09, Revision A.1, Gaussian, Inc., Wallingford, CT, 2009.
71. N. J. Higham, SIAM J. Sci. Comput. 16, 400 (1995).
72. E. S. Quintana, G. Quintana, X. Sun, and R. van de Geijn, SIAM J. Sci. Comput. 22, 1762 (2001).
73. We note that the simplest guess for a trajectory is to copy the initial positions and velocities to each time step in the time interval.
74. There are efficient parallel methods for inverting lower triangular matrices.50
75. Other update methods were also tested, e.g., BFGS,51 and they were found to produce the same results.
76. See supplementary material for the Python programs used to run the MP2 examples.
77. X. Dai, C. Le Bris, F. Legoll, and Y. Maday, “Symmetric parareal algorithms for Hamiltonian systems,” Math. Model. Numer. Anal. 47, 717–742 (2013).

Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator M (e.g., the Verlet algorithm) is available to propagate the system from time t_i (trajectory positions and velocities y_i = (r_i, v_i)) to time t_(i+1) (y_(i+1)) by y_(i+1) = M(y_i), the dynamics problem spanning an interval from t_0 to t_M can be transformed into a root finding problem, F(Y) = [y_i − M(y_(i−1))] = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(Y). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsened time steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence, and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and an HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol (TCP/IP) networks.
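The root-finding formulation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses a 1D harmonic oscillator, a velocity-Verlet propagator standing in for M, the trivial initial guess of note 73, and a plain fixed-point update Y ← Y − F(Y) in place of the paper's preconditioned quasi-Newton schemes. All function names and parameter values are illustrative assumptions; the key point is that each entry F_i = y_i − M(y_(i−1)) depends only on one earlier time step, so the propagations inside the residual are independent and could each be assigned to a separate processor.

```python
import numpy as np

def verlet_step(y, dt=0.01, k=1.0, m=1.0):
    """One velocity-Verlet step y_{i+1} = M(y_i) for V(r) = k r^2 / 2."""
    r, v = y
    a = -k * r / m
    r_new = r + v * dt + 0.5 * a * dt**2
    a_new = -k * r_new / m
    v_new = v + 0.5 * (a + a_new) * dt
    return np.array([r_new, v_new])

def residual(Y, y0):
    """F(Y)_i = y_i - M(y_{i-1}); each M evaluation is independent,
    so this loop is the parallelizable part of the iteration."""
    F = np.empty_like(Y)
    prev = y0
    for i in range(len(Y)):
        F[i] = Y[i] - verlet_step(prev)
        prev = Y[i]
    return F

def solve_trajectory(y0, nsteps, maxit=200, tol=1e-12):
    """Solve F(Y) = 0 for the whole trajectory at once."""
    # Simplest initial guess (note 73): copy y0 into every time step.
    Y = np.tile(y0, (nsteps, 1))
    for _ in range(maxit):
        F = residual(Y, y0)
        Y = Y - F            # crude stand-in for a quasi-Newton update
        if np.max(np.abs(F)) < tol:
            break
    return Y
```

With this particular update each sweep makes one more time step exact, so the iteration converges in at most `nsteps` sweeps and the converged trajectory matches serial time stepping; the point of the real quasi-Newton and preconditioned schemes is to converge in far fewer iterations than that.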
Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way, these algorithms can be used for long time, high-level AIMD simulations at modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of an MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
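The scripting layer described above can be organized as a small Python driver: the per-time-step force evaluations needed for F(Y) are independent, so they can be dispatched concurrently to workers that each invoke a precompiled chemistry code. The sketch below is an assumption about how such a driver might look, not the paper's supplementary scripts; `evaluate_force` returns a dummy value where a real run would write an input deck and launch an external program (e.g., an `nwchem` executable) via `subprocess` or over ssh.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_force(step_index):
    """Placeholder for one expensive, independent evaluation.

    A real driver would write step_{step_index}.nw and run, e.g.,
        subprocess.run(["nwchem", f"step_{step_index}.nw"], check=True)
    possibly on a remote machine; here we just return a dummy force.
    """
    return step_index, -1.0 * step_index

def evaluate_all(nsteps, max_workers=4):
    """Dispatch the independent per-step evaluations concurrently and
    collect them into a {step_index: force} map."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = dict(pool.map(evaluate_force, range(nsteps)))
    return results
```

Because each task is dominated by an external calculation rather than Python computation, a thread pool (or any slow-network remote dispatch) is sufficient; this is why the approach tolerates very high-latency links between workers.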

