Computational mathematics for model reduction and predictive modelling in molecular and complex systems

21/05/2019 - 29/05/2019

EPFL, CIB

August 21, 2019

Eric Cances Ecole des Ponts ParisTech

From Density Functional Theory to Tight-Binding models for multilayer 2D materials

2D materials such as graphene have fascinating electronic and optical properties. Multilayer 2D materials are obtained by stacking several layers of possibly different 2D materials. Their study is one of the current hot topics in physics and materials science. The numerical simulation of such systems is made difficult by incommensurabilities originating from lattice mismatches and twisting angles. In this talk, I will present a model reduction technique suitable for computing the electronic and optical properties of multilayer 2D materials from first-principles simulations.


Michele Ceriotti EPFL

Atomistic Machine Learning between Physics and Data

Statistical regression techniques have become very fashionable as a tool to predict the properties of systems at the atomic scale, sidestepping much of the computational cost of first-principles simulations and making it possible to perform simulations that require thorough statistical sampling without compromising on the accuracy of the electronic structure model.
In this talk I will argue that data-driven modelling can be rooted in a mathematically rigorous and physically motivated framework, and show how this is beneficial to the accuracy and the transferability of the model. I will also highlight how machine learning - despite amounting essentially to data interpolation - can provide important physical insights into the behavior of complex systems, and into the synthesizability and the structure-property relations of materials.
I will give examples concerning all sorts of atomistic systems, from semiconductors to molecular crystals, and properties as diverse as drug-protein interactions, dielectric response of aqueous systems and NMR chemical shielding in the solid state.


Christoph Dellago Universität Wien

Machine learning in atomistic simulations: from reaction pathways to phase diagrams

Atomistic computer simulations of condensed matter systems are challenging for several distinct but related reasons. For large systems, the accurate calculation of energies and forces needed in molecular dynamics simulations may be computationally demanding, particularly if electronic structure calculations are used for this purpose. Other difficulties arising in the dynamical simulation of condensed matter processes consist in detecting local structures characteristic of stable or metastable phases and in identifying important degrees of freedom that capture the essential physics of the process under study. In this talk, I will discuss how these problems can be addressed using machine learning approaches. In particular, I will focus on a computational study of water and ice based on a high-dimensional neural network potential trained with ab initio reference data.


Hamid Ghasemi Howard University

Predicting mechanical properties of carbon nanotubes using machine learning

Department of Civil and Environmental Engineering, Howard University, Washington, DC, USA

Properties of materials are a function of many parameters such as dimensions and temperature. There are laboratory and numerical methods (e.g. density functional theory and molecular dynamics) that can accurately determine the properties of materials given a set of these parameters. However, these methods are typically resource- and computationally intensive and involve a high degree of trial-and-error, serendipity and incremental advances, increasingly prompting the materials research community to use artificial intelligence to develop models that will predict materials’ properties efficiently and with adequate accuracy, and shorten the time-to-market of innovative materials. In this research, a machine learning (ML) model is developed to predict the mechanical properties of carbon nanotubes (CNTs). A comprehensive set of molecular dynamics simulations is carried out to produce the numerical and imagery data required to train and develop the ML model. The ML model consists of two sub-models: 1) a particular deep learning model known as a convolutional neural network (CNN) that identifies the size and chirality of a CNT from its image and 2) artificial neural network (ANN) and support vector machine (SVM) models that combine the data output by the CNN with that corresponding to the environmental and loading conditions (e.g. temperature and load type) to predict the mechanical properties of the CNT.
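As a purely illustrative sketch of the second stage of such a pipeline (the CNN stage is assumed to have already extracted the chirality indices from an image; the data, variable names and SVR settings below are hypothetical, not the authors' code):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200

# Features: [n_index, m_index, temperature (K), load type (0 tension, 1 torsion)];
# the chirality indices are assumed to come from the upstream CNN.
X = np.column_stack([
    rng.integers(5, 20, n),
    rng.integers(0, 20, n),
    rng.uniform(250.0, 800.0, n),
    rng.integers(0, 2, n),
])
# Hypothetical stand-in for an MD-computed target, e.g. Young's modulus (GPa)
y = 1000.0 - 0.3 * X[:, 2] + 5.0 * (X[:, 0] + X[:, 1]) + rng.normal(0.0, 10.0, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))

The same interface would accept an ANN regressor in place of the SVM; the point is only that image-derived descriptors and environmental/loading descriptors enter a single feature vector.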

This is joint work with Hessam Yazdani


François Gygi University of California Davis

First-Principles Molecular Dynamics: uncertainty quantification and opportunities for model reduction

François Gygi, Department of Computer Science, University of California Davis, USA

First-Principles molecular dynamics simulations based on Density Functional Theory (DFT) are now commonly used in predictive simulations of solids, liquids and molecules, and provide a consistent picture of both structural and electronic properties. However, the choice of a specific flavor of density functional can strongly affect the accuracy of the simulations, emphasizing the need for systematic validation of density functionals. We discuss issues related to the comparison and validation of density functionals in molecular dynamics simulations of liquids, focusing on water. Structural and electronic properties are considered, as well as infrared and Raman spectra. When using "hybrid" density functionals, MD simulations become very costly due to the computation of Hartree-Fock exchange integrals. In this context, we discuss the use of a compression scheme (recursive subspace bisection) that allows for a representation of one-electron orbitals in spatially localized form, thus dramatically reducing the computational cost, while preserving control of the accuracy. Applications to the computation of optical absorption spectra using the Bethe-Salpeter equation will be presented.

*Supported by the Midwest Integrated Center for Computational Materials (MICCoM), as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences.


Jiequn Han Princeton University

Uniformly Accurate Hydrodynamic Models for Kinetic Equations Using Machine Learning

This talk introduces a new methodology based on machine learning for multi-scale modeling. Our two main interests are developing macro-scale models in situations without scale separation and, at the same time, building interpretable and truly reliable physical models with the help of machine learning. Taking the example of machine-learned hydrodynamic models for kinetic equations, we will illustrate what we consider to be the main issues involved.


Vangelis Harmandaris University of Crete & IACM/FORTH

Hierarchical Multiscale Modeling of Molecular Systems: Current Trends and Challenges

Atomistic molecular simulations have proved a very powerful tool to predict quantitatively the properties of complex molecular systems [1,2]. However, due to the enormous range of spatiotemporal scales characterizing such materials, dimensionality-reducing coarse-grained (CG) approaches that describe a given system through fewer degrees of freedom are required [1]. A main goal is to develop CG models that can provide quantitative predictions beyond the range of conditions in which they have been parametrized. Both structural and dynamical material properties need to be considered. The former are related to the ability of the CG model to predict free energies, whereas the latter to its accuracy with respect to the friction and dissipation terms. Here, we give an overview of different CG methodologies with particular emphasis on the above challenges.
We first discuss various ways of obtaining parametrized CG (effective) interactions, starting from a detailed atomistic representation of high-dimensional molecular systems [1-4]. Methods such as inverse Boltzmann, force matching, and information theory (relative entropy) provide parameterizations of CG models at equilibrium by minimizing a fitting functional over a parameter space; a minimal sketch of force matching is given below. In principle, all these methods are employed to approximate a many-body potential, the (n-body) potential of mean force, describing the equilibrium distribution of CG sites observed in simulations of atomically detailed models [3,4]. We also discuss a systematic CG strategy based on cluster expansion techniques, providing a hierarchy of CG Hamiltonians with interaction potentials consisting of two-, three- and higher-body interactions [5]. In this way, the suggested model becomes computationally tractable, since no information from long n-body (bulk) simulations is required, while retaining the fluctuations at the CG level.
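To fix ideas, the minimal force-matching sketch announced above (illustrative only: a production code would use spline bases, proper CG mappings, and many observables; the data here are synthetic):

import numpy as np

rng = np.random.default_rng(1)

def basis(r, K=12):
    # polynomial basis in 1/r; a real code would use e.g. splines
    return np.column_stack([r ** (-k) for k in range(2, 2 + K)])

# Synthetic reference data: pair distances and bond-projected "atomistic" forces
r_ref = rng.uniform(0.9, 2.5, 500)
f_ref = 24.0 * (2.0 * r_ref ** (-13.0) - r_ref ** (-7.0)) + rng.normal(0.0, 0.05, 500)

# Force matching = linear least squares for the basis coefficients theta
theta, *_ = np.linalg.lstsq(basis(r_ref), f_ref, rcond=None)

r_test = np.linspace(1.0, 2.4, 5)
print("fitted CG force at test distances:", basis(r_test) @ theta)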
Next, we discuss the ability of CG models to describe the dynamics of molecular systems. Time-mapping approaches, as well as Mori-Zwanzig methodologies, are presented. In addition, we further extend force-matching and information-theory based CG models, using path-space methods, to systems under non-equilibrium conditions [3,4]. All the above methods are applied to several systems, such as water, alkanes, polymers, and hybrid polymer/solid nanostructured systems. Finally, we close by presenting challenges and open questions in the field, focusing on hybrid physics-based data-driven methodologies.

References:
[1] V. Harmandaris et al., Macromolecules, 39, 6708 (2006); Macromolecules, 42, 791 (2009).
[2] P. Bačová, et al., ACS Nano, 13, 2, 2439 (2019); M. Gulde, et al., Nano Letters, 16, 6994 (2016).
[3] E. Kalligiannaki, et al., J. Chem. Phys., 143, 084105 (2015); J. Comp. Phys., 314, 355 (2016).
[4] E. Kalligiannaki, et al., Europ. Phys. J. Special Topics, 225, 1347 (2016).
[5] A. Tsourtis, V. Harmandaris, D. Tsagkarogiannis, Entropy, 19, 395, (2017).


Carsten Hartmann BTU Cottbus-Senftenberg

Bilinear and stochastic balanced model reduction for controlled molecular systems: theory and numerics

Due to the growing ability to accurately manipulate single molecules by spectroscopic techniques, numerical methods for the control of molecular systems have recently attracted a lot of attention. In this talk, I will discuss different model reduction techniques for bilinear systems, specifically generalized balancing and interpolation-based model reduction, and apply them to semi-discretised controlled Fokker-Planck and Liouville-von Neumann equations.
I will discuss aspects of structure preservation, Hardy space error bounds, application to infinite-dimensional evolution equations, and various numerical issues.

This is joint work with Simon Becker (Cambridge), Peter Benner (Magdeburg), Tobias Breiten (Graz) and Burkhard Schmidt (Berlin).


Gerhard Hummer Max Planck Institute of Biophysics

Brownian dynamics Monte Carlo integrators with exact equilibria and accurate diffusion dynamics

We develop a Markov-chain Monte Carlo algorithm to simulate Brownian dynamics. Our focus is on the rotational and translational diffusion of rigid molecules. The integrator generates accurate diffusion dynamics while strictly preserving the Boltzmann equilibrium distribution. We compare the new and existing algorithms with regard to accuracy and efficiency. As illustrative examples, we study the binding and dissociation of protein-protein complexes.
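The rigid-body integrator of the talk is not reproduced here, but the generic ingredient it builds on can be sketched: a Metropolis-adjusted Brownian-dynamics (smart Monte Carlo) step, whose Euler proposal generates diffusive dynamics while the accept/reject test enforces the exact Boltzmann distribution. The potential and parameters below are illustrative placeholders:

import numpy as np

rng = np.random.default_rng(2)

def U(x):
    return 0.5 * x * x          # stand-in potential (harmonic)

def grad_U(x):
    return x

def bd_mc_step(x, dt, beta=1.0):
    D = 1.0 / beta              # diffusion coefficient (unit mobility)
    mean_fwd = x - dt * D * beta * grad_U(x)
    y = mean_fwd + np.sqrt(2.0 * D * dt) * rng.normal()
    mean_rev = y - dt * D * beta * grad_U(y)
    # log proposal densities q(x -> y) and q(y -> x), up to a common constant
    log_q_fwd = -((y - mean_fwd) ** 2) / (4.0 * D * dt)
    log_q_rev = -((x - mean_rev) ** 2) / (4.0 * D * dt)
    log_acc = -beta * (U(y) - U(x)) + log_q_rev - log_q_fwd
    return y if np.log(rng.uniform()) < log_acc else x

x, xs = 0.0, []
for _ in range(100000):
    x = bd_mc_step(x, dt=0.1)
    xs.append(x)
print("sampled variance (exact: 1.0):", np.var(xs))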

(Co-authors: Max Linke, Jürgen Köfinger)


Efthimios Kaxiras Harvard University

Electron localization in incommensurately stacked layers of 2D materials

The stacking of individual layers of two-dimensional materials can be experimentally controlled with remarkable precision, on the order of a tenth of a degree. The relative orientation of successive layers introduces variations in the electronic properties that can be controlled by the twist angle. We use simple theoretical models and accurate electronic structure calculations to predict that the electronic density in stacked 2D layers can vary in real space in a manner similar to the band structure in momentum space, creating moiré superlattices. A direct consequence of these patterns is the localization of electronic states. We demonstrate this effect in graphene, a semi-metal, and in representative materials of the transition metal dichalcogenide family of semiconductors. This effect has important consequences for electronic behavior, including superconductivity, as has recently been reported for twisted bilayer graphene.


James Kermode University of Warwick

Extending the applicability of QM/MM simulation of chemo-mechanical processes in materials with machine learning and uncertainty quantification

Fracture and plasticity are the dominant failure processes underlying many materials reliability issues. They are also some of the most challenging multiscale modelling problems, requiring both an accurate description of chemical processes occurring near crack tips or dislocation cores and the inclusion of much larger model systems. These requirements can be met simultaneously by combining quantum mechanical descriptions of chemically active regions with classical atomistic models that capture the long-range elastic behaviour of the surrounding crystal matrix, using approaches such as the 'Learn on the Fly' (LOTF) scheme. I will review recent methodological advances that: (i) improve the efficiency of the scheme using machine learning and identify remaining limitations with uncertainty quantification approaches; (ii) extend it to rare events by computing minimum energy paths in multiscale systems; (iii) extend it to metallic systems. Together, these enable processes such as dislocation motion in the presence of impurities to be described.


Petros Koumoutsakos ETHZ

Data as Models: Bayesian Inference for Molecular Simulations

Atomistic and mesoscale simulations of molecular systems rely on several empirical assumptions and parameters that have been petrified over the last fifty years.
Can we augment these empirical developments through data science?
In this talk, I will discuss the incorporation of experimental and quantum mechanics data in Bayesian inference for the classical Lennard-Jones potential and the selection of coarse-grained models of water. Our results suggest that the type and quality of data are as important as the way in which they are included in the modeling process.
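A hedged sketch of the kind of calibration involved (a random-walk Metropolis sampler over the Lennard-Jones parameters given noisy reference energies; the data, noise model, flat prior, step size and burn-in below are illustrative placeholders, not those of the talk):

import numpy as np

rng = np.random.default_rng(3)

def lj(r, eps, sig):
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Synthetic "reference" dimer energies (standing in for experimental/QM data)
r_data = np.linspace(0.95, 2.0, 12)
E_data = lj(r_data, 1.0, 1.0) + rng.normal(0.0, 0.02, r_data.size)

def log_post(theta, noise=0.02):
    eps_, sig_ = theta
    if eps_ <= 0.0 or sig_ <= 0.0:
        return -np.inf                    # flat prior on the positive quadrant
    resid = E_data - lj(r_data, eps_, sig_)
    return -0.5 * np.sum(resid ** 2) / noise ** 2

theta = np.array([0.8, 1.1])
samples = []
for _ in range(20000):
    prop = theta + 0.02 * rng.normal(size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta.copy())
print("posterior mean (epsilon, sigma):", np.mean(samples[5000:], axis=0))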


Phaedon-Stelios Koutsourelakis Technische Universität München

Deep Variational Coarse-Graining and Collective Variable Discovery without any Data

The exploration of Boltzmann densities which characterize assemblies of atoms at equilibrium is generally performed by Molecular Dynamics or Monte Carlo-based techniques. While these allow great generality and guarantee asymptotically unbiased estimates of any observable, they are impracticable when the interatomic potential is characterized by many wells, i.e. when the corresponding Boltzmann density is multimodal.
It has long been recognized that the discovery of appropriate collective variables (CVs) is the key to exploring such landscapes as well as to providing insight into the physical and chemical mechanisms at play. Such collective variables not only provide a dimension-reduced description of a generally very high-dimensional atomistic ensemble, but are also the key enabler in biasing the dynamics so as to escape the aforementioned wells.

In cases where physical insight is absent, a prominent strategy for the discovery of CVs is based on data, i.e. simulation data obtained by MD. Such techniques require very long MD time series, which are frequently pre-processed in order to ensure that the right statistical properties are retained. In most cases, the simulation data should have visited all the aforementioned wells, which negates their utility.
Even if CVs have been identified through a nonlinear dimensionality reduction technique, it is generally not straightforward to incorporate them in the enhanced sampling, as most current techniques require CVs given as differentiable functions of the original description.
The advances in deep learning, and machine learning more broadly, have also impacted this problem, and a large number of papers have appeared in just the last few years. While these techniques exploit the increased expressivity of deep NNs, they still rely on MD simulation data. In many ways, one could say that the data-generation process and the coarse-graining are two separate procedures.


The present work offers a fundamentally different perspective on employing data-driven, deep-learning procedures. Rather than relying on the independent generation of simulation data on which appropriate statistical learning is performed, we embed the physical model, i.e. the Boltzmann density, in the machine-learning objective.
The learning algorithm dictates at which configurations potential and/or force computations are needed. Hence one does not really perform MD, but rather queries the physical model in order to acquire the necessary information. We embed in the formulation an automatic dimensionality-reduction procedure that discovers a set of reduced coordinates simultaneously with the learning of the Boltzmann density.
We demonstrate this on synthetic as well as physical problems (e.g. the alanine dipeptide).
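A minimal sketch of the "no data" idea, under strong simplifying assumptions (a diagonal Gaussian family stands in for the deep generative model, and the dimensionality-reduction layer is omitted): minimizing the reverse Kullback-Leibler divergence to the Boltzmann density only ever queries the potential at model-generated configurations.

import numpy as np

rng = np.random.default_rng(4)
beta = 1.0

def grad_U(x):
    # toy landscape: double well in the first coordinate, harmonic in the second
    g = np.array(x, dtype=float)
    g[..., 0] = 4.0 * x[..., 0] ** 3 - 4.0 * x[..., 0]
    return g

# minimize KL(q || exp(-beta U)/Z) = E_q[beta U] - H(q) + const over a
# diagonal Gaussian q with mean mu and stddev exp(log_sig)
mu, log_sig = np.array([0.5, 0.0]), np.zeros(2)   # symmetry-broken start
lr = 1e-2
for _ in range(2000):
    eps = rng.normal(size=(256, 2))
    x = mu + np.exp(log_sig) * eps        # reparametrization trick
    g = beta * grad_U(x)                  # the only "data": queries of the model
    grad_mu = g.mean(axis=0)              # gradient of E_q[beta U] w.r.t. mu
    # gradient w.r.t. log sigma, including the entropy H = sum(log_sig) + const
    grad_ls = (g * eps).mean(axis=0) * np.exp(log_sig) - 1.0
    mu -= lr * grad_mu
    log_sig -= lr * grad_ls
print("variational mean:", mu, "stddev:", np.exp(log_sig))

With this unimodal family the reverse-KL objective is mode-seeking and the sketch settles into a single well; the expressive generative models and discovered reduced coordinates of the actual work address exactly this limitation.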


Tony Lelièvre Ecole des Ponts ParisTech

Hybrid Monte Carlo methods for sampling probability measures on submanifolds

Probability measures supported on submanifolds can be sampled by adding an extra momentum variable to the state of the system, and discretizing the associated Hamiltonian dynamics with some stochastic perturbation in the extra variable. In order to avoid biases in the invariant probability measures sampled by discretizations of these stochastically perturbed Hamiltonian dynamics, a Metropolis rejection procedure can be considered. The so-obtained scheme belongs to the class of generalized Hybrid Monte Carlo (GHMC) algorithms. We show how to generalize to GHMC a procedure suggested by Goodman, Holmes-Cerfon and Zappa for Metropolis random walks on submanifolds, where a reverse projection check is performed to enforce the reversibility of the algorithm for large timesteps and hence avoid biases in the invariant measure. We also provide a full mathematical analysis of such procedures, as well as numerical experiments demonstrating the importance of the reverse projection check on simple toy examples.
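A sketch of the reverse projection check in its original random-walk setting (unit circle, uniform target, Newton projection along the constraint gradient), i.e. the mechanism the talk generalizes to GHMC; step sizes and tolerances are illustrative:

import numpy as np

rng = np.random.default_rng(5)

xi = lambda x: x @ x - 1.0               # constraint: the unit circle
grad_xi = lambda x: 2.0 * x

def project(y0, g, tol=1e-12, maxit=50):
    # solve xi(y0 + a*g) = 0 for the scalar a by Newton; None on failure
    a = 0.0
    for _ in range(maxit):
        y = y0 + a * g
        r = xi(y)
        if abs(r) < tol:
            return y
        slope = grad_xi(y) @ g
        if abs(slope) < 1e-14:
            return None
        a -= r / slope
    return None

def step(x, sigma=0.5):
    g = grad_xi(x)
    v = rng.normal(size=2) * sigma
    v -= (v @ g) / (g @ g) * g           # tangential proposal at x
    y = project(x + v, g)                # project back along grad_xi(x)
    if y is None:
        return x                         # forward projection failed: reject
    gy = grad_xi(y)
    d = x - y
    vy = d - (d @ gy) / (gy @ gy) * gy   # tangential step of the reverse move
    x_back = project(y + vy, gy)
    if x_back is None or not np.allclose(x_back, x, atol=1e-9):
        return x                         # reverse projection check fails: reject
    log_acc = (v @ v - vy @ vy) / (2.0 * sigma ** 2)  # uniform target on circle
    return y if np.log(rng.uniform()) < log_acc else x

x = np.array([1.0, 0.0])
for _ in range(10000):
    x = step(x)
print("still on the manifold:", abs(xi(x)) < 1e-9)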

This is joint work with M. Rousset and G. Stoltz.

Ref. TL, M. Rousset and G. Stoltz, Hybrid Monte Carlo methods for sampling probability measures on submanifolds, https://hal.archives-ouvertes.fr/hal-01832820v1 .


Mitchell Luskin University of Minnesota

Mathematical Modeling and Numerical Analysis for Incommensurate 2D Materials

Two-dimensional crystalline layers of atoms have recently been exfoliated. Stacking a few layers of these 2D materials, such as graphene or molybdenum disulfide, at a controlled twist angle has opened the possibility of tuning the electronic and optical properties of 2D materials. One of the main issues encountered in the mathematical and computational modeling of 2D materials is that lattice mismatch and rotations between the layers destroy the periodic character of the system.

Basic concepts like elastic relaxation, electronic density of states (eigenvalue distribution of the Hamiltonian), and transport (Green-Kubo formula) will be formulated and analyzed in the incommensurate (aperiodic) setting. We have developed a novel variational model for the elastic relaxation and new methods to compute electronic density of states and transport for the incommensurate Hamiltonian. New computational approaches will be presented, and the validity and efficiency of these approximations will be examined from mathematical and numerical analysis perspectives.


Laurent Michel Université de Bordeaux

Sharp spectral gap for a non-reversible metastable diffusion

Consider the overdamped Langevin equation $dx_t=-U_h(x_t)\,dt+\sqrt{2h}\,dB_t$ associated to a vector field $U_h(x)$ in the low-temperature regime $h\rightarrow 0$. We study the spectrum of the associated operator $L_h=-h\Delta+U_h\cdot\nabla$ under the assumption that the dynamics admits a stationary distribution $e^{-V/h}$ for some smooth function $V:\mathbb{R}^d\rightarrow\mathbb{R}$. Assuming additionally that $V$ is a Morse function, we prove that $L_h$ admits exactly $n_0$ eigenvalues in the strip $\{\operatorname{Re}(z)< Ch\}$, where $n_0$ is the number of minima of $V$, and that these eigenvalues have exponentially small modulus. Under a generic assumption we also give sharp asymptotics of these eigenvalues in terms of Arrhenius numbers. This is joint work with D. Le Peutrec.


Christoph Ortner University of Warwick

Mathematics+Physics+Data-Driven Interatomic Potentials

Accurate molecular simulation requires prohibitively expensive quantum chemistry models that accurately capture the interaction between nuclei and electrons. The past decade has seen a revival of interatomic potentials, re-casting their construction as an approximation problem (or, “machine learning”). In this lecture, I plan to give an overview of MLIPs, and motivate our own strategy, which pursues a partial return to classical potentials. First, I will show how to construct systematically improvable MLIPs using a cluster expansion and symmetric polynomials to approximate the individual components. In the second step, I will show how this adherence to "simple" (low-dimensional) functional forms allows us to effectively regularise our IPs, to obtain "transferable" potentials that are suitable for a wide range of modelling scenarios. (Joint work with Genevieve Dusson, Cas van der Oord, Gabor Csanyi.)
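A toy illustration of the regularisation point, under assumptions not taken from the talk (a pair-potential basis in inverse powers with a second-difference Tikhonov penalty; the actual construction uses body-ordered symmetric polynomials):

import numpy as np

rng = np.random.default_rng(6)
K = 12
r = rng.uniform(0.9, 2.5, 40)                      # sparse, noisy training set
E = 4.0 * (r ** (-12.0) - r ** (-6.0)) + rng.normal(0.0, 0.01, r.size)

A = np.column_stack([r ** (-k) for k in range(1, K + 1)])   # pair basis
D = np.diff(np.eye(K), n=2, axis=0)                # second-difference operator

def fit(lam):
    # Tikhonov-regularised least squares: min |A c - E|^2 + lam |D c|^2
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ E)

for lam in (1e-12, 1e-6, 1e-2):
    c = fit(lam)
    resid = np.linalg.norm(A @ c - E)
    print(f"lambda={lam:g}: residual={resid:.3e}, |coeffs|={np.linalg.norm(c):.3e}")

Smaller coefficient norms at essentially unchanged residual are the practical signature of a well-regularised, more transferable fit.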


Yannis Pantazis FORTH - IACM

Dealing with Latent Confounders in Sparse Inference of Dynamical Systems

A common assumption in structure inference of dynamical systems from temporal measurements is the absence of latent confounders. A latent confounder is an unobserved quantity that directly affects at least two measured ones. In this talk, we will briefly introduce the problem of sparse learning of dynamical systems and then present a novel methodology for dealing with latent confounders. A mathematical theory that determines how many latent confounders are present, as well as an algorithm that infers both the dynamical system and the latent confounders, will be given. We demonstrate the capabilities of the proposed approach on synthetic examples from non-linear biochemical reaction networks as well as on real temporal data.
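As background, the confounder-free baseline problem can be sketched as sparse regression of estimated derivatives on a library of candidate terms (an illustrative SINDy-style sketch; the talk's treatment of latent confounders goes beyond this and is not reproduced here):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

# Simulate x' = -2x + 0.5*x*y, y' = x - y (a toy "biochemical" network)
dt, T = 1e-3, 8.0
n = int(T / dt)
X = np.empty((n, 2))
X[0] = (1.0, 0.5)
for k in range(n - 1):
    x, y = X[k]
    X[k + 1] = X[k] + dt * np.array([-2.0 * x + 0.5 * x * y, x - y])

dXdt = np.gradient(X, dt, axis=0)                  # derivative estimates
x, y = X[:, 0], X[:, 1]
library = np.column_stack([x, y, x * y, x ** 2, y ** 2])  # candidate terms

for i, name in enumerate(("x'", "y'")):
    coef = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(
        library, dXdt[:, i]).coef_
    # expect approximately [-2, 0, 0.5, 0, 0] and [1, -1, 0, 0, 0]
    print(name, np.round(coef, 2))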


Paul Patrone NIST

Uncertainty quantification for coarse-grained molecular dynamics: the role of asymptotics in model reduction

Coarse-grained (CG) molecular dynamics (MD) simulations have gained widespread attention for their promise to decrease the cost of modeling complicated systems, thereby facilitating tasks such as materials discovery and development. However, for many coarse-graining strategies, the formulation of a priori uncertainty estimates remains an open problem, so that the accuracy of the corresponding reduced-order models can only be assessed through comparison with their fully atomistic counterparts. This problem in particular has limited the adoption of CG techniques in industrial and manufacturing settings because the cost to validate such models invariably outweighs later computational gains.


Motivated by these observations, I will discuss asymptotics and perturbation theory as tools to: (I) understand the implicit assumptions underlying CG methods that project out atomistic degrees of freedom; and (II) address challenges associated with uncertainty quantification (UQ) of CG models. In particular, I will consider our recent progress on generalized multipole expansions as a route for coarse-graining. The main idea behind this approach is to replace a collection of interatomic potentials between atoms of different molecules with a formally exact intermolecular potential. Notably, the latter can be expressed as a series expansion whose terms depend on increasingly complicated geometric moments of the molecular structure. One can show how specific truncations of this expansion lead to a popular class of coordinate-reduction schemes, thereby revealing the origin and magnitude of errors in a CG model relative to its atomistic counterpart. In this context, I will also consider causes of the transferability problem for CG MD and discuss the need for a more precise definition of this concept as it pertains to UQ.


Luc Rey-Bellet University of Massachusetts Amherst

Information theory methods for uncertainty quantification

Given a baseline model $P$ and some quantity of interest (for example the expected value of some observable), we derive bounds on the bias incurred when using an alternative model $Q$ which is specified to belong to some (relative entropy) neighborhood of $P$. Our bounds are derived using the Gibbs variational representation of the relative entropy and are tight. Moreover, they scale properly with time and/or the size of the system. They are made into computable bounds by using concentration inequalities. We also discuss how Rényi entropy can be used to quantify the uncertainty of rare events.
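In this notation, a representative form of the bound (obtained from the Gibbs variational formula $\log\mathbb{E}_P[e^{g}]=\sup_{Q}\{\mathbb{E}_Q[g]-R(Q\,\|\,P)\}$; constants and refinements as in the talk) reads

$$ \pm\big(\mathbb{E}_Q[f]-\mathbb{E}_P[f]\big)\;\le\;\inf_{c>0}\,\frac{1}{c}\Big(\log\mathbb{E}_P\big[e^{\pm c\,(f-\mathbb{E}_P[f])}\big]+R(Q\,\|\,P)\Big), $$

so any bound on the cumulant generating function of $f$ under $P$ (e.g. a concentration inequality) immediately yields a computable bias bound over the whole relative-entropy neighborhood.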



Jutta Rogal Ruhr-Universität Bochum

Path collective variables for sampling structural phase transformations

Obtaining atomistic insight into the fundamental processes during structural phase transformations and their dynamical evolution up to experimental timescales remains one of the great challenges in materials modelling. In particular, if the mechanisms of the phase transformations are governed by so-called rare events, the timescales of interest will reach far beyond the applicability of regular molecular dynamics simulations. In addition to the timescale problem, the simulations provide a vast amount of data within the high-dimensional phase space. A meaningful physical interpretation of these data requires the projection into a low-dimensional space and the identification of suitable reaction coordinates.

Here, I will present an approach to rigorously construct a one-dimensional path collective variable that can be used to sample phase transformations in materials. Together with enhanced sampling schemes such as driven adiabatic free energy dynamics (d-AFED)/temperature-accelerated molecular dynamics (TAMD) or metadynamics, an efficient exploration of the high-dimensional phase space is achieved. As an example, I will discuss the analysis of phase boundary migration during a solid-solid transformation in molybdenum between the topologically close-packed A15 and the body-centred cubic phase. The transformation proceeds via collective displacements of atoms in the interface region. The effective energy barrier that determines the mobility of the phase boundary is not associated with specific atomistic processes, but results from the characteristic features of the complex energy landscape that the system explores during the transformation.


Emanuel Rubensson Uppsala University

Localized electronic structure methods

We propose a new type of divide-and-conquer method for electronic structure calculations based on, for example, Hartree-Fock or Kohn-Sham density functional theory. The calculation is split into two disconnected calculations for two subsystems, followed by a correction step whose cost is negligible compared to the overall cost for sufficiently large systems. This binary split continues recursively, which gives a method of divide-and-conquer type. Since the subproblems at each level of the recursion are disconnected, they can be solved in parallel without any communication in between.

We develop such divide and conquer methods both for the inverse factorization of the basis set overlap matrix and for the computation of the density matrix for a given effective Hamiltonian matrix. The inverse factor is used to transform the quantum mechanical eigenvalue problem to standard form, which is needed for efficient density matrix methods.

We focus here mainly on our new method for the inverse factorization and present theoretical and experimental results regarding convergence, stability, and the decay properties of the involved matrices.
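The structure of the method can be sketched in a few lines of dense linear algebra (a hedged illustration: the refinement iteration and test matrix below are standard choices, not necessarily those of the talk; block-sparse matrices and the parallel runtime are what make the real method scale):

import numpy as np

def inv_factor(S, nmin=64):
    # returns Z with Z^T S Z = I (congruence inverse factor)
    n = S.shape[0]
    if n <= nmin:
        # base case: inverse Cholesky factor, Z = L^{-T} for S = L L^T
        return np.linalg.inv(np.linalg.cholesky(S)).T
    m = n // 2
    Z = np.zeros_like(S)
    Z[:m, :m] = inv_factor(S[:m, :m], nmin)   # the two subproblems are
    Z[m:, m:] = inv_factor(S[m:, m:], nmin)   # disconnected (parallelizable)
    # correction step: iterative refinement Z <- Z (I + d/2), d = I - Z^T S Z,
    # converging quadratically whenever ||d|| < 1
    for _ in range(30):
        d = np.eye(n) - Z.T @ S @ Z
        if np.linalg.norm(d, 2) < 1e-10:
            break
        Z = Z @ (np.eye(n) + 0.5 * d)
    return Z

n = 256  # overlap-like test matrix with exponentially decaying couplings
S = np.fromfunction(lambda i, j: 0.3 ** np.abs(i - j), (n, n))
Z = inv_factor(S)
print("||Z^T S Z - I|| =", np.linalg.norm(Z.T @ S @ Z - np.eye(n)))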

We will also present results regarding the scaling of the computational cost with system size as well as the parallel performance of our Chunks and Tasks implementation. The Chunks and Tasks parallel programming model supports implementation of locality-aware communication avoiding algorithms. We show that a substantial reduction of the communication volume is achieved in practice.


Matthias Sachs Duke University

Ergodic Properties of Non-equilibrium (Generalized) Langevin Equations

In this talk I discuss ergodic properties for systems which can be described by a quasi-Markovian generalized Langevin equation. Traditionally, in thermal equilibrium, one assumes (i) the forces in the (generalized) Langevin equation are given as the gradient of a potential and (ii) a fluctuation-dissipation relation holds between stochastic and dissipative forces; these assumptions ensure that the system samples a prescribed Gibbs-Boltzmann distribution for a specified target temperature. In this talk I will relax these assumptions allowing for non-stationary noise and temperature parameters as well as non-conservative force fields.
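As background, in one standard form (a generic formulation, not necessarily the talk's exact setup), the generalized Langevin equation with memory kernel $K$ reads

$$ \dot q_t = M^{-1}p_t, \qquad \dot p_t = -\nabla U(q_t)\;-\;\int_0^t K(t-s)\,M^{-1}p_s\,\mathrm{d}s\;+\;F_t, $$

with the equilibrium fluctuation-dissipation relation $\mathbb{E}\big[F_t F_s^\top\big]=\beta^{-1}K(t-s)$. "Quasi-Markovian" refers to kernels of the form $K(t)=\Lambda^\top e^{-tA}\Lambda$, realized by coupling $(q,p)$ linearly to auxiliary Ornstein-Uhlenbeck variables so that the extended system is Markovian; the talk then relaxes both the gradient structure of the force and the relation above.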


Gideon Simpson Drexel University

Sampling from Rough Energy Landscapes

Rough energy landscapes appear in a variety of applications including disordered media and soft matter. In this work, we examine challenges to sampling from Boltzmann distributions associated with rough energy landscapes. Here, the roughness corresponds to highly oscillatory, but bounded, perturbations of a fundamentally smooth landscape. Through a combination of numerical experiments and asymptotic analysis, we demonstrate that the performance of the Metropolis Adjusted Langevin Algorithm (MALA) can be severely attenuated as the roughness increases. In contrast, we prove, rigorously, that Random Walk Metropolis is insensitive to such roughness. We also formulate two alternative sampling strategies that incorporate large-scale features of the energy landscape while resisting the impact of roughness; these also outperform Random Walk Metropolis. Numerical experiments on these landscapes are presented that confirm our predictions. Open analysis questions and numerical challenges are also highlighted.
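A toy numerical illustration of the dichotomy (a sketch, not the paper's experiments; the potential and step sizes are arbitrary choices): MALA's gradient feels the O(eps/delta) oscillatory forces and its acceptance rate collapses as delta shrinks, while Random Walk Metropolis only sees the bounded perturbation of the potential values.

import numpy as np

rng = np.random.default_rng(8)
eps, delta = 1.0, 0.01
V = lambda x: 0.5 * x ** 2 + eps * np.cos(x / delta)
dV = lambda x: x - (eps / delta) * np.sin(x / delta)

def run(kind, h=0.1, n=50000):
    x, acc = 0.0, 0
    for _ in range(n):
        if kind == "rwm":
            y = x + np.sqrt(2.0 * h) * rng.normal()
            log_a = V(x) - V(y)
        else:  # MALA: drifted proposal plus proposal-density correction
            y = x - h * dV(x) + np.sqrt(2.0 * h) * rng.normal()
            log_a = (V(x) - V(y)
                     - (x - y + h * dV(y)) ** 2 / (4.0 * h)
                     + (y - x + h * dV(x)) ** 2 / (4.0 * h))
        if np.log(rng.uniform()) < log_a:
            x, acc = y, acc + 1
    return acc / n

print("RWM acceptance :", run("rwm"))
print("MALA acceptance:", run("mala"))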

This is joint work with P. Plechac (Delaware).


Michail Stamatakis University College London

Modelling the Kinetics of Heterogeneously Catalysed Reactions: Predictive Power at Low Computational Cost

Modelling catalytic kinetics is indispensable for the design of reactors and chemical processes. Yet, developing accurate and computationally efficient kinetic models remains challenging. Deterministic mean-field models are computationally efficient but may incur large errors as they omit spatial correlations between adsorbates, and treat the effect of lateral interactions on reaction barriers (coverage effects) in a simplistic way. Kinetic Monte Carlo (KMC) approaches provide high accuracy with a detailed treatment of these effects (Nielsen et al., 2013), but at a significant computational expense.

Motivated by this challenge, we first discuss the necessity of high-fidelity KMC descriptions, and then showcase the development of approximations that provide accuracy comparable to that of KMC at a remarkably lower computational cost. We demonstrate our findings on chemistries relevant to emissions control, in particular CO oxidation and NO_2 reduction. We show that for the case of CO oxidation, the observed half-order kinetics at high temperatures arise from the effect of coverage on the reaction barrier, rather than from oxygen island formation as previously thought (Stamatakis & Piccinin, 2016). We further use the NO_2 reduction and NO oxidation reactions as a benchmark for our novel approximations, which can efficiently capture coverage effects. While more computationally intensive than the rather inaccurate mean-field model, these approximations still enable significant computational savings compared to a KMC simulation (Pineda and Stamatakis, 2017), thereby paving the way for employing them in multiscale modelling frameworks for first-principles-based reactor design.
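A minimal on-lattice KMC sketch of the kind of coverage effect at stake (illustrative only, not the simulation framework of the talk: a 1D lattice with adsorption/desorption and a repulsive lateral interaction that accelerates desorption from crowded sites, advanced with a rejection-free Gillespie update):

import numpy as np

rng = np.random.default_rng(9)
N, k_ads, k_des0, eps_nn = 200, 1.0, 0.5, 1.0   # eps_nn: lateral interaction
occ = np.zeros(N, dtype=int)
t, t_end = 0.0, 200.0

def rates():
    nn = np.roll(occ, 1) + np.roll(occ, -1)      # occupied neighbours
    return np.where(occ == 0, k_ads,             # adsorption on empty sites
                    k_des0 * np.exp(eps_nn * nn))  # repulsion speeds desorption

while t < t_end:
    r = rates()
    R = r.sum()
    t += rng.exponential(1.0 / R)                # Gillespie time increment
    i = rng.choice(N, p=r / R)                   # pick the site of the event
    occ[i] ^= 1                                  # flip occupancy
print("steady-state coverage:", occ.mean())

A mean-field model would use only the average coverage in the desorption rate; the KMC sketch retains the spatial correlations that such models omit.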

References:

J. Nielsen, M. d’Avezac, J. Hetherington, M. Stamatakis. (2013). “Parallel Kinetic Monte Carlo Simulation Framework Incorporating Accurate Models of Adsorbate Lateral Interactions”. The Journal of Chemical Physics, 139(22), 224706. doi:10.1063/1.4840395

M. Stamatakis, S. Piccinin. (2016). “Rationalising the relation between adlayer structure and observed kinetics in catalysis”. ACS Catalysis, 6(3), 2105-2111. doi:10.1021/acscatal.5b02876

M. Pineda, M. Stamatakis (2017). “Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics”. The Journal of Chemical Physics, 147(2): 024105. doi: 10.1063/1.4991690


Gabriel Stoltz Ecole des Ponts ParisTech

Convergence of Adaptive Langevin (and other dynamics) using hypocoercivity

Adaptive Langevin dynamics makes it possible to sample the Boltzmann--Gibbs distribution at a prescribed temperature for underdamped Langevin dynamics with unknown fluctuation magnitude. The latter situation appears, for instance, when sampling posterior distributions of parameters in Bayesian inference, in cases where there are many data points and one uses a mini-batching strategy to approximate the gradient of the likelihood function. The idea of Adaptive Langevin is to consider the friction in underdamped Langevin dynamics as a dynamical variable, updated according to some Nosé-Hoover feedback. We show here, using techniques from hypocoercivity, that the law of Adaptive Langevin dynamics converges exponentially fast to equilibrium, with a rate which can be quantified in terms of the key parameters of the dynamics (mass of the extra variable and magnitude of the fluctuation in the Langevin dynamics). This allows us in particular to obtain a Central Limit Theorem on time averages along realizations of the dynamics.
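In one common formulation (a sketch in standard notation; $d$ is the dimension and $\nu>0$ the mass of the extra variable), Adaptive Langevin dynamics reads

$$ \mathrm{d}q_t = M^{-1}p_t\,\mathrm{d}t,\qquad \mathrm{d}p_t = -\nabla V(q_t)\,\mathrm{d}t-\gamma_t\,p_t\,\mathrm{d}t+\sigma\,\mathrm{d}W_t,\qquad \mathrm{d}\gamma_t = \nu^{-1}\!\left(p_t^\top M^{-1}p_t-\frac{d}{\beta}\right)\mathrm{d}t, $$

where the fluctuation magnitude $\sigma$ need not be known: whatever its value, the invariant measure has Boltzmann--Gibbs marginal in $(q,p)$ at inverse temperature $\beta$, while $\gamma$ fluctuates around the friction $\bar\gamma=\beta\sigma^2/2$ dictated by fluctuation-dissipation.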

Time permitting, I will discuss how hypocoercive techniques can be used for other dynamics, such as Langevin dynamics with non-quadratic kinetic energies, nonequilibrium Langevin dynamics, spectral discretizations of Langevin dynamics and temperature-accelerated molecular dynamics.


Krzysztof Szalewicz University of Delaware

Modelling of complex materials based on force fields from quantum mechanics

Computational modelling of complex materials and material design have become mainstream methods in science and engineering and are expected to become even more important as computers pass the exaflop performance threshold. All such calculations require values of interaction potentials that produce forces acting on nuclei. Currently, these values are mostly either taken from empirical potential energy surfaces (EPESs), also called force fields, or computed on the fly using density-functional theory (DFT) methods (an approach referred to as ab initio molecular dynamics, AIMD). An alternative to these two approaches are physics-based PESs fitted to ab initio calculations (APESs) for dimers and trimers. In particular, symmetry-adapted perturbation theory (SAPT) [1] is well suited for such calculations due to its seamless connection to asymptotic interaction energies. APESs predict interaction energies that are much more accurate than those from either empirical PESs or DFT calculations. At the same time, the resources needed for nuclear dynamics with APESs are essentially the same as in the case of EPESs. The APES approach has been used less frequently than the other two mainly because the generation of first-principles PESs used to be very time-consuming in terms of human effort. This roadblock was recently removed by the development of an automatic PES-generation methodology [2]. This approach has been used successfully to predict the structures of molecular crystals.
The main weakness of current AIMD methods is the low accuracy of DFT. One reason is the difficulty DFT has in recovering dispersion energies, which result from nonlocal electron-correlation effects. A pragmatic solution consists of adding to DFT interaction energies atom-atom functions fitted to wave-function-based benchmark interaction energies, the so-called DFT+D approach; while this partly alleviates the problem, it is unsatisfactory on physical grounds [3]. Physics-based nonlocal functionals have been proposed, but give dispersion energies with errors as large as 30% or more [4]. A new nonlocal functional [5] will be described which matches the accuracy of DFT+D methods.

[1] K. Szalewicz, Wiley Interdisc. Rev.–Comp. Mol. Sci. 2, 254 (2012).
[2] M. P. Metz, K. Piszczatowski, and K. Szalewicz, J. Chem. Theory Comput. 12, 5895 (2016).
[3] M. Shahbaz and K. Szalewicz, Phys. Rev. Lett. 121, 113402 (2018).
[4] M. Shahbaz and K. Szalewicz, Theor. Chem. Acc. 138, 25 (2019).
[5] M. Shahbaz and K. Szalewicz, “Damped asymptotic dispersion energy from local polarizability densities,” (2019), submitted.


Brian Van Koten University of Massachusetts Amherst

Stratification and Markov Chain Monte Carlo

In stratified survey sampling, one divides a population into homogeneous subgroups and then draws samples from each subgroup independently. Stratification often permits accurate computation of statistics from a sample much smaller than required otherwise.
One can stratify Markov chain Monte Carlo (MCMC) simulations as well as surveys. This idea arose in computational statistical physics, and stratified MCMC has been instrumental in resolving important questions related to ion channels and protein folding. I will explain how to use stratified MCMC for a broad class of problems, including both the computation of averages with respect to an arbitrary target distribution and the computation of dynamical quantities such as rates of chemical reactions. I will then present theoretical results and numerical experiments which demonstrate the advantages of stratified MCMC.
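A toy numerical check of the survey-sampling statement in the first paragraph (illustrative numbers only; the talk's MCMC setting replaces the independent draws within each stratum by Markov chains and handles quantities the survey version takes as known):

import numpy as np

rng = np.random.default_rng(10)
w = np.array([0.5, 0.5])                  # known stratum weights
mus, sig = np.array([-3.0, 3.0]), 0.3     # two homogeneous subgroups

def simple(n):
    comp = rng.choice(2, size=n, p=w)
    return np.mean(mus[comp] + sig * rng.normal(size=n))

def stratified(n):
    est = 0.0
    for k in range(2):                    # sample each stratum independently
        nk = int(w[k] * n)
        est += w[k] * np.mean(mus[k] + sig * rng.normal(size=nk))
    return est

reps = np.array([[simple(100), stratified(100)] for _ in range(2000)])
print("std of simple estimate    :", reps[:, 0].std())
print("std of stratified estimate:", reps[:, 1].std())

At equal sample size, the stratified estimator's standard deviation is roughly an order of magnitude smaller here, because the between-stratum variability is removed exactly.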


Eric Vanden-Eijnden New York University

Dynamical Computation of the Density of States and Bayes Factors Using Nonequilibrium Importance Sampling

Nonequilibrium sampling is potentially much more versatile than its equilibrium counterpart, but it comes with challenges because the invariant distribution is typically not known when the dynamics breaks detailed balance. Here, I will present a generic importance sampling technique that leverages the statistical power of configurations transported by nonequilibrium trajectories and can be used to compute averages with respect to arbitrary target distributions. As a dissipative reweighting scheme, the method is related to the annealed importance sampling (AIS) method that relies on the Jarzynski equality. Unlike AIS, the approach gives an unbiased estimator, with a provably lower variance. I will illustrate the properties of estimators relying on this sampling technique in the context of density-of-states calculations, showing that it scales favorably with dimensionality: for example, I will show that it can be used to compute the phase diagram of the mean-field Ising model from a single nonequilibrium trajectory. I will also demonstrate the robustness and efficiency of the approach with an application to a Bayesian model comparison problem of the type encountered in astrophysics and machine learning.
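For context, the AIS baseline the talk improves upon reweights nonequilibrium trajectories by their dissipated work: if a protocol transports samples from $\rho_0\propto e^{-\beta H_0}$ towards the target $\rho_1\propto e^{-\beta H_1}$ while accumulating work $W$, the Jarzynski equality $\mathbb{E}[e^{-\beta W}]=e^{-\beta\Delta F}$ yields the standard estimator

$$ \langle\mathcal{O}\rangle_{\rho_1}=\frac{\mathbb{E}\big[\mathcal{O}(x_T)\,e^{-\beta W}\big]}{\mathbb{E}\big[e^{-\beta W}\big]}, $$

a ratio of Monte Carlo averages that is only asymptotically unbiased and whose variance grows with the dissipation; the method presented removes this bias, with a provably lower variance.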

This is joint work with Grant Rotskoff, Phys. Rev. Lett. 122, 150602 (2019).


Gilles Vilmart UNIGE

High-order integrators for sampling the invariant distribution of stiff ergodic stochastic differential equations

The preservation of geometric structures, such as the symplecticity of the flow for deterministic Hamiltonian systems, often proves essential for an accurate numerical integration, and this is the aim of geometric integration. In this talk we highlight the role that some geometric integration tools, originally introduced in the deterministic setting, play in the design of new accurate integrators to sample the invariant distribution of ergodic systems of stochastic ordinary and partial differential equations. In particular, we show how the ideas of modified differential equations and processing techniques permit one to increase, at negligible extra cost, the order of accuracy of stiff integrators, including implicit schemes and explicit stabilized schemes.

This talk is based on joint works with Assyr Abdulle (EPF Lausanne), Ibrahim Almuslimani (Univ. Geneva), Charles-Edouard Bréhier (Univ. Lyon), Adrien Laurent (Univ. Geneva), Gregorios A. Pavliotis (Imperial College London), Konstantinos C. Zygalakis (Univ. Edinburgh).
Preprints available at http://www.unige.ch/~vilmart


Jonathan Weare New York University

Fast randomized iterative numerical linear algebra for quantum chemistry and other applications

I will discuss a family of recently developed stochastic techniques for linear algebra problems involving very large matrices. These methods can be used to, for example, solve linear systems, estimate eigenvalues/vectors, and apply a matrix exponential to a vector, even in cases where the desired solution vector is too large to store. The first incarnations of this idea appeared for dominant eigenproblems arising in statistical physics and in quantum chemistry, and were inspired by the real-space diffusion Monte Carlo algorithm, which has been used to compute chemical ground states for small systems since the 1970s. I will discuss our own general framework for fast randomized iterative linear algebra as well as share a very partial explanation for its effectiveness. I will also report on the progress of an ongoing collaboration aimed at developing fast randomized iterative schemes specifically for applications in quantum chemistry. This talk is based on joint work with Lek-Heng Lim, Timothy Berkelbach, Sam Greene, and Rob Webber.
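A hedged sketch of the core mechanism (an unbiased sparse compression of the iterate inside a power iteration; the sampling scheme below is the simplest unbiased choice, whereas production schemes also keep the largest entries exactly and target far larger, implicitly defined matrices):

import numpy as np

rng = np.random.default_rng(11)

def compress(v, m):
    # sparse, unbiased random estimate of v with at most m nonzero entries:
    # sample indices with probability proportional to |v_i| and reweight
    p = np.abs(v) / np.abs(v).sum()
    idx = rng.choice(v.size, size=m, p=p)
    out = np.zeros_like(v)
    np.add.at(out, idx, np.sign(v[idx]) * np.abs(v).sum() / m)
    return out

n, m = 1000, 100
A = rng.uniform(0.0, 1.0, (n, n)) / n + np.eye(n)   # dense only for illustration
v, est = np.ones(n), []
for k in range(300):
    w = A @ compress(v, m)       # conceptually touches only ~m columns of A
    est.append(w.sum() / v.sum())
    v = w / np.linalg.norm(w)
print("FRI-style estimate:", np.mean(est[100:]))
print("exact lambda_max  :", np.max(np.abs(np.linalg.eigvals(A))))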


Thomas Weymuth ETHZ

Quantifying the Uncertainty of Quantum Chemical Calculations

In order to assess the reliability of a given quantum chemical method for a certain application, it is crucial to know its (systematic) error. However, all quantum chemical methods rely on a number of approximations, the combined effects of which are often unpredictable. Therefore, it is challenging to accurately quantify the uncertainty of a certain method [1, 2].

This talk will present recent contributions of our group to the automatic uncertainty quantification of quantum chemical methods. After a general introduction on how to treat inaccuracies of physico-chemical models in a statistically rigorous fashion [3], we present an approach to construct system-specific density functionals with Bayesian error estimation [4], as well as a new way to apply Gaussian processes in order to continuously estimate the uncertainty and improve the accuracy of quantum chemical results while exploring a chemical reaction network [5]. Finally, we will present our developments to assess the uncertainty of the semiclassical dispersion corrections developed by Grimme and coworkers [6].

[1] J. Proppe, T. Husch, G. N. Simm, M. Reiher, Faraday Discuss., 2016, 195, 497.
[2] G. N. Simm, J. Proppe, M. Reiher, Chimia, 2017, 71, 202.
[3] J. Proppe, M. Reiher, J. Chem. Theory Comput., 2017, 13, 3297.
[4] G. N. Simm, M. Reiher, J. Chem. Theory Comput., 2016, 12, 2762.
[5] G. N. Simm, M. Reiher, J. Chem. Theory Comput., 2018, 14, 5238.
[6] T. Weymuth, J. Proppe, M. Reiher, J. Chem. Theory Comput., 2018, 14, 2480.


Hessam Yazdani Howard University

Predicting mechanical properties of carbon nanotubes using machine learning

Properties of materials are a function of many parameters such as dimensions and temperature. There are laboratory and numerical methods (e.g. density functional theory and molecular dynamics) that can accurately determine the properties of materials given a set of these parameters. However, these methods are typically resource- and computationally intensive and involve a high degree of trial-and-error, serendipity and incremental advances, increasingly prompting the materials research community to use artificial intelligence to develop models that will predict materials’ properties efficiently and with adequate accuracy, and shorten the time-to-market of innovative materials. In this research, a machine learning (ML) model is developed to predict the mechanical properties of carbon nanotubes (CNTs). A comprehensive set of molecular dynamics simulations is carried out to produce the numerical and imagery data required to train and develop the ML model. The ML model consists of two sub-models: 1) a particular deep learning model known as a convolutional neural network (CNN) that identifies the size and chirality of a CNT from its image and 2) artificial neural network (ANN) and support vector machine (SVM) models that combine the data output by the CNN with that corresponding to the environmental and loading conditions (e.g. temperature and load type) to predict the mechanical properties of the CNT.