Hosts: Prof. Dr. R. Klein (FU), Prof. Dr. R. Kornhuber (FU), Prof. Dr. C. Schütte (FU/ZIB)
Location: Freie Universität Berlin, Institut für Mathematik, Arnimallee 6, 14195 Berlin-Dahlem, Room 031 (ground floor)
Time: The seminar takes place on Thursdays at 4:00 pm
Monday, 17.10.2016: Lecture
Omar Knio, King Abdullah University of Science & Technology
Surrogate-based approaches to parameter inference in ocean models
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer wind drag, bottom drag, and internal mixing coefficients.
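As a toy illustration of the MCMC route described above, the sketch below runs a random-walk Metropolis update. Everything in it is illustrative: a one-parameter polynomial stands in for the expensive ocean-model surrogate, and the observation is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: a cheap polynomial response in place of the
# expensive ocean model, as a function of an uncertain drag coefficient c.
def surrogate(c):
    return 2.0 * c + 0.5 * c**2

obs = surrogate(1.3) + 0.05   # synthetic observation with a small offset
sigma = 0.1                   # assumed observational noise level

def log_posterior(c):
    if not 0.0 < c < 3.0:     # uniform prior on (0, 3)
        return -np.inf
    return -0.5 * ((obs - surrogate(c)) / sigma) ** 2

# Random-walk Metropolis: propose, then accept or reject.
c, logp = 1.0, log_posterior(1.0)
samples = []
for _ in range(5000):
    c_new = c + 0.1 * rng.standard_normal()
    logp_new = log_posterior(c_new)
    if np.log(rng.random()) < logp_new - logp:
        c, logp = c_new, logp_new
    samples.append(c)

print(np.mean(samples[1000:]))  # posterior mean close to the value 1.3 used above
```

The same structure carries over when `surrogate` is replaced by a polynomial-chaos expansion fitted by non-intrusive spectral projection or regularized regression, as in the talk.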
Thursday, 03.11.2016: Lecture
Peter Düben, University of Oxford, Department of Physics
Reducing numerical precision to achieve higher accuracy in weather and climate modelling
In atmosphere and ocean models, values of relevant physical parameters are often uncertain by more than 100%, and weather forecast skill decreases significantly after a couple of days. Still, numerical operations are typically calculated with 15 decimal digits of numerical precision for real numbers. If we reduce numerical precision, we can reduce power consumption and increase computational performance significantly. Savings can be reinvested to allow simulations at higher resolution that would hopefully allow improved predictions of future weather and climate at higher accuracy.
We aim to reduce numerical precision to the minimal level that can be justified by the information content in the different components of weather and climate models. But how can we identify the optimal precision for a complex model with chaotic dynamics? We found that a comparison between the impact of rounding errors and the influence of sub-grid-scale variability can provide valuable information on the level of precision that should be used. We also found that the influence of rounding errors can actually be beneficial for simulations, since variability is increased, and that numerical precision can be reduced with forecast lead time as forecast errors grow. Our physical understanding of the system also guided us to investigate a scale-selective approach that uses high precision to integrate large-scale dynamics and low precision to integrate small-scale dynamics.
We have performed multiple studies that investigate the use of reduced numerical precision for atmospheric applications of different complexity (from Lorenz'95 to a full global circulation model) and cooperated with computer scientists to study the use of real hardware that allows numerical precision to be traded against performance in atmospheric applications. Results will be presented during the talk.
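A minimal software emulation of reduced precision, a common device in such studies, rounds the significand of every intermediate result. The damped oscillator below is a stand-in for illustration, not one of the models from the talk:

```python
import numpy as np

def reduce_precision(x, bits):
    """Emulate low-precision arithmetic by keeping `bits` significand bits."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2.0**bits) / 2.0**bits, e)

# Toy "model": a damped oscillator integrated with forward Euler, once in
# full double precision and once with the state rounded after each step.
def integrate(bits=None, steps=1000, dt=0.01):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * (x + 0.1 * v)
        if bits is not None:
            x, v = reduce_precision(x, bits), reduce_precision(v, bits)
    return x

err = abs(integrate(bits=12) - integrate())
print(err)  # rounding error remains small relative to the O(1) signal
```

Varying `bits` and comparing the error against the model's intrinsic uncertainty is the spirit of the precision analysis discussed in the abstract.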
Thursday, 17.11.2016: Colloquium
1) Luigi Sbailò, Freie Universität Berlin
"What is... greens function reaction dynamics"
"Particle-based reaction-diffusion simulations have been widely used to reproduce several biological systems. In contrast to other molecular kinetics simulations, the trajectory of all mesoscopic particles is resolved, which provides the finest spatio-temporal resolution at this scale. Mesoscopic particles diffuse according to the Langevin equation, and whenever they get close to each other reactions can happen. The most straightforward approach consists in a brute-force integration of the Langevin equation, where all particles are simultaneously propagated over a fixed integration step.
Unfortunately, a brute-force approach is notoriously inefficient in dilute systems, since, in that case, most of the computational time is spent simulating the free particles diffusion. Simulations of multi-scale systems are also challenging. The choice of the integration step size depends, indeed, on the particles mobility and the fastest particles generally determine the integration step size. Hence, the propagation of slow particles can be forced to integration steps shorter than what they would need.
Green's functions reaction-diffusion (GFRD) algorithms are event-based schemes that directly sample events as the encountering time between two particles. These schemes avoid the simulation of all steps in free-particle diffusion, which results in a tremendous improvement on the CPU performances."
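The contrast can be caricatured in a few lines for a single free particle in 1d. The values of `D`, `dt`, and `T` are arbitrary illustrative choices, not taken from any GFRD implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, T = 1.0, 1e-4, 1.0  # diffusion constant, small time step, horizon

# Brute force: 10^4 small overdamped-Langevin steps to reach time T.
x = 0.0
for _ in range(int(T / dt)):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal()

# Event-based, in the GFRD spirit: while a particle is isolated, its
# position at the next event time can be drawn exactly from the free
# Green's function, a Gaussian with variance 2*D*T -- one draw instead
# of 10^4 steps.
x_event = np.sqrt(2.0 * D * T) * rng.standard_normal()

print(x, x_event)  # two samples from the same distribution
```

The real algorithm also samples pair-encounter times from two-body Green's functions; the point here is only that isolated particles need no small-step propagation.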
2) Frank Noé and Luigi Sbailò
"... more on particle-based reaction kinetics"
Thursday, 01.12.2016: Colloquium
1) Arthur Straube, Freie Universität Berlin
"What is... the scaling hypothesis"
2) Felix Höfling, Freie Universität Berlin
Anomalous transport in the Lorentz model and in crowded cells
Anomalous, slow transport is widely observed for tracer particles in heterogeneous host structures, which are typically found in (quasi-)arrested liquids and, approximately, in densely crowded cellular fluids. I will discuss the Lorentz model for transport in such structures, which serves as paradigm for a classical localisation transition. Here, an underlying percolation phenomenon entails critical slowing down of the tracer motion, visible as subdiffusion at criticality. Guided by large-scale simulations, we have developed an elaborate scaling description of the anomalous transport. I will also make connections to random resistor networks and to the "ant in the labyrinth" and discuss universality. Eventually, I will apply the obtained insight to cellular fluids, where anomalous transport of, e.g., proteins can be measured on a spatio-temporal level by variable-lengthscale fluorescence correlation spectroscopy.
Thursday, 15.12.2016: Colloquium
1) Stefan Klus, Freie Universität Berlin. "What is... the Koopman operator?"
2) Thomas von Larcher, Freie Universität Berlin. "What is... Dynamic Mode Decomposition?"
Dynamic mode decomposition (DMD) was originally developed by the fluid dynamics community as a tool to identify coherent structures in fluid flows. Since its introduction, several variants and extensions have been proposed, which have been successfully used for the analysis of complex dynamical systems arising in reaction-diffusion problems, neuroscience, and molecular dynamics. It was shown that DMD and its generalizations are related to Koopman operator approximation techniques. In the first talk, we will introduce the Koopman operator; in the second talk, we will outline the DMD algorithm, show applications, and illustrate the relationship with (extensions of) DMD.
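A bare-bones version of exact DMD can be written in a few lines. The 2x2 system and the snapshot data below are invented for illustration; the check is that DMD recovers the system's eigenvalues:

```python
import numpy as np

# Snapshots of the linear system x_{k+1} = A x_k for a known matrix A.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
X = np.empty((2, 20))
X[:, 0] = rng.standard_normal(2)
for k in range(19):
    X[:, k + 1] = A @ X[:, k]

X1, X2 = X[:, :-1], X[:, 1:]                # snapshot pairs
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                        # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(A_tilde)          # DMD eigenvalues
modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # DMD modes

print(np.sort(eigvals.real))  # recovers the eigenvalues 0.8 and 0.9 of A
```

For nonlinear systems the same computation approximates the leading Koopman eigenvalues and modes, which is the connection the two talks develop.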
Thursday, 05.01.2017: Lecture
Helmut Harbrecht, University of Basel
Modelling and simulation of elliptic diffusion problems on random domains
There are basically two methods to deal with elliptic diffusion problems on random domains.
On the one hand, the perturbation method is based on a prescribed perturbation field of the boundary and uses a shape Taylor expansion with respect to this perturbation field to approximately represent the random solution. This yields a simple approach which, however, induces a model error.
On the other hand, in the domain mapping approach, the random domain is mapped onto a nominal, fixed domain. This requires that the perturbation field is also known in the interior of the domain, but the resulting partial differential equation with random diffusion matrix and random load can be solved without systematic errors. In this talk, we present theoretical and practical results for both methods. In particular, we discuss their advantages and disadvantages.
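To fix ideas, the pull-back in the domain mapping approach is a standard change of variables (the notation here is illustrative, not from the talk). For a model problem $-\Delta u = f$ on the random domain $D(\omega)$ and a mapping $\mathbf{V}(\omega)\colon D_{\mathrm{ref}}\to D(\omega)$ with Jacobian $\mathbf{J}(\omega) = D\mathbf{V}(\omega)$, the transported problem on the fixed reference domain reads

```latex
-\operatorname{div}\bigl(\mathbf{A}(\omega)\,\nabla \hat{u}\bigr)
  = \hat{f}\,\det\mathbf{J}(\omega)
  \quad\text{in } D_{\mathrm{ref}},
\qquad
\mathbf{A}(\omega)
  = \mathbf{J}(\omega)^{-1}\,\mathbf{J}(\omega)^{-\top}\det\mathbf{J}(\omega),
```

with $\hat{u} = u\circ\mathbf{V}$ and $\hat{f} = f\circ\mathbf{V}$, so the randomness of the geometry reappears as a random diffusion matrix and a random load, as stated above.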
Thursday, 19.01.2017: Mini Project Presentation
1) Luca Donati, Freie Universität Berlin:
Transition rate matrix in molecular kinetics by Square Root Approximation
We have studied a method to construct the transition rate matrix of a molecular system. The method considers a Voronoi discretization of conformational space and exploits the Gauss theorem to write the instantaneous rate between adjacent cells in terms of the Boltzmann weight of the intersecting surface. This quantity can be approximated by the geometric average of the Boltzmann weights of the cells. By Perron cluster analysis we can also identify the metastable sets and obtain the rates between conformations. This rate matrix is associated with the transition probability matrix between metastable states and can be used to estimate the implied timescales. The method can be used to study the effect of small perturbations of the potential energy function. We present results for a 2d diffusion process and alanine dipeptide.
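A minimal sketch of this square-root approximation on a 1d grid (a stand-in for the Voronoi discretization; the double-well potential and inverse temperature are illustrative choices):

```python
import numpy as np

# Square Root Approximation (SQRA) sketch: the rate between adjacent
# cells i and j is taken proportional to sqrt(pi_j / pi_i), i.e. the
# geometric mean of the Boltzmann weights enters the shared boundary term;
# the diagonal is closed so that Q has zero row sums.
beta = 1.0
x = np.linspace(-2.0, 2.0, 21)             # cell centers
V = (x**2 - 1.0) ** 2                      # double-well potential
pi = np.exp(-beta * V)                     # Boltzmann weights

n = len(x)
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = np.sqrt(pi[i + 1] / pi[i])
    Q[i + 1, i] = np.sqrt(pi[i] / pi[i + 1])
Q -= np.diag(Q.sum(axis=1))                # generator: rows sum to zero

# By construction Q satisfies detailed balance, so pi @ Q = 0 up to round-off.
print(np.max(np.abs(pi @ Q)))
```

Spectral analysis of such a `Q` (e.g. by Perron cluster analysis, as in the abstract) then yields metastable sets and rates between them.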
The study was carried out in a two-month miniproject at ZIB under the supervision of PD Dr. Marcus Weber. It was particularly useful for deepening theoretical aspects of molecular dynamics and for learning new mathematical methods. The project also strengthened the collaboration between projects B05 and A05 within CRC 1114.
2) Marco Reidelbach, Freie Universität Berlin:
Proton Transfer in Cytochrome c Oxidase - Calculation and Prediction of Transition Networks
In Cytochrome c Oxidase, protons are transferred from the complex surface towards the interior via two distinct channels. How the proton transfer occurs inside the channels, however, is unclear. The Transition Network approach allows the identification of such proton transfer pathways without introducing any bias; in this way, a pathway displaying several experimental characteristics was identified. To circumvent numerous Transition Network re-calculations, e.g. for different redox states, Transition Network predictions via committor probabilities, mean first passage times, and minimum spanning trees were tested in a mini-project with Marcus Weber. Using minimum spanning tree sensitivities proved to be efficient.
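For the committor-based predictions, the basic computation looks like this on a toy 4-state network. The transition matrix is made up for illustration; in the actual study the states would be protonation patterns:

```python
import numpy as np

# Committor probabilities on a toy 4-state transition network: the
# committor q_i is the probability of reaching the target state before
# the source state, obtained from a small linear system.
P = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
source, target = 0, 3
inner = [1, 2]                            # intermediate states
# Solve (I - P_inner) q = P[inner, target], with q(source)=0, q(target)=1.
A = np.eye(2) - P[np.ix_(inner, inner)]
b = P[inner, target]
q_inner = np.linalg.solve(A, b)
q = np.array([0.0, *q_inner, 1.0])
print(q)  # increases monotonically from source to target
```

Ranking edges or states by such quantities (or by minimum spanning tree sensitivities, as in the abstract) is what makes predictions cheaper than recomputing the full network.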
Thursday, 26.01.2017: Extra Lecture
Birgit Strodel, Universität Düsseldorf / FZ Jülich
Transition networks and Markov state models for protein aggregation
Thursday, 02.02.2017: Colloquium
Illia Horenko, University of Lugano
What is the Bayesian relation model? On data-driven Bayesian model reduction and applications.
In the "What is ..." part of the talk Bayesian relation models (BRM) and Bayesian networks, their relation to Markov processes will be introduced. Next, issues related to a data-driven inference of BRMs (like "curse of dimension", quantification of uncertainty and computational cost) will be discussed. A particular focus will be on the model reduction aspect - since applicability of many computational approaches for multiscale systems is dwelling on identification of reduced dynamical models defined on a small set of collective variables (colvars). The popular approaches to Bayesian and Markovian model reduction rely on the knowledge of the full matrix of relations between the systems components. In many application areas these matrices are not directly available and must first be estimated from the data, resulting in the uncertainty of the obtained models and colvars that can grow exponentially with the physical dimension of the system.
A simple-to-implement but still rigorous clustering methodology for probability-preserving identification of reduced dynamical models and colvars directly from data will be presented, not relying on the availability of full relation matrices or models at any stage of the algorithm. The newly developed open-access Bayesian Model Reduction Toolbox in Matlab will be introduced. The methodology will be demonstrated on an application to the analysis and modeling of interactions between global teleconnections in the atmosphere and on a biomolecular dynamics application.
The talk will describe the work from the following papers:
 S. Gerber and I. Horenko: On inference of causality for discrete state models in a multiscale context, PNAS, 111(41), pp.14651–14656, 2014.
 S. Gerber and I. Horenko: Improving clustering by imposing network information, Science Advances (AAAS), 1(7), 2015.
 S. Gerber and I. Horenko: Towards a direct identification of reduced dynamical models for categorical processes, under review in PNAS, 2017
Thursday, 09.02.2017: Extra Lecture
Tiejun Li, Peking University
Energy landscape and rare events for biological jump processes.
The construction of energy landscapes for bio-dynamics has been attracting more and more attention in recent years. In this talk, I will introduce a strategy to construct the landscape from its connection with rare events, which relies on large deviation theory for Gillespie-type jump dynamics. In an application to a typical genetic switching model, a two-scale large deviation theory is developed to take into account the fast switching of DNA states. The comparison with other proposals is also discussed. We demonstrate that different diffusive limits arise when considering different regimes for the genetic translation and switching processes. I will also talk about applications to understanding the S-phase checkpoint activation mechanism in budding yeast. This is joint work with Fangting Li, Xiaoguang Li, Cheng Lv and Peijie Zhou.
Thursday, 16.02.2017: Colloquium
1) Ludwig Gauckler, Freie Universität Berlin:
"What is ... the stochastic simulation algorithm?"
In preparation for Sebastian Matera's talk, a very short introduction to the stochastic simulation algorithm is given.
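For reference, the algorithm in its simplest form, applied to a single decay reaction A -> B with rate constant k (all values illustrative): draw an exponential waiting time from the total propensity, fire one reaction, repeat.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gillespie's stochastic simulation algorithm for the single reaction
# A -> B with rate constant k: exponential waiting times, one event per step.
k, t, n_A = 0.5, 0.0, 100
times, counts = [t], [n_A]
while n_A > 0:
    a = k * n_A                     # total propensity
    t += rng.exponential(1.0 / a)   # waiting time to the next reaction
    n_A -= 1                        # fire the (only) reaction channel
    times.append(t)
    counts.append(n_A)

print(times[-1])  # expected completion time is (1/k) * H_100, about 10.4
```

With several reaction channels, one additionally samples which channel fires, with probability proportional to its propensity; that is the full algorithm used in kinetic Monte Carlo models of surface chemistry.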
2) Sebastian Matera, Freie Universität Berlin:
Bridging between electronic and continuum scales for understanding heterogeneous catalysis
Heterogeneous catalysis is one of the central technologies in today's chemical industry, affecting most aspects of our daily life. A chemical understanding of the involved surface reaction steps can best be obtained on the scales of bond making and bond breaking, i.e. electronic time and length scales. On the other hand, catalytic performance results from the interplay of many reaction events. Finally, real-life, in situ experiments take place in macroscopic reaction chambers and are usually affected by larger-scale effects, i.e. macroscopic heat and mass transport.
In this talk, I will present an approach (or rather a chain of approaches) that can bridge these very disparate time and length scales. The procedure starts from electronic structure methods for describing the interaction of atoms and molecules with the surface. Based on these findings, a mesoscopic Markov model for the surface chemistry is derived. This is finally coupled to a continuum-level description of the macroscopic transport in the chamber.
I will demonstrate how this methodology can help to unveil the atomistic driving forces behind macroscopic reactivity and to answer some of the open questions in the field.