Scalable hierarchical algorithms for PDEs and UQ
By Dr. Alexander Litvinenko and Dr. Rio Yokota


Overview

Abstract

Hierarchy is a key ingredient for achieving optimal arithmetic complexity and scalable communication complexity in algorithms. Fast Multipole Methods (FMM) and hierarchical matrices (H-matrices) share many common features that arise from their hierarchical nature. In this talk, Rio Yokota will illustrate these common features and explain why they are favorable on next-generation computer architectures. Alexander Litvinenko will discuss the use of H-matrices in applications such as PDEs and uncertainty quantification (UQ). Discretization of PDEs at high fidelity leads to large matrices (stiffness, covariance, Jacobian, etc.) whose data volume is so large that it is difficult or even impossible to store them in computer memory. However, such matrices often have a special structure that can be exploited to reduce the storage cost from O(n^2) to O(n log n), where n is the number of rows/columns. This data-sparse/low-rank format extends to all basic linear algebra operations (addition, multiplication, inversion).
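
The following is a minimal, illustrative sketch (not code from the speakers) of the low-rank idea behind H-matrix compression: an off-diagonal block of a smooth covariance matrix has rapidly decaying singular values, so it can be stored in factored form at a fraction of the dense cost. It uses NumPy; the exponential kernel, the correlation length ell = 0.1, and the truncation tolerance are assumptions chosen only for the example.

# Low-rank compression of a well-separated block of a covariance matrix
# (illustrative sketch of the idea used in H-matrices / FMM far fields).
import numpy as np

n = 1024                      # number of grid points (rows/columns)
x = np.linspace(0.0, 1.0, n)  # 1-D point cloud

# Exponential covariance kernel C(x, y) = exp(-|x - y| / ell);
# kernel and correlation length are illustrative assumptions.
ell = 0.1
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Take a well-separated off-diagonal block (first half vs. second half).
B = C[: n // 2, n // 2:]

# Truncated SVD: keep only singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
tol = 1e-8
k = int(np.sum(s > tol * s[0]))          # numerical rank of the block
B_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k approximation

dense_storage = B.size
lowrank_storage = k * (B.shape[0] + B.shape[1])
print(f"block rank {k}: storage {lowrank_storage} vs. dense {dense_storage}")
print("relative error:", np.linalg.norm(B - B_k) / np.linalg.norm(B))

In a full H-matrix the whole matrix is partitioned recursively by a cluster tree, and only the admissible (well-separated) blocks are kept in this factored form while small near-diagonal blocks stay dense; summed over all levels of the hierarchy, this is what yields the O(n log n) storage mentioned above.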

Brief Biography

Alexander Litvinenko joined the Stochastic Numerics Group and the Strategic Initiative in Uncertainty Quantification at KAUST in 2013. He specializes in efficient numerical methods for stochastic PDEs, uncertainty quantification, and multi-linear algebra. He works on Bayesian update methods for solving inverse problems, with the goal of reducing the complexity of both the stochastic forward problem and the Bayesian update through low-rank (sparse) tensor data approximation. A further goal is the optimal design of experiments to maximize the information gain for a given amount of experimental effort. Applications include aerodynamics, subsurface flow, and climate prediction. Alexander earned B.S. (2000) and M.S. (2002) degrees in mathematics at Novosibirsk State University, and his Ph.D. (2006) in the group of Prof. Hackbusch at the Max-Planck-Institut in Leipzig, on combining domain decomposition methods and hierarchical matrices for solving elliptic PDEs with jumping and oscillatory coefficients. From 2007 to 2013 he was a Postdoctoral Research Fellow at TU Braunschweig in Germany, where he became interested in his current research focus: low-rank/sparse methods for uncertainty quantification and Bayesian updating for multi-parametric problems.

Brief Biography

Rio Yokota is a Research Scientist in the Strategic Initiative for Extreme Computing at KAUST, where he also advises several KAUST MS and Ph.D. students in their research. He currently works on fast multipole methods (FMM) and their implementation on large-scale heterogeneous architectures. During his Ph.D. in Mechanical Engineering at Keio University, he worked on the implementation of fast multipole methods on special-purpose machines such as MDGRAPE-3, and then on GPUs after CUDA was released. During a postdoc at the University of Bristol, he continued to work on FMM and was part of the team that won the Gordon Bell Prize for price/performance in 2009 using 760 GPUs. During a postdoc with Lorena Barba at Boston University, he developed an open-source parallel FMM code, ExaFMM. He now runs this code on full nodes of the TSUBAME 2.0 and K computer in Japan, as well as on large Blue Gene and Cray machines, for applications in molecular dynamics and sparse preconditioners. Rio is a member of ACM and SIAM.
