In the age of 'data science', many important inverse problems encountered in applied mathematics, for instance when one wants to recover parameter coefficients of PDEs, diffusions, or jump processes, are naturally modelled to include statistical noise or random measurement error. Standard numerical methods for solving inverse problems are generally not robust to the presence of noise, particularly for non-linear problems. In recent years the Bayesian approach has been put forward as a general paradigm for solving statistical inverse problems in a generic way, most notably by Andrew Stuart and co-authors, who have devised successful MCMC algorithms to compute posterior distributions in such settings. A key mathematical question is what recovery guarantees can be given for these algorithms and the associated statistical inference ('uncertainty quantification') procedures. We will discuss some recent theorems demonstrating that the Bayes formalism successfully, and optimally, solves many relevant inverse problems, based on proving frequentist contraction and Laplace-Bernstein-von Mises results for the relevant Bayesian posterior distributions.
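As a schematic illustration of the setting described above (a standard formulation, not taken from the abstract itself; the forward map $\mathcal{G}$ and the Gaussian noise model are illustrative assumptions), the statistical inverse problem and its Bayesian solution can be sketched as:

```latex
% Observations: the unknown parameter \theta (e.g. a PDE coefficient) is
% observed through a forward map \mathcal{G}, corrupted by Gaussian noise:
\[
  Y_i = \mathcal{G}(\theta)(X_i) + \varepsilon_i,
  \qquad \varepsilon_i \overset{iid}{\sim} N(0,\sigma^2),
  \qquad i = 1, \dots, N.
\]
% Bayes' formula combines a prior \Pi on \theta with the likelihood:
\[
  d\Pi(\theta \mid Y) \;\propto\; e^{\ell_N(\theta)}\, d\Pi(\theta),
  \qquad
  \ell_N(\theta) = -\frac{1}{2\sigma^2}
    \sum_{i=1}^{N} \bigl(Y_i - \mathcal{G}(\theta)(X_i)\bigr)^2.
\]
% Frequentist contraction about the ground truth \theta_0 means that,
% for a rate \epsilon_N \to 0 and some constant M,
\[
  \Pi\bigl(\theta : \|\theta - \theta_0\| > M \epsilon_N \,\big|\, Y\bigr)
  \;\longrightarrow\; 0
\]
% in probability under the law of the data generated from \theta_0.
```

A Bernstein-von Mises result strengthens this by showing the posterior is approximately Gaussian, which in turn justifies the uncertainty quantification (credible sets) mentioned in the abstract.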

No prior knowledge of mathematical statistics will be required for this talk.