Description
To solve tasks such as uncertainty quantification or hypothesis testing in Bayesian imaging inverse problems, which go beyond the computation of point estimates, we have to draw samples from the posterior distribution. For log-concave but typically high-dimensional posteriors, Markov chain Monte Carlo methods based on time discretizations of Langevin diffusion are a popular tool. If the potential defining the distribution is non-smooth, as is the case for many relevant imaging problems, these discretizations usually take an implicit form. This leads to Langevin sampling algorithms that require the evaluation of proximal operators, which, for some of the potentials arising in imaging, is only possible approximately via an iterative scheme. We investigate the behaviour of a proximal Langevin algorithm in the presence of errors in the evaluation of the proximal mappings. We generalize existing non-asymptotic and asymptotic convergence results for the exact algorithm to our inexact setting and quantify the additional bias between the target and the algorithm's stationary distribution caused by the errors. We show that this additional bias stays bounded for bounded errors and converges to zero for decaying errors in a strongly convex setting. We present numerical results in which we apply the inexact algorithm to sample from the posteriors of typical imaging inverse problems where the proximal operator can only be approximated by an iterative scheme.
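To make the setting concrete, the following is a minimal Python sketch of one common proximal Langevin variant with an inexact proximal map, not the speaker's implementation. It assumes a forward-backward update x_{k+1} = prox_{gamma*G}(x_k - gamma*grad F(x_k)) + sqrt(2*gamma) * xi_k for a posterior potential U = F + G with smooth F and non-smooth G. The 1D TV denoising model, the dual projected-gradient inner solver, the function names, and all step sizes and iteration counts are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def D(x):
        # forward differences: (Dx)_i = x_{i+1} - x_i
        return np.diff(x)

    def Dt(p):
        # adjoint of D
        return np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))

    def inexact_prox_tv(v, lam, n_inner):
        # approximate prox of lam * ||D(.)||_1 at v by n_inner projected
        # gradient steps on the dual problem; the prox error shrinks as
        # n_inner grows, so n_inner controls the inexactness
        p = np.zeros(v.size - 1)
        tau = 1.0 / (4.0 * lam**2)      # 1/L for the dual objective, ||D||^2 <= 4 in 1D
        for _ in range(n_inner):
            grad = -lam * D(v - lam * Dt(p))
            p = np.clip(p - tau * grad, -1.0, 1.0)
        return v - lam * Dt(p)

    # toy posterior for TV-regularised denoising:
    # U(x) = ||x - y||^2 / (2 sigma^2) + lam_reg * ||Dx||_1
    n = 200
    x_true = np.concatenate((np.zeros(n // 2), np.ones(n // 2)))
    sigma = 0.1
    y = x_true + sigma * rng.standard_normal(n)

    lam_reg = 0.1
    gamma = 1e-4                        # Langevin step size (needs gamma <= sigma^2)
    n_iter, burn_in, n_inner = 10_000, 2_000, 20

    x = y.copy()
    samples = []
    for k in range(n_iter):
        x = x - gamma * (x - y) / sigma**2                 # explicit step on the smooth part
        x = inexact_prox_tv(x, gamma * lam_reg, n_inner)   # inexact prox of the non-smooth part
        x = x + np.sqrt(2.0 * gamma) * rng.standard_normal(n)
        if k >= burn_in:
            samples.append(x.copy())

    samples = np.asarray(samples)
    post_mean = samples.mean(axis=0)
    post_std = samples.std(axis=0)      # pixel-wise uncertainty estimate

In the abstract's terms, keeping n_inner fixed corresponds to the bounded-error regime (bounded additional bias), while increasing n_inner with the iteration count k mimics the decaying-error regime in which the additional bias vanishes.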