2nd Alps-Adriatic Inverse Problems Workshop 2021 (AAIP 2021): Chemnitz Inverse Problems Symposium on tour
from Wednesday, 22 September 2021 (08:00) to Friday, 24 September 2021 (17:00)
Wednesday, 22 September 2021
08:00
Registration
08:00 - 08:45
Room: HS 1
08:45
Opening
08:45 - 09:00
Room: HS 1
09:00
Variational Data Assimilation and low-rank solvers

Melina Freitag
09:00 - 09:50
Room: HS 1
Weak constraint four-dimensional variational data assimilation is an important method for incorporating observations into a (usually imperfect) model. The resulting minimisation process takes place in very high dimensions. In this talk we present two approaches for reducing the dimension, and thereby the computational cost and storage, of this optimisation problem. The first approach formulates the linearised system as a saddle point problem. We present a low-rank approach which exploits the structure of the saddle point system using techniques and theory from solving large-scale matrix equations and low-rank Krylov subspace methods. The second approach uses projection methods for reducing the system dimension. Numerical experiments with the linear advection-diffusion equation and the nonlinear Lorenz-95 model demonstrate the effectiveness of both approaches.
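The saddle point formulation mentioned above can be illustrated on a small quadratic (incremental) assimilation problem. The following sketch is not the speaker's method, only a minimal numerical check that the saddle point system and the usual normal equations produce the same analysis increment; all matrix sizes and covariances are made-up toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5                       # toy state and observation dimensions
H = rng.standard_normal((m, n))   # linearised observation operator
B = np.eye(n)                     # background-error covariance (toy choice)
R = 0.1 * np.eye(m)               # observation-error covariance (toy choice)
x_b = rng.standard_normal(n)      # background state
y = rng.standard_normal(m)        # observations
d = y - H @ x_b                   # innovation vector

# Saddle point formulation: solve for (lam, dx) in
#   [ R      H   ] [lam]   [d]
#   [ H^T  -B^-1 ] [dx ] = [0]
K = np.block([[R, H], [H.T, -np.linalg.inv(B)]])
lam_dx = np.linalg.solve(K, np.concatenate([d, np.zeros(n)]))
dx_saddle = lam_dx[m:]

# Equivalent primal solution via the normal equations
#   (B^-1 + H^T R^-1 H) dx = H^T R^-1 d
A = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
dx_primal = np.linalg.solve(A, H.T @ np.linalg.inv(R) @ d)

print(np.allclose(dx_saddle, dx_primal))  # both formulations agree
```

In realistic dimensions one would of course never form these inverses explicitly; the point of the low-rank and Krylov techniques in the talk is precisely to exploit the structure of the block matrix `K`.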
09:50
Coffee break
09:50 - 10:20
Room: HS 1
10:20
The tangential cone condition for EIT

Stefan Kindermann
(Johannes Kepler University Linz)
10:20 - 10:50
Room: HS 1
The tangential cone conditions (TCCs) are sufficient conditions on a nonlinear forward operator for proving convergence of various iterative nonlinear regularization schemes, such as Landweber iteration. Especially for parameter identification problems with boundary data, they have not been verified yet, even though numerical results for nonlinear iterative regularization methods usually show the expected convergence behavior. In this talk we analyze the tangential cone conditions for the classical electrical impedance tomography (EIT) problem and state sufficient conditions under which they hold, although a general result on the validity of the TCCs remains open. An important tool is the use of Loewner monotonicity, which allows us to prove the TCC in situations where, e.g., the conductivities are pointwise above or below the true conductivity. This talk is a summary of the arXiv article [1].

[1] S. Kindermann, On the tangential cone condition for electrical impedance tomography, preprint on arXiv, 2021. https://arxiv.org/abs/2105.02635
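For readers less familiar with the schemes whose convergence the TCC underwrites, a minimal Landweber iteration for a nonlinear operator equation F(x) = y, with discrepancy-principle stopping, can be sketched as follows. The toy operator, step size and starting point are illustrative choices, not taken from the talk:

```python
import numpy as np

def landweber(F, dF, y, x0, omega=0.2, delta=0.0, tau=2.0, max_iter=500):
    """Landweber iteration x_{k+1} = x_k + omega * F'(x_k)^T (y - F(x_k)),
    stopped early by the discrepancy principle ||F(x_k) - y|| <= tau * delta."""
    x = x0.copy()
    for _ in range(max_iter):
        r = y - F(x)
        if np.linalg.norm(r) <= tau * delta:
            break
        x = x + omega * dF(x).T @ r
    return x

# Hypothetical smooth forward operator F(x) = (x_1^2, x_1 + x_2)
F = lambda x: np.array([x[0] ** 2, x[0] + x[1]])
dF = lambda x: np.array([[2 * x[0], 0.0], [1.0, 1.0]])  # Jacobian F'(x)

x_true = np.array([1.0, 2.0])
x_rec = landweber(F, dF, F(x_true), x0=np.array([0.8, 1.5]))
```

With noisy data one passes the noise level `delta`, and the discrepancy principle terminates the iteration; the TCC is exactly what guarantees that this procedure converges for genuinely nonlinear F.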
10:50
Uniqueness and global convergence for inverse coefficient problems with finitely many measurements

Bastian Harrach
(Goethe University Frankfurt)
10:50 - 11:20
Room: HS 1
Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly nonlinear, ill-posed inverse problem, for which unique reconstructability results, stability estimates and global convergence of numerical methods are very hard to achieve. In this talk we will consider an inverse coefficient problem with finitely many measurements and a finite desired resolution. We will present a criterion based on monotonicity, convexity and localized potentials arguments that allows us to explicitly estimate the number of measurements required to achieve the desired resolution. We also obtain an error estimate for noisy data, and overcome the problem of local minima by rewriting the problem as an equivalent uniquely solvable convex nonlinear semidefinite optimization problem.

**References**

1. B. Harrach, Uniqueness, stability and global convergence for a discrete inverse elliptic Robin transmission problem, *Numer. Math.* **147** (2021), pp. 29-70, https://doi.org/10.1007/s00211-020-01162-8
2. B. Harrach, Solving an inverse elliptic coefficient problem by convex nonlinear semidefinite programming, arXiv preprint (2021), https://arxiv.org/abs/2105.11440
11:20
Monotonicity-Based Regularization for Shape Reconstruction in Linear Elasticity

Sarah Eberle
(Goethe University Frankfurt)
11:20 - 11:40
Room: HS 1
We deal with the shape reconstruction of inclusions in elastic bodies and solve the inverse problem by means of a monotonicity-based regularization. In more detail, we show how the monotonicity methods can be converted into a regularization method for a data-fitting functional without losing the convergence properties of the monotonicity methods. In doing so, we introduce constraints on the minimization problem of the residual based on the monotonicity methods and prove the existence and uniqueness of a minimizer as well as the convergence of the method for noisy data. In addition, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with a standard approach (one-step linearization with Tikhonov-like regularization), which also shows the robustness of our method regarding noise in practice.
11:40
Diffractive tensor field tomography as an inverse problem for a transport equation

Lukas Vierus
(Saarland University)
11:40 - 12:00
Room: HS 1
We consider a holistic approach to finding a closed formula for the generalized ray transform of a tensor field. This means that we take refraction, attenuation and time-dependence into account. We model the refraction by an appropriate Riemannian metric, which leads to an integration along geodesics. The absorption appears as an attenuation coefficient in an exponential factor. The derived explicit integral formula solves a transport equation whose boundary conditions are given by the measured data. Deriving the weak formulation of the problem, we obtain solutions in Sobolev-Bochner spaces. Whereas this fails to guarantee a unique solution of the implied initial boundary value problem (IBVP), it is possible to prove uniqueness of viscosity solutions by using the Lax-Milgram theorem. For this, however, certain restrictions on the refractive index and the attenuation coefficient must be assumed. Considering the parameter-to-solution map as the forward operator, the inverse problem can be solved by minimizing a Tikhonov functional. Here the adjoint operator can also be identified as the solution of an IBVP.
12:00
An Inverse Magnetization Problem on the Sphere with Localization Constraints

Xinpeng Huang
(TU Bergakademie Freiberg, Institute of Geophysics and Geoinformatics)
12:00 - 12:20
Room: HS 1
We study an inverse magnetization problem arising in geo- and planetary magnetism. This problem is non-unique and the null space can be characterized by the Hardy-Hodge decomposition. The additional assumption that the underlying magnetization is spatially localized in a subdomain of the sphere (which can be justified when interested, e.g., in regional magnetic anomalies) ameliorates the non-uniqueness issue so that only the tangential divergence-free contribution remains undetermined. In a previous reconstruction approach, we addressed the localization by including an additional penalty term in the minimizing functional. This, however, requires the co-estimation of the undetermined divergence-free contribution. Here, we present a first attempt at more directly including the localization constraint without requiring such a co-estimation. In addition, we show that the localization constraint is closely connected to the problem of extrapolation in Hardy spaces.
12:20
Lunch
12:20 - 13:50
Room: HS 1
13:50
Bayesian nonlinear inversion problems and PDEs: progress and challenges

Richard Nickl
(University of Cambridge)
13:50 - 14:40
Room: HS 1
We review the Bayesian approach to inverse problems, and describe recent progress in our theoretical understanding of its performance in nonlinear situations. Statistical and computational guarantees for such algorithms will be provided in high-dimensional, non-convex scenarios, and model examples from elliptic and transport (X-ray type) PDE problems will be discussed. The connection between MCMC and other existing iterative methods will be touched upon, and several open mathematical problems will be described.
14:40
A discrepancy-type stopping rule for conjugate gradients under white noise

Markus Reiß
(HumboldtUniversität zu Berlin)
14:40 - 15:10
Room: HS 1
We consider a linear inverse problem of the form $y = Ax + \epsilon \dot W$, where the action of the operator (matrix) $A$ on the unknown $x$ is corrupted by white noise (a standard Gaussian vector) $\dot W$ of level $\epsilon > 0$. We study the candidate solutions $\hat x_m$ provided by the $m$-th conjugate gradient (CGNE) iterates. Refining Nemirovskii's trick, we are able to provide explicit error bounds for the best (oracle) iterate along the iteration path. This yields optimal estimation rates over polynomial source conditions. In a second step we identify monotonic proxies for the bias (approximation error) and variance (stochastic error) of the nonlinear estimators $\hat x_m$ and develop a residual-based stopping rule for a data-driven choice $\hat m$ of the number of iterations. This yields a stochastic version of the discrepancy principle. Using tools from concentration of measure and extending deterministic ideas by Hanke, we can provide an oracle-type inequality for the prediction error $E[\|A(\hat x_{\hat m} - x)\|^2]$ (nontrivial under white noise), which gives rate-optimality up to a dimensionality effect. Finally, we provide partial results also for the estimation error $E[\|\hat x_{\hat m} - x\|^2]$, discussing the challenges generated by the statistical noise.
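A minimal CGNE implementation with a residual-based (discrepancy-type) stop can be sketched as follows; the diagonal test operator, noise level and the choice tau = 1.1 are illustrative, not the talk's setting:

```python
import numpy as np

def cgne(A, y, delta, tau=1.1, max_iter=500):
    """CGNE/CGLS: conjugate gradients on A^T A x = A^T y, stopped as soon
    as the discrepancy principle ||A x_m - y|| <= tau * delta is met."""
    x = np.zeros(A.shape[1])
    r = y.copy()            # data residual y - A x
    s = A.T @ r             # normal-equation residual
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
n = 20
A = np.diag(1.0 / np.arange(1, n + 1))   # polynomially decaying singular values
x_true = np.ones(n)
noise = 1e-6 * rng.standard_normal(n)    # discretized white noise
delta = np.linalg.norm(noise)            # noise level used by the stopping rule
y = A @ x_true + noise
x_hat = cgne(A, y, delta)
```

The subtlety the talk addresses is that under genuine white noise the residual norm used here is not even finite in the continuous setting, which is why the stopping rule has to be modified.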
15:10
A Bregman Learning Framework for Sparse Neural Networks

Tim Roith
(FriedrichAlexanderUniversität ErlangenNürnberg)
15:10 - 15:30
Room: HS 1
I will present a novel learning framework based on stochastic Bregman iterations. It allows one to train sparse neural networks with an inverse scale space approach, starting from a very sparse network and gradually adding significant parameters. Apart from a baseline algorithm called LinBreg, I will also speak about an accelerated version using momentum, and AdaBreg, which is a Bregmanized generalization of the Adam algorithm. I will present a statistically profound sparse parameter initialization strategy, a stochastic convergence analysis of the loss decay, and additional convergence proofs in the convex regime. The Bregman learning framework can also be applied to Neural Architecture Search and can, for instance, unveil autoencoder architectures for denoising or deblurring tasks.
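The inverse scale space behavior behind LinBreg can be illustrated with a deterministic linearized Bregman iteration on a toy denoising problem (forward operator = identity, a drastic simplification of network training): coefficients enter the reconstruction one by one, largest first, which is exactly the "start sparse, gradually add parameters" effect. Step size and threshold below are made-up values:

```python
import numpy as np

def soft(v, lam):
    """Soft-thresholding, the proximal map of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(y, lam=1.0, n_iter=20):
    """Linearized Bregman iteration for min ||x||_1 s.t. x = y (A = identity).
    Returns all iterates, to show how the support is added gradually."""
    v = np.zeros_like(y)          # dual (subgradient) variable
    x = np.zeros_like(y)
    iterates = []
    for _ in range(n_iter):
        v = v + (y - x)           # gradient step on the dual variable
        x = soft(v, lam)          # primal variable stays sparse
        iterates.append(x.copy())
    return iterates

y = np.array([3.0, 1.0, 0.2])
its = linearized_bregman(y)
active = [np.flatnonzero(x).tolist() for x in its]
# The support grows from the largest coefficient toward the smallest.
```

In the training setting, the dual update uses stochastic gradients of the loss instead of `y - x`, and early stopping along this path yields the sparse networks.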
15:30
iPALM-based unsupervised energy disaggregation

Christian Aarset
(University of Graz)
15:30 - 15:50
Room: HS 1
With smart energy meters increasingly available to private households, new applications arise, such as identifying main power-consuming devices and predicting human activity. One major obstacle is that smart energy meters typically provide *aggregated* data, where each source of energy consumption is summed. Further, obtaining training data can be intrusive. To counteract this, we propose an unsupervised minimization approach based on the Inertial Proximal Alternating Linearized Minimization (iPALM) algorithm, utilising convolutional sparse coding to represent individual device energy signatures as atoms convolved with sparse coefficient vectors.
15:50
Coffee break
15:50 - 16:10
Room: HS 1
16:10
Parameter identification for PDEs: From neural-network-based learning to discretized inverse problems

Tram Nguyen
16:10 - 16:30
Room: HS 1
We investigate the problem of learning an unknown nonlinearity in parameter-dependent PDEs. The nonlinearity is represented via a neural network of the unknown state. The learning-informed PDE model has three unknowns: the physical parameter, the state and the nonlinearity. We propose an all-at-once approach to the minimization problem (joint work with Martin Holler and Christian Aarset). More generally, the representation via neural networks can be realized as a discretization scheme. We study convergence of Tikhonov and Landweber methods for the discretized inverse problems, and prove convergence when the discretization error approaches zero (joint work with Barbara Kaltenbacher).
16:30
On a regularization of unsupervised domain adaptation in RKHS

Duc Hoan Nguyen
(Johann Radon Institute)
16:30 - 16:50
Room: HS 1
We analyze the use of the so-called general regularization scheme in the scenario of unsupervised domain adaptation under the covariate shift assumption. Learning algorithms arising from the above scheme are generalizations of the importance-weighted regularized least-squares method, which up to now is among the most used approaches in the covariate shift setting. We explore a link between the considered domain adaptation scenario and the estimation of Radon-Nikodym derivatives in reproducing kernel Hilbert spaces, where the general regularization scheme can also be employed and is a generalization of kernelized unconstrained least-squares importance fitting. We estimate the convergence rates of the corresponding regularized learning algorithms and discuss how to resolve the issue of tuning their regularization parameters. The theoretical results are illustrated by numerical examples, one of which is based on real data collected for automatic stenosis detection in cervical arteries.
16:50
A Generative Variational Model for Inverse Problems in Imaging

Andreas Habring
(University of Graz)
16:50 - 17:10
Room: HS 1
In recent years, deep/machine learning methods using convolutional networks have become increasingly popular also in inverse problems, mainly due to their practical performance [1]. In many cases these methods outperform conventional regularization methods, such as total variation regularization, in particular when applied to more complicated data such as images containing texture. A major downside of machine learning methods, however, is the need for large sets of training data, which are often not available in the necessary extent. Moreover, the level of analytic understanding of machine learning methods, in particular in view of an analysis for inverse problems in function space, is still far from that of conventional variational methods. In this talk, we propose a novel regularization method for solving inverse problems in imaging, which is inspired by the architecture of convolutional neural networks as seen in many deep learning approaches. In the model, the unknown is generated from a variable in latent space via multilayer convolutions and nonlinear penalties. In contrast to conventional deep learning methods, however, the convolution kernels are learned directly from the given (possibly noisy) data, such that no training is required. In the talk, we will motivate the model and provide theoretical results about existence/stability of solutions and convergence for vanishing noise in function space. Afterwards, in a discretized setting, we will show practical results of our method in comparison to a state-of-the-art deep learning method [1].

[1] V. Lempitsky, A. Vedaldi, and D. Ulyanov, Deep image prior, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
17:10
Generalized conditional gradient methods for variational inverse problems with convex regularizers

Marcello Carioni
(University of Cambridge)
17:10 - 17:30
Room: HS 1
In this talk, we propose and analyze a generalized conditional gradient method for infinite-dimensional variational inverse problems written as the sum of a smooth, convex loss function and a possibly nonsmooth, convex regularizer. Our method relies on the mutual update of a sequence of extremal points of the unit ball of the regularizer and a sparse iterate given as a suitable linear combination of such extremal points. We show that under standard hypotheses on the minimization problem, our algorithm converges sublinearly to a solution of the inverse problem. Moreover, we demonstrate that by imposing additional assumptions on the structure of the minimizers, the associated dual variables and the non-degeneracy of the problem, we can improve this convergence result to a linear rate. We then apply our generalized conditional gradient method to solve dynamic inverse problems regularized with the Benamou-Brenier energy. Relying on recent results about the characterization of the extremal points of the ball of the Benamou-Brenier energy, we show that our algorithm can be applied to this specific example to reconstruct the motion of heavily undersampled dynamic data in the presence of noise.
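In the simplest finite-dimensional instance, when the regularizer is the indicator of an l1-ball (whose extreme points are the signed, scaled coordinate vectors), the scheme reduces to the classical conditional gradient (Frank-Wolfe) method. The following toy sketch with made-up data shows that vanilla sublinear iteration; it is not the speaker's generalized method:

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius, n_iter=20000):
    """Conditional gradient method over the l1-ball of given radius.
    The linear minimization oracle always returns a signed extreme point."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        i = np.argmax(np.abs(g))          # best extreme point direction
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (k + 2.0)           # standard step size, O(1/k) rate
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Toy problem: min 0.5 * ||x - y||^2 over ||x||_1 <= 2. The solution is the
# l1-ball projection of y, which here equals (1.5, 0.5).
y = np.array([2.0, 1.0])
grad = lambda x: x - y
x_hat = frank_wolfe_l1(grad, np.zeros(2), radius=2.0)
```

The iterate is, by construction, a sparse convex combination of extreme points; the generalization in the talk replaces the l1-ball by the unit ball of a general convex regularizer, e.g. the Benamou-Brenier energy.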
19:00
Conference Dinner (Restaurant Felsenkeller)
19:00 - 21:00
Room: HS 1
Thursday, 23 September 2021
09:00
Infinite-dimensional inverse problems with finite measurements

Giovanni Alberti
(University of Genova)
09:00 - 09:50
Room: HS 1
In this talk I will discuss uniqueness, stability and reconstruction for infinite-dimensional nonlinear inverse problems with finite measurements, under the a priori assumption that the unknown lies in, or is well-approximated by, a finite-dimensional subspace or submanifold. The methods are based on the interplay of applied harmonic analysis, in particular sampling theory and compressed sensing, and the theory of inverse problems for partial differential equations. Several examples, including the Calderón problem and scattering, will be discussed.
09:50
Coffee break
09:50 - 10:20
Room: HS 1
10:20
Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition

Christine Böckmann
(University of Potsdam, Institute of Mathematics, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany)
10:20 - 10:50
Room: HS 1
We present two families of regularization methods for solving nonlinear ill-posed problems between Hilbert spaces, obtained by applying the family of Runge-Kutta methods to an initial value problem, in particular, to the asymptotic regularization method. Hohage [1] provides a systematic study of convergence rates for regularization methods under logarithmic source conditions, including the case of operator approximations, for a priori and a posteriori stopping rules. We prove the logarithmic convergence rate of the families of usual and modified iterative Runge-Kutta methods under the logarithmic source condition, and numerically verify the obtained results. The iterative regularization is terminated by the a posteriori discrepancy principle (Pornsawad et al. [2]). Up to now, the logarithmic convergence rate under logarithmic source conditions had only been investigated for particular examples, namely the Levenberg-Marquardt method [3] and the modified Landweber method [4]. Here, we extend the results to the whole family of Runge-Kutta-type methods, with and without modification.

[1] Hohage, T., Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optimiz. 2000, 21, 439-464.
[2] Pornsawad, P., Resmerita, E., Böckmann, C., Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition. Mathematics 2021, 9, 1042.
[3] Böckmann, C., Kammanee, A., Braunß, A., Logarithmic convergence rate of Levenberg-Marquardt method with application to an inverse potential problem. J. Inv. Ill-Posed Probl. 2011, 19, 345-367.
[4] Pornsawad, P., Sungcharoen, P., Böckmann, C., Convergence rate of the modified Landweber method for solving inverse potential problems. Mathematics 2020, 8, 608.
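To see how Runge-Kutta schemes enter, recall that asymptotic regularization integrates the flow x'(t) = F'(x)* (y - F(x)) and uses the stopping time T as the regularization parameter; explicit Euler recovers Landweber. Below is a toy sketch with Heun's method (a two-stage explicit Runge-Kutta scheme) on a linear diagonal operator; sizes, step size and stopping time are made-up illustrative choices:

```python
import numpy as np

def asymptotic_reg_heun(A, y, T=100.0, h=0.5):
    """Integrate x'(t) = A^T (y - A x), x(0) = 0, up to time T with Heun's
    method; the stopping time T plays the role of the regularization parameter."""
    x = np.zeros(A.shape[1])
    rhs = lambda x: A.T @ (y - A @ x)
    for _ in range(int(T / h)):
        k1 = rhs(x)
        k2 = rhs(x + h * k1)
        x = x + 0.5 * h * (k1 + k2)   # Heun (explicit trapezoidal) step
    return x

# Diagonal toy operator: the exact flow gives x_i(T) = (1 - exp(-s_i^2 T)) * x_i,
# so components with small singular values s_i are filtered out for finite T.
A = np.diag(np.array([1.0, 0.5, 0.1]))
x_true = np.ones(3)
x_T = asymptotic_reg_heun(A, A @ x_true)
```

For nonlinear F the right-hand side uses the adjoint of the Jacobian at the current iterate, and the talk's analysis covers the whole Runge-Kutta family applied to that flow.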
10:50
Frame Decompositions and Inverse Problems

Simon Hubmer
(Johann Radon Institute Linz)
10:50 - 11:10
Room: HS 1
The singular value decomposition (SVD) is an important tool for the analysis and solution of linear ill-posed problems in Hilbert spaces. However, it is often difficult to derive the SVD of a given operator explicitly, which limits its practical usefulness. An alternative in these situations are frame decompositions (FDs), which are a generalization of the SVD based on suitably connected families of functions forming frames. Similar to the SVD, these FDs encode information on the structure and ill-posedness of the problem and can be used as the basis for the design and implementation of efficient numerical solution methods. Crucially though, FDs can be derived explicitly for a wide class of operators, in particular for those satisfying a certain stability condition. In this talk, we consider various theoretical aspects of FDs such as recipes for their construction and some properties of the reconstruction formulae induced by them. Furthermore, we present convergence and convergence rates results for continuous regularization methods based on FDs under both a priori and a posteriori parameter choice rules. Finally, we consider the practical utility of FDs for solving inverse problems by considering two numerical examples from computerized and atmospheric tomography.
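As a baseline for what the FD generalizes, truncated SVD regularization can be sketched in a few lines; the diagonal toy operator and threshold are illustrative choices:

```python
import numpy as np

def tsvd_solve(A, y, alpha):
    """Truncated SVD regularization: invert only singular values above alpha,
    discarding the strongly ill-posed part of the spectrum."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ y
    # np.where evaluates both branches, so guard the division as well
    filtered = np.where(s > alpha, coeffs / np.maximum(s, alpha), 0.0)
    return Vt.T @ filtered

# Toy operator with one tiny singular value that would amplify noise
A = np.diag(np.array([1.0, 0.5, 1e-8]))
x_true = np.ones(3)
x_rec = tsvd_solve(A, A @ x_true, alpha=1e-4)
# The stable components are recovered; the unstable one is set to zero.
```

A frame decomposition replaces the singular system (U, s, V) by explicitly constructible frames adapted to the operator, so the same filtered-inversion idea applies even when the SVD itself is out of reach.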
11:10
A modified discrepancy principle to attain optimal rates for polynomially and exponentially ill-posed problems under white noise

Tim Jahn
11:10 - 11:30
Room: HS 1
We consider a linear ill-posed equation in the Hilbert space setting under white noise. Known convergence results for the discrepancy principle are either restricted to Hilbert-Schmidt operators (and require a self-similarity condition for the unknown solution in addition to a classical source condition) or to polynomially ill-posed operators (excluding exponentially ill-posed problems). In this work we show optimal convergence of a modified discrepancy principle for both polynomially and exponentially ill-posed operators (without further restrictions), solely under either Hölder-type or logarithmic source conditions. In particular, the method includes only a single simple hyperparameter, which does not need to be adapted to the type of ill-posedness.
11:30
Convergence rates for oversmoothing Banach space regularization

Philip Miller
(Institute for Numerical and Applied Mathematics, University of Göttingen, Germany)
11:30 - 11:50
Room: HS 1
We show convergence rate results for Banach space regularization in the case of oversmoothing, i.e. if the penalty term fails to be finite at the unknown solution. We present a flexible approach based on K-interpolation theory, which provides more general and complete results than classical variational regularization theory based on various types of source conditions for true solutions contained in the penalty's domain. In particular, we prove order-optimal convergence rates for bounded variation regularization. Moreover, we show a result for sparsity-promoting wavelet regularization and demonstrate in numerical simulations for a parameter identification problem in a differential equation that our theoretical results correctly predict rates of convergence for piecewise smooth unknown coefficients.
11:50
Regularisation of certain nonlinear problems in $L^{\infty}$

Lukas Pieronek
(KIT)
11:50 - 12:10
Room: HS 1
In many cases the parameters of interest in inverse problems arise as coefficients of PDE models for which $L^{\infty}$ is one of the most natural spaces. Despite its formal connection to the regular and regularisation-approved $L^p$-spaces, $L^{\infty}$ itself is non-smooth, non-reflexive and non-separable. Hence, standard Banach space methods generally fail, and the need for discretisation in practice makes it even hopeless to aim for good reconstructions in the strong topology. In this talk we present a novel regularisation method which generates uniformly bounded iterates as approximate solutions to locally ill-posed equations, and for which the regularisation property then holds with respect to weak$\ast$ convergence. Numerical examples will complete our analysis.
12:10
Variational analysis of a dynamic PET reconstruction model with optimal transport regularization

Marco Mauritz
(University of Münster, Institute for Analysis and Numerics)
12:10 - 12:30
Room: HS 1
We consider the dynamic Positron Emission Tomography (PET) reconstruction method proposed by Schmitzer et al. [1], which particularly aims to reconstruct the temporal evolution of single or small numbers of cells by leveraging optimal transport. Using a MAP estimate, the cells' evolution is reconstructed by minimizing a functional $\mathcal{E}_n$, composed of a Kullback-Leibler-type data fidelity term and the Benamou-Brenier functional, over the space of positive Radon measures. This choice of regularization ensures temporal consistency between different time points. The PET measurements in our forward model are described by Poisson point processes with a given intensity $q_n$. In the talk we show $\Gamma$-convergence of the stochastic functionals $\mathcal{E}_n$ to a deterministic limit functional for $q_n \to \infty$. This helps in understanding the properties of the considered reconstruction method for an increasing SNR. To compute the $\Gamma$-limit, we show convergence of Poisson point processes for intensities growing to infinity as well as convergence of the optimal transport regularization. The latter requires the approximation of arbitrary Radon measures by ones satisfying the continuity equation while controlling the Benamou-Brenier energy.

Reference: [1] B. Schmitzer, K. P. Schäfers, and B. Wirth. Dynamic Cell Imaging in PET with Optimal Transport Regularization. IEEE Transactions on Medical Imaging, 2019.
12:30
Lunch
12:30 - 14:00
Room: HS 1
14:00
Ill-posedness effects for well-posed problems

Arnd Rösch
(Universität DuisburgEssen)
14:00 - 14:30
Room: HS 1
In this talk we study the discretization of a well-posed nonlinear problem. It may happen that the discretized solutions do not converge. However, this effect disappears for a suitably chosen optimal control problem.
14:30
Adaptive Spectral Decomposition for Inverse Scattering Problems

Yannik G. Gleichmann
14:30 - 14:50
Room: HS 1
A nonlinear optimization method is proposed for inverse scattering problems, when the unknown medium is characterized by one or several spatially varying parameters. The inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type method. Instead of a grid-based discrete representation, each parameter is projected to a separate finite-dimensional subspace, which is iteratively adapted during the optimization. Each subspace is spanned by the first few eigenfunctions of a linearized regularization penalty functional chosen a priori. The (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization and, in practice, appears more robust to missing data or added noise. Numerical results illustrate the accuracy and efficiency of the resulting adaptive spectral regularization for inverse scattering problems for the wave equation in the time domain.
14:50
An inverse source problem for vector field

David Omogbhe
(Johann Radon Institute for Computational and Applied Mathematics (RICAM))
14:50 - 15:10
Room: HS 1
We consider an inverse source problem in stationary radiative transport through a two-dimensional absorbing and scattering medium. The attenuation and scattering properties of the medium are assumed known, and the unknown vector field source is isotropic. For scattering kernels of finite Fourier content in the angular variable, we show how to recover the isotropic vector field sources from boundary measurements. The approach is based on the Cauchy problem for a Beltrami-like equation associated with $A$-analytic maps in the sense of Bukhgeim. This is joint work with Kamran Sadiq (RICAM).
15:10
Constrained consensus-based optimization via penalization

Giacomo Borghi
(RWTH Aachen University)
15:10 - 15:30
Room: HS 1
Constrained optimization problems represent a challenge when the objective function is non-differentiable and multimodal and the feasible region lacks regularity. In our talk, we will introduce a swarm-based optimization algorithm which is capable of handling generic non-convex constraints by means of a penalization technique. The method extends the class of consensus-based optimization (CBO) methods to the constrained setting; in this class of methods, a swarm of interacting particles explores the objective function landscape following a consensus dynamics. In our algorithm, we perform a time discretization of the system evolution and tune the parameters to effectively avoid non-admissible regions of the domain. While the particle dynamics may appear simple, recovering convergence guarantees represents the real difficulty when dealing with swarm-based methods. In the talk, we will present the essential mean-field tools that allowed us to theoretically analyze the algorithm and obtain convergence results for its mean-field counterpart under mild assumptions. To conclude, we will discuss both the algorithm's performance on benchmark problems and numerical experiments on the mean-field dynamics.
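A minimal penalized CBO sketch may help fix ideas; all parameter values below are made-up, and the scheme is a standard anisotropic CBO discretization rather than the speakers' exact algorithm. Particles are attracted to a Gibbs-weighted consensus point of the penalized objective and diffuse around it:

```python
import numpy as np

def cbo_penalized(f, penalty, dim, n_particles=200, steps=600,
                  dt=0.05, lam=1.0, sigma=0.7, beta=50.0, seed=0):
    """Consensus-based optimization of f + penalty: drift toward a
    Gibbs-weighted consensus point plus anisotropic multiplicative noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    g = lambda x: f(x) + penalty(x)
    for _ in range(steps):
        vals = np.array([g(x) for x in X])
        w = np.exp(-beta * (vals - vals.min()))      # stabilized Gibbs weights
        m = (w[:, None] * X).sum(axis=0) / w.sum()   # consensus point
        drift = X - m
        X = (X - lam * dt * drift
             + sigma * np.sqrt(dt) * drift * rng.standard_normal(X.shape))
    return m

# Toy problem: minimize ||x - (1, 1)||^2 subject to x_1 + x_2 <= 1, handled
# via the quadratic penalty 10 * max(0, x_1 + x_2 - 1)^2; its penalized
# minimizer lies near (0.524, 0.524).
f = lambda x: np.sum((x - 1.0) ** 2)
penalty = lambda x: 10.0 * max(0.0, x[0] + x[1] - 1.0) ** 2
x_cbo = cbo_penalized(f, penalty, dim=2)
```

Because the noise is multiplicative in the distance to the consensus point, the swarm collapses once consensus is reached; increasing the penalty weight moves the collapse point toward the constrained minimizer (0.5, 0.5).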
15:30
Coffee break
15:30 - 15:50
Room: HS 1
15:50
Holmgren-John unique continuation for viscoelastic equation

Gen Nakamura
Holmgren-John unique continuation for viscoelastic equation
Gen Nakamura
15:50  16:20
Room: HS 1
We are concerned with the Holmgren-John unique continuation theorem for a viscoelastic equation with a memory term when the coefficients of the equation are analytic. This is a special case of the general unique continuation property (UCP) for the equation with analytic coefficients. This equation describes the viscoelastic behavior of a medium. In this talk we will present the UCP for the viscoelastic equation when the relaxation tensor is analytic and allowed to be fully anisotropic. We will describe the UCP in terms of a distance defined by the travel time of the slowest wave associated with the elastic part of this equation. The collaborators of this study are Maarten de Hoop (Rice University), Matthias Eller (Georgetown University) and Ching-Lung Lin (National Cheng-Kung University).
16:20
Radon-based image reconstruction in magnetic particle imaging using an FFL scanner

Stephanie Blanke
(Universität Hamburg)
Radon-based image reconstruction in magnetic particle imaging using an FFL scanner
Stephanie Blanke
(Universität Hamburg)
16:20  16:40
Room: HS 1
Reliable and fast medical imaging techniques are indispensable for diagnostics in everyday clinical life. A promising example is magnetic particle imaging (MPI), invented by Gleich and Weizenecker [1]. MPI is a tracer-based imaging method allowing for the reconstruction of the spatial distribution of magnetic nanoparticles by exploiting their nonlinear magnetization response to changing magnetic fields. We dedicate ourselves to MPI using a field-free line (FFL) for spatial encoding [2]. For data acquisition, the FFL is moved through the field of view, resulting in a scanning geometry resembling the one in computerized tomography. Indeed, in the ideal setting, the corresponding MPI data can be traced back to the Radon transform of the particle concentration [3]. We jointly reconstruct Radon data and particle concentration by means of total variation regularization and have a look at some numerical examples. We conclude with problems that arise when leaving the ideal setting. For example, in practice we are confronted with imperfections of the applied magnetic fields, leading to deformed low-field volumes and, when ignored, image artifacts. *References:* [1] Gleich B and Weizenecker J 2005 Tomographic imaging using the nonlinear response of magnetic particles *Nature* 435 1214-1217 (https://doi.org/10.1038/nature03808) [2] Weizenecker J, Gleich B, and Borgert J 2008 Magnetic particle imaging using a field free line *J. Phys. D: Appl. Phys.* 41 105009 (https://doi.org/10.1088/0022-3727/41/10/105009) [3] Knopp T, Erbe M, Sattel T F, Biederer S, and Buzug T M 2011 A Fourier slice theorem for magnetic particle imaging using a field-free line *Inverse Problems* 27 095004 (https://doi.org/10.1088/0266-5611/27/9/095004)
16:40
From displacement field to parameter estimation: theory and application

Ekaterina Sherina
(University of Vienna)
From displacement field to parameter estimation: theory and application
Ekaterina Sherina
(University of Vienna)
16:40  17:00
Room: HS 1
Diseases like cancer or arteriosclerosis often cause changes of tissue stiffness on the micrometer scale. Elastography is a common technique for medical diagnostics developed to detect these changes. We consider a complex problem of estimating both the internal displacement field and the material parameters of an object which is being subjected to a deformation. In particular, we present our recently developed elastographic optical flow method (EOFM) for motion detection from optical coherence tomography images. This method takes into account experimental constraints, such as appropriate boundary conditions, the use of speckle information, as well as the inclusion of structural information derived from knowledge of the background material. Furthermore, we present numerical results based on both simulated and experimental data from an elastography experiment and discuss the material parameter estimation from these data.
17:00
Recent analytical progress on some nonlinear tomography problems

Jan Bohr
(University of Cambridge)
Recent analytical progress on some nonlinear tomography problems
Jan Bohr
(University of Cambridge)
17:00  17:20
Room: HS 1
We consider a class of nonlinear inverse problems encompassing, e.g., Polarimetric Neutron Tomography (PNT), where one seeks to recover a magnetic field by probing it with neutron beams and measuring the resulting spin change. In recent years there has been great progress on fundamental theoretical questions regarding injectivity and stability properties for PNT, and we survey some of the latest results, including a novel range characterisation for the forward map. One of the drivers behind these results is the desire to give rigorous guarantees for the statistical performance of Bayesian algorithms. The talk is based on joint work with Gabriel Paternain and Richard Nickl.
19:30
GIP Meeting
GIP Meeting
19:30  21:30
Room: HS 1
Friday, 24 September 2021
09:00
Stable determination of a rigid scatterer in elastodynamics

Eva Sincich
(University of Trieste)
Stable determination of a rigid scatterer in elastodynamics
Eva Sincich
(University of Trieste)
09:00  09:50
Room: HS 1
We deal with an inverse elastic scattering problem for the shape determination of a rigid scatterer in the time-harmonic regime. We prove a local stability estimate of log-log type for the identification of a scatterer by a single far-field measurement. The required a priori condition on the closeness of the scatterers is estimated by the universal constant appearing in the Friedrichs inequality. This is based on joint work with Luca Rondi and Mourad Sini.
09:50
break
break
09:50  10:20
Room: HS 1
10:20
Consistency of Bayesian inference with Gaussian process priors for a parabolic inverse problem

Hanne Kekkonen
(Delft University of Technology)
Consistency of Bayesian inference with Gaussian process priors for a parabolic inverse problem
Hanne Kekkonen
(Delft University of Technology)
10:20  10:50
Room: HS 1
We consider the statistical nonlinear inverse problem of recovering the absorption term $f > 0$ in the heat equation, with given boundary and initial value functions, from $N$ discrete noisy point evaluations of the solution $u_f$. We study the statistical performance of Bayesian nonparametric procedures based on Gaussian process priors, which are often used in practice. We show that, as the number of measurements increases, the resulting posterior distributions concentrate around the true parameter $f^*$ that generated the data, and we derive a convergence rate for the reconstruction error of the associated posterior means. We also consider the optimality of the contraction rates: we prove a lower bound for the minimax convergence rate for inferring $f$ from the data, and show that optimal rates can be achieved with truncated Gaussian priors.
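The posterior-contraction phenomenon described above can be illustrated, far more simply, in a conjugate linear-Gaussian model, a hypothetical stand-in for the nonlinear heat-equation setting of the talk. Here the posterior is explicit, and its spread visibly shrinks as $N$ grows; all dimensions and noise levels below are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)

def posterior(N, p=5, sigma=0.1, tau=1.0):
    # Conjugate linear-Gaussian model: Y = X theta + noise,
    # prior theta ~ N(0, tau^2 I); the posterior is Gaussian with
    # explicit mean and covariance.
    theta_true = np.linspace(1.0, 2.0, p)
    X = rng.normal(size=(N, p))
    Y = X @ theta_true + sigma * rng.normal(size=N)
    prec = X.T @ X / sigma**2 + np.eye(p) / tau**2   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ X.T @ Y / sigma**2                  # posterior mean
    return mean, cov, theta_true

m1, C1, th = posterior(50)      # few observations: wide posterior
m2, C2, _ = posterior(5000)     # many observations: concentrated posterior
print(np.trace(C1), np.trace(C2))
```

As $N$ grows, the posterior covariance trace shrinks and the posterior mean approaches the truth, which is the finite-dimensional analogue of the contraction statement in the abstract.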
10:50
A model reduction approach for inverse problems with operator valued data

Matthias Schlottbom
(University of Twente)
A model reduction approach for inverse problems with operator valued data
Matthias Schlottbom
(University of Twente)
10:50  11:20
Room: HS 1
We study the efficient numerical solution of linear inverse problems with operator valued data, which arise, e.g., in seismic exploration, inverse scattering, or tomographic imaging. The high dimensionality of the data space implies extremely high computational cost already for the evaluation of the forward operator, which makes a numerical solution of the inverse problem, e.g., by iterative regularization methods, practically infeasible. To overcome this obstacle, we develop a novel model reduction approach that takes advantage of the underlying tensor product structure of the problem and allows us to obtain low-dimensional certified reduced order models of quasi-optimal rank. The theoretical results are illustrated by application to a typical model problem in fluorescence optical tomography.
11:20
Regularization as an approximation problem

Daniel Gerth
Regularization as an approximation problem
Daniel Gerth
11:20  11:50
Room: HS 1
Classically, regularization methods are divided into three frameworks: variational regularization, iterative regularization, and regularization by projection. In this talk we consider regularization as an approximation problem in the classical Hilbert space setting. This enables us to treat all three categories in the same framework, which we demonstrate on Tikhonov regularization and Landweber iteration. Our approach provides new insight into the way regularization works, helps in understanding parameter choice rules, naturally includes discrete (finite-dimensional) problems and, maybe most importantly, yields a numerically observable and computable quantity, namely a source element for the regularized solutions, that contains information about the smoothness of the unknown solution and the noise.
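The two classical schemes named in the abstract can be sketched on a toy discrete ill-posed problem. The Hilbert-matrix operator, noise level, and parameter values below are illustrative assumptions only, not the talk's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-conditioned linear problem A x = y with noisy data
n = 50
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 1e-4 * rng.normal(size=n)

# Variational regularization (Tikhonov): x_alpha = (A^T A + alpha I)^{-1} A^T y
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Iterative regularization (Landweber with early stopping):
#   x_{k+1} = x_k + omega A^T (y - A x_k),  omega <= 1 / ||A||^2
omega = 1.0 / np.linalg.norm(A, 2) ** 2
x_lw = np.zeros(n)
for _ in range(2000):
    x_lw = x_lw + omega * A.T @ (y - A @ x_lw)

print(np.linalg.norm(x_tik - x_true), np.linalg.norm(x_lw - x_true))
```

In both cases the regularization parameter (here `alpha`, or the stopping index) trades data fit against stability, which is exactly the balance the approximation-theoretic viewpoint of the talk makes explicit.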
11:50
Lunch
Lunch
11:50  13:00
Room: HS 1
13:00
Beating the Saturation Phenomenon of Stochastic Gradient Descent

Zehui Zhou
(Department of Mathematics, The Chinese University of Hong Kong)
Beating the Saturation Phenomenon of Stochastic Gradient Descent
Zehui Zhou
(Department of Mathematics, The Chinese University of Hong Kong)
13:00  13:20
Room: HS 1
Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. The current mathematical theory, through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon, i.e., the convergence rate does not further improve with the solution regularity index once it is beyond a certain range. In this talk, I will present our recent results on beating this saturation phenomenon: (i) by using a small initial stepsize: we derive a refined convergence rate analysis of SGD which shows that saturation does not occur if the initial stepsize of the schedule is sufficiently small; (ii) by using stochastic variance reduced gradient (SVRG), a popular variance reduction technique for SGD: we prove that, for a suitable constant stepsize schedule, SVRG can achieve an optimal convergence rate in terms of the noise level (under suitable regularity conditions), which means the saturation does not occur.
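A minimal sketch of SGD with a polynomially decaying stepsize for a linear least-squares problem follows; the toy problem sizes, schedule parameters, and noise level are assumptions for illustration, not those analysed in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear inverse problem A x = y, attacked row-by-row with SGD:
#   x_{k+1} = x_k + eta_k (y_i - <a_i, x_k>) a_i,  i drawn uniformly,
# with a polynomially decaying stepsize eta_k = eta0 * k^{-p}.
m, n = 200, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true + 1e-3 * rng.normal(size=m)

def sgd(eta0=0.01, p=0.5, iters=20000):
    x = np.zeros(n)
    for k in range(1, iters + 1):
        i = rng.integers(m)                    # sample one data row
        eta = eta0 * k ** (-p)                 # decaying stepsize schedule
        x = x + eta * (y[i] - A[i] @ x) * A[i]
    return x

x_hat = sgd()
print(np.linalg.norm(x_hat - x_true))
```

The schedule exponent `p` and initial stepsize `eta0` are exactly the quantities whose interplay with the solution's smoothness drives the saturation phenomenon discussed in the talk.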
13:20
On the geometric structures of Laplacian eigenfunctions and applications to inverse scattering problems

Xinlin Cao
On the geometric structures of Laplacian eigenfunctions and applications to inverse scattering problems
Xinlin Cao
13:20  13:40
Room: HS 1
In this talk, we present some novel findings on the geometric structures of Laplacian eigenfunctions and their deep relationship to the quantitative behaviours of the eigenfunctions. The studies reveal that, in R^2, the intersecting angle between two lines (nodal lines, singular lines and generalized singular lines) is closely related to the vanishing order of the eigenfunction at the intersecting point. In R^3, the analytic behaviour of a Laplacian eigenfunction depends on the geometric quantities at the corresponding corner point (edge corner or vertex corner). The theoretical findings can be applied directly to some physical problems, including the inverse obstacle scattering problem. Taking the two-dimensional case as an example, it is shown in a certain polygonal setup that one can recover the support of the unknown scatterer as well as the surface impedance parameter from finitely many far-field patterns. Indeed, at most two far-field patterns are sufficient for some important applications.
13:40
The ensemble Kalman filter applied to inverse problems: a neural network based one-shot formulation

Simon Weissmann
(Heidelberg University)
The ensemble Kalman filter applied to inverse problems: a neural network based one-shot formulation
Simon Weissmann
(Heidelberg University)
13:40  14:00
Room: HS 1
The ensemble Kalman filter (EnKF) is a widely used methodology for data assimilation problems and has recently been generalized to inverse problems, where it is known as ensemble Kalman inversion (EKI). We view the method as a derivative-free optimization method for a least-squares misfit functional, and we present various variants of the scheme, such as regularized EKI methods. This opens up the perspective of using the method in various areas of application, such as imaging, groundwater flow problems and biological problems, as well as in the context of the training of neural networks. In particular, we will present an application of EKI to recent machine learning approaches, where we consider the incorporation of neural networks into inverse problems. We replace the complex forward model by a neural network acting as a physics-informed surrogate model, which is trained in a one-shot fashion. This means that we train the unknown parameter and the neural network at once, i.e. the neural network is only trained for the underlying unknown parameter. We connect the neural network based one-shot formulation to the Bayesian approach for inverse problems and apply ensemble Kalman inversion in order to solve the optimization problem. Furthermore, we provide numerical experiments to highlight the promise of the neural network based one-shot formulation together with the application of ensemble Kalman inversion.
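For a linear forward model, one discrete EKI update is built from empirical covariances of the ensemble. The following is a minimal derivative-free sketch under assumed toy dimensions; it illustrates the basic EKI mechanism, not the one-shot neural-network formulation of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear forward model G(u) = A u, noisy data y, Gaussian noise covariance Gamma
d, p = 10, 5
A = rng.normal(size=(d, p))
u_true = rng.normal(size=p)
gamma = 1e-2
y = A @ u_true + gamma * rng.normal(size=d)
Gamma = gamma**2 * np.eye(d)

def eki(J=100, steps=50):
    U = rng.normal(size=(J, p))                    # ensemble drawn from the prior
    for _ in range(steps):
        G = U @ A.T                                # forward map of each particle
        u_bar, g_bar = U.mean(0), G.mean(0)
        Cug = (U - u_bar).T @ (G - g_bar) / J      # empirical cross-covariance
        Cgg = (G - g_bar).T @ (G - g_bar) / J      # empirical data covariance
        K = Cug @ np.linalg.inv(Cgg + Gamma)       # Kalman-type gain
        U = U + (y - G) @ K.T                      # update every ensemble member
    return U

U = eki()
print(np.linalg.norm(U.mean(0) - u_true))
```

Note that only forward-map evaluations `U @ A.T` appear, never a derivative of the model, which is exactly the derivative-free character emphasised in the abstract.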
14:00
Vector Spline Approximation on the $3d$-Ball for Ill-Posed Functional Inverse Problems in Medical Imaging

Sarah Leweke
(University of Siegen)
Vector Spline Approximation on the $3d$-Ball for Ill-Posed Functional Inverse Problems in Medical Imaging
Sarah Leweke
(University of Siegen)
14:00  14:20
Room: HS 1
Human brain activity is based on electrochemical processes, which can only be measured invasively. For this reason, induced quantities such as the magnetic flux density (via MEG) or electric potential differences (via EEG) are measured non-invasively in medicine and research. The reconstruction of the neuronal current from these measurements is a severely ill-posed problem, even though the visualization of cerebral activity is one of the main tools in brain science and diagnosis. Using an isotropic multiple-shell model for the geometry of the human head and a quasi-static approach for modelling the electromagnetic processes, a singular-value decomposition of the continuous forward operator between infinite-dimensional Hilbert spaces is derived. A full characterization of the operator's null space reveals that only the harmonic and solenoidal component of the neuronal current affects the measurements. Uniqueness of the problem can be achieved by a minimum-norm condition. The instability of the inverse problem caused by exponentially decreasing singular values requires a stable and robust regularization method. The few available measurements per time step ($\approx 100$) are irregularly distributed, with larger gaps in the facial area. On these grounds, a vector spline method for regularized functional inverse problems based on reproducing kernel Hilbert spaces is derived for dealing with these difficulties. Combined with several parameter choice methods, numerical results are shown for synthetic test cases with and without additional Gaussian white noise. The relative normalized root mean square error of the approximation as well as the relative residual do not exceed the noise level. Finally, results for real data are also demonstrated. They can be computed with only a short delay time and are reasonable with respect to physiological expectations.
14:20
Algorithmic improvements via a dictionary learning add-on

Naomi Schneider
(University of Siegen, Geomathematics Group)
Algorithmic improvements via a dictionary learning add-on
Naomi Schneider
(University of Siegen, Geomathematics Group)
14:20  14:40
Room: HS 1
In the last 10 years, the Inverse Problem Matching Pursuits (IPMPs) were proposed as alternative solvers for linear inverse problems on the sphere and the ball, e.g. from the geosciences. They have been continually developed further and tested on diverse applications, e.g. the downward continuation of the gravitational potential. This task remains a priority in geodesy due to significant contemporary challenges such as climate change. It is well-known that, for linear inverse problems on the sphere, there exists a variety of global as well as local basis systems, e.g. spherical harmonics, Slepian functions as well as radial basis functions and wavelets. All of these systems have their specific pros and cons. Nonetheless, approximations are often represented in only one of the systems. By contrast, as matching pursuits, the IPMPs realize the following line of thought: an approximation is built in a so-called best basis, i.e. a mixture of diverse trial functions. Such a basis is chosen iteratively from an intentionally overcomplete dictionary which contains several types of the mentioned global and local functions. The choice of the next best basis element aims to reduce the Tikhonov functional. In practice, an a priori finite set of trial functions was usually used, which was highly inefficient. We developed a learning add-on which enables us to work with an infinite dictionary instead while simultaneously reducing the computational cost. Moreover, it automates the dictionary choice as well. The add-on is implemented as constrained nonlinear optimization problems with respect to the characteristic parameters of the different basis systems. In this talk, we explain the learning add-on and show recent numerical results for the downward continuation of the gravitational potential.
14:40
Stability estimates for a special class of anisotropic conductivities with an ad-hoc functional

Sonia Foschiatti
(Università degli Studi di Trieste)
Stability estimates for a special class of anisotropic conductivities with an ad-hoc functional
Sonia Foschiatti
(Università degli Studi di Trieste)
14:40  15:00
Room: HS 1
The Calderón problem, also known as the inverse conductivity problem, concerns the determination of the conductivity inside a domain from knowledge of boundary data. For the isotropic case, the stability issue is almost solved. For the anisotropic case, however, things get more complicated since Tartar's observation that any diffeomorphism of the domain which keeps the boundary points fixed leaves the Dirichlet-to-Neumann map unchanged while modifying the conductivity tensor. In this talk we will introduce a special class of anisotropic conductivities for which we can prove a stability estimate. The novelty of this result lies in the fact that stability is proved using an ad-hoc functional. As a corollary, we derive a Lipschitz stability estimate in terms of the classical Dirichlet-to-Neumann map. This talk is based on joint work with Eva Sincich and Romina Gaburro.
15:00
Direct regularized reconstruction for the three-dimensional Calderón problem

Aksel Rasmussen
(Technical University of Denmark)
Direct regularized reconstruction for the three-dimensional Calderón problem
Aksel Rasmussen
(Technical University of Denmark)
15:00  15:20
Room: HS 1
Electrical impedance tomography gives rise to the severely ill-posed Calderón problem of determining the electrical conductivity distribution in a bounded domain from knowledge of the associated Dirichlet-to-Neumann map for the governing equation. The electrical conductivity of an object is of interest in many fields, notably medical imaging, where applications range from stroke detection to early detection of breast cancer. The uniqueness and stability questions for the three-dimensional problem were largely answered in the affirmative in the 1980s using complex geometrical optics solutions, and this led further to a direct reconstruction method relying on a non-physical scattering transform. In this talk we look at a direct reconstruction algorithm for the three-dimensional Calderón problem in the scope of regularization. Indeed, a suitable and explicit truncation of the scattering transform gives a stable and direct reconstruction method that is robust to small perturbations of the data. Numerical tests on simulated noisy data illustrate the feasibility and regularizing effect of the method, and suggest that the numerical implementation performs better than predicted by theory.
15:20
Stochastic EM methods with Variance Reduction for Penalised PET Reconstructions

Zeljko Kereta
(UCL)
Stochastic EM methods with Variance Reduction for Penalised PET Reconstructions
Zeljko Kereta
(UCL)
15:20  15:40
Room: HS 1
Expectation-maximization (EM) is a popular and well-established method for image reconstruction in positron emission tomography (PET) due to its simple form and desirable properties. However, it often suffers from slow convergence, and full-batch computations are often infeasible due to the large data sizes in modern scanners. Ordered subsets EM (OSEM) is an effective mitigation scheme that provides significant acceleration during the initial iterations, but it has been observed to enter a limit cycle. Another difficulty for EM methods is the incorporation of a regularising penalty, which poses additional difficulties for the maximisation step. In this work, we investigate two classes of algorithms for accelerating OSEM based on variance reduction for penalised PET reconstructions. The first is a stochastic variance-reduced EM algorithm, termed SVREM, an extension of classical EM to the stochastic context that combines classical OSEM with insights from variance reduction techniques for gradient descent and facilitates the computation of the M-step through parabolic surrogates for the penalty. The second views OSEM as preconditioned stochastic gradient ascent and applies variance reduction techniques, i.e. SAGA and SVRG, to estimate the update direction. We present several numerical experiments to illustrate the efficiency and accuracy of the two methodologies. The numerical results show that these approaches significantly outperform existing OSEM-type methods for penalised PET reconstructions and hold great potential.
15:40
Deterministic Dynamics of Ensemble Kalman Inversion

Leon Bungert
(University of Bonn)
Deterministic Dynamics of Ensemble Kalman Inversion
Leon Bungert
(University of Bonn)
15:40  16:00
Room: HS 1
The Ensemble Kalman inversion (EKI) is a powerful tool for the solution of Bayesian inverse problems of type $y=Au^\dagger+\varepsilon$, with $u^\dagger$ being an unknown parameter and $y$ a given datum subject to measurement noise $\varepsilon$. It evolves an ensemble of particles, sampled from a prior measure, towards an approximate solution of the inverse problem. In this talk I will provide a complete description of the dynamics of EKI, utilizing a spectral decomposition of the particle covariance. In particular, I will demonstrate that, despite the common folklore that EKI creates samples from the posterior measure, this is only true for its mean field limit and will suggest modifications of EKI that overcome this drawback.