2nd Alps-Adriatic Inverse Problems Workshop 2021 (AAIP 2021) Chemnitz Inverse Problems Symposium on tour

Europe/Vienna
HS 1 (Alpen-Adria-Universität Klagenfurt)

HS 1

Alpen-Adria-Universität Klagenfurt

Description

Program Overview:

The 2nd Alps-Adriatic Inverse Problems Workshop will be held as a Chemnitz Symposium on tour at the Department of Mathematics of the Alpen-Adria-Universität Klagenfurt (and, to some extent, online, depending on the pandemic situation) during September 22-24, 2021.

The aim of this workshop is to gather scientists working on the theory and applications of inverse problems in academia and industry, in order to present their research, exchange ideas, and start new collaborations. Scientists at an early stage of their career (PhD students, postdocs) are particularly encouraged to participate.

The workshop will be held in hybrid form. 

We may not be able to accommodate a talk by every participant. If a selection is necessary, in-person talks will be given preference, with the remaining slots assigned to online presentations.

Important Dates:

    Registration: August 16, 2021

    Abstract submission: August 16, 2021

    Acceptance of abstract: August 23, 2021

    Payment of fee: September 1, 2021

Invited Speakers:

  • Giovanni Alberti, Genova
  • Melina Freitag, Potsdam
  • Richard Nickl, Cambridge
  • Eva Sincich, Trieste

Local Organizing Committee:

  • Kristian Bredies, Graz
  • Markus Haltmeier, Innsbruck
  • Barbara Kaltenbacher, Klagenfurt
  • Jan-Frederik Pietschmann, Chemnitz
  • Ronny Ramlau, Linz
  • Elena Resmerita, Klagenfurt
  • Otmar Scherzer, Wien

Scientific Committee:

  • Elisa Francini, Firenze
  • Thorsten Hohage, Göttingen
  • Bernd Hofmann, Chemnitz
  • Angkana Rüland, Heidelberg

Information regarding the 3-G regulations can be found here

Program (overview):

All talks take place in room HS 1 (near the main entrance of the university), and the breaks in room Z.1.29 (right next to HS 1)

Wednesday, September 22

8:00-8:45 Registration

8:45-9:00 Opening

9:00-9:50 Melina Freitag (invited speaker)

Coffee break

10:20-10:50 Stefan Kindermann

10:50-11:20 Bastian Harrach

11:20-11:40 Sarah Eberle

11:40-12:00 Lukas Vierus

12:00-12:20 Xinpeng Huang

Lunch

13:50-14:40 Richard Nickl (invited speaker)

14:40-15:10 Markus Reiß

15:10-15:30 Tim Roith

15:30-15:50 Christian Aarset

Coffee break

16:10-16:30 Thi Ngoc Tram Nguyen

16:30-16:50 Duc Hoan Nguyen

16:50-17:10 Andreas Habring

17:10-17:30 Marcello Carioni

19:00 Conference dinner (Restaurant Felsenkeller; for bus information, see "Venue" below)

 

Thursday, September 23

9:00-9:50 Giovanni Alberti (invited speaker)

Coffee break

10:20-10:50 Christine Böckmann

10:50-11:10 Simon Hubmer

11:10-11:30 Tim Jahn

11:30-11:50 Philip Miller

11:50-12:10 Lukas Pieronek

12:10-12:30 Marco Mauritz

Lunch

14:00-14:30 Arnd Rösch

14:30-14:50 Yannick Gleichmann

14:50-15:10 David Omogbhe

15:10-15:30 Giacomo Borghi

Coffee break

15:50-16:20 Gen Nakamura

16:20-16:40 Stephanie Blanke

16:40-17:00 Ekaterina Sherina

17:00-17:20 Jan Bohr

 

19:30 GIP meeting

 

Friday, September 24 (online)

9:00-9:50 Eva Sincich (invited speaker)

break

10:20-10:50 Hanne Kekkonen

10:50-11:20 Matthias Schlottbom

11:20-11:50 Daniel Gerth

Lunch

13:00-13:20 Zehui Zhou

13:20-13:40 Xinlin Cao

13:40-14:00 Simon Weissmann

14:00-14:20 Sarah Leweke

14:20-14:40 Naomi Schneider

14:40-15:00 Sonia Foschiatti

15:00-15:20 Aksel Rasmussen

15:20-15:40 Zeljko Kereta

15:40-16:00 Leon Bungert

 

The AAIP 2021 is supported by:

  

Participants
  • Aksel Rasmussen
  • Alexander Schlüter
  • Andreas Habring
  • Annalena Albicker
  • Arnd Rösch
  • Barbara Kaltenbacher
  • Bastian Harrach
  • Bernd Hofmann
  • Bernhard Stankewitz
  • Christian Aarset
  • Christian Clason
  • Christian Gerhards
  • Christine Böckmann
  • Daniel Gerth
  • David Omogbhe
  • Douglas Pacheco
  • Duc Hoan Nguyen
  • Ekaterina Sherina
  • Elena Resmerita
  • Eric Gutierrez
  • Eva Sincich
  • Frank Hettlich
  • Frank Werner
  • Gen Nakamura
  • Giacomo Borghi
  • Giovanni Alberti
  • Giovanni Covi
  • Hanne Kekkonen
  • Jan Bohr
  • Jan-Frederik Pietschmann
  • Kamran Sadiq
  • Leon Bungert
  • Leonie Fink
  • Lisa Schätzle
  • Lukas Pieronek
  • Lukas Vierus
  • Marcello Carioni
  • Marco Mauritz
  • Markus Reiß
  • Marvin Knöller
  • María Ángeles García-Ferrero
  • Matthias Schlottbom
  • Melina Freitag
  • Naomi Schneider
  • Philip Miller
  • Phuoc Truong Huynh
  • Pornsarp Pornsawad
  • Richard Nickl
  • Robin Herz
  • Sadia Sadique
  • Sarah Eberle
  • Sarah Leweke
  • Simon Hubmer
  • Simon Weissmann
  • Sonia Foschiatti
  • Stefan Kindermann
  • Stephanie Blanke
  • Teresa Rauscher
  • Thomas Schuster
  • Thorsten Hohage
  • Tilo Arens
  • Tim Jahn
  • Tim Roith
  • Tobias Wolf
  • Tom Lahmer
  • Tram Nguyen
  • Volker Michel
  • Xinlin Cao
  • Xinpeng Huang
  • Yannik Gleichmann
  • Zehui Zhou
  • Zeljko Kereta
    • 08:00 08:45
      Registration 45m
    • 08:45 09:00
      Opening 15m
    • 09:00 09:50
      Variational Data Assimilation and low-rank solvers 50m

      Weak constraint four-dimensional variational data assimilation is an important method for incorporating observations into a (usually imperfect) model. The resulting minimisation process takes place in very high dimensions. In this talk we present two approaches for reducing the dimension, and thereby the computational cost and storage, of this optimisation problem. The first approach formulates the linearised system as a saddle point problem. We present a low-rank approach which exploits the structure of the saddle point system using techniques and theory from solving large-scale matrix equations and low-rank Krylov subspace methods. The second approach uses projection methods for reducing the system dimension. Numerical experiments with the linear advection-diffusion equation and the nonlinear Lorenz-95 model demonstrate the effectiveness of both approaches.

      Speaker: Melina Freitag
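The stacked least-squares structure behind weak-constraint 4D-Var can be illustrated on a tiny dense example. This is a minimal sketch only: the model `M`, observation operator `H`, the variances `b`, `r`, `q`, and all sizes are our illustrative choices, and the talk's contribution concerns the very large structured versions of such systems, tackled with saddle-point formulations and low-rank Krylov solvers rather than a dense solve.

```python
import numpy as np

# Weak-constraint 4D-Var for a linear model x_{k+1} = M x_k + model error:
# minimize ||x_0 - xb||^2/b + sum_k ||y_k - H x_k||^2/r
#          + sum_k ||x_{k+1} - M x_k||^2/q
# assembled as one linear least-squares problem over X = (x_0, ..., x_{N-1}).
n, N = 4, 5                        # state dimension, number of time steps
rng = np.random.default_rng(6)
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # linear model operator
H = np.eye(n)                                        # observe the full state
xb = np.zeros(n)                                     # background state
b, r, q = 1.0, 0.01, 0.1           # background, observation, model-error variances

# synthetic truth and observations
x_true = rng.standard_normal(n)
x, ys = x_true.copy(), []
for k in range(N):
    ys.append(H @ x + np.sqrt(r) * rng.standard_normal(n))
    x = M @ x

def block(k, mat):
    """Place `mat` in the k-th block column of a row of the stacked system."""
    row = np.zeros((n, n * N))
    row[:, k * n:(k + 1) * n] = mat
    return row

rows = [block(0, np.eye(n)) / np.sqrt(b)]            # background term
rhs = [xb / np.sqrt(b)]
for k in range(N):                                   # observation terms
    rows.append(block(k, H) / np.sqrt(r))
    rhs.append(ys[k] / np.sqrt(r))
for k in range(N - 1):                               # model-error terms
    rows.append((block(k + 1, np.eye(n)) - block(k, M)) / np.sqrt(q))
    rhs.append(np.zeros(n))
X, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
print("analysis at t0:", X[:n])
```

Even at this toy scale the stacked matrix is block-sparse; it is exactly this structure that the low-rank approaches in the talk exploit.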
    • 09:50 10:20
      Coffee break 30m
    • 10:20 10:50
      The tangential cone condition for EIT 30m

      The tangential cone conditions (TCCs) are sufficient conditions on a nonlinear forward operator for proving convergence of various iterative nonlinear regularization schemes such as Landweber iteration. Especially for parameter identification problems with boundary data, they have not been verified yet, even though numerical results for nonlinear iterative regularization methods usually show the expected convergence behavior. In this talk we analyze the tangential cone conditions for the classical electrical impedance tomography (EIT) problem and state sufficient conditions under which they hold, although a general result on the validity of the TCCs remains open. An important tool is the use of Loewner monotonicity, which allows us to prove the TCC in situations where, e.g., the conductivities are pointwise above or below the true conductivity. This talk is a summary of the arXiv article [1].

      [1] S. Kindermann, On the tangential cone condition for electrical impedance tomography,
      Preprint on arXiv, 2021. https://arxiv.org/abs/2105.02635

      Speaker: Stefan Kindermann (Johannes Kepler University Linz)
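The TCC inequality itself is easy to probe numerically on toy problems. A minimal sketch, using a simple componentwise-cubic forward operator rather than the EIT forward map from the talk (the operator and all constants here are our illustrative choices):

```python
import numpy as np

# Toy nonlinear forward operator F(x) = x^3 (componentwise) and the action
# of its Jacobian; the EIT forward map analyzed in the talk is far richer.
def F(x):
    return x ** 3

def F_prime(x, h):
    return 3 * x ** 2 * h          # F'(x) h

def tcc_ratio(x, x_tilde):
    """Ratio ||F(x) - F(xt) - F'(x)(x - xt)|| / ||F(x) - F(xt)||.
    The TCC requires this to be bounded by some eta < 1 (often < 1/2)
    for all x, xt in a neighborhood of the solution."""
    lhs = np.linalg.norm(F(x) - F(x_tilde) - F_prime(x, x - x_tilde))
    rhs = np.linalg.norm(F(x) - F(x_tilde))
    return lhs / rhs

rng = np.random.default_rng(0)
x = 1.0 + 0.1 * rng.standard_normal(50)
x_tilde = x + 0.01 * rng.standard_normal(50)
print("TCC ratio for a nearby pair:", tcc_ratio(x, x_tilde))
```

For smooth operators the ratio shrinks with the distance between `x` and `x_tilde`; the difficulty addressed in the talk is establishing a uniform bound on a whole neighborhood for a PDE-based forward map.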
    • 10:50 11:20
      Uniqueness and global convergence for inverse coefficient problems with finitely many measurements 30m

      Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly non-linear ill-posed inverse problem, for which unique reconstructability results, stability estimates and global convergence of numerical methods are very hard to achieve.

      In this talk we will consider an inverse coefficient problem with finitely many measurements and a finite desired resolution. We will present a criterion based on monotonicity, convexity and localized potentials arguments that allows us to explicitly estimate the number of measurements that is required to achieve the desired resolution. We also obtain an error estimate for noisy data, and overcome the problem of local minima by rewriting the problem as an equivalent uniquely solvable convex non-linear semidefinite optimization problem.

      References
      1. B. Harrach, Uniqueness, stability and global convergence for a discrete inverse elliptic Robin transmission problem, Numer. Math. 147 (2021), pp. 29-70, https://doi.org/10.1007/s00211-020-01162-8
      2. B. Harrach, Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming, arXiv preprint (2021), arXiv:2105.11440

      Speaker: Bastian Harrach (Goethe University Frankfurt)
    • 11:20 11:40
      Monotonicity-Based Regularization for Shape Reconstruction in Linear Elasticity 20m

      We deal with the shape reconstruction of inclusions in elastic bodies and solve the inverse problem by means of a monotonicity-based regularization. In more detail, we show how the monotonicity methods can be converted into a regularization method for a data-fitting functional without losing the convergence properties of the monotonicity methods. In doing so, we introduce constraints on the minimization problem of the residual based on the monotonicity methods and prove the existence and uniqueness of a minimizer as well as the convergence of the method for noisy data. In addition, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with a standard approach (one-step linearization with Tikhonov-like regularization), which also shows the robustness of our method regarding noise in practice.

      Speaker: Sarah Eberle (Goethe University Frankfurt)
    • 11:40 12:00
      Diffractive tensor field tomography as an inverse problem for a transport equation 20m

      We consider a holistic approach to find a closed formula for the generalized ray transform of a tensor field. This means that we take refraction, attenuation and time-dependence into account. We model the refraction by an appropriate Riemannian metric, which leads to an integration along geodesics. The absorption appears as an attenuation coefficient in an exponential factor. The derived explicit integral formula solves a transport equation whose boundary conditions are given by the measured data. Deriving the weak formulation of the problem, we obtain solutions in Sobolev-Bochner spaces. While this fails to guarantee a unique solution of the implied initial boundary value problem (IBVP), it is possible to prove uniqueness of viscosity solutions by using the Lax-Milgram theorem. For this, however, certain restrictions on the refractive index and the attenuation coefficient must be assumed. Considering the parameter-to-solution map as the forward operator, the inverse problem can be solved by minimizing a Tikhonov functional. Here the adjoint operator can also be identified as the solution of an IBVP.

      Speaker: Lukas Vierus (Saarland University)
    • 12:00 12:20
      An Inverse Magnetization Problem on the Sphere with Localization Constraints 20m

      We study an inverse magnetization problem arising in geo- and planetary magnetism. This problem is non-unique and the null space can be characterized by the Hardy-Hodge decomposition. The additional assumption that the underlying magnetization is spatially localized in a subdomain of the sphere (which can be justified when interested, e.g., in regional magnetic anomalies) ameliorates the non-uniqueness issue so that only the tangential divergence-free contribution remains undetermined. In a previous reconstruction approach, we addressed the localization by including an additional penalty term in the minimizing functional. This, however, requires the coestimation of the undetermined divergence-free contribution. Here, we present a first attempt at more directly including the localization constraint without requiring such a coestimation. In addition, we show that the localization constraint is closely connected to the problem of extrapolation in Hardy spaces.

      Speaker: Xinpeng Huang (TU Bergakademie Freiberg, Institute of Geophysics and Geoinformatics)
    • 12:20 13:50
      Lunch 1h 30m
    • 13:50 14:40
      Bayesian non-linear inversion problems and PDEs: progress and challenges 50m

      We review the Bayesian approach to inverse problems, and describe recent progress in our theoretical understanding of its performance in non-linear situations. Statistical and computational guarantees for such algorithms will be provided in high-dimensional, non-convex scenarios, and model examples from elliptic and transport (X-ray type) PDE problems will be discussed. The connection between MCMC and other existing iterative methods will be touched upon, and several open mathematical problems will be described.

      Speaker: Prof. Richard Nickl (University of Cambridge)
    • 14:40 15:10
      A discrepancy-type stopping rule for conjugate gradients under white noise 30m

      We consider a linear inverse problem of the form $y=Ax+\epsilon \dot W$ where the action of the operator (matrix) $A$ on the unknown $x$ is corrupted by white noise (a standard Gaussian vector) $\dot W$ of level $\epsilon>0$. We study the candidate solutions $\hat x_m$ provided by the $m$-th conjugate gradient CGNE iterates. Refining Nemirovskii's trick, we are able to provide explicit error bounds for the best (oracle) iterate along the iteration path. This yields optimal estimation rates over polynomial source conditions.
      In a second step we identify monotonic proxies for bias (approximation error) and variance (stochastic error) of the nonlinear estimators $\hat x_m$ and develop a residual-based stopping rule for a data-driven choice $\hat m$ of the number of iterations. This yields a stochastic version of the discrepancy principle. Using tools from concentration of measure and extending deterministic ideas by Hanke, we can provide an oracle-type inequality for the prediction error $E[\|A(\hat x_{\hat m}-x)\|^2]$ (non-trivial under white noise), which gives rate-optimality up to a dimensionality effect. Finally, we provide partial results also for the estimation error $E[\|\hat x_{\hat m}-x\|^2]$, discussing the challenges generated by the statistical noise.

      Speaker: Markus Reiß (Humboldt-Universität zu Berlin)
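The basic mechanism of a residual-based stopping rule for conjugate gradients can be sketched on a finite-dimensional toy problem. This is a naive deterministic discrepancy check (stop when $\|Ax_m-y\|\le\tau\cdot\epsilon\sqrt{n}$, the effective white-noise norm); the talk's rule replaces it with monotonic bias and variance proxies adapted to the nonlinearity of the CG estimators. All parameter choices below are ours:

```python
import numpy as np

def cgne(A, y, noise_level, tau=1.5, max_iter=500):
    """CG applied to the normal equations A^T A x = A^T y (CGNE/CGLS),
    stopped once the data residual drops below tau * noise_level."""
    x = np.zeros(A.shape[1])
    r = y - A @ x                  # data-space residual
    s = A.T @ r                    # normal-equations residual
    p = s.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * noise_level:
            break
        q = A @ p
        alpha = (s @ s) / (q @ q)
        x += alpha * p
        r -= alpha * q
        s_new = A.T @ r
        p = s_new + ((s_new @ s_new) / (s @ s)) * p
        s = s_new
    return x

rng = np.random.default_rng(1)
n, eps = 100, 1e-3
A = np.diag(1.0 / np.arange(1, n + 1))        # polynomially decaying spectrum
x_true = np.ones(n)
y = A @ x_true + eps * rng.standard_normal(n)  # white noise of level eps
x_hat = cgne(A, y, eps * np.sqrt(n))           # ||eps * W|| ~ eps * sqrt(n)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```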
    • 15:10 15:30
      A Bregman Learning Framework for Sparse Neural Networks 20m

      I will present a novel learning framework based on stochastic Bregman iterations. It allows one to train sparse neural networks with an inverse scale space approach, starting from a very sparse network and gradually adding significant parameters. Apart from a baseline algorithm called LinBreg, I will also speak about an accelerated version using momentum, and AdaBreg, which is a Bregmanized generalization of the Adam algorithm. I will present a statistically profound sparse parameter initialization strategy, stochastic convergence analysis of the loss decay, and additional convergence proofs in the convex regime. The Bregman learning framework can also be applied to Neural Architecture Search and can, for instance, unveil autoencoder architectures for denoising or deblurring tasks.

      Speaker: Tim Roith (Friedrich-Alexander-Universität Erlangen-Nürnberg)
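The inverse-scale-space mechanism underlying LinBreg is already visible in the linearized Bregman iteration for a sparse linear problem: parameters stay exactly zero until their accumulated subgradient exceeds the threshold, and then enter one by one. A minimal sketch on sparse recovery (not the network training of the talk; step size, threshold and problem sizes are our illustrative choices):

```python
import numpy as np

def shrink(v, lam):
    """Soft-thresholding, the proximal map of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, y, lam=1.0, step=1e-2, n_iter=5000):
    """Linearized Bregman iteration for sparse recovery from A x = y.
    LinBreg in the talk applies this inverse-scale-space idea to the
    weights of a neural network with a stochastic loss gradient."""
    v = np.zeros(A.shape[1])        # accumulated subgradient variable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v -= step * (A.T @ (A @ x - y))   # gradient step on the smooth loss
        x = shrink(v, lam)                # coordinates activate as |v| grows
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = linearized_bregman(A, y)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```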
    • 15:30 15:50
      iPALM-based unsupervised energy disaggregation 20m

      With smart energy meters increasingly available to private households, new applications arise, such as identifying main power consuming devices and predicting human activity. One major obstacle is that smart energy meters typically provide aggregated data, where each source of energy consumption is summed. Further, obtaining training data can be intrusive. To counteract this, we propose an unsupervised minimization approach based on the Inertial Proximal Alternating Linearized Minimization (iPALM) algorithm, utilising convolutional sparse coding to represent individual device energy signatures as atoms convolved with sparse coefficient vectors.

      Speaker: Christian Aarset (University of Graz)
    • 15:50 16:10
      Coffee break 20m
    • 16:10 16:30
      Parameter identification for PDEs: From neural-network-based learning to discretized inverse problems 20m

      We investigate the problem of learning an unknown nonlinearity in parameter-dependent PDEs. The nonlinearity is represented via a neural network of the unknown state. The learning-informed PDE model has three unknowns: the physical parameter, the state and the nonlinearity. We propose an all-at-once approach to the minimization problem. (Joint work with Martin Holler and Christian Aarset.)
      More generally, the representation via neural networks can be realized as a discretization scheme. We study convergence of Tikhonov and Landweber methods for the discretized inverse problems, and prove convergence as the discretization error approaches zero. (Joint work with Barbara Kaltenbacher.)

      Speaker: Tram Nguyen
    • 16:30 16:50
      On a regularization of unsupervised domain adaptation in RKHS 20m

      We analyze the use of the so-called general regularization scheme in the scenario of unsupervised domain adaptation under the covariate shift assumption. Learning algorithms arising from the above scheme are generalizations of importance weighted regularized least squares method, which up to now is among the most used approaches in the covariate shift setting. We explore a link between the considered domain adaptation scenario and estimation of Radon-Nikodym derivatives in reproducing kernel Hilbert spaces, where the general regularization scheme can also be employed and is a generalization of the kernelized unconstrained least-squares importance fitting. We estimate the convergence rates of the corresponding regularized learning algorithms and discuss how to resolve the issue with the tuning of their regularization parameters. The theoretical results are illustrated by numerical examples, one of which is based on real data collected for automatic stenosis detection in cervical arteries.

      Speaker: Duc Hoan Nguyen (Johann Radon Institute)
    • 16:50 17:10
      A Generative Variational Model for Inverse Problems in Imaging 20m

      In recent years deep/machine learning methods using convolutional networks have become increasingly popular also in inverse problems, mainly due to their practical performance [1]. In many cases these methods outperform conventional regularization methods, such as total variation regularization, in particular when applied to more complicated data such as images containing texture. A major downside of machine learning methods, however, is the need for large sets of training data, which are often not available in the necessary extent. Moreover, the level of analytic understanding of machine learning methods, in particular in view of an analysis for inverse problems in function space, is still far from that of conventional variational methods.
      In this talk, we propose a novel regularization method for solving inverse problems in imaging, which is inspired by the architecture of convolutional neural networks as seen in many deep learning approaches. In the model, the unknown is generated from a variable in latent space via multi-layer convolutions and non-linear penalties. In contrast to conventional deep learning methods, however, the convolution kernels are learned directly from the given (possibly noisy) data, such that no training is required.
      In the talk, we will motivate the model and provide theoretical results on the existence and stability of solutions and on convergence for vanishing noise in function space. Afterwards, in a discretized setting, we will show practical results of our method in comparison to a state-of-the-art deep learning method [1].

      [1] V. Lempitsky, A. Vedaldi, and D. Ulyanov, Deep image prior, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

      Speaker: Andreas Habring (University of Graz)
    • 17:10 17:30
      Generalized conditional gradient methods for variational inverse problems with convex regularizers 20m

      In this talk, we propose and analyze a generalized conditional gradient method for infinite dimensional variational inverse problems written as the sum of a smooth, convex loss function and a, possibly non-smooth, convex regularizer.
      Our method relies on the mutual update of a sequence of extremal points of the unit ball of the regularizer and a sparse iterate given as a suitable linear combination of such extreme points.
      We show that under standard hypotheses on the minimization problem, our algorithm converges sublinearly to a solution of the inverse problem. Moreover, we demonstrate that by imposing additional assumptions on the structure of the minimizers, the associated dual variables and the nondegeneracy of the problem, we can improve this convergence result to a linear rate.
      Then we apply our generalized conditional gradient method to solve dynamic inverse problems regularized with the Benamou-Brenier energy. Relying on recent results about the characterization of the extremal points of the ball of the Benamou-Brenier energy, we show that our algorithm can be applied to this specific example to reconstruct the motion in heavily undersampled dynamic data in the presence of noise.

      Speaker: Marcello Carioni (University of Cambridge)
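The finite-dimensional ancestor of this scheme is the classical conditional gradient (Frank-Wolfe) method, where each iteration adds one extreme point of the constraint ball. A minimal sketch for least squares over an $\ell^1$-ball, whose extreme points are signed scaled coordinate vectors; the infinite-dimensional method of the talk replaces these with extremal points of the regularizer's unit ball (problem sizes and the step-size rule below are standard textbook choices, not the talk's):

```python
import numpy as np

def frank_wolfe_l1(A, y, radius=3.0, n_iter=1000):
    """Conditional gradient for min ||A x - y||^2 over the l1-ball of the
    given radius. Each step moves toward a single extreme point (a signed
    coordinate vector), so iterates are sparse linear combinations."""
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        grad = A.T @ (A @ x - y)
        i = np.argmax(np.abs(grad))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(grad[i])   # extreme point minimizing <grad, s>
        gamma = 2.0 / (k + 2)               # standard step size, sublinear rate
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 20]] = [1.5, -1.0]               # l1-norm 2.5 < radius, so feasible
y = A @ x_true
x_hat = frank_wolfe_l1(A, y)
print("final objective:", np.linalg.norm(A @ x_hat - y) ** 2)
```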
    • 19:00 21:00
      Conference Dinner (Restaurant Felsenkeller) 2h
    • 09:00 09:50
      Infinite-dimensional inverse problems with finite measurements 50m

      In this talk I will discuss uniqueness, stability and reconstruction for infinite-dimensional nonlinear inverse problems with finite measurements, under the a priori assumption that the unknown lies in, or is well-approximated by, a finite-dimensional subspace or submanifold. The methods are based on the interplay of applied harmonic analysis, in particular sampling theory and compressed sensing, and the theory of inverse problems for partial differential equations. Several examples, including the Calderón problem and scattering, will be discussed.

      Speaker: Giovanni Alberti (University of Genova)
    • 09:50 10:20
      Coffee break 30m
    • 10:20 10:50
      Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition 30m

      We present two families of regularization methods for solving nonlinear ill-posed problems between Hilbert spaces, obtained by applying the family of Runge-Kutta methods to an initial value problem, in particular, to the asymptotical regularization method.
      In Hohage [1], a systematic study of convergence rates for regularization methods under logarithmic source condition including the case of operator approximations for a priori and a posteriori stopping rules is provided.
      We prove the logarithmic convergence rate of the families of usual and modified iterative Runge-Kutta methods under the logarithmic source condition, and numerically verify the obtained results. The iterative regularization is terminated by the a posteriori discrepancy principle of Pornsawad et al. [2]. Up to now, the logarithmic convergence rate under the logarithmic source condition has only been investigated for particular examples, namely the Levenberg–Marquardt method [3] and the modified Landweber method [4]. Here, we extend the results to the whole family of Runge-Kutta-type methods with and without modification.

      [1] Hohage, T., Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optimiz. 2000, 21, 439–464.
      [2] Pornsawad, P., Resmerita, E., Böckmann, C., Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition, Mathematics 2021, 9, 1042.
      [3] Böckmann, C., Kammanee, A., Braunß, A., Logarithmic convergence rate of Levenberg–Marquardt method with application to an inverse potential problem. J. Inv. Ill-Posed Probl. 2011, 19, 345–367.
      [4] Pornsawad, P., Sungcharoen, P., Böckmann, C., Convergence rate of the modified Landweber method for solving inverse potential problems. Mathematics 2020, 8, 608.

      Speaker: Christine Böckmann (University of Potsdam, Institute of Mathematics, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany)
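The construction behind these method families can be sketched on a linear toy problem: the asymptotical regularization ODE $x'(t) = A^*(y - Ax(t))$ is integrated by a Runge-Kutta scheme and stopped by the discrepancy principle. Explicit Euler recovers Landweber; below we use Heun's second-order scheme. This is a simplified linear illustration with our own parameter choices, not the nonlinear Hilbert-space setting of the talk:

```python
import numpy as np

def heun_regularization(A, y, delta, tau=1.2, step=0.5, max_iter=20000):
    """Heun (2nd-order Runge-Kutta) time stepping for the asymptotical
    regularization ODE x'(t) = A^T (y - A x(t)), terminated by the
    a posteriori discrepancy principle ||A x - y|| <= tau * delta."""
    x = np.zeros(A.shape[1])
    f = lambda z: A.T @ (y - A @ z)    # right-hand side of the ODE
    for _ in range(max_iter):
        if np.linalg.norm(A @ x - y) <= tau * delta:
            break
        k1 = f(x)
        k2 = f(x + step * k1)
        x = x + 0.5 * step * (k1 + k2)
    return x

rng = np.random.default_rng(4)
n, eps = 20, 1e-3
A = np.diag(1.0 / np.arange(1, n + 1))   # ill-conditioned diagonal operator
x_true = np.ones(n)
y = A @ x_true + eps * rng.standard_normal(n)
x_hat = heun_regularization(A, y, eps * np.sqrt(n))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Swapping `f`-evaluations and stage weights changes the Runge-Kutta member; the talk's analysis covers the whole family, with and without modification.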
    • 10:50 11:10
      Frame Decompositions and Inverse Problems 20m

      The singular-value decomposition (SVD) is an important tool for the analysis and solution of linear ill-posed problems in Hilbert spaces. However, it is often difficult to derive the SVD of a given operator explicitly, which limits its practical usefulness. An alternative in these situations are frame decompositions (FDs), which are a generalization of the SVD based on suitably connected families of functions forming frames. Similar to the SVD, these FDs encode information on the structure and ill-posedness of the problem and can be used as the basis for the design and implementation of efficient numerical solution methods. Crucially though, FDs can be derived explicitly for a wide class of operators, in particular for those satisfying a certain stability condition. In this talk, we consider various theoretical aspects of FDs such as recipes for their construction and some properties of the reconstruction formulae induced by them. Furthermore, we present convergence and convergence rates results for continuous regularization methods based on FDs under both a-priori and a-posteriori parameter choice rules. Finally, we consider the practical utility of FDs for solving inverse problems by considering two numerical examples from computerized and atmospheric tomography.

      Speaker: Dr Simon Hubmer (Johann Radon Institute Linz)
    • 11:10 11:30
      A modified discrepancy principle to attain optimal rates for polynomially and exponentially ill-posed problems under white noise 20m

      We consider a linear ill-posed equation in the Hilbert space setting under white noise. Known convergence results for the discrepancy principle are either restricted to Hilbert-Schmidt operators (and they require a self-similarity condition for the unknown solution in addition to a classical source condition) or to polynomially ill-posed operators (excluding exponentially ill-posed problems). In this work we show optimal convergence of a modified discrepancy principle for both polynomially and exponentially ill-posed operators (without further restrictions), solely under either Hölder-type or logarithmic source conditions. In particular, the method includes only a single simple hyperparameter, which does not need to be adapted to the type of ill-posedness.

      Speaker: Tim Jahn
    • 11:30 11:50
      Convergence rates for oversmoothing Banach space regularization 20m

      We show convergence rates results for Banach space regularization in the case of oversmoothing, i.e. if the penalty term fails to be finite at the unknown solution. We present a flexible approach based on K-interpolation theory which provides more general and complete results than classical variational regularization theory based on various types of source conditions for true solutions contained in the penalty's domain. In particular, we prove order optimal convergence rates for bounded variation regularization. Moreover, we show a result for sparsity promoting wavelet regularization and demonstrate in numerical simulations for a parameter identification problem in a differential equation that our theoretical results correctly predict rates of convergence for piecewise smooth unknown coefficients.

      Speaker: Philip Miller (Institute for Numerical and Applied Mathematics, University of Göttingen, Germany)
    • 11:50 12:10
      Regularisation of certain non-linear problems in $L^{\infty}$ 20m

      In many cases the parameters of interest in inverse problems arise as coefficients of PDE models for which $L^{\infty}$ is one of the most natural spaces. Despite its formal connection to the regular and regularisation-approved $L^p$-spaces, $L^{\infty}$ itself is non-smooth, non-reflexive and non-separable. Hence, standard Banach space methods generally fail and the need of discretisation in practice makes it even hopeless to aim for good reconstructions in the strong topology. In this talk we present a novel regularisation method which generates uniformly bounded iterates as approximate solutions to locally ill-posed equations and for which the regularisation property then holds with respect to weak-$\ast$ convergence. Numerical examples will complete our analysis.

      Speaker: Dr Lukas Pieronek (KIT)
    • 12:10 12:30
      Variational analysis of a dynamic PET reconstruction model with optimal transport regularization 20m

      We consider the dynamic Positron Emission Tomography (PET) reconstruction method proposed by Schmitzer et al. $[1]$ that particularly aims to reconstruct the temporal evolution of single or small numbers of cells by leveraging optimal transport. Using a MAP estimate, the cells' evolution is reconstructed by minimizing a functional $\mathcal{E}_n$ - composed of a Kullback-Leibler-type data fidelity term and the Benamou-Brenier functional - over the space of positive Radon measures. This choice of regularization ensures temporal consistency between different time points.

      The PET measurements in our forward model are described by Poisson point processes with a given intensity $q_n$. In the talk we show $\Gamma$-convergence of the stochastic functionals $\mathcal{E}_n$ to a deterministic limit functional as $q_n\to \infty$. This helps in understanding the properties of the considered reconstruction method for an increasing SNR. To compute the $\Gamma$-limit, we show convergence of Poisson point processes for intensities growing to infinity as well as convergence of the optimal transport regularization. The latter requires the approximation of arbitrary Radon measures by ones satisfying the continuity equation while controlling the Benamou-Brenier energy.

      Reference:
      $[1]$ B. Schmitzer, K. P. Schäfers, and B. Wirth. Dynamic Cell Imaging in PET with
      Optimal Transport Regularization. IEEE Transactions on Medical Imaging,
      2019.

      Speaker: Marco Mauritz (University of Münster, Institute for Analysis and Numerics)
    • 12:30 14:00
      Lunch 1h 30m
    • 14:00 14:30
      Ill-posedness effects for well-posed problems 30m

      In this talk we study the discretization of a well-posed nonlinear problem.
      It may happen that the discretized solutions do not converge. However, this effect disappears for a suitably chosen optimal control problem.

      Speaker: Arnd Rösch (Universität Duisburg-Essen)
    • 14:30 14:50
      Adaptive Spectral Decomposition for Inverse Scattering Problems 20m

      A nonlinear optimization method is proposed for inverse scattering problems, when the unknown medium is characterized by one or several spatially varying parameters. The inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type method. Instead of a grid-based discrete representation, each parameter is projected to a separate finite-dimensional subspace, which is iteratively adapted during the optimization. Each subspace is spanned by the first few eigenfunctions of a linearized regularization penalty functional chosen a priori. The (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization and, in practice, appears more robust to missing data or added noise. Numerical results illustrate the accuracy and efficiency of the resulting adaptive spectral regularization for inverse scattering problems for the wave equation in the time domain.

      Speaker: Yannik G. Gleichmann
    • 14:50 15:10
      An inverse source problem for vector fields 20m

      We consider an inverse source problem for the stationary radiative transport through a two-dimensional absorbing and scattering medium. The attenuation and scattering properties of the medium are assumed known, and the unknown vector field source is isotropic. For scattering kernels of finite Fourier content in the angular variable, we show how to recover the isotropic vector field sources from boundary measurements. The approach is based on the Cauchy problem for a Beltrami-like equation associated with $A$-analytic maps in the sense of Bukhgeim. This is joint work with Kamran Sadiq (RICAM).

      Speaker: David Omogbhe (Johann Radon Institute for Computational and Applied Mathematics (RICAM))
    • 15:10 15:30
      Constrained consensus-based optimization via penalization 20m

      Constrained optimization problems represent a challenge when the objective function is non-differentiable, multimodal and the feasible region lacks regularity. In our talk, we will introduce a swarm-based optimization algorithm which is capable of handling generic non-convex constraints by means of a penalization technique. The method extends the class of consensus-based optimization (CBO) methods to the constrained setting; in this class, a swarm of interacting particles explores the objective function landscape following a consensus dynamics.
      In our algorithm, we perform a time discretization of the system evolution and tune the parameters to effectively avoid non-admissible regions of the domain. While the particle dynamics may appear simple, recovering convergence guarantees represents the real difficulty when dealing with swarm-based methods. In the talk, we will present the essential mean-field tools that allowed us to theoretically analyze the algorithm and obtain convergence results of its mean-field counterpart under mild assumptions. To conclude, we will discuss both the algorithm performance on benchmark problems and numerical experiments of the mean-field dynamics.
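      The penalized consensus dynamics described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' algorithm: the quadratic penalty, the component-wise noise, and all parameter values are assumptions made for this example.

```python
import numpy as np

def cbo_penalized(f, penalty, dim, n_particles=100, n_steps=1000,
                  dt=0.01, alpha=30.0, lam=1.0, sigma=0.7, beta=100.0, seed=0):
    """Consensus-based optimization for min f(x) on {penalty(x) = 0},
    handled via the penalized objective f(x) + beta * penalty(x).
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))  # initial swarm
    m = X.mean(axis=0)
    for _ in range(n_steps):
        E = np.array([f(x) + beta * penalty(x) for x in X])
        w = np.exp(-alpha * (E - E.min()))            # Gibbs weights, shifted for stability
        m = (w[:, None] * X).sum(axis=0) / w.sum()    # consensus point
        drift = -lam * (X - m) * dt
        noise = sigma * (X - m) * np.sqrt(dt) * rng.standard_normal(X.shape)
        X = X + drift + noise                         # relax toward consensus, with noise
    return m
```

      Here the consensus point is a Gibbs-type weighted mean of the swarm, and the penalty weight `beta` steers the particles away from non-admissible regions.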

      Speaker: Mr Giacomo Borghi (RWTH Aachen University)
    • 15:30 15:50
      Coffee break 20m
    • 15:50 16:20
      Holmgren-John unique continuation for viscoelastic equation 30m

      We are concerned with the Holmgren-John unique continuation theorem for a viscoelastic equation with a memory term, in the case where the coefficients of the equation are analytic. This is a special case of the general unique continuation property (UCP) for the equation with analytic coefficients. The equation describes the viscoelastic behavior of a medium. In this talk we will present the UCP for the viscoelastic equation when the relaxation tensor is analytic and allowed to be fully anisotropic. We will describe the UCP in terms of a distance defined by the travel time of the slowest wave associated with the elastic part of the equation.

      The collaborators of this study are Maarten de Hoop (Rice University), Matthias Eller (Georgetown University) and Ching-Lung Lin (National Cheng-Kung University).

      Speaker: Gen Nakamura
    • 16:20 16:40
      Radon-based image reconstruction in magnetic particle imaging using an FFL-scanner 20m

      Reliable and fast medical imaging techniques are indispensable for diagnostics in clinical everyday life. A promising example is magnetic particle imaging (MPI), invented by Gleich and Weizenecker [1]. MPI is a tracer-based imaging method allowing for the reconstruction of the spatial distribution of magnetic nanoparticles by exploiting their non-linear magnetization response to changing magnetic fields. We focus on MPI using a field-free line (FFL) for spatial encoding [2]. For data acquisition the FFL is moved through the field of view, resulting in a scanning geometry resembling the one in computerized tomography. Indeed, in the ideal setting, corresponding MPI data can be traced back to the Radon transform of the particle concentration [3]. We jointly reconstruct Radon data and particle concentration by means of total variation regularization and have a look at some numerical examples. We conclude with problems that arise when leaving the ideal setting. For example, in practice, we are confronted with imperfections of the applied magnetic fields, leading to deformed low-field volumes and, when ignored, to image artifacts.

      References:
      [1] Gleich B and Weizenecker J 2005 Tomographic imaging using the nonlinear response of magnetic particles Nature 435 1214-1217
      (https://doi.org/10.1038/nature03808)
      [2] Weizenecker J, Gleich B, and Borgert J 2008 Magnetic particle imaging using a field free line J. Phys. D: Appl. Phys. 41 105009
      (https://doi.org/10.1088/0022-3727/41/10/105009)
      [3] Knopp T, Erbe M, Sattel T F, Biederer S, and Buzug T M 2011 A Fourier slice theorem for magnetic particle imaging using a field-free line Inverse Problems 27 095004
      (https://doi.org/10.1088/0266-5611/27/9/095004)

      Speaker: Ms Stephanie Blanke (Universität Hamburg)
    • 16:40 17:00
      From displacement field to parameter estimation: theory and application 20m

      Diseases like cancer or arteriosclerosis often cause changes of tissue stiffness on the micrometer scale. Elastography is a common technique for medical diagnostics developed to detect these changes. We consider a complex problem of estimating both the internal displacement field and the material parameters of an object which is being subjected to a deformation. In particular, we present our recently developed elastographic optical flow method (EOFM) for motion detection from optical coherence tomography images. This method takes into account experimental constraints, such as appropriate boundary conditions, the use of speckle information, as well as the inclusion of structural information derived from knowledge of the background material. Furthermore, we present numerical results based on both simulated and experimental data from an elastography experiment and discuss the material parameter estimation from these data.

      Speaker: Dr Ekaterina Sherina (University of Vienna)
    • 17:00 17:20
      Recent analytical progress on some nonlinear tomography problems 20m

      We consider a class of nonlinear inverse problems, encompassing e.g. Polarimetric Neutron Tomography (PNT), where one seeks to recover a magnetic field by probing it with Neutron beams and measuring the resulting spin change. In recent years there has been great progress on fundamental theoretical questions regarding injectivity and stability properties for PNT and we survey some of the latest results, including a novel range characterisation for the forward map. One of the drivers behind these results is the desire to give rigorous guarantees for the statistical performance of Bayesian algorithms. The talk is based on joint work with Gabriel Paternain and Richard Nickl.

      Speaker: Jan Bohr (University of Cambridge)
    • 19:30 21:30
      GIP Meeting 2h
    • 09:00 09:50
      Stable determination of a rigid scatterer in elastodynamics 50m

      We deal with an inverse elastic scattering problem for the shape determination of a rigid scatterer in the time-harmonic regime. We prove a local stability estimate of log log type for the identification of a scatterer by a single far-field measurement.
      The needed a priori condition on the closeness of the scatterers is estimated by the universal constant appearing in the Friedrichs inequality.

      This is based on a joint work with Luca Rondi and Mourad Sini.

      Speaker: Ms Eva Sincich (University of Trieste)
    • 09:50 10:20
      break 30m
    • 10:20 10:50
      Consistency of Bayesian inference with Gaussian process priors for a parabolic inverse problem 30m

      We consider the statistical nonlinear inverse problem of recovering the absorption term $f > 0$ in the heat equation, with given boundary and initial value functions, from $N$ discrete noisy point evaluations of the solution $u_f$. We study the statistical performance of Bayesian nonparametric procedures based on Gaussian process priors, that are often used in practice. We show that, as the number of measurements increases, the resulting posterior distributions concentrate around the true parameter $f^*$ that generated the data, and derive a convergence rate for the reconstruction error of the associated posterior means. We also consider the optimality of the contraction rates and prove a lower bound for the minimax convergence rate for inferring $f$ from the data, and show that optimal rates can be achieved with truncated Gaussian priors.

      Speaker: Dr Hanne Kekkonen (Delft University of Technology)
    • 10:50 11:20
      A model reduction approach for inverse problems with operator valued data 30m

      We study the efficient numerical solution of linear inverse problems with operator valued data which arise, e.g., in seismic exploration, inverse scattering, or tomographic imaging. The high-dimensionality of the data space implies extremely high computational cost already for the evaluation of the forward operator, which makes a numerical solution of the inverse problem, e.g., by iterative regularization methods, practically infeasible. To overcome this obstacle, we develop a novel model reduction approach that takes advantage of the underlying tensor product structure of the problem and which allows us to obtain low-dimensional certified reduced order models of quasi-optimal rank. The theoretical results are illustrated by application to a typical model problem in fluorescence optical tomography.

      Speaker: Matthias Schlottbom (University of Twente)
    • 11:20 11:50
      Regularization as an approximation problem 30m

      Classically, regularization methods are often divided into three frameworks: variational regularization, iterative regularization, and regularization by projection. In this talk we consider regularization as an approximation problem in the classical Hilbert space setting. This enables us to treat all three categories in the same framework, which we demonstrate on Tikhonov regularization and Landweber iteration. Our approach provides new insight into the way regularization works, helps in understanding parameter choice rules, naturally includes discrete (finite dimensional) problems and, perhaps most importantly, yields a numerically observable and computable quantity, namely a source element for the regularized solutions, that contains information about the smoothness of the unknown solution and the noise.
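      For Tikhonov regularization in the linear Hilbert space setting, a source element of the kind mentioned above is explicitly computable: the regularized solution $x_\alpha = (A^*A + \alpha I)^{-1}A^*y$ always admits the representation $x_\alpha = A^* w_\alpha$ with $w_\alpha = (AA^* + \alpha I)^{-1}y$. A minimal finite-dimensional sketch (the function and variable names are ours, not the speaker's):

```python
import numpy as np

def tikhonov_with_source(A, y, alpha):
    """Tikhonov regularization x_a = (A^T A + alpha I)^{-1} A^T y, returned
    together with the computable source element w_a = (A A^T + alpha I)^{-1} y,
    which satisfies x_a = A^T w_a exactly."""
    n, d = A.shape
    x = np.linalg.solve(A.T @ A + alpha * np.eye(d), A.T @ y)
    w = np.linalg.solve(A @ A.T + alpha * np.eye(n), y)
    return x, w
```

      The identity $x_\alpha = A^* w_\alpha$ follows from $(A^*A + \alpha I)^{-1}A^* = A^*(AA^* + \alpha I)^{-1}$, so the source element comes at no extra cost beyond one linear solve.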

      Speaker: Daniel Gerth
    • 11:50 13:00
      Lunch 1h 10m
    • 13:00 13:20
      Beating the Saturation Phenomenon of Stochastic Gradient Descent 20m

      Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. The current mathematical theory, through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon, i.e., the convergence rate does not further improve with the solution regularity index when it is beyond a certain range. In this talk, I will present our recent results on beating this saturation phenomenon:
      (i) (By using small initial stepsize.) We derive a refined convergence rate analysis of SGD, which shows that saturation actually does not occur if the initial stepsize of the schedule is sufficiently small.
      (ii) (By using Stochastic variance reduced gradient (SVRG), a popular variance reduction technique for SGD.) We prove that, for a suitable constant step size schedule, SVRG can achieve an optimal convergence rate in terms of the noise level (under suitable regularity condition), which means the saturation does not occur.
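      As a toy illustration of the setting, the following sketch runs SGD with the polynomially decaying schedule $\eta_k = \eta_0 k^{-p}$ on a finite-dimensional least-squares problem. The problem, the sampling scheme and the parameter values are illustrative assumptions, not the talk's analysis; $\eta_0$ plays the role of the initial stepsize whose smallness is at issue in (i).

```python
import numpy as np

def sgd_linear(A, y, n_iter=20000, eta0=0.05, p=0.5, seed=0):
    """SGD for min_x (1/2)||Ax - y||^2, sampling one row per step, with the
    polynomially decaying schedule eta_k = eta0 * k^{-p}."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, n_iter + 1):
        i = rng.integers(n)                 # pick a random data row
        g = (A[i] @ x - y[i]) * A[i]        # gradient of the i-th summand
        x -= eta0 * k ** (-p) * g
    return x
```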

      Speaker: Zehui Zhou (Department of Mathematics, The Chinese University of Hong Kong)
    • 13:20 13:40
      On the geometric structures of Laplacian eigenfunctions and applications to inverse scattering problems 20m

      In this talk, we present some novel findings on the geometric structures of Laplacian eigenfunctions and their deep relationship to the quantitative behaviour of the eigenfunctions. The studies reveal that, in $\mathbb{R}^2$, the intersecting angle between two lines (nodal lines, singular lines and generalized singular lines) is closely related to the vanishing order of the eigenfunction at the intersecting point. In $\mathbb{R}^3$, the analytic behaviour of a Laplacian eigenfunction depends on the geometric quantities at the corresponding corner point (edge corner and vertex corner). The theoretical findings can be applied directly to some physical problems, including the inverse obstacle scattering problem. Taking the two-dimensional case as an example, it is shown in a certain polygonal setup that one can recover the support of the unknown scatterer as well as the surface impedance parameter from finitely many far-field patterns. Indeed, at most two far-field patterns are sufficient for some important applications.

      Speaker: Xinlin Cao
    • 13:40 14:00
      The ensemble Kalman filter applied to inverse problems: a neural network based one-shot formulation 20m

      The ensemble Kalman filter (EnKF) is a widely used methodology for data assimilation problems and has been recently generalized to inverse problems, known as ensemble Kalman inversion (EKI). We view the method as a derivative-free optimization method for a least-squares misfit functional and we present various variants of the scheme such as regularized EKI methods. This opens up the perspective to use the method in various areas of application such as imaging, groundwater flow problems, biological problems as well as in the context of the training of neural networks. In particular, we will present applications of the EKI to recent machine learning approaches, where we consider the incorporation of neural networks into inverse problems. We replace the complex forward model by a neural network acting as a physics-informed surrogate model, which will be trained in a one-shot fashion. This means we train the unknown parameter and the neural network at once, i.e. the neural network is only trained for the underlying unknown parameter. We connect the neural network based one-shot formulation to the Bayesian approach for inverse problems and apply the ensemble Kalman inversion in order to solve the optimization problem. Furthermore, we provide numerical experiments to highlight the promising direction of the neural network based one-shot formulation together with the application of the ensemble Kalman inversion.

      Speaker: Simon Weissmann (Heidelberg University)
    • 14:00 14:20
      Vector Spline Approximation on the $3d$-Ball for Ill-Posed Functional Inverse Problems in Medical Imaging 20m

      Human brain activity is based on electrochemical processes, which can only be measured invasively. For this reason, induced quantities such as magnetic flux density (via MEG) or electric potential differences (via EEG) are measured non-invasively in medicine and research. The reconstruction of the neuronal current from the measurements is a severely ill-posed problem, even though the visualization of cerebral activity is one of the main tools in brain science and diagnosis.

      Using an isotropic multiple-shell model for the geometry of the human head and a quasi-static approach for modelling the electromagnetic processes, a singular-value decomposition of the continuous forward operator between infinite-dimensional Hilbert spaces is derived. Due to a full characterization of the operator null space, it is revealed that only the harmonic and solenoidal component of the neuronal current affects the measurements. Uniqueness of the problem can be achieved by a minimum-norm condition. The instability of the inverse problem caused by exponentially decreasing singular values requires a stable and robust regularization method.

      The few available measurements per time step ($\approx 100$) are irregularly distributed with larger gaps in the facial area. On these grounds, a vector spline method for regularized functional inverse problems based on reproducing kernel Hilbert spaces is derived for dealing with these difficulties. Combined with several parameter choice methods, numerical results are shown for synthetic test cases with and without additional Gaussian white noise. The relative normalized root mean square error of the approximation as well as the relative residual do not exceed the noise level. Finally, results for real data are also demonstrated. They can be computed with only a short delay time and are reasonable with respect to physiological expectations.

      Speaker: Sarah Leweke (University of Siegen)
    • 14:20 14:40
      Algorithmic improvements via a dictionary learning add-on 20m

      In the last 10 years, the Inverse Problem Matching Pursuits (IPMPs) have been proposed as alternative solvers for linear inverse problems on the sphere and the ball, e.g. from the geosciences. They have been constantly further developed and tested on diverse applications, e.g. the downward continuation of the gravitational potential. This task remains a priority in geodesy due to significant contemporary challenges such as climate change.
      It is well known that, for linear inverse problems on the sphere, there exists a variety of global as well as local basis systems, e.g. spherical harmonics, Slepian functions, radial basis functions and wavelets. All of these systems have their specific pros and cons. Nonetheless, approximations are often represented in only one of the systems.
      As matching pursuits, in contrast, the IPMPs realize the following line of thought: an approximation is built in a so-called best basis, i.e. a mixture of diverse trial functions. Such a basis is chosen iteratively from an intentionally overcomplete dictionary which contains several types of the mentioned global and local functions. The next best basis element is chosen so as to reduce the Tikhonov functional.
      In practice, an a priori finite set of trial functions was usually used, which was highly inefficient. We developed a learning add-on which enables us to work with an infinite dictionary instead while simultaneously reducing the computational cost; moreover, it automates the dictionary choice. The add-on is implemented as constrained non-linear optimization problems with respect to the characteristic parameters of the different basis systems. In this talk, we explain the learning add-on and show recent numerical results for the downward continuation of the gravitational potential.
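      The greedy best-basis idea behind matching pursuits can be illustrated with plain matching pursuit over a finite dictionary matrix. This toy version selects atoms by correlation with the residual rather than by minimizing a Tikhonov functional over mixed global/local trial functions, so it is only a simplified stand-in for the IPMPs:

```python
import numpy as np

def matching_pursuit(D, y, n_iter=30):
    """Plain matching pursuit: at each step, pick the dictionary column most
    correlated with the current residual and update its coefficient."""
    D = D / np.linalg.norm(D, axis=0)       # work with normalized atoms
    coef = np.zeros(D.shape[1])
    r = y.astype(float).copy()
    for _ in range(n_iter):
        corr = D.T @ r
        j = int(np.argmax(np.abs(corr)))    # best matching atom
        coef[j] += corr[j]
        r = r - corr[j] * D[:, j]           # shrink the residual
    return coef, r
```

      The dictionary-learning add-on of the talk replaces the fixed matrix `D` by a continuously parametrized family of trial functions, optimizing over their characteristic parameters instead of scanning a finite list.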

      Speaker: Naomi Schneider (University of Siegen, Geomathematics Group)
    • 14:40 15:00
      Stability estimates for a special class of anisotropic conductivities with an ad-hoc functional 20m

      The Calderón problem, also known as the inverse conductivity problem, concerns the determination of the conductivity inside a domain from knowledge of boundary data. For the isotropic case, the stability issue is almost settled. The anisotropic case, however, is more complicated since Tartar's observation that any diffeomorphism of the domain which keeps the boundary points fixed leaves the Dirichlet-to-Neumann map unchanged while modifying the conductivity tensor. In this talk we will introduce a special class of anisotropic conductivities for which we can prove a stability estimate. The novelty of this result lies in the fact that the stability is proved using an ad-hoc functional. As a corollary, we derive a Lipschitz stability estimate in terms of the classical Dirichlet-to-Neumann map. This talk is based on joint work with Eva Sincich and Romina Gaburro.

      Speaker: Sonia Foschiatti (Università degli Studi di Trieste)
    • 15:00 15:20
      Direct regularized reconstruction for the three-dimensional Calderón problem 20m

      Electrical Impedance Tomography gives rise to the severely ill-posed Calderón problem of determining the electrical conductivity distribution in a bounded domain from knowledge of the associated Dirichlet-to-Neumann map for the governing equation. The electrical conductivity of an object is of interest in many fields, notably medical imaging, where applications may vary from stroke detection to early detection of breast cancer.
      The uniqueness and stability questions for the three-dimensional problem were largely answered in the affirmative in the 1980's using complex geometrical optics solutions, and this led further to a direct reconstruction method relying on a non-physical scattering transform.

      In this talk we look at a direct reconstruction algorithm for the three-dimensional Calderón problem in the scope of regularization. Indeed, a suitable and explicit truncation of the scattering transform gives a stable and direct reconstruction method that is robust to small perturbations of the data. Numerical tests on simulated noisy data illustrate the feasibility and regularizing effect of the method, and suggest that the numerical implementation performs better than predicted by theory.

      Speaker: Aksel Rasmussen (Technical University of Denmark)
    • 15:20 15:40
      Stochastic EM methods with Variance Reduction for Penalised PET Reconstructions 20m

      Expectation-maximization (EM) is a popular and well-established method for image reconstruction in positron emission tomography (PET) due to its simple form and desirable properties. However, it often suffers from slow convergence, and full-batch computations are often infeasible due to the large data sizes of modern scanners. Ordered subsets EM (OSEM) is an effective mitigation scheme that provides significant acceleration during initial iterations, but it has been observed to enter a limit cycle. A further difficulty for EM methods is the incorporation of a regularising penalty, which complicates the maximisation step.
      In this work, we investigate two classes of algorithms for accelerating OSEM based on variance reduction for penalised PET reconstructions. The first is a stochastic variance-reduced EM algorithm, termed SVREM, an extension of classical EM to the stochastic context, which combines classical OSEM with insights from variance reduction techniques for gradient descent and facilitates the computation of the M-step through parabolic surrogates for the penalty. The second views OSEM as preconditioned stochastic gradient ascent and applies variance reduction techniques, i.e., SAGA and SVRG, to estimate the update direction. We present several numerical experiments illustrating the efficiency and accuracy of the two methodologies. The numerical results show that these approaches significantly outperform existing OSEM-type methods for penalised PET reconstructions, and hold great potential.
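      For reference, the classical full-batch ML-EM update that OSEM accelerates can be sketched in a few lines. This is the standard unpenalised iteration only, not the variance-reduced or penalised methods of the talk:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Classical full-batch ML-EM for Poisson data y ~ Poisson(Ax):
        x <- x / (A^T 1) * A^T (y / (A x)).
    OSEM cycles this update over row subsets of (A, y)."""
    x = np.ones(A.shape[1])                   # strictly positive start
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by zero
        x = x / sens * (A.T @ ratio)          # multiplicative EM update
    return x
```

      The multiplicative form preserves non-negativity automatically, which is one of the desirable properties mentioned above; incorporating a penalty breaks the closed-form M-step, motivating the surrogate constructions of the talk.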

      Speaker: Zeljko Kereta (UCL)
    • 15:40 16:00
      Deterministic Dynamics of Ensemble Kalman Inversion 20m

      The Ensemble Kalman inversion (EKI) is a powerful tool for the solution of Bayesian inverse problems of type $y=Au^\dagger+\varepsilon$, with $u^\dagger$ being an unknown parameter and $y$ a given datum subject to measurement noise $\varepsilon$. It evolves an ensemble of particles, sampled from a prior measure, towards an approximate solution of the inverse problem. In this talk I will provide a complete description of the dynamics of EKI, utilizing a spectral decomposition of the particle covariance. In particular, I will demonstrate that, despite the common folklore that EKI creates samples from the posterior measure, this is only true for its mean field limit and will suggest modifications of EKI that overcome this drawback.
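      A minimal linear sketch of the discrete-time EKI iteration, with the Kalman-type update built from the empirical ensemble covariance, may look as follows; the parameter choices are illustrative assumptions, and the mean-field and spectral analysis of the talk are not reproduced here:

```python
import numpy as np

def eki(A, y, Gamma, prior_samples, n_steps=50):
    """Discrete-time ensemble Kalman inversion for y = A u + noise with
    noise covariance Gamma, evolving an ensemble sampled from the prior."""
    U = prior_samples.astype(float).copy()    # ensemble, shape (J, d)
    for _ in range(n_steps):
        m = U.mean(axis=0)
        P = (U - m).T @ (U - m) / U.shape[0]  # empirical parameter covariance
        K = P @ A.T @ np.linalg.inv(A @ P @ A.T + Gamma)  # Kalman gain
        U = U + (y - U @ A.T) @ K.T           # move particles toward the data
    return U
```

      In this linear setting the ensemble collapses onto a data-fitting point within the span of the initial particles, which already hints at why the particle distribution need not sample the posterior measure.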

      Speaker: Leon Bungert (University of Bonn)