In recent years, deep learning methods based on convolutional networks have become increasingly popular for inverse problems, mainly due to their practical performance. In many cases these methods outperform conventional regularization methods, such as total variation regularization, in particular when applied to more complicated data such as images containing texture. A major downside of machine learning methods, however, is their need for large sets of training data, which are often not available to the necessary extent. Moreover, the level of analytic understanding of machine learning methods, in particular regarding an analysis for inverse problems in function space, still falls far short of that of conventional variational methods.
In this talk, we propose a novel regularization method for solving inverse problems in imaging, inspired by the architecture of convolutional neural networks as used in many deep learning approaches. In this model, the unknown is generated from a variable in a latent space via multi-layer convolutions and non-linear penalties. In contrast to conventional deep learning methods, however, the convolution kernels are learned directly from the given (possibly noisy) data, so that no training is required.
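To make the idea concrete, the following is a minimal one-layer sketch, not the method proposed in the talk: a fixed random latent signal is passed through a convolution whose kernel is fitted by gradient descent directly to a single noisy observation, with no training set. The toy 1D signal, the circular-convolution generator, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy 1D ground truth and a single noisy observation (illustrative data).
t = np.linspace(0.0, 1.0, n)
x_true = np.exp(-((t - 0.5) ** 2) / 0.01)
y = x_true + 0.05 * rng.standard_normal(n)

# Fixed random latent variable; only the convolution kernel is learned,
# directly from the noisy data y -- no separate training data involved.
z = rng.standard_normal(n)

def synth(k):
    """Generate a signal by circular convolution of the latent z with kernel k."""
    return np.real(np.fft.ifft(np.fft.fft(z) * np.fft.fft(k)))

# Gradient descent on the data-fit term 0.5 * ||z * k - y||^2 over the kernel k,
# with step size 1/L, where L = max |fft(z)|^2 is the Lipschitz constant.
k = np.zeros(n)
lr = 1.0 / np.max(np.abs(np.fft.fft(z)) ** 2)
for _ in range(500):
    r = synth(k) - y
    # Adjoint of circular convolution with z: multiply by conj(fft(z)) in Fourier.
    grad = np.real(np.fft.ifft(np.conj(np.fft.fft(z)) * np.fft.fft(r)))
    k -= lr * grad
```

In this unregularized sketch the generator can eventually also fit the noise; in deep-image-prior-style approaches, regularization comes from the network architecture together with early stopping or explicit penalties, which is where the multi-layer structure and non-linear penalties of the proposed model enter.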
In the talk, we will motivate the model and provide theoretical results on the existence and stability of solutions, as well as convergence for vanishing noise in function space. Afterwards, in a discretized setting, we will present practical results of our method in comparison to a state-of-the-art deep learning method.
V. Lempitsky, A. Vedaldi, and D. Ulyanov, Deep image prior, in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).