Description
In recent years, new regularisation methods based on neural networks have shown promising performance for the solution of ill-posed problems, e.g., in imaging science. Due to the non-linearity of the networks, these methods often lack a sound theoretical foundation. In this talk we rigorously discuss convergence for an untrained convolutional network. Untrained networks are particularly attractive for applications, since they do not require any training data: their regularising property is based solely on the architecture of the network. For exactly this reason, appropriate early stopping is essential for the success of the method, since the iterates would otherwise eventually fit the noise in the data. We show that the discrepancy principle is an adequate early-stopping rule in this setting, as it yields minimax-optimal convergence rates.
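To make the stopping rule concrete, here is a minimal sketch, assuming a simple denoising problem (identity forward operator) and an illustrative PyTorch architecture: an untrained convolutional network in the spirit of the deep image prior is fitted to the noisy data, and the iteration is stopped by the discrepancy principle, i.e. at the first iterate whose residual falls below τδ for some constant τ > 1 and known noise level δ. The architecture, data, and parameters below are placeholders, not the setup of the talk.

```python
# Sketch: untrained CNN with discrepancy-principle early stopping.
# All names, the architecture, and the synthetic data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic smooth ground truth on an n x n grid and a noisy observation
# with known noise level delta = ||y_delta - x_true||.
n = 32
xs = torch.linspace(0.0, 1.0, n)
x_true = (torch.sin(4 * torch.pi * xs)[:, None]
          * torch.sin(4 * torch.pi * xs)[None, :])[None, None]
delta = 0.5
noise = torch.randn_like(x_true)
y_delta = x_true + delta * noise / noise.norm()

# An untrained convolutional network: the input z stays fixed and random,
# only the weights are optimised, so the regularisation comes entirely
# from the architecture (plus early stopping).
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, n, n)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

tau = 1.1  # discrepancy-principle constant, tau > 1
for k in range(10000):
    opt.zero_grad()
    x_k = net(z)
    residual = (x_k - y_delta).norm()
    # Discrepancy principle: stop as soon as the data misfit is of the
    # order of the noise level, ||x_k - y_delta|| <= tau * delta.
    if residual.item() <= tau * delta:
        print(f"stopped at iteration {k}, residual {residual.item():.3f}")
        break
    (residual ** 2).backward()
    opt.step()

print(f"error vs. ground truth: {(x_k.detach() - x_true).norm().item():.3f}")
```

Without the stopping test, the same loop would drive the residual towards zero and reproduce the noise; stopping near the noise level is what turns the fit into a regularised reconstruction.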