Expectation-maximisation (EM) is a popular and well-established method for image reconstruction in positron emission tomography (PET) due to its simple form and desirable properties. However, it often suffers from slow convergence, and full-batch computations are frequently infeasible due to the large data sizes of modern scanners. Ordered subsets EM (OSEM) is an effective mitigation scheme that provides significant acceleration during initial iterations, but it has been observed to enter a limit cycle rather than converge. A further challenge for EM methods is the incorporation of a regularising penalty, which complicates the maximisation step.
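To make the EM-to-OSEM relationship concrete, here is a minimal sketch of the classical MLEM update and an OSEM epoch that applies it subset by subset. The toy system matrix, data sizes, and interleaved subset choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy Poisson inverse problem (sizes and values are illustrative assumptions).
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(60, 20))   # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=20)
y = rng.poisson(A @ x_true).astype(float)  # simulated counts

def mlem_step(x, A, y, eps=1e-12):
    """One full-batch EM (MLEM) update: x <- x / (A^T 1) * A^T (y / Ax)."""
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    ratio = y / np.maximum(A @ x, eps)
    return x * (A.T @ ratio) / np.maximum(sens, eps)

def osem_epoch(x, A, y, n_subsets=3, eps=1e-12):
    """One OSEM epoch: the EM update applied to each data subset in turn."""
    for s in range(n_subsets):
        idx = np.arange(s, len(y), n_subsets)  # interleaved subsets (assumed)
        As, ys = A[idx], y[idx]
        sens = As.sum(axis=0)
        ratio = ys / np.maximum(As @ x, eps)
        x = x * (As.T @ ratio) / np.maximum(sens, eps)
    return x

x = np.ones(20)
for _ in range(20):
    x = osem_epoch(x, A, y)
```

Each OSEM sub-iteration costs roughly 1/n_subsets of a full EM step, which is the source of the early-iteration speed-up; because the subset updates do not share a common fixed point, the iterates eventually cycle rather than converge.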
In this work, we investigate two classes of algorithms that accelerate OSEM via variance reduction for penalised PET reconstruction. The first is a stochastic variance-reduced EM algorithm, termed SVREM, which extends classical EM to the stochastic context: it combines OSEM with insights from variance reduction techniques for gradient descent, and it facilitates the M-step through parabolic surrogates of the penalty. The second views OSEM as preconditioned stochastic gradient ascent and applies variance reduction techniques, namely SAGA and SVRG, to estimate the update direction. We present several numerical experiments illustrating the efficiency and accuracy of both methodologies. The results show that these approaches significantly outperform existing OSEM-type methods for penalised PET reconstruction and hold great potential for practical use.
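The second viewpoint can be sketched as follows: the OSEM update is a gradient ascent step on the Poisson log-likelihood with the EM preconditioner D(x) = x / (A^T 1), and an SVRG-style estimator replaces the raw subset gradient. This is a hedged illustration under assumed toy data, subset choice, and unit step size, not the paper's exact algorithm.

```python
import numpy as np

# Toy Poisson problem (illustrative assumptions, as in standard SVRG demos).
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(60, 20))
x_true = rng.uniform(0.5, 2.0, size=20)
y = rng.poisson(A @ x_true).astype(float)
n_subsets = 3
subsets = [np.arange(s, len(y), n_subsets) for s in range(n_subsets)]

def subset_grad(x, idx, eps=1e-12):
    """Subset gradient of the Poisson log-likelihood, scaled by the number
    of subsets so that it is an unbiased estimate of the full gradient."""
    As, ys = A[idx], y[idx]
    return n_subsets * (As.T @ (ys / np.maximum(As @ x, eps)) - As.sum(axis=0))

def full_grad(x, eps=1e-12):
    return A.T @ (y / np.maximum(A @ x, eps)) - A.sum(axis=0)

x = np.ones(20)
for epoch in range(10):
    x_anchor = x.copy()            # SVRG reference (anchor) point
    g_full = full_grad(x_anchor)   # full gradient stored at the anchor
    for s in rng.permutation(n_subsets):
        idx = subsets[s]
        # SVRG estimate: subset gradient, corrected by its value at the
        # anchor plus the stored full gradient -> reduced variance.
        g = subset_grad(x, idx) - subset_grad(x_anchor, idx) + g_full
        D = x / np.maximum(A.sum(axis=0), 1e-12)  # EM preconditioner
        x = np.maximum(x + D * g, 1e-12)          # keep the iterate positive
```

With the full gradient in place of g, the preconditioned step x + D(x) * grad reduces exactly to the MLEM multiplicative update, which is what justifies reading OSEM as preconditioned stochastic gradient ascent.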