Input and reconstructions of a trained CARE Network on previously unseen live imaging data of Schmidtea mediterranea.
We consistently observed that the reconstructed image data was of very high quality, even if the signal-to-noise ratio (SNR) of the input was very low, e.g. when acquired with a 60-fold reduced light dosage.
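To see why a 60-fold dose reduction makes the input so noisy, note that fluorescence imaging is typically shot-noise limited, so the per-pixel SNR scales with the square root of the collected photon count. The following sketch (all photon numbers are hypothetical, chosen only for illustration) estimates this empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean photon count per pixel at the full light dosage.
full_dose_photons = 600.0
reduction = 60  # 60-fold reduced light dosage, as in the example above

def empirical_snr(mean_photons, n=100_000):
    """Empirical SNR of a shot-noise-limited pixel: mean / std of Poisson samples."""
    samples = rng.poisson(mean_photons, size=n)
    return samples.mean() / samples.std()

snr_full = empirical_snr(full_dose_photons)
snr_low = empirical_snr(full_dose_photons / reduction)

# For Poisson statistics, SNR ~ sqrt(mean photons), so a 60-fold dose
# reduction lowers the SNR by roughly sqrt(60), i.e. about 7.7x.
print(snr_full / snr_low)
```

This is the regime in which the low-dose inputs live: not merely dimmer, but with a fundamentally degraded SNR that the CARE network must compensate for.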
Live-cell imaging of developing Tribolium castaneum (red flour beetle) embryos. We employed the same training strategy as for the previous example. Note that the reconstruction is performed in full 3D, although only a single 2D image slice is shown.
We observe high-quality reconstructions even on extremely noisy input data acquired with up to 70-fold reduced light dosage compared to typical imaging protocols.
In contrast to the previous examples, isotropic volumes that could serve as input/ground-truth training pairs are not readily obtainable. Therefore, we propose to compute realistic observations from real ground-truth image data in silico.
We applied this approach to 10-fold axially undersampled acquisitions of the developing retina of Danio rerio (zebrafish) embryos and observed that the trained CARE Network reconstructed isotropic image data.
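The in-silico generation of training pairs described above can be sketched as follows. This is a minimal illustration, not the paper's exact degradation model: we stand in for the axial point-spread function with a Gaussian blur along z and mimic the 10-fold undersampling by keeping every tenth slice; the array sizes and blur width are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Stand-in for an isotropic ground-truth volume (z, y, x); in practice this
# role is played by real, laterally well-sampled image data.
gt = rng.random((100, 64, 64)).astype(np.float32)

def simulate_anisotropic(volume, subsample=10, axial_sigma=2.0):
    """Create a semi-synthetic observation: blur along z to mimic the axial
    PSF, then keep only every `subsample`-th slice (10-fold undersampling)."""
    blurred = gaussian_filter(volume, sigma=(axial_sigma, 0, 0))
    return blurred[::subsample]

x = simulate_anisotropic(gt)  # network input, 10-fold undersampled along z
y = gt                        # training target, the original isotropic volume
print(x.shape, y.shape)
```

Training on such (x, y) pairs lets the network learn to restore isotropic resolution from anisotropic acquisitions for which no real ground truth exists.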
An extended CARE Network offers the possibility to reconstruct low-SNR images and simultaneously project only the relevant intensities embedded in the imaged volumes. Our results are considerably improved compared to the output of other state-of-the-art projection algorithms (here we compare to results obtained with PreMosa).
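For intuition on why a content-aware projection helps, consider the simplest classical baseline, a maximum-intensity projection (this toy example is our own illustration, not one of the compared algorithms): it keeps the brightest voxel along z regardless of whether it belongs to the structure of interest, so out-of-surface background and noise are projected into the result.

```python
import numpy as np

rng = np.random.default_rng(2)
stack = rng.random((32, 128, 128)).astype(np.float32)  # toy (z, y, x) volume

# Baseline: a maximum-intensity projection picks the brightest voxel along z
# at each (y, x), with no notion of which intensities are "relevant".
mip = stack.max(axis=0)
print(mip.shape)
```

A content-aware projection network can instead learn which intensities belong to the imaged surface and suppress the rest, which is where the improvement over intensity-based schemes comes from.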
Finally, we show an example where neither the real biological ground truth nor the real image degradation process is experimentally accessible, but a synthetic generative model of both the image content and the image degradation process is available.
The predictions of a CARE Network trained on fully synthetic training data show that high-quality reconstructions can be obtained even from wide-field acquisitions.
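Such fully synthetic training data can be sketched as below. This is a generic toy model, not the paper's generator: point-like structures stand in for the biological content, a Gaussian blur stands in for the microscope's point-spread function, and Poisson sampling stands in for shot noise; all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

def synthetic_pair(shape=(128, 128), n_spots=30, psf_sigma=3.0, photons=50.0):
    """Generate one training pair from a fully synthetic model:
    ground truth = random point-like structures; observation = ground truth
    blurred by a Gaussian PSF stand-in, then corrupted by Poisson shot noise."""
    gt = np.zeros(shape, dtype=np.float32)
    ys = rng.integers(0, shape[0], n_spots)
    xs = rng.integers(0, shape[1], n_spots)
    gt[ys, xs] = 1.0
    blurred = gaussian_filter(gt, psf_sigma)
    observed = rng.poisson(blurred * photons).astype(np.float32) / photons
    return observed, gt

x, y = synthetic_pair()  # x: degraded observation, y: synthetic ground truth
print(x.shape, y.shape)
```

Because both content and degradation are sampled from the generative model, arbitrarily many such pairs can be produced, and a network trained on them can then be applied to real acquisitions.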