Introduction
Quantitative imaging is the foundation of fluorescence microscopy and has enabled major discoveries in the biomedical sciences that have improved human longevity and quality of life. However, the imaging properties and measurement imperfections of fluorescence microscopy distort the image and reduce the maximum resolution the imaging system can achieve. Researchers are therefore constrained by spatial and temporal resolution, light exposure, and signal-to-noise ratio (SNR), and must routinely trade off these factors against one another.
Deep learning is a branch of Artificial Intelligence (AI) that is well suited to image-based problems and has been applied to image restoration tasks such as denoising and resolution enhancement, as well as to image segmentation. These AI applications hold tremendous potential for microscopy experiments and could pave the way for a quantum leap forward in microscopy-based discoveries that decode biological functions and the mechanisms of disease.
In this white paper, we discuss practical limitations in fluorescence microscopy, deep learning-enabled image enhancement, and example datasets demonstrating deep learning deconvolution for confocal microscopy.
Challenges in Imaging Biological Samples and Microscope Point Spread Function (PSF)
The intrinsic thickness of cells and tissues poses challenges for imaging biological samples. While objective lenses with high numerical aperture have high resolving power, they have a relatively narrow depth of field, so blurred, out-of-focus light interferes with the image in the focal plane. This blurring decreases image contrast and resolution and becomes increasingly problematic as sample thickness grows.
The point spread function (PSF) describes the blurring of a point source caused by diffraction at the objective lens [1]. The PSF can be measured by imaging a sub-resolution fluorescent bead, ideally smaller than the resolution of the optical setup that will be used for the samples, or it can be calculated from theoretical formulae. Measuring the PSF is a laborious process and, although the measured PSF closely matches the experimental setup, the bead images obtained have very poor SNR. PSF measurements can also vary substantially [2] because sample quality degrades over time through photobleaching and because of problems in the optical system such as temperature drift, spherical aberrations, and defective relay lenses. Alternatively, imaging equations are used to calculate a theoretical PSF in an attempt to reverse the effects of convolution, such as blur and the loss of contrast of small features. Neither approach is ideal: they require time-consuming manual measurement and/or expert knowledge of the many hardware components and PSF estimates that affect the model.
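In the standard image-formation model, the recorded image is the convolution of the true fluorophore distribution with the PSF, plus noise, and deconvolution attempts to invert this relationship. Written in LaTeX notation (the symbols here are generic and chosen only for illustration):

    \[
        g(x, y, z) = (f \ast h)(x, y, z) + n(x, y, z)
    \]

where \(g\) is the recorded image, \(f\) the underlying object, \(h\) the PSF, and \(n\) the measurement noise.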
Whereas classical PSF deconvolution is an iterative process for every image, deep learning (DL) deconvolution is iterative only during training of the model; once trained, the model is applied directly to new images. Classical deconvolution algorithms help to remove out-of-focus signal and can be divided into two classes: deblurring and image restoration [3]. Deblurring algorithms are applied plane by plane to each 2D plane of a 3D image stack, and an estimate of the image blur is subtracted from each plane, improving image contrast. The inherent disadvantage of this approach is that, in removing blurred signal, deblurring algorithms amplify noise and reduce signal levels, making it difficult to recover the real object. They can also introduce structural artifacts by altering relative pixel intensities.
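As a concrete illustration of this class, below is a minimal Python sketch of a nearest-neighbor-style deblurring step. The Gaussian blur used as a stand-in for the defocus kernel, the weighting constant c, and the function name are assumptions made for illustration, not the exact algorithm used by any particular software.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nearest_neighbor_deblur(stack, sigma=2.0, c=0.45):
        """Plane-by-plane deblurring of a 3D stack ordered (z, y, x).

        A blurred estimate of each plane's two neighbors is subtracted from
        it, as a rough stand-in for the out-of-focus light they contribute.
        sigma and c are illustrative tuning parameters, not recommended values.
        """
        out = np.empty(stack.shape, dtype=np.float32)
        for z in range(stack.shape[0]):
            above = stack[max(z - 1, 0)].astype(np.float32)                  # edge planes reuse themselves
            below = stack[min(z + 1, stack.shape[0] - 1)].astype(np.float32)
            neighbor_blur = gaussian_filter(0.5 * (above + below), sigma)
            out[z] = np.clip(stack[z].astype(np.float32) - c * neighbor_blur, 0.0, None)
        return out

The subtraction in the final step is exactly where the drawbacks described above enter: any noise present in the neighboring planes is injected into the estimate, and real in-focus signal is reduced along with the blur.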
Image restoration algorithms, by contrast, operate simultaneously on every pixel of a 3D image stack to reverse the blur introduced by convolution with the PSF, and are typically computed using Fourier transforms. These deconvolution algorithms are iterative and computationally intensive, and therefore time consuming. Their performance depends on accurate modeling of the PSF, which is challenging because not all aberrations present in the microscope optics can be determined, making their implementation for deconvolution of microscopy images complex.
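To make the iterative nature concrete, the sketch below implements the classic Richardson-Lucy restoration scheme, one widely used example of this class, assuming a known 3D PSF array. It is illustrative only and omits the regularization and edge handling a production implementation would need.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=20, eps=1e-12):
        """Richardson-Lucy deconvolution of a 3D stack with a known PSF.

        image : observed (blurred, noisy) volume
        psf   : point spread function array, same dimensionality as image
        Each iteration applies two FFT-based convolutions over the full volume.
        """
        image = image.astype(np.float64)
        psf = psf.astype(np.float64)
        psf /= psf.sum()                      # PSF must be normalized
        psf_mirror = psf[::-1, ::-1, ::-1]    # flipped PSF for the correction step
        estimate = np.full_like(image, image.mean())
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / (blurred + eps)   # eps avoids division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

Because every iteration convolves the full volume twice, run time grows quickly with stack size and iteration count; in practice a maintained implementation such as skimage.restoration.richardson_lucy is preferable to rolling your own.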
Newer algorithms for image deconvolution have been developed to remove blur from digital images, but their proper use requires a good working knowledge of the optical properties of the imaging hardware, the acquisition process, and image processing. Improper handling of these factors produces a broad range of content-dependent artifacts that can be very difficult to remove.
Table 1: Comparison between classical and AI deconvolution
Aivia 8.5 Deep Learning Deconvolution Model
In Aivia 8.5, we have introduced deep learning deconvolution for a wide range of light microscopy modalities. The Aivia 8.5 deep learning deconvolution model is based on Residual Channel Attention Networks (RCAN) [4], in which skip connections bypass low-frequency information so that the main network can focus on learning high-frequency information. This structure makes it possible to train very deep convolutional neural networks (over 400 layers) for image super-resolution and achieves better results than previous state-of-the-art methods.
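For readers who want a feel for the underlying building block, the PyTorch sketch below shows a residual channel attention block in the spirit of [4]. The channel count, reduction factor, and class names are illustrative choices and do not describe the network shipped in Aivia.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Pool global channel statistics, then rescale each feature channel."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)            # global average pooling
            self.excite = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.excite(self.pool(x))           # per-channel rescaling

    class ResidualChannelAttentionBlock(nn.Module):
        """Conv -> ReLU -> Conv -> channel attention, wrapped in a skip connection."""
        def __init__(self, channels=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                ChannelAttention(channels),
            )

        def forward(self, x):
            # The skip connection carries low-frequency content past the block,
            # so the body can focus its capacity on high-frequency detail.
            return x + self.body(x)

The skip connection in the final line is what lets low-frequency content pass through unchanged, which is the property described above; stacking many such blocks, plus long skip connections across groups of blocks, is what allows networks of this kind to grow very deep.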
As outlined in Table 1, our AI deconvolution model offers multiple advantages over classical deconvolution and does not require any coding knowledge. It demands significantly less processing power and less parameter tuning from the user, and generates data whose quality is on par with results from classical PSF deconvolution. This model can enable previously impossible experiments, such as achieving high SNR and spatio-temporal resolution while minimizing photobleaching and phototoxicity. The solution therefore effectively increases the photon budget and can reduce total imaging time. The model can be applied to 2D and 3D datasets to drastically improve image quality compared to the originally acquired data.
References
[1] Markham, J., & Conchello, J.-A. (1999). Parametric blind deconvolution: a robust method for the simultaneous estimation of image and blur. Journal of the Optical Society of America A, 16(10), 2377.
[2] Cole, R. W., Jinadasa, T., & Brown, C. M. (2011). Measuring and interpreting point spread functions to determine confocal microscope resolution and ensure quality control. Nature Protocols, 6(12), 1929–1941.
[3] Wallace, W., Schaefer, L. H., & Swedlow, J. R. (2001). A workingperson's guide to deconvolution in light microscopy. BioTechniques, 31(5), 1076–1097.
[4] Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., & Fu, Y. (2018). Image super-resolution using very deep residual channel attention networks. arXiv:1807.02758 [cs].