Pixel Super-Resolved Fluorescence Lifetime Imaging Using Deep Learning
Abstract
Fluorescence lifetime imaging microscopy (FLIM) is a powerful quantitative technique that provides metabolic and molecular contrast, offering strong translational potential for label-free, real-time diagnostics. However, its clinical adoption remains limited by long pixel dwell times and low signal-to-noise ratio (SNR), which impose a stricter resolution-speed trade-off than conventional optical imaging approaches. Here, we introduce FLIM_PSR_k, a deep learning-based multi-channel pixel super-resolution (PSR) framework that reconstructs high-resolution FLIM images from data acquired with up to a 5-fold larger pixel size. The model is trained using the conditional generative adversarial network (cGAN) framework, which, compared to diffusion model-based alternatives, delivers a more robust PSR reconstruction with substantially shorter inference times, a crucial advantage for practical deployment. FLIM_PSR_k not only enables faster image acquisition but can also alleviate SNR limitations in autofluorescence-based FLIM. Blind testing on held-out patient-derived tumor tissue samples demonstrates that FLIM_PSR_k reliably achieves a super-resolution factor of k = 5, resulting in a 25-fold increase in the space-bandwidth product of the output images and revealing fine architectural features lost in lower-resolution inputs, with statistically significant improvements across various image quality metrics. By increasing FLIM's effective spatial resolution, FLIM_PSR_k advances lifetime imaging toward faster, higher-resolution, and hardware-flexible implementations compatible with low-numerical-aperture and miniaturized platforms, better positioning FLIM for translational applications.
Summary
This paper introduces FLIM_PSR_k, a deep learning framework for pixel super-resolution (PSR) in fluorescence lifetime imaging microscopy (FLIM). FLIM, a technique providing metabolic and molecular contrast, is limited by slow acquisition speeds due to long pixel dwell times and low signal-to-noise ratio (SNR). FLIM_PSR_k addresses this by reconstructing high-resolution FLIM images from low-resolution data acquired with larger pixel sizes, effectively increasing the space-bandwidth product. The framework is trained as a conditional generative adversarial network (cGAN), which the authors argue is more robust and faster than diffusion model-based alternatives, making it suitable for practical deployment. The authors trained and tested FLIM_PSR_k on patient-derived head and neck tumor tissue samples, demonstrating its ability to achieve a super-resolution factor of k = 5, leading to a 25-fold increase in the space-bandwidth product. This allowed the visualization of fine architectural features that were lost in the lower-resolution inputs. Quantitative metrics (LPIPS, SSIM, PSNR, MSE) showed statistically significant improvements. The authors also compared cGAN-based PSR with diffusion models, finding that cGANs offer a better trade-off between speed, structural fidelity, and artifact generation, particularly for biomedical applications where diagnostic accuracy is paramount. The proposed framework has the potential to accelerate FLIM, making it more compatible with low-numerical-aperture and miniaturized platforms, thus promoting its translational applications.
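To make the training setup concrete, the sketch below illustrates one way a conditional GAN can be trained for k-fold multi-channel pixel super-resolution, in the spirit of the framework described above. It is a minimal PyTorch sketch under assumed settings: the network depth, channel count, loss weighting, and optimizer hyperparameters are illustrative and do not reproduce the authors' exact FLIM_PSR_k implementation.

```python
# Minimal cGAN-style pixel super-resolution sketch for multi-channel FLIM data (PyTorch).
# Architecture, channel count, and loss weighting are illustrative assumptions,
# not the authors' exact FLIM_PSR_k implementation.
import torch
import torch.nn as nn

K = 5            # assumed linear super-resolution factor
CHANNELS = 2     # assumed number of jointly trained FLIM channels

class Generator(nn.Module):
    """Maps a low-resolution multi-channel FLIM patch to a K-fold upsampled patch."""
    def __init__(self, channels=CHANNELS, features=64, k=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=k, mode="bilinear", align_corners=False),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Conditional critic: sees the upsampled LR input together with a candidate HR image."""
    def __init__(self, channels=CHANNELS, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, features, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(features, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, lr_up, hr):
        return self.net(torch.cat([lr_up, hr], dim=1))

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(lr_patch, hr_patch, pixel_weight=100.0):
    """One cGAN update: adversarial loss plus a pixel-wise L1 term (weight is an assumption)."""
    lr_up = nn.functional.interpolate(lr_patch, scale_factor=K, mode="bilinear", align_corners=False)

    # Discriminator update on real and generated HR patches
    with torch.no_grad():
        fake = gen(lr_patch)
    d_real = disc(lr_up, hr_patch)
    d_fake = disc(lr_up, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying close to the ground truth
    fake = gen(lr_patch)
    d_fake = disc(lr_up, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + pixel_weight * l1(fake, hr_patch)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example with random tensors standing in for registered LR/HR FLIM patch pairs
lr = torch.rand(4, CHANNELS, 32, 32)
hr = torch.rand(4, CHANNELS, 32 * K, 32 * K)
print(train_step(lr, hr))
```

In this setup the discriminator is conditioned on the bilinearly upsampled low-resolution input, and the generator balances the adversarial term against a pixel-wise L1 term, a common cGAN recipe for image-to-image reconstruction.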
Key Insights
- cGAN-based PSR for FLIM: The paper implements a cGAN-based pixel super-resolution framework (FLIM_PSR_k) to enhance the spatial resolution of FLIM images.
- Super-resolution Factor of 5: The framework reliably achieves a super-resolution factor of k = 5; because the space-bandwidth product scales with the square of the linear factor, this corresponds to a 25-fold increase in the space-bandwidth product of the output images.
- Faster Acquisition: FLIM_PSR_k enables faster image acquisition by reconstructing high-resolution images from data acquired with larger pixel sizes, reducing the need for long pixel dwell times. This could reduce FLIM scanning times by more than an order of magnitude.
- cGAN vs. Diffusion Models: The paper presents a comparative analysis of cGANs and diffusion models for FLIM PSR, demonstrating that cGANs offer a more robust and computationally efficient solution, especially for biomedical imaging where structural fidelity is critical. cGAN inference required ~0.1 s per 1.92 × 1.92 mm patch, while diffusion-based inference required ~78 s per patch.
- Performance Degradation Beyond k = 5: The performance of FLIM_PSR_k starts to degrade beyond a super-resolution factor of k = 5, with the emergence of spatial artifacts and reduced lifetime precision.
- Multi-channel Training: Joint training across FLIM channels provides more stable and consistent image reconstructions than per-channel training.
- Validation on Tumor Tissue: The framework was validated using whole-slide FLIM data from patient-derived head and neck tumor specimens, with strict patient-wise separation between the training and blind testing sets, enhancing the reliability of the findings (a sketch of this kind of paired-patch evaluation follows below).
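The following minimal sketch shows how per-patch SSIM, PSNR, and MSE can be computed on paired blind-test reconstructions with scikit-image. The data layout and patient-wise split logic are assumptions for illustration; LPIPS would additionally require a learned perceptual network (e.g., the lpips package).

```python
# Minimal paired-image evaluation sketch (SSIM / PSNR / MSE) for blind-test FLIM patches.
# Patch loading and the patient-wise split are assumed to be handled upstream.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio, mean_squared_error

def evaluate_pairs(recon_patches, gt_patches):
    """Return mean and std of per-patch SSIM, PSNR, and MSE for reconstructions vs. ground truth."""
    scores = {"ssim": [], "psnr": [], "mse": []}
    for recon, gt in zip(recon_patches, gt_patches):
        drange = gt.max() - gt.min()  # dynamic range for SSIM/PSNR on floating-point lifetime maps
        scores["ssim"].append(structural_similarity(gt, recon, data_range=drange))
        scores["psnr"].append(peak_signal_noise_ratio(gt, recon, data_range=drange))
        scores["mse"].append(mean_squared_error(gt, recon))
    return {k: (float(np.mean(v)), float(np.std(v))) for k, v in scores.items()}

# Example with synthetic stand-ins for held-out (blind-test) patch pairs
rng = np.random.default_rng(0)
gt = [rng.random((160, 160)) for _ in range(8)]
recon = [g + 0.05 * rng.standard_normal(g.shape) for g in gt]
print(evaluate_pairs(recon, gt))
```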
Practical Implications
- Accelerated FLIM: The framework can be used to accelerate FLIM, making it more practical for clinical and high-throughput imaging applications.
- Improved Diagnostics: The enhanced spatial resolution can improve the accuracy of FLIM-based diagnostics, particularly in applications such as head and neck pathology, where fine stromal organization and metabolic gradients are clinically informative.
- Compatibility with Miniaturized Systems: FLIM_PSR_k can be integrated with low-numerical-aperture and miniaturized platforms, such as fiber-optic and catheter-based systems, expanding the applicability of FLIM to in vivo imaging.
- Software Enhancement: Practitioners can implement this deep learning framework as a software enhancement to existing FLIM systems, without requiring hardware modifications (a minimal deployment sketch follows after this list).
- Future Research: Future research can focus on incorporating physics-informed regularization constraints and/or uncertainty estimation approaches to further improve FLIM PSR image reconstruction performance. Increasing dataset diversity may also enhance the generalizability of the framework.
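As a purely software-level post-processing step, a trained PSR generator can be applied tile by tile to an existing low-resolution FLIM acquisition. The sketch below uses plain bilinear upsampling as a stand-in for the trained multi-channel generator; the tiling scheme, tile size, and channel count are illustrative assumptions.

```python
# Deployment sketch: super-resolve an existing low-resolution FLIM acquisition tile by tile.
# The "model" here is a bilinear-upsampling placeholder; in practice it would be replaced
# by the trained multi-channel generator, with no changes to the acquisition hardware.
import torch
import torch.nn.functional as F

K = 5  # assumed super-resolution factor

def upsample_tiled(lr_image, model, tile=64):
    """Super-resolve a (C, H, W) low-resolution FLIM image tile by tile."""
    c, h, w = lr_image.shape
    out = torch.zeros(c, h * K, w * K)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = lr_image[:, y:y + tile, x:x + tile].unsqueeze(0)
                sr = model(patch).squeeze(0)
                out[:, y * K:y * K + sr.shape[1], x * K:x * K + sr.shape[2]] = sr
    return out

def placeholder_model(patch):
    # Stand-in for a trained generator loaded from a checkpoint
    return F.interpolate(patch, scale_factor=K, mode="bilinear", align_corners=False)

lr_flim = torch.rand(2, 128, 128)          # two FLIM channels at coarse pixel size
hr_flim = upsample_tiled(lr_flim, placeholder_model)
print(lr_flim.shape, "->", hr_flim.shape)  # (2, 128, 128) -> (2, 640, 640)
```

In practice, the placeholder model would be replaced by the trained generator restored from a checkpoint, keeping the FLIM hardware and scan protocol unchanged.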