
The promise of artificial intelligence and deep learning in PET and SPECT imaging

Hossein Arabi, Azadeh AkhavanAllaf, Amirhossein Sanaat, Isaac Shiri, Habib Zaidi (corresponding author)

Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland (all authors)

Habib Zaidi is also affiliated with: Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 5000 Odense, Denmark
Open Access. Published: March 22, 2021. DOI: https://doi.org/10.1016/j.ejmp.2021.03.008

      Highlights

      • Artificial intelligence and deep learning-based solutions emerged as promising approaches pertinent to PET and SPECT imaging.
      • Numerous applications in the field are taking advantage of these developments.
      • Successful commercial implementation of deep learning-based solutions is projected.
      • Clinical validation and adoption of these tools face many challenges.

      Abstract

      This review sets out to discuss the foremost applications of artificial intelligence (AI), particularly deep learning (DL) algorithms, in single-photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging. To this end, the underlying limitations/challenges of these imaging modalities are briefly discussed followed by a description of AI-based solutions proposed to address these challenges. This review will focus on mainstream generic fields, including instrumentation, image acquisition/formation, image reconstruction and low-dose/fast scanning, quantitative imaging, image interpretation (computer-aided detection/diagnosis/prognosis), as well as internal radiation dosimetry. A brief description of deep learning algorithms and the fundamental architectures used for these applications is also provided. Finally, the challenges, opportunities, and barriers to full-scale validation and adoption of AI-based solutions for improvement of image quality and quantitative accuracy of PET and SPECT images in the clinic are discussed.

      Introduction

Artificial intelligence (AI) approaches, particularly deep learning (DL) techniques, have received tremendous attention during the last decade owing to their remarkable success in offering novel solutions to complex problems. AI/DL-based solutions have created opportunities in clinical and research settings to automate a number of tasks deemed to depend on human cognition, and hence to require human intervention in the decision-making process [Nensa, Demircioglu, Rischpler, Artificial intelligence in nuclear medicine]. State-of-the-art AI/DL algorithms have exhibited exceptional learning capability from high-dimensional and/or highly complex data, accomplishing daunting tasks in image and data analysis/processing in general and multimodality medical imaging in particular.
In the context of medical imaging, challenging tasks, such as image segmentation/classification, data correction (e.g. noise or artifact reduction), image interpretation (prognosis, diagnosis, and monitoring of response to treatment), cross-modality image translation or synthesis, and the replacement of computationally demanding algorithms (such as Monte Carlo calculations), have been broadly revisited and have evolved since the adoption of deep learning approaches [Arabi and Zaidi, Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy; Gong et al., Machine learning in PET: from photon detection to quantitative image reconstruction]. AI-based solutions have been proposed to address the fundamental limitations/challenges faced by image acquisition and analysis procedures on modern molecular imaging technologies. Considering the superior performance of deep learning approaches compared to conventional techniques, a paradigm shift is expected to occur, provided that task-specific pragmatic developments of these algorithms continue to evolve in the right direction.
Single-photon emission computed tomography (SPECT) and positron emission tomography (PET) provide in vivo radiotracer activity distribution maps representative of biochemical processes in humans and animal species. The introduction of hybrid imaging combining functional and anatomical modalities, in the form of combined PET/CT and PET/MRI systems, has remarkably accelerated the widespread adoption and proliferation of these modalities in clinical practice. In this light, AI-based algorithms/solutions have been developed to overcome the major shortcomings of these modalities or to enhance their current functionality.
The applications of AI-based algorithms in PET and SPECT imaging range from low-level electronic signal formation/processing to high-level internal dosimetry and diagnostic/prognostic modeling. In instrumentation, deep learning approaches have mostly been employed to improve the timing resolution and localization accuracy of incident photons, aiming at enhancing the overall spatial and time-of-flight (TOF) resolution in PET. Image reconstruction algorithms are being revisited through the introduction of deep learning algorithms, wherein the whole image reconstruction process or certain critical components (analytical models) are replaced by machine learning models. A large body of literature is dedicated to quantitative SPECT and PET imaging, aiming to reduce the impact of noise, artifacts, and motion, or to correct for physical degrading factors, including attenuation, Compton scattering, and partial volume effects. The lack of straightforward techniques for generating the attenuation map on organ-specific standalone PET scanners or hybrid PET/MRI systems inspired active scientists in the field to devise suitable strategies to enhance the quantitative potential of molecular imaging. High-level image processing tasks, such as segmentation, data interpretation, image-based diagnostic and prognostic models, as well as internal dosimetry based on SPECT or PET imaging, have substantially evolved owing to the formidable power and versatility of deep learning algorithms.
AI/DL-based solutions have been proposed to undertake certain tasks within the long chain of processes involved in image formation, analysis, and extraction of quantitative features for the development of disease-specific diagnosis/prognosis models from SPECT and PET imaging. In this review, the applications of AI/DL in these imaging modalities are summarized in six key sections focusing on the major challenges/opportunities and seminal contributions in the field. A concise overview of machine learning methods, in particular deep learning approaches, is presented in section 2. The following section describes AI-based techniques employed in PET instrumentation, image acquisition and formation, image reconstruction and low-dose scanning, quantitative imaging (attenuation and scatter corrections), image analysis and computer-aided detection/diagnosis/prognosis, as well as internal radiation dosimetry. The last section puts into perspective the major challenges and opportunities for AI/DL-based solutions in PET and SPECT imaging.

      Principles of machine learning and deep learning

Machine learning algorithms are considered a subset of non-symbolic artificial intelligence that automatically recognizes patterns and creates/extracts a desirable representation from raw data [Alpaydin, Introduction to machine learning]. In machine learning algorithms, the system attempts to learn certain patterns from extracted features. By contrast, in deep learning algorithms, a subtype of machine learning techniques, feature extraction, feature selection, and the ultimate task of classification or regression are carried out automatically in one step [LeCun, Bengio, Hinton, Deep learning]. Different deep learning algorithms have been proposed and applied in nuclear medicine [Arabi and Zaidi, Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy; Wang et al., Machine learning in quantitative PET: a review of attenuation correction and low-count image reconstruction methods], including convolutional neural networks (CNNs) [Lee and Fujita, Deep learning in medical image analysis: challenges and applications; Masci et al., Stacked convolutional auto-encoders for hierarchical feature extraction] and generative adversarial networks (GANs) [LeCun, Bengio, Hinton, Deep learning]. Some applications of machine learning algorithms, such as classification, segmentation, and image-to-image translation, have attracted particular attention [Zaidi and El Naqa, Quantitative molecular positron emission tomography imaging using advanced deep learning techniques].
A number of deep learning architectures became popular in the field of medical image analysis, including convolutional encoder-decoder (CED) networks, consisting of an encoder part that converts input images to feature vectors and a decoder part that converts feature vectors to target images [Masci et al., Stacked convolutional auto-encoders for hierarchical feature extraction]. In addition, GANs consist of two major components: a generator, mostly a CED network, and a discriminator, a classifier that differentiates ground truth from synthetic images/data [Masci et al., Stacked convolutional auto-encoders for hierarchical feature extraction]. Different architectures based on these models were developed and applied to medical images for different tasks, including image segmentation and image-to-image translation [Altaf et al., Going deep in medical image analysis: concepts, methods, challenges, and future directions]. U-Net [Ronneberger, Fischer, Brox, U-Net: convolutional networks for biomedical image segmentation] is one of the most popular architectures, built upon the CED structure by adding skip connections for context capturing and a symmetric expanding path, which enables more efficient feature selection. Upgrading networks with different modules, such as attention blocks/components [Oktay et al., Attention U-Net: learning where to look for the pancreas, arXiv:1804.03999, 2018] for highlighting salient features in the input data, and residual connections [Diakogiannis et al., ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data] to prevent vanishing gradients, is intended to improve the overall performance of the networks. Conventional GAN architectures have been upgraded in different ways, leading to conditional GAN (cGAN) [Isola et al., Image-to-image translation with conditional adversarial networks, CVPR 2017, p. 1125-34] and cycle-consistent GAN (Cycle-GAN) [Zhu et al., Unpaired image-to-image translation using cycle-consistent adversarial networks, ICCV 2017, p. 2223-32] models, which combine CED-based generator and discriminator components with task-specific loss functions. Cycle-GAN [Zhu et al., ICCV 2017] is an unsupervised model for image-to-image transformation, which does not require paired (labeled) datasets. In the Cycle-GAN model, two generator-discriminator pairs are jointly involved in the training process, wherein images from two different domains are used as input and output within a cycle consistency scheme. In this scheme, the output of one generator is fed to the other generator, and vice versa, with the loss calculated between the input and the reconstructed output acting as a regularizer of the generator models [Zhu et al., ICCV 2017].
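As a toy illustration of the cycle consistency scheme described above, the following minimal PyTorch sketch computes the forward and backward cycle losses that regularize the two generators. The tiny convolutional generators, tensor sizes, and random data are placeholders for illustration only, not the architectures used in the cited works.

import torch
import torch.nn as nn

# Toy stand-ins for the two generators G: X -> Y and F: Y -> X.
# Real Cycle-GAN generators are deep CED networks; these are placeholders.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
F = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))

l1 = nn.L1Loss()
x = torch.randn(4, 1, 64, 64)  # unpaired images from domain X (e.g. low-dose)
y = torch.randn(4, 1, 64, 64)  # unpaired images from domain Y (e.g. full-dose)

# Cycle consistency: translating an image to the other domain and back
# should reproduce the original image.
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)

# The full objective adds the adversarial losses of the two discriminators;
# lambda_cyc weights the cycle term.
lambda_cyc = 10.0
total_cycle_term = lambda_cyc * cycle_loss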
Overall, deep learning-based algorithms have outperformed conventional approaches in various applications [LeCun, Bengio, Hinton, Deep learning]. AI-based approaches, especially deep learning algorithms, do not require handcrafted feature extraction, specific data preprocessing, or user intervention within the learning and inference processes [LeCun, Bengio, Hinton, Deep learning]. The major applications of deep learning approaches in SPECT and PET imaging are summarized in Fig. 1. Deep learning methods face many challenges, including the fact that they are data hungry, impose a high computational burden during training, and have a black-box nature (which hampers systematic analysis of their operation/performance) [Lee and Fujita, Deep learning in medical image analysis: challenges and applications]. To reach peak performance, these algorithms require a large number of clean and curated datasets for the training process. However, data collection remains the main challenge owing to patients' privacy and the complexity of ethical issues. Moreover, task-specific deep learning algorithms (i.e. trained for a particular organ/body region or radiotracer) are able to exhibit superior performance compared to more general models, which are commonly more sensitive to variability in image acquisition protocols and reconstruction settings. Another challenge faced by the application of deep learning algorithms in medical imaging is the high computational burden owing to the large size of clinical data in terms of number of subjects and individual images (large 3-dimensional images or sinograms), which might cause memory or data management issues.
      Fig. 1Main applications of deep learning-based algorithms in PET and SPECT imaging.

      Applications of deep learning in SPECT and PET imaging

      Instrumentation and image acquisition/formation

Detector modules play a key role in the overall performance achieved by PET scanners. An ideal PET detector should have good energy and timing resolution and be capable of accurate event positioning. Energy resolution is a metric that determines how accurately a detector can identify the energy of incoming photons and, as a result, distinguish scattered and random photons from true coincidences. These parameters affect the scanner's sensitivity, spatial resolution, and signal-to-noise ratio (true coincidences versus scatters or randoms). Despite significant progress in PET instrumentation, a number of challenges still need to be addressed, where machine learning approaches can offer alternative solutions to complex, multi-parametric problems.
Accurate localization of the interaction position inside the crystals improves the overall spatial resolution of PET scanners. Since the distribution of optical photons is stochastic, particularly near the edges of the crystal, and owing to multiple Compton scattering and reflections, accurate positioning of the interaction within the crystal is challenging. In comparison with other positioning algorithms, such as Anger logic and correlated signal enhancement, which rely on determination of the centre of gravity, machine learning algorithms led to better position estimation, particularly at the crystal edges [Müller et al., A novel DOI positioning algorithm for monolithic scintillator crystals in PET based on gradient tree boosting]. In this regard, Peng et al. trained a CNN classifier that mapped the signals from each silicon photomultiplier channel to the coordinates of the scintillation point for a quasi-monolithic crystal [Peng et al., Compton PET: a simulation study for a PET module with novel geometry and machine learning for position decoding]. Another study applied a multi-layer perceptron to predict the 3D coordinates of the interaction position inside a monolithic crystal and compared the performance of this positioning algorithm with Anger logic for a preclinical PET scanner based on NEMA NU 4-2008 standards [Sanaat and Zaidi, Depth of interaction estimation in a preclinical PET scanner equipped with monolithic crystals coupled to SiPMs using a deep neural network]. Fig. 2 depicts the adopted deep learning-based event-positioning scheme in monolithic detectors. To address the challenge of determining the depth of interaction, a gradient tree boosting supervised machine learning algorithm was used to extract the scintillation position, resulting in a spatial resolution of 1.4 mm full width at half maximum (FWHM) for a 12 mm thick monolithic block [Müller et al., Gradient tree boosting-based positioning method for monolithic scintillator crystals in positron emission tomography]. Recently, Cherenkov-based detectors attracted much attention owing to their superb performance in terms of timing and spatial resolution. Hashimoto et al. studied the performance of a deep learning model for 3D positioning in this type of detector through a Monte Carlo simulation study [Hashimoto et al., A feasibility study on 3D interaction position estimation using deep neural network in Cherenkov-based detector: a Monte Carlo simulation study]. They demonstrated that, in comparison with conventional positioning methods such as centre-of-gravity determination and principal component analysis, the deep learning model led to significantly improved spatial resolution.
      Fig. 2Deep learning-based event positioning in monolithic detectors.
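To make the event-positioning idea concrete, the following minimal PyTorch sketch regresses 3D interaction coordinates from detector signals. The assumed dimensions (64 SiPM channel amplitudes per event, a 12 mm thick crystal) and the synthetic data are illustrative and not taken from the cited studies.

import torch
import torch.nn as nn

# Assumed setup: each event yields 64 SiPM channel amplitudes;
# the target is the (x, y, z) interaction position in mm.
n_channels, n_events = 64, 1024
signals = torch.randn(n_events, n_channels)   # placeholder training signals
positions = torch.rand(n_events, 3) * 12.0    # placeholder positions (12 mm crystal)

# Small multi-layer perceptron mapping channel amplitudes to coordinates.
model = nn.Sequential(
    nn.Linear(n_channels, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):                       # short illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(signals), positions)
    loss.backward()
    optimizer.step()

# At inference, the trained model replaces centre-of-gravity positioning.
pred_xyz = model(signals[:1])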
Timing resolution is another crucial factor in PET instrumentation, determining the achievable TOF performance as well as the efficiency of randoms and scatter rejection. This factor depends on the physical characteristics of the scintillator, the photodetector quantum efficiency, and the electronic circuits that convert the scintillation light to electrical signals. Considering the physics of photon interactions within a crystal, only a portion of the produced scintillation photons reach the photodetector and contribute to positioning and timing. The consequence is noticeable statistical uncertainty and noise-induced bias. Straightforward approaches, such as feeding a CNN model with detector signals to estimate TOF information, produced promising results. In a recent study, a training dataset (reference) obtained by scanning a 68Ga point source shifted repeatedly in steps of 5 mm across the field-of-view of the PET scanner was used to train a deep learning algorithm [Berg and Cherry, Using convolutional neural networks to estimate time-of-flight from PET detector waveforms]. The authors reported a TOF resolution of about 185 ps, a significant improvement with respect to conventional methods achieving resolutions of 210 to 527 ps. Gladen et al. developed a machine learning method, referred to as the self-organizing map (SOM) algorithm, for estimating the arrival time of annihilation photons in a high-purity germanium (HPGe) detector. The SOM was able to cluster the TOF bins based on the signal shape and its rising edge [Gladen et al., Efficient machine learning approach for optimizing the timing resolution of a high purity germanium detector, arXiv:2004.00008, 2020].
Recent studies substantiated the applicability of deep learning techniques to reliably estimate the interaction position, energy, and arrival time of incident photons within the crystal with improved accuracy and robustness to noise. One of the major difficulties in developing such models is the creation of labelled (reference) data, which requires extensive experimental measurements. For example, preparing a training dataset for position estimation requires a precise and reproducible setup of a single pencil beam and repeated measurements at every possible spot within the field-of-view. A number of recent studies came up with novel ideas to perform these tasks for monolithic crystals using uniform or fan-beam sources or applying clustering to the training dataset [Müller et al., Gradient tree boosting-based positioning method for monolithic scintillator crystals in positron emission tomography]. Likewise, building a TOF training dataset requires hundreds of point source positionings and data acquisitions to create a realistic range of TOF variations. In this regard, artificial ground-truth data creation was proposed by shifting the PET detector waveforms forward and backward in the time domain [Berg and Cherry, Using convolutional neural networks to estimate time-of-flight from PET detector waveforms].
Sophisticated machine learning-based algorithms for event positioning, timing, and/or calibration are envisioned on next-generation SPECT and PET systems, implemented in the front-end electronics using dedicated application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) [Shawahna, Sait, El-Maleh, FPGA-based accelerators of deep learning networks for learning and classification: a review]. Furthermore, developing a single model that extracts time, position, and energy simultaneously from photodetector outputs would be an interesting approach that could potentially improve the overall performance of nuclear imaging systems.

      Image reconstruction and low-dose/fast image acquisition

Deep learning algorithms have recognized capabilities in solving complex inverse problems, such as image reconstruction from projections. The process of image reconstruction for CT, PET, and SPECT using deep learning techniques follows roughly the same procedure. Overall, four strategies have been adopted for image reconstruction using deep learning algorithms. The first approach consists of image-to-image translation in the image space, wherein a model is trained to convert reconstructed images to another representation to improve image quality through, for instance, noise removal, super-resolution modelling, or motion correction [Shiri et al., Ultra-low-dose chest CT imaging of COVID-19 patients using a deep residual neural network]. The second approach implements the training of the deep learning model in the projection space prior to image reconstruction to avoid sensitivity to, and dependence on, reconstruction algorithms. In the third approach, a model learns a non-linear direct mapping between the sinogram and image domains [Häggström et al., DeepPET: a deep encoder-decoder network for directly solving the PET image reconstruction inverse problem; Zhu et al., Image reconstruction by domain-transform manifold learning]. The fourth approach, referred to as hybrid domain learning, relies simultaneously on analytical reconstruction and machine learning approaches to reach an optimal solution to the image reconstruction problem [Ravishankar, Ye, Fessler, Image reconstruction: from sparsity to data-adaptive methods and machine learning; Reader et al., Deep learning for PET image reconstruction, IEEE Trans Radiat Plasma Med Sci].
Two companies have released FDA-approved AI-based solutions for image reconstruction in CT [FDA, 510(k) premarket notification of AiCE Deep Learning Reconstruction (Canon), 2019; FDA, 510(k) premarket notification of Deep Learning Image Reconstruction (GE Medical Systems), 2019]. DeepPET is one of the earliest works suggesting direct reconstruction from sinograms to images through a deep learning approach [Häggström et al., DeepPET: a deep encoder-decoder network for directly solving the PET image reconstruction inverse problem]. Likewise, FastPET is a machine learning-based approach for direct PET image reconstruction using a simple, memory-efficient architecture implemented to operate with any tracer and level of injected activity [Whiteley et al., FastPET: near real-time reconstruction of PET histo-image data using a neural network].
Decreasing injected activities is often desired owing to the potential hazards of ionizing radiation, particularly for paediatric patients or subjects undergoing multiple serial PET or SPECT scans over time for monitoring of disease progression or in longitudinal studies. Moreover, decreasing the acquisition/scanning time increases scanner throughput and enhances patient comfort, particularly for elderly patients and those suffering from neurodegenerative diseases, where involuntary motion during scanning is more common.
Reducing the injected activity amplifies Poisson noise, thus impacting image quality, lesion detectability, and the quantitative accuracy of PET images. Devising optimized low-dose scanning protocols that preserve the critical information in the images is therefore desirable. Although there is a fundamental difference between fast and low-dose scanning, both terms have been used interchangeably in the literature. While both strategies produce noisy images, the content and information collected by these scanning modes are completely different. In a fast scan, the acquired data reflect the radiotracer kinetics over a short time course. For instance, if the scan starts right after injection, much information would be missing owing to insufficient and/or slow uptake in some organs. Fast acquisition protocols are also less sensitive to motion artifacts, though the patient's effective dose is similar to standard protocols. Conversely, low-dose scanning is performed with the standard acquisition time but a much lower injected activity, which obviously decreases the effective dose.
Reconstruction algorithms may need to be redesigned/optimized for low-dose scanning to reach an optimal trade-off between noise level and signal convergence. In low-dose/fast imaging, much critical information is buried under the increased noise level, and an efficient denoising algorithm should be able to recover the genuine signals [Arabi and Zaidi, Non-local mean denoising using multiple PET reconstructions].
To address the above-mentioned challenges, a number of denoising techniques generating full-dose PET images from their noisy/low-dose counterparts have been proposed. Conventional techniques include post-reconstruction processing/filtering algorithms [Arabi and Zaidi, Improvement of image quality in PET using post-reconstruction hybrid spatial-frequency domain filtering; Arabi and Zaidi, Spatially guided nonlocal mean approach for denoising of PET images], anatomically-guided algorithms [Chan et al., Postreconstruction nonlocal means filtering of whole-body PET with an anatomical prior], statistical modelling during iterative reconstruction [Reader and Zaidi, Advances in PET image reconstruction], and MRI-guided joint noise removal and partial volume correction [Yan, Lim, Townsend, MRI-guided brain PET image filtering and partial volume correction]. Although these approaches attempt to minimize noise and quantitative bias, they still suffer from loss of spatial resolution and over-smoothing. With image super-resolution techniques, such as sparse representation [Wang et al., Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI], canonical correlation analysis [An et al., Multi-level canonical correlation analysis for standard-dose PET image estimation], and dictionary learning [Zhang et al., Image reconstruction for positron emission tomography based on patch-based regularization and dictionary learning], effective noise reduction and signal recovery in low-dose images can be expected with minimal artifacts or information loss. The widespread availability of hybrid imaging made it possible to incorporate anatomical information in the reconstruction of low-dose PET images [Bland et al., MR-guided kernel EM reconstruction for reduced dose PET imaging].
In the last few years, AI algorithms have been widely used for image reconstruction and enhancement of image quality [Litjens et al., A survey on deep learning in medical image analysis]. In most previous works, low-dose images were used as the model's input and full-dose images as the target, performing an end-to-end mapping between low-dose and full-dose images [Chen et al., Ultra-low-dose 18F-Florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs; Xiang et al., Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI; Wang et al., 3D conditional generative adversarial networks for high-quality PET image estimation at low dose; Ouyang et al., Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss]. Such models with a single input channel (only low-dose images) suffer from a lack of sufficient information (for instance, anatomical structures) to distinguish noise from genuine biological signals. Therefore, adding anatomical priors to the training procedure makes the model more accurate and robust. For resolution recovery, high-resolution anatomical information obtained from MR imaging was employed along with spatially-variant blurring kernels to avoid information loss during image reconstruction [Song et al., Super-resolution PET imaging using convolutional neural networks]. Some groups devised deep learning-guided denoising strategies that synthesize full-dose sinograms from their corresponding low-dose sinograms [Sanaat et al., Projection-space implementation of deep learning-guided low-dose brain PET imaging improves performance over implementation in image-space].
An elegant study by Xu et al. proposed a U-Net model with concatenation connections and residual learning for full-dose reconstruction from a single 1/200th low-dose image [Xu, Gong, Pauly, Zaharchuk, 200x low-dose PET reconstruction using deep learning, arXiv:1712.04119, 2017]. Xiang et al. presented a novel deep auto-context CNN model for synthesizing full-dose images from low-dose images complemented by T1-weighted MR images. In comparison with state-of-the-art methods, their proposed model generated comparable image quality while being 500 times faster [Xiang et al., Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI]. Another study employed a multi-input U-Net to predict 2D transaxial slices of 18F-Florbetaben full-dose PET images from corresponding low-dose images, taking advantage of available T1-, T2-, and diffusion-weighted MR sequences [Chen et al., Ultra-low-dose 18F-Florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs]. Liu et al. employed three modified U-Net architectures to enhance the noise characteristics of PET images using concurrent MR images, without the need for full-dose PET images with a higher signal-to-noise ratio [Liu and Qi, Higher SNR PET image prediction using a deep learning model and MRI image]. In addition, Cui et al. [Cui et al., PET image denoising using unsupervised deep learning] proposed a 3D U-Net model for denoising of PET images acquired with two different radiotracers (68Ga-PRGD2 and 18F-FDG), where the model was trained with MR/CT images and prior high-quality images as input and noisy images as training labels. Using original noisy images instead of high-quality full-dose images makes the training of the model more convenient. Unsupervised networks are always desirable in medical image analysis because data collection with accurate labels is challenging and/or time-consuming. A foremost drawback of most of the above-mentioned models is that training was performed in 2D rather than 2.5D or 3D.
A 3D U-Net architecture was able to reduce noise and PET quantification bias while enhancing the image quality of brain and chest 18F-FDG PET images [Lu et al., An investigation of quantitative accuracy for deep learning based denoising in oncological PET]. To compensate for the limited training dataset, the authors pre-trained the model using simulation studies in a first stage and then fine-tuned the last layers of the network with realistic data. Kaplan et al. [Kaplan and Zhu, Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study] trained a residual CNN separately for various body regions, including brain, chest, abdomen, and pelvis, to generate full-dose images from 1/10th of the standard injected tracer activity. Training and testing of the model were performed on only two separate whole-body 18F-FDG PET datasets.
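Residual learning, as used in several of the denoising studies above, can be sketched as follows. This is a minimal PyTorch example in which the CNN predicts only the difference between the low-dose input and the full-dose target; all sizes and data are illustrative.

import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, low_dose):
        return low_dose + self.body(low_dose)  # learn only the residual

model = ResidualDenoiser()
low = torch.randn(8, 1, 96, 96)    # placeholder low-dose slices
full = torch.randn(8, 1, 96, 96)   # placeholder full-dose targets
loss = nn.L1Loss()(model(low), full)
loss.backward()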
GAN networks are widely used for image-to-image transformation tasks, especially image denoising. Conditional GANs (cGAN) and cycle-consistent GANs (Cycle-GAN) are two well-established architectures commonly used for style and domain transformation. In a cGAN, unlike a regular GAN, the generator and discriminator outputs are regularized by an extra label. For instance, Wang et al. estimated the generator error and used it alongside the discriminator loss to train the generator of a 3D cGAN more efficiently for denoising low-dose brain PET images [Wang et al., 3D conditional generative adversarial networks for high-quality PET image estimation at low dose].
Cycle-GAN models do not necessarily require paired images, as the model can learn in an unsupervised way to map input images from the source to the target domain. Because of the iterative feature extraction process and the presence of the inverse path in this architecture, the underlying characteristics of the input/output data can be extracted from unrelated images to be used in the image translation process. Zhou et al. proposed a 2D Cycle-GAN for generating full-dose images with around 120 million true coincidences (per bed position) from low-dose images with only one million true coincidences [Zhou et al., Supervised learning with CycleGAN for low-dose FDG PET image denoising]. Lei et al. claimed that their Cycle-GAN model is able to predict whole-body full-dose 18F-FDG PET images from 1/8th of the injected activity [Lei et al., Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks]. They used a generator with residual blocks to learn the difference between low-dose and full-dose images and thus effectively reduce the noise. The same group presented a similar model incorporating CT images to guide the low-dose to full-dose transformation using a relatively small dataset [Dong et al., Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging]. Their results revealed that the incorporation of CT images can improve the visibility of organ boundaries and decrease bias, especially in regions located near bones.
More recent studies implemented the training process in the projection space instead of the image space, demonstrating that training a model in the sinogram space can lead to more efficient learning. Sanaat et al. trained a U-Net model in the sinogram space with a dataset consisting of 120 full-dose brain 18F-FDG PET studies [Sanaat et al., Projection-space implementation of deep learning-guided low-dose brain PET imaging improves performance over implementation in image-space]. The proposed model predicted full-dose from low-dose sinograms and demonstrated the superior performance of deep learning-based denoising in the sinogram space versus denoising in the image space (Fig. 3). Furthermore, another study proposed a prior knowledge-driven deep learning model for PET sinogram denoising [Lu, Tan, Gao, Shi, Liang, Prior knowledge driven machine learning approach for PET sinogram data denoising, Medical Imaging 2020: Physics of Medical Imaging, SPIE, 2020, p. 113124A]. Hong et al. [Hong et al., Enhancing the image quality via transferred deep residual learning of coarse PET sinograms] combined Monte Carlo simulations and deep learning algorithms to predict high-quality sinograms from low-quality sinograms produced by two PET scanners equipped with small and large crystals, respectively. In whole-body PET imaging, Sanaat et al. compared the performance of two state-of-the-art deep learning approaches, namely Cycle-GAN and ResNet, in estimating standard whole-body 18F-FDG PET images from a fast acquisition protocol with 1/8th of the standard scan time [Sanaat, Shiri, Arabi, Mainta, Nkoulou, Zaidi, Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging, Eur J Nucl Med Mol Imaging, 2021, in press]. The Cycle-GAN-predicted PET images exhibited superior quality in terms of SUV bias and variability as well as lesion conspicuity.
      Fig. 3Comparison between full-dose and low-dose brain PET image predictions in the sinogram and image domains.
Though most of the above-described approaches could be applied to SPECT imaging, few studies have specifically addressed low-dose and/or fast SPECT imaging. Recently, a supervised deep learning network was employed to reduce the noise in myocardial perfusion SPECT images obtained from 1/2, 1/4, 1/8, and 1/16 of the standard-dose protocol across 1052 subjects [Ramon et al., Improving diagnostic accuracy in low-dose SPECT myocardial perfusion imaging with convolutional denoising networks]. Similarly, Shiri et al. exploited a residual neural network to predict standard SPECT myocardial perfusion images from half-time acquisitions [Shiri et al., Standard SPECT myocardial perfusion estimation from half-time acquisitions using deep convolutional residual neural networks, J Nucl Cardiol, 2021, in press]. Reymann et al. used a U-Net architecture and XCAT phantom simulation studies of different body regions to reduce noise in SPECT images [Reymann et al., U-Net for SPECT image denoising, 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), p. 1-2].
Song et al. implemented a 3D residual convolutional neural network to estimate standard-dose SPECT-MPI data from four-fold reduced-dose counterparts using 119 clinical studies [Song, Yang, Wernick, Pretorius, King, Low-dose cardiac-gated SPECT studies using a residual convolutional neural network, IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), p. 653-6]. Compared to spatio-temporal non-local means (NLM) post-reconstruction filtering, the deep learning algorithm achieved significant improvement in the spatial resolution of the left ventricular wall. As an alternative to post-reconstruction filtering, Liu et al. employed a coupled U-Net to suppress the noise in conventional SPECT-MPI acquisitions [Liu et al., Deep learning with noise-to-noise training for denoising in SPECT myocardial perfusion imaging]. A noise-to-noise denoising approach was adopted and compared with traditional post-filtering methods owing to the lack of a ground truth/reference in clinical studies. The training dataset was generated using a bootstrap procedure wherein multiple noise realizations were drawn from clinical list-mode acquisitions and employed for the training of the coupled U-Net model. The clinical study performed on 895 patients revealed improved perfusion-defect detection in the deep learning-filtered images, reflected by a 23% enhancement of the SNR compared to conventional 3D Gaussian and NLM filtering.
Since Monte Carlo-based iterative image reconstruction algorithms exhibit superior performance over analytic approaches in SPECT imaging (owing to accurate modeling of photon interactions and collimator effects), Dietze et al. proposed a CNN model that upgrades conventional filtered back-projection SPECT images to the quality of Monte Carlo-based reconstructions within a few seconds [Dietze et al., Accelerated SPECT image reconstruction with FBP and an image enhancement convolutional neural network]. The clinical assessment of 128 99mTc-macroaggregated albumin SPECT examinations revealed only a 2% difference between the deep learning- and Monte Carlo-based image reconstruction frameworks, while the deep learning solution was over 200 times faster (5 s vs. 19 min). To reconstruct SPECT images directly from the projection data, Shao et al. trained a deep learning model on a simulation dataset derived from computerized phantoms [Shao, Pomper, Du, A learned reconstruction network for SPECT imaging]. The performance of the deep learning-based reconstruction was evaluated on the Zubal human brain phantom and clinical studies, where higher spatial resolution and quantitative accuracy were observed compared to conventional iterative reconstruction methods. Likewise, Chrysostomou et al. employed projections acquired from digital phantoms to train a deep learning model that directly maps the projection data to SPECT images [Chrysostomou, Koutsantonis, Lemesios, Papanicolas, A reconstruction method based on deep convolutional neural network for SPECT imaging]. The model exhibited superior performance over filtered backprojection and maximum-likelihood expectation maximization reconstruction algorithms.
To reduce the acquisition time of paediatric 99mTc-dimercaptosuccinic acid SPECT, Lin et al. trained a deep learning model to estimate full-time acquisitions (in the image domain) from half-time acquisitions using 112 paediatric renal SPECT scans [Lin et al., Reducing scan time of paediatric 99mTc-DMSA SPECT via deep learning]. The synthetic SPECT images led to 91.7% accuracy, 83.3% sensitivity, and 100% specificity in the detection of affected kidneys. Similar efforts were undertaken to decrease 177Lu SPECT acquisition time, where intermediate projections were synthesized using a deep convolutional U-Net model to compensate for image degradation due to the reduced angular sampling. The deep learning model was trained using 352 clinical SPECT studies, wherein every fourth projection out of a total of 120 projections was fed into the deep learning model to synthesize the other three projections (four-fold reduced acquisition time), as sketched below. The deep learning-synthesized intermediate projections (from the sparse-projection scan) remarkably improved the quality of the final reconstructed images compared to dedicated iterative image reconstruction algorithms for sparsely acquired projections [Ryden et al., Deep learning generation of synthetic intermediate projections improves 177Lu SPECT images reconstructed with sparsely acquired projections].
Generalizability and robustness of deep learning models are two significant factors reflecting how trustworthy a model is and how robust and reproducible its results are on unseen normal/abnormal datasets. These two factors are largely linked to the diversity and number of training samples. It is very common to exclude abnormal cases prior to training or evaluation of a model to create a homogeneous training/test sample. Although this leads to better results, it reduces robustness on realistic datasets with a broad range of abnormalities. It is strongly recommended to use both healthy/normal and unhealthy/abnormal subjects with a realistic distribution of the samples. Moreover, to avoid overfitting and guarantee effective training of the model, application of relevant data augmentation techniques is also recommended.
Using recurrent neural networks to decrease the scanning time and/or injected activity, especially in low-count dynamic PET imaging studies, would be an interesting field of research. In addition, applying self-attention concepts to deep learning models would effectively enhance their performance through indirect down-weighting/elimination of irrelevant regions and information in low-dose images while emphasizing the prominent/meaningful properties during training. Using realistic simulations to produce gold-standard datasets alongside clinical images would help deep learning models learn noise distributions from a larger representative sample.

      Quantitative imaging

A significant number of emitted photons undergo attenuation and Compton scattering before they reach the PET and SPECT detectors. Scatter and attenuation lead to over- and under-estimation of the activity concentration, respectively, resulting in large quantification errors [Zaidi and Karakatsanis, Towards enhanced PET quantification in clinical oncology]. To link the detected photons to the radiotracer activity concentration, attenuation and scatter correction (ASC) should be performed in SPECT and PET imaging [Zaidi and Karakatsanis, Towards enhanced PET quantification in clinical oncology; Zaidi and Koral, Scatter modelling and compensation in emission tomography]. In hybrid PET/CT and SPECT/CT imaging, attenuation maps reflecting the distribution of linear attenuation coefficients are readily provided by the CT images.
The main challenge for ASC arises in SPECT-only, PET-only, PET/MR, and SPECT/MR imaging, since MR images are not directly correlated with electron density and, as such, do not provide information about the attenuation coefficients of biological tissues [Mehranian, Arabi, Zaidi, Vision 20/20: magnetic resonance imaging-guided attenuation correction in PET/MRI: challenges, solutions, and opportunities; Teuho et al., Magnetic resonance-based attenuation correction and scatter correction in neurological positron emission tomography/magnetic resonance imaging - current status with emerging applications]. For SPECT-only and PET-only systems, emission-based algorithms have been developed to address this issue [Berker and Li, Attenuation correction in emission tomography using the emission data - a review]. Their main advantage is the capability to account for metallic implants and truncation artefacts [Mehranian, Arabi, Zaidi, Vision 20/20: magnetic resonance imaging-guided attenuation correction in PET/MRI; Teuho et al., Magnetic resonance-based attenuation correction and scatter correction in neurological PET/MRI; Arabi and Zaidi, Deep learning-based metal artefact reduction in PET/CT imaging, Eur Radiol, 2021; Mostafapour et al., Feasibility of deep learning-guided attenuation and scatter correction of whole-body 68Ga-PSMA PET studies in the image domain]. Including TOF information and anatomical priors improved the quantitative accuracy of emission-based algorithms [Mehranian, Zaidi, Reader, MR-guided joint reconstruction of activity and attenuation in brain PET-MR; Rezaei et al., Joint reconstruction of activity and attenuation in time-of-flight PET: a quantitative analysis; Mehranian, Arabi, Zaidi, Quantitative analysis of MRI-guided attenuation correction techniques in time-of-flight brain PET/MRI]. However, the application of this methodology across different radiotracers warrants further investigation.
In addition to emission-based algorithms, MR image-based algorithms, including segmentation- and atlas-based algorithms, have been developed to estimate attenuation coefficients from concurrent MR images [Teuho et al., Magnetic resonance-based attenuation correction and scatter correction in neurological positron emission tomography/magnetic resonance imaging - current status with emerging applications]. In segmentation-based algorithms, different MR sequences, including T1, T2, ultrashort echo time (UTE), and zero echo time (ZTE), have been used to delineate major tissue classes, followed by the assignment of pre-defined linear attenuation coefficients to each tissue class. In atlas-based algorithms [Arabi and Zaidi, One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI; Arabi et al., Atlas-guided generation of pseudo-CT images for MRI-only and hybrid PET-MRI-guided radiotherapy treatment planning], pairs of co-registered MR and CT images (serving as templates or atlases) are aligned to the target MR image to generate a continuous attenuation map. The main disadvantage of atlas-based algorithms is their high dependence on the atlas dataset and sub-optimal performance for subjects presenting with anatomical abnormalities [Arabi and Zaidi, Comparison of atlas-based techniques for whole-body bone segmentation; Arabi and Zaidi, Truncation compensation and metallic dental implant artefact reduction in PET/MRI attenuation correction using deep learning-based object completion].
      Deep learning-based algorithms were proposed to address the challenges of conventional ASC approaches in PET and SPECT imaging [
      • Arabi H.
      • Zaidi H.
      Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy.
      ,
      • Wang T.
      • Lei Y.
      • Fu Y.
      • Curran W.J.
      • Liu T.
      • Nye J.A.
      • et al.
      Machine learning in quantitative PET: a review of attenuation correction and low-count image reconstruction methods.
      ]. Liu et al. [
      • Liu F.
      • Jang H.
      • Kijowski R.
      • Zhao G.
      • Bradshaw T.
      • McMillan A.B.
      A deep learning approach for 18 F-FDG PET attenuation correction.
      ] proposed converting non-attenuation corrected (NAC) brain PET images to synthetic CT (sCT) images. A convolutional encoder-decoder (CED) model was trained on 100 patients (in 2D mode) and tested on 28 patients, achieving a relative error of less than 1% within 21 brain regions. Dong et al. [
      • Dong X.
      • Wang T.
      • Lei Y.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
      ] applied a similar approach in whole-body PET imaging using Cycle-GAN [
      • Dong X.
      • Wang T.
      • Lei Y.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
      ], reporting a mean PET quantification bias of 0.12 ± 2.98%. Shi et al. [
      • Shi L.
      • Onofrey J.A.
      • Liu H.
      • Liu Y.H.
      • Liu C.
      Deep learning-based attenuation map generation for myocardial perfusion SPECT.
      ] proposed a novel approach to generate sCT images in 99mTc-tetrofosmin myocardial perfusion SPECT imaging, taking advantage of two images produced from different energy windows that provide different representations of the scattered and primary photon distributions. A multi-channel conditional GAN model was trained with SPECT images reconstructed from the different energy windows as input to predict the corresponding sCT image. This model exhibited a normalized mean absolute error (NMAE) of 0.26 ± 0.15%.
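As a schematic of this family of image-translation models, the toy PyTorch encoder-decoder below maps a two-channel input (e.g. photopeak- and scatter-window SPECT reconstructions) to a one-channel synthetic CT; the layer sizes are illustrative and do not reproduce the cited architectures.

import torch
import torch.nn as nn

class TinyCED(nn.Module):
    # Minimal two-channel encoder-decoder for emission-to-sCT translation.
    def __init__(self, in_ch=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyCED()
x = torch.randn(4, 2, 128, 128)  # batch of paired energy-window images
sct = model(x)                   # predicted synthetic CT patches
print(sct.shape)                 # torch.Size([4, 1, 128, 128])

In a conditional GAN setting, such a network would act as the generator, trained against a discriminator in addition to a voxel-wise (e.g. L2) loss.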
      Hwang et al. [
      • Hwang D.
      • Kim K.Y.
      • Kang S.K.
      • Seo S.
      • Paeng J.C.
      • Lee D.S.
      • et al.
      Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning.
      ] used MLAA-generated activity distributions and μ-maps as input to generate high-quality sCT images for 18F-FP-CIT brain PET studies. They reported errors of less than 10% in CT values using convolutional autoencoder (CAE) and U-Net models. The same group applied the same approach to whole-body 18F-FDG PET imaging using a U-Net, achieving a relative error of 2.22 ± 1.77% across 20 subjects [
      • Hwang D.
      • Kang S.K.
      • Kim K.Y.
      • Seo S.
      • Paeng J.C.
      • Lee D.S.
      • et al.
      Generation of PET Attenuation Map for Whole-Body Time-of-Flight (18)F-FDG PET/MRI Using a deep neural network trained with simultaneously reconstructed activity and attenuation maps.
      ]. Arabi and Zaidi [
      • Arabi H.
      • Zaidi H.
      Deep learning-guided estimation of attenuation correction factors from time-of-flight PET emission data.
      ] proposed the estimation of attenuation correction factors from the different TOF sinogram bins using ResNet, reporting an absolute SUV bias of less than 7% in different regions of the brain.
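The residual-learning idea behind such ResNet-style regressors can be sketched as follows: a hypothetical PyTorch model with 7 input channels (the TOF sinogram bins of the study above) and one output channel (the ACF sinogram); the depth and widths are arbitrary illustrative choices.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Two convolutions with an identity skip connection.
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return torch.relu(x + self.conv2(torch.relu(self.conv1(x))))

class ACFNet(nn.Module):
    # 7 TOF sinogram bins in, 1 ACF sinogram out.
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(7, 64, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(64) for _ in range(4)])
        self.head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x):
        return self.head(self.body(torch.relu(self.stem(x))))

x = torch.randn(2, 7, 168, 200)  # TOF bins for two sinogram slices
print(ACFNet()(x).shape)         # torch.Size([2, 1, 168, 200])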
      In addition to generating sCTs from PET emission data, the direct generation of attenuation- and scatter-corrected images from NAC images has been reported. Shiri et al. [
      • Shiri I.
      • Ghafarian P.
      • Geramifar P.
      • Leung K.H.
      • Ghelichoghli M.
      • Oveisi M.
      • et al.
      Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC).
      ] and Yang et al. [
      • Yang J.
      • Park D.
      • Gullberg G.T.
      • Seo Y.
      Joint correction of attenuation and scatter in image space using deep convolutional neural networks for dedicated brain (18)F-FDG PET.
      ] trained a 2D U-Net using brain 18F-FDG PET studies, reporting a PET quantification bias of less than 5% in different regions of the brain. Arabi et al. [
      • Arabi H.
      • Bortolin K.
      • Ginovart N.
      • Garibotto V.
      • Zaidi H.
      Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies.
      ] applied this approach to different brain molecular imaging probes, including 18F-FDG, 18F-DOPA, 18F-Flortaucipir, and 18F-Flutemetamol, and reported an SUV bias of less than 9% in different brain regions (Fig. 4). Shiri et al. [
      • Shiri I.
      • Arabi H.
      • Geramifar P.
      • Hajianfar G.
      • Ghafarian P.
      • Rahmim A.
      • et al.
      Deep-JASC: joint attenuation and scatter correction in whole-body (18)F-FDG PET using a deep residual network.
      ] trained 2D, 3D, and patch-based ResNets on 1000 whole-body 18F-FDG images and tested the proposed models on 150 unseen subjects. They performed ROI-based and voxel-based assessments and reported a relative error of less than 5%. Dong et al. [
      • Dong X.
      • Lei Y.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging.
      ] trained a 3D patch-based Cycle-GAN for whole-body 18F-FDG images and reported a mean relative error of less than 5% calculated on malignant lesions. Emission-based ASC approaches using deep learning algorithms are summarized in Table 1.
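Training such direct NAC-to-AC mappings reduces to supervised image regression against CT-based AC references. The minimal PyTorch loop below uses the L2 objective reported by most entries in Table 1; random tensors stand in for paired NAC/AC slices, and the shallow network is a stand-in for the U-Net/ResNet generators cited above.

import torch
import torch.nn as nn

nac = torch.randn(16, 1, 128, 128)     # stand-in NAC PET slices
ac_ref = torch.randn(16, 1, 128, 128)  # stand-in CT-based AC references

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                 # the L2 norm objective

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(nac), ac_ref)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: L2 loss {loss.item():.4f}")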
      Fig. 4. Comparison of PET images corrected for attenuation using CT-based, segmentation-based (containing background air and soft tissue) (SegAC), and deep learning-guided (DLAC) approaches, together with the reference CT image, for 18F-FDG, 18F-DOPA, 18F-Flortaucipir, and 18F-Flutemetamol radiotracers. SUV difference error maps are also presented for the segmentation- and deep learning-based approaches.
      Table 1. Summary of studies performed for emission-based ASC using deep learning algorithms.
      Authors | Modality | Radiotracer | Approach | Algorithm | Body region | Training | Training/Test | Input | Output | Evaluation | Outcome | Loss Function
      Liu et al.
      • Liu F.
      • Jang H.
      • Kijowski R.
      • Zhao G.
      • Bradshaw T.
      • McMillan A.B.
      A deep learning approach for 18 F-FDG PET attenuation correction.
      PET | 18F-FDG | NAC to sCT | CED | Brain | 2D (200 × 180) | 100/28 | NAC | sCT | 21 VOIs + whole brain | Average PET quantification bias −0.64 ± 1.99 | L2
      Armanious et al.
      • Gong K.
      • Yang J.
      • Larson P.E.
      • Behr S.C.
      • Hope T.A.
      • Seo Y.
      • et al.
      MR-based attenuation correction for brain PET using 3D cycle-consistent adversarial network.
      PET | 18F-FDG | NAC to sCT | GAN | Brain | 2D (400 × 400) | 50/40 | NAC | sCT | 7 VOIs + whole brain | < 5% average PET quantification bias | Perceptual
      Dong et al.
      • Dong X.
      • Wang T.
      • Lei Y.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
      PET | 18F-FDG | NAC to sCT | Cycle-GAN | Whole-body | Patches (64 × 64 × 16) | 80/39 | NAC | sCT | 7 VOIs in different regions | Mean PET quantification bias 0.12 ± 2.98% | Adversarial + cycle-consistency loss
      Colmeiro et al.

      Leynes AP, Ahn SP, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, et al. Bayesian deep learning Uncertainty estimation and pseudo-CT prior for robust Maximum Likelihood estimation of Activity and Attenuation (UpCT-MLAA) in the presence of metal implants for simultaneous PET/MRI in the pelvis. arXiv preprint arXiv:200103414. 2020.

      PET | 18F-FDG | NAC to sCT | GAN | Whole-body | 3D (128 × 128 × 32) | 108/10 | NAC | sCT | – | SUV not reported; MAE 88.9 ± 10.5 HU | –
      Shi et al.
      • Shi L.
      • Onofrey J.A.
      • Liu H.
      • Liu Y.H.
      • Liu C.
      Deep learning-based attenuation map generation for myocardial perfusion SPECT.
      SPECT | 99mTc-tetrofosmin | NAC to sCT | Conditional GAN | Cardiac | 3D (16 × 16 × 16) | 40/25 | Photopeak (126–155 keV) and (114–126 keV) energy windows | sCT | Voxelwise | NMAE 0.26 ± 0.15% | L2 + gradient difference loss (GDL)
      Arabi et al.
      • Arabi H.
      • Zaidi H.
      Deep learning-guided estimation of attenuation correction factors from time-of-flight PET emission data.
      PET | 18F-FDG | NAC to ACF | ResNet | Brain | 2D (168 × 200), 7 input channels and 1 output channel | 68/4 CV | TOF sinogram bins | Attenuation correction factors (ACFs) | 63 brain regions | < 7% absolute PET quantification bias | L2 norm
      Hwang et al.
      • Hwang D.
      • Kim K.Y.
      • Kang S.K.
      • Seo S.
      • Paeng J.C.
      • Lee D.S.
      • et al.
      Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning.
      PET | 18F-FP-CIT | MLAA to sCT | CAE and U-Net | Brain | 2D (200 × 200) | 40/5 CV | MLAA-generated activity distribution and μ-map | sCT | 4 brain VOIs | PET quantification bias ranging from −8% to −4% | L2 norm
      Hwang et al.
      • Hwang D.
      • Kang S.K.
      • Kim K.Y.
      • Seo S.
      • Paeng J.C.
      • Lee D.S.
      • et al.
      Generation of PET Attenuation Map for Whole-Body Time-of-Flight (18)F-FDG PET/MRI Using a deep neural network trained with simultaneously reconstructed activity and attenuation maps.
      PET | 18F-FDG | MLAA to sCT | U-Net | Whole-body | Patches (64 × 64 × 16) | 80/20 | MLAA-generated activity distribution and μ-map | sCT | Bone and soft-tissue lesions | PET quantification bias: bone lesions 2.22 ± 1.77%, soft-tissue lesions 1.31 ± 3.35% | L1 norm
      Shi et al.
      • Pozaruk A.
      • Pawar K.
      • Li S.
      • Carey A.
      • Cheng J.
      • Sudarshan V.P.
      • et al.
      Augmented deep learning model for improved quantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRI prostate imaging.
      PET | 18F-FDG | MLAA to sCT | U-Net | Whole-body | Patches (32 × 32 × 32) | 80/20 | MLAA-generated activity distribution and μ-map | sCT | Region-wise | NMAE 3.6% | Line-integral projection loss
      Shiri et al.
      • Shiri I.
      • Ghafarian P.
      • Geramifar P.
      • Leung K.H.
      • Ghelichoghli M.
      • Oveisi M.
      • et al.
      Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC).
      PET | 18F-FDG | NAC to MAC | U-Net | Brain | 2D (256 × 256) | 111/18 | NAC | AC | 83 VOIs | PET quantification bias −0.10 ± 2.14% | MSE
      Yang et al.
      • Yang J.
      • Park D.
      • Gullberg G.T.
      • Seo Y.
      Joint correction of attenuation and scatter in image space using deep convolutional neural networks for dedicated brain (18)F-FDG PET.
      PET | 18F-FDG | NAC to MAC | U-Net | Brain | 2D (256 × 256) | 25/10 | NAC | AC | 116 VOIs | PET quantification bias 4.0 ± 15.4% | Mean squared error (L2 loss)
      Arabi et al.
      • Arabi H.
      • Bortolin K.
      • Ginovart N.
      • Garibotto V.
      • Zaidi H.
      Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies.
      PET | 18F-FDG, 18F-DOPA, 18F-Flortaucipir, 18F-Flutemetamol | NAC to MAC | ResNet | Brain | 2D (128 × 128) | 180 | NAC | AC | 7 brain regions | < 9% absolute PET quantification bias | L2 norm
      Dong et al.
      • Dong X.
      • Lei Y.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging.
      PET | 18F-FDG | NAC to MAC | Cycle-GAN | Whole-body | Patches (64 × 64 × 64) | 25 leave-one-out + 10 patients × 3 sequential scan tests | NAC | AC | 6 lesion VOIs | ME 2.85 ± 5.21 | Wasserstein loss
      Shiri et al.
      • Shiri I.
      • Arabi H.
      • Geramifar P.
      • Hajianfar G.
      • Ghafarian P.
      • Rahmim A.
      • et al.
      Deep-JASC: joint attenuation and scatter correction in whole-body (18)F-FDG PET using a deep residual network.
      PET | 18F-FDG | NAC to MAC | ResNet | Whole-body | 2D (154 × 154), patch (64 × 64 × 64), 3D (154 × 154 × 32) | 1000/150 | NAC | AC | Voxelwise and region-wise | RE < 5% | L2 norm
      Xiang et al.
      • Xiang H.
      • Lim H.
      • Fessler J.A.
      • Dewaraja Y.K.
      A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions.
      SPECT | 90Y | Input: μ-map + SPECT projections, output: scatter projections | DCNN (VGG and ResNet) | Chest + abdomen | 2D (128 × 80) | Phantom + 6 patients | Projected attenuation map, SPECT projections | Estimated scatter projections | Voxelwise | NRMSE 0.41 | MSE
      Nguyen et al.
      • Nguyen T.T.
      • Chi T.N.
      • Hoang M.D.
      • Thai H.N.
      • Duc T.N.
      SPECT | 99mTc | NAC to MAC | 3D Unet-GAN | Cardiac | 3D (90 × 90 × 28) | 1473/336 | NAC | AC | Voxelwise | NMAE = 0.034 | L2 norm and cross-entropy
      Mostafapour et al.

      Mostafapour S, Gholamiankhah F, Maroofpour S, Momennezhad M, Asadinezhad M, Zakavi SR, et al. Deep learning-based attenuation correction in the image domain for myocardial perfusion SPECT imaging. arXiv preprint arXiv:210204915. 2021.

      SPECT | 99mTc | NAC to MAC | ResNet | Cardiac | 2D (64 × 64) | 20/80 | NAC | AC | Voxelwise and clinical | Bias = 0.34 ± 5.03% | L2 norm
      CED: Convolutional Encoder-Decoder, GAN: Generative Adversarial Network, NAC: Non-Attenuation Corrected, sCT: Synthetic CT, VOI: Volume of Interest, HU: Hounsfield Unit, MAE: Mean Absolute Error, ACF: Attenuation Correction Factor, CV: Cross-Validation, TOF: Time of Flight, ME: Mean Error, RE: Relative Error, NRMSE: Normalized Root Mean Square Error.
      The generation of sCT images from MR images using deep learning-based regression approaches was reported in a number of studies. Liu et al. used a 2D CED model to generate a 3-class probability map from T1-weighted images for 18F-FDG brain studies and reported an average bias of less than 1% in different brain regions [
      • Liu F.
      • Jang H.
      • Kijowski R.
      • Bradshaw T.
      • McMillan A.B.
      Deep learning MR imaging-based attenuation correction for PET/MR imaging.
      ]. Arabi et al. reported on the development of a novel adversarial semantic structure GAN model using T1-weighted MR images to generate synthetic CT images for brain PET studies [
      • Arabi H.
      • Zeng G.
      • Zheng G.
      • Zaidi H.
      Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI.
      ]. They reported a relative error of less than 4% in 64 anatomical brain regions. Leynes et al. used ZTE and Dixon MR sequences in a multi-channel input framework to train a U-Net model [
      • Leynes A.P.
      • Yang J.
      • Wiesinger F.
      • Kaushik S.S.
      • Shanbhag D.D.
      • Seo Y.
      • et al.
      Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
      ]. The network was trained on 10 subjects using a patch extraction strategy and tested on 16 subjects constituting the external validation set, reporting a quantification bias of less than 5% in different ROIs defined on bone and soft tissue in 18F-FDG and 68Ga-PSMA-11 PET images. Ladefoged et al. evaluated a 3D U-Net architecture with the UTE MR sequence as input and reported a mean relative error of −0.1% in brain tumours [
      • Ladefoged C.N.
      • Marner L.
      • Hindsholm A.
      • Law I.
      • Højgaard L.
      • Andersen F.L.
      Deep learning based attenuation correction of PET/MRI in pediatric brain tumor patients: evaluation in a clinical setting.
      ]. The main contributions of deep learning-assisted MRI-guided attenuation and scatter correction in emission tomography are summarized in Table 2.
      Table 2. Summary of studies performed on MRI-guided synthetic CT generation using deep learning approaches. CV: Cross-Validation, ROI: Region of Interest, VOI: Volume of Interest, HU: Hounsfield Unit.
      Authors | Modality | Radiotracer | Approach | Algorithm | Organ | Training | Training/Test | Input | Output | Evaluation | Error | Loss Function
      Bradshaw et al.
      • Tao L.
      • Fisher J.
      • Anaya E.
      • Li X.
      • Levin C.S.
      Pseudo CT Image Synthesis and Bone Segmentation from MR images using adversarial networks with residual blocks for MR-based attenuation correction of brain PET Data.
      PET | 18F-FDG | MRI to tissue labeling | DeepMedic | Pelvis | Patch (25 × 25 × 25) | 12/6 | T1/T2 | 4-class probability map | 16 soft-tissue lesions | MSE 4.9% | Cross-entropy loss
      Jang et al.
      • Rajalingam B.
      • Priya R.
      Multimodal medical image fusion based on deep learning neural network for clinical treatment analysis.
      PET | 18F-FDG | MRI to tissue labeling | SegNet | Brain | 2D (340 × 340) | Pretraining: 30 MRI, training: 6 MRI, evaluation: 8 MRI | UTE | sCT | 23 VOIs + whole brain | < 1% | Multi-class softmax classifier
      Liu et al.
      • Liu F.
      • Jang H.
      • Kijowski R.
      • Bradshaw T.
      • McMillan A.B.
      Deep learning MR imaging-based attenuation correction for PET/MR imaging.
      PET | 18F-FDG | MRI to tissue labeling | CED | Brain | 2D (340 × 340) | 30/10 MRI to label, 5 PET/MRI | T1-weighted | 3-class probability map | 23 VOIs + whole brain | Average error < 1% in the whole brain | Cross-entropy
      Arabi et al.
      • Arabi H.
      • Zeng G.
      • Zheng G.
      • Zaidi H.
      Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI.
      PET | 18F-FDG | MRI to tissue labeling | GAN | Brain | 3D (224 × 224 × 32) | 40/2 CV | T1 | 3-class probability map | 63 brain regions | < 4% | Cross-entropy
      Mecheter et al.
      • Kang S.K.
      • Seo S.
      • Shin S.A.
      • Byun M.S.
      • Lee D.Y.
      • Kim Y.K.
      • et al.
      Adaptive template generation for amyloid PET using a deep learning approach.
      PET | 18F-FDG | MRI to segmentation | SegNet | Brain | 2D (256 × 256) | 12/3 | T1/T2 | 3 tissue classes | – | – | Cross-entropy
      Leynes et al.
      • Leynes A.P.
      • Yang J.
      • Wiesinger F.
      • Kaushik S.S.
      • Shanbhag D.D.
      • Seo Y.
      • et al.
      Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
      PET | 18F-FDG, 68Ga-PSMA-11 | MRI to sCT | U-Net | Pelvis | Patch (32 × 32 × 16) | 10/16 | ZTE and Dixon (fat/water) multi-input | sCT | 30 bone lesions and 60 soft-tissue lesions | RMSE 2.68% in bone and 4.07% in soft tissue | L1 loss, gradient difference loss (GDL), and Laplacian difference loss (LDL)
      Gong et al.
      • Huang B.
      • Chen Z.
      • Wu P.-M.
      • Ye Y.
      • Feng S.-T.
      • Wong C.-Y.O.
      • et al.
      Fully automated delineation of gross tumor volume for head and neck cancer on PET-CT using deep learning: a dual-center study.
      PET | 18F-FDG | MRI to sCT | U-Net | Brain | 2D (144 × 144) | 40/5 CV | Dixon and ZTE | sCT | 8 VOIs + whole brain | MRE 3% | L1 norm
      Ladefoged et al.
      • Ladefoged C.N.
      • Marner L.
      • Hindsholm A.
      • Law I.
      • Højgaard L.
      • Andersen F.L.
      Deep learning based attenuation correction of PET/MRI in pediatric brain tumor patients: evaluation in a clinical setting.
      PET | 18F-FET | MRI to sCT | U-Net | Brain | 3D (192 × 192 × 16) | 79/4 CV | UTE | sCT | 36 brain tumor VOIs | Mean relative difference −0.1% | Mean squared error
      Blanc-Durand et al.
      • Lian C.
      • Ruan S.
      • Denoeux T.
      • Li H.
      • Vera P.
      Joint tumor segmentation in PET-CT images using co-clustering and fusion based on belief functions.
      PET | 18F-FDG | MRI to sCT | U-Net | Brain | Patch (64 × 64 × 16) | 23/47 | ZTE | sCT | 70 VOIs + whole brain | Average error −0.2% | Mean squared error
      Spuhler et al.
      • Zhao X.
      • Li L.
      • Lu W.
      • Tan S.
      Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network.
      PET | 11C-WAY-100635, 11C-DASB | MRI to sCT | U-Net | Brain | 2D (256 × 256) | 56/11 | T1 | sCT | 20 brain regions (VOIs) | PET quantification error within VOIs: −0.49 ± 1.7% (11C-WAY-100635), −1.52 ± 0.73% (11C-DASB) | L1 error
      Torrado-Carvajal et al.
      • Zhao L.
      • Lu Z.
      • Jiang J.
      • Zhou Y.
      • Wu Y.
      • Feng Q.
      Automatic nasopharyngeal carcinoma segmentation using fully convolutional networks with auxiliary paths on dual-modality PET-CT images.
      PET | 18F-FDG, 18F-Choline | MRI to sCT | U-Net | Pelvis | 2D (256 × 256) | 28/4 CV | Dixon-VIBE | sCT | Region-wise and voxelwise | < 1% | Mean absolute error
      Gong et al.
      • Blanc-Durand P.
      • Van Der Gucht A.
      • Schaefer N.
      • Itti E.
      • Prior J.O.
      Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study.
      PET | 11C-PiB, 18F-MK6240 | MRI to sCT | U-Net | Brain | 2D (160 × 160), multichannel input of 5 and 35 | 35/5 CV | 1 UTE image and 6 multi-echo Dixon with different TEs | sCT | 8 VOIs | < 2% | L1 norm
      Gong et al.
      • Leung K.H.
      • Marashdeh W.
      • Wray R.
      • Ashrafinia S.
      • Pomper M.G.
      • Rahmim A.
      • et al.
      A physics-guided modular deep-learning based automated framework for tumor segmentation in PET.
      PET | 18F-FDG | MRI to sCT | Cycle-GAN | Brain | Patch (144 × 144 × 25) | 32/4 CV | Dixon | sCT | 16 VOIs | < 3% | L1 norm loss
      Ladefoged et al.
      • Ladefoged C.N.
      • Hansen A.E.
      • Henriksen O.M.
      • Bruun F.J.
      • Eikenes L.
      • Øen S.K.
      • et al.
      AI-driven attenuation correction for brain PET/MRI: clinical evaluation of a dementia cohort and importance of the training group size.
      PET | 18F-FDG | MRI to sCT | U-Net | Brain | 3D (192 × 192 × 16), multichannel | 732/305 | Dixon VIBE, T1, UTE | sCT | 16 VOIs | < 1% | Mean squared error
      Leynes et al.
      • Wang T.
      • Lei Y.
      • Tang H.
      • Harms J.
      • Wang C.
      • Liu T.
      • et al.
      A learning-based automatic segmentation method on left ventricle in SPECT imaging. Medical Imaging Biomedical Applications in Molecular, Structural, and Functional Imaging.
      PET | 18F-FDG, 68Ga-PSMA-11, 68Ga-DOTATATE | MRI to sCT | Bayesian DCNN (U-Net) | Pelvis | Patch (64 × 64 × 32) | 10/19 | Dixon, ZTE | sCT | Lesion ROIs | < 5% | L1 loss + gradient difference loss (GDL) + Laplacian difference loss (LDL)
      Pozaruk et al.
      • Roccia E.
      • Mikhno A.
      • Ogden R.T.
      • Mann J.J.
      • Laine A.F.
      • Angelini E.D.
      • et al.
      Quantifying brain [18 F] FDG uptake noninvasively by combining medical health records and dynamic PET imaging data.
      PET | 68Ga-PSMA-11 | MRI to sCT | GAN, U-Net | Pelvis | 2D (192 × 128) | 18/10 | Dixon | sCT | ROIs on the prostate | < 3% | Mean absolute error
      Tao et al.
      • Park J.
      • Bae S.
      • Seo S.
      • Park S.
      • Bang J.-I.
      • Han J.H.
      • et al.
      Measurement of glomerular filtration rate using quantitative SPECT/CT and deep-learning-based kidney segmentation.
      PET | Not reported | MRI to sCT | Conditional GAN | Brain | 2D (256 × 256) | 9/2 | ZTE | sCT | Voxelwise | < 5% CT HU bias | L1 loss and GAN loss
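Whichever model produces the synthetic CT, using it for attenuation correction requires converting Hounsfield units to linear attenuation coefficients at the emission energy (511 keV for PET), typically via a piecewise-linear ("bilinear") mapping. The sketch below illustrates the idea; the breakpoint and slopes are rough illustrative values, not the calibrated coefficients of any particular scanner.

import numpy as np

MU_WATER_511 = 0.096  # cm^-1, linear attenuation coefficient of water at 511 keV

def hu_to_mu511(hu):
    # Air-to-water segment below 0 HU; a reduced slope for bone-like HU above it.
    hu = np.asarray(hu, dtype=np.float32)
    mu = np.where(
        hu <= 0,
        MU_WATER_511 * (hu + 1000.0) / 1000.0,
        MU_WATER_511 + hu * 5.0e-5,
    )
    return np.clip(mu, 0.0, None)

print(hu_to_mu511([-1000, 0, 1000]))  # approx. [0.0, 0.096, 0.146]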
      Most deep learning-based ASC studies focused on brain imaging, which is less challenging compared to whole-body imaging where the anatomical structures are more complex with juxtapositions of various tissues having diverse attenuation properties and irregular shapes. There is obviously a need to evaluate these algorithms in more challenging heterogeneous regions, such as the chest and abdomen [
      • Shiri I.
      • Arabi H.
      • Geramifar P.
      • Hajianfar G.
      • Ghafarian P.
      • Rahmim A.
      • et al.
      Deep-JASC: joint attenuation and scatter correction in whole-body (18)F-FDG PET using a deep residual network.
      ]. Moreover, the majority of these studies were performed using only one radiotracer (mostly 18F-FDG) which raises questions regarding the generalizability of the models and the need for retraining and reevaluation on other tracers [
      • Arabi H.
      • Bortolin K.
      • Ginovart N.
      • Garibotto V.
      • Zaidi H.
      Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies.
      ]. The size of training and evaluation sets is another limitation of deep learning-based ASC as the performance of these algorithms depends on the training sample. To the best of our knowledge, only two studies, one focusing on brain imaging [
      • Ladefoged C.N.
      • Hansen A.E.
      • Henriksen O.M.
      • Bruun F.J.
      • Eikenes L.
      • Øen S.K.
      • et al.
      AI-driven attenuation correction for brain PET/MRI: clinical evaluation of a dementia cohort and importance of the training group size.
      ] and the other on whole-body imaging [
      • Shiri I.
      • Arabi H.
      • Geramifar P.
      • Hajianfar G.
      • Ghafarian P.
      • Rahmim A.
      • et al.
      Deep-JASC: joint attenuation and scatter correction in whole-body (18)F-FDG PET using a deep residual network.
      ], used a large training dataset. Most deep learning-based ASC studies were performed in PET imaging, with a limited number of works reported for SPECT imaging [
      • Shi L.
      • Onofrey J.A.
      • Liu H.
      • Liu Y.H.
      • Liu C.
      Deep learning-based attenuation map generation for myocardial perfusion SPECT.
      ,
      • Xiang H.
      • Lim H.
      • Fessler J.A.
      • Dewaraja Y.K.
      A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions.
      ].
      Regarding attenuation correction in the image domain, Nguyen et al. proposed a 3D Unet-GAN network that takes 3D patches (90 × 90 × 28 voxels) of non-AC images as input to directly predict attenuation corrected MPI-SPECT images [
      • Nguyen T.T.
      • Chi T.N.
      • Hoang M.D.
      • Thai H.N.
      • Duc T.N.
      ]. The performance of the proposed network was compared with 2D Unet and 3D Unet models, wherein the 3D Unet-GAN model exhibited superior performance with NMAE = 0.034 and mean square error (MSE) = 294.97. Likewise, Mostafapour et al. examined attenuation correction of MPI-SPECT images in the image domain using ResNet and Unet models (in 2D mode). Chang’s attenuation correction method was also implemented to provide a baseline for performance assessment of the deep learning-based AC approaches. The clinical assessment and quantitative analysis demonstrated excellent agreement between deep learning- and CT-based AC with a mean quantification bias of 0.34 ± 5.03% [

      Mostafapour S, Gholamiankhah F, Maroofpour S, Momennezhad M, Asadinezhad M, Zakavi SR, et al. Deep learning-based attenuation correction in the image domain for myocardial perfusion SPECT imaging. arXiv preprint arXiv:210204915. 2021.

      ].

      Image interpretation and decision support

      Image segmentation, registration, and fusion

      Computer-aided tools for the analysis and processing of medical images have been developed to improve the reliability and robustness of the extracted features. Advanced machine-learning techniques are being developed to learn 1) effective similarity features, 2) a common feature representation, or 3) appearance mapping, in order to provide a model that can match large appearance variations [
      • Armanious K.
      • Küstner T.
      • Reimold M.
      • Nikolaou K.
      • La Fougère C.
      • Yang B.
      • et al.
      Independent brain (18)F-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks.
      ,
      • Colmeiro R.R.
      • Verrastro C.
      • Minsky D.
      • Grosges T.
      Whole Body Positron Emission Tomography Attenuation Correction Map Synthesizing using 3D Deep Generative Adversarial Networks.
      ].
      Accurate organ/tumor delineation from molecular images is mainly used in the context of oncological PET imaging studies for quantitative analysis targeting various aspects, including severity scoring, radiation treatment planning, volumetric quantification, radiomic features extraction, etc. However, this is challenging owing to the poor spatial resolution and high statistical noise of molecular images. In current clinical practice, image segmentation is typically performed manually, which tends to be labor-intensive and prone to intra- and inter-observer variability. A number of recent studies explored the potential of DL-based automated tumor segmentation from PET or hybrid PET/CT examinations [
      • Shi L.
      • Onofrey J.A.
      • Revilla E.M.
      • Toyonaga T.
      • Menard D.
      • Ankrah J.
      • et al.
      A novel loss function incorporating imaging acquisition physics for PET attenuation map generation using deep learning.
      ,
      • Bradshaw T.J.
      • Zhao G.
      • Jang H.
      • Liu F.
      • McMillan A.B.
      Feasibility of Deep Learning-Based PET/MR attenuation correction in the pelvis using only diagnostic MR images.
      ]. Zhao et al. used a U-Net architecture for tumor delineation from 18F-FDG PET/CT images within the lung and nasopharyngeal regions [
      • Jang H.
      • Liu F.
      • Zhao G.
      • Bradshaw T.
      • McMillan A.B.
      Technical Note: deep learning based MRAC using rapid ultrashort echo time imaging.
      ,

      Mecheter I, Amira A, Abbod M, Zaidi H. Brain MR Imaging Segmentation Using Convolutional Auto Encoder Network for PET Attenuation Correction. In Proceedings of SAI Intelligent Systems Conference: Springer; 2020. p. 430-40.

      Blanc-Durand et al. demonstrated the feasibility of 18F-fluoro-ethyl-tyrosine (18F-FET) PET lesion segmentation using a CNN model [
      • Gong K.
      • Yang J.
      • Kim K.
      • El Fakhri G.
      • Seo Y.
      • Li Q.
      Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images.
      ]. Leung et al. developed a modular deep learning framework for primary lung tumor segmentation from FDG-PET images using a small clinical training dataset that generalized across different scanners, achieving a Dice index of 0.73. They addressed the limitations arising from the small training dataset, as well as the accuracy and variability of the manual segmentations used as ground truth, by using a realistic simulation dataset [
      • Blanc-Durand P.
      • Khalife M.
      • Sgard B.
      • Kaushik S.
      • Soret M.
      • Tiss A.
      • et al.
      Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: comparison with Atlas, ZTE and CT based attenuation correction.
      ]. Wang et al. proposed a deep learning-assisted method for automated segmentation of the left ventricular region using gated myocardial perfusion SPECT [
      • Spuhler K.D.
      • Gardus 3rd, J.
      • Gao Y.
      • DeLorenzo C.
      • Parsey R.
      • Huang C.
      Synthesis of Patient-Specific Transmission Data for PET Attenuation Correction for PET/MRI neuroimaging using a convolutional neural network.
      ].
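The Dice index quoted above is the standard overlap metric in these segmentation studies; a minimal implementation:

import numpy as np

def dice(pred, ref):
    # Dice similarity coefficient between two binary masks.
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((64, 64), dtype=np.uint8)
pred[20:40, 20:40] = 1
ref = np.zeros((64, 64), dtype=np.uint8)
ref[25:45, 25:45] = 1
print(f"Dice = {dice(pred, ref):.3f}")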
      Roccia et al. used a DL algorithm to predict the arterial input function for quantification of the regional cerebral metabolic rate from dynamic 18F-FDG PET scans [
      • Torrado-Carvajal A.
      • Vera-Olmos J.
      • Izquierdo-Garcia D.
      • Catalano O.A.
      • Morales M.A.
      • Margolin J.
      • et al.
      Dixon-VIBE Deep Learning (DIVIDE) Pseudo-CT Synthesis for Pelvis PET/MR Attenuation Correction.
      ]. Park et al. developed an automated pipeline for glomerular filtration rate (GFR) quantification of 99mTc-DTPA from SPECT/CT scans using a 3D U-Net model through kidney segmentation [
      • Gong K.
      • Han P.K.
      • Johnson K.A.
      • El Fakhri G.
      • Ma C.
      • Li Q.
      Attenuation correction using deep Learning and integrated UTE/multi-echo Dixon sequence: evaluation in amyloid and tau PET imaging.
      ].

      AI-assisted diagnosis and prognosis

      AI algorithms have been employed to build models exploiting the information extracted from medical images to perform a specific clinical task, e.g. object detection/classification, severity scoring, clinical outcome prediction, treatment planning, and monitoring response to therapy [
      • Visvikis D.
      • Le Rest C.C.
      • Jaouen V.
      • Hatt M.
      Artificial intelligence, machine (deep) learning and radio (geno) mics: definitions and nuclear medicine imaging applications.
      ]. Numerous works reported on automated detection and classification of various pathologies (e.g. malignant vs. benign) in nuclear medicine [
      • Wang H.
      • Zhou Z.
      • Li Y.
      • Chen Z.
      • Lu P.
      • Wang W.
      • et al.
      Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18 F-FDG PET/CT images.
      ]. For benign diseases, cardiovascular SPECT and brain PET imaging were the main focus of AI applications [
      • Seifert R.
      • Weber M.
      • Kocakavuk E.
      • Rischpler C.
      • Kersting D.
      AI and machine learning in nuclear medicine: future perspectives.
      ]. Xu et al. developed an automated pipeline using two cascaded V-NETs for lesion prediction and segmentation to detect multiple myeloma bone lesions from 68Ga-Pentixafor PET/CT [
      • Xu L.
      • Tetteh G.
      • Lipkova J.
      • Zhao Y.
      • Li H.
      • Christ P.
      • et al.
      Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods.
      ]. Togo et al. demonstrated the feasibility of cardiac sarcoidosis detection from 18F-FDG PET scans using the Inception-v3 network (83.9% sensitivity and 87% specificity), which outperformed conventional SUVmax- (46.8% sensitivity and 71.0% specificity) and coefficient of variation (CoV)-based (65.5% sensitivity and 75.0% specificity) approaches [
      • Togo R.
      • Hirata K.
      • Manabe O.
      • Ohira H.
      • Tsujino I.
      • Magota K.
      • et al.
      Cardiac sarcoidosis classification with deep convolutional neural network-based features using polar maps.
      ]. Ma et al. modified a DenseNet architecture to classify thyroid disease from SPECT images into three categories: Graves' disease, Hashimoto's thyroiditis, and subacute thyroiditis [
      • Ma L.
      • Ma C.
      • Liu Y.
      • Wang X.
      Thyroid diagnosis from SPECT images using convolutional neural network with optimization.
      ].
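Such classification studies typically fine-tune an ImageNet-style backbone on the nuclear medicine images. The transfer-learning sketch below replaces the final layer of a DenseNet-121 with a 3-class head, mirroring the three thyroid categories above; the backbone choice, input handling, and head are illustrative assumptions, not the cited configurations.

import torch
import torch.nn as nn
from torchvision import models

# weights=None keeps the example self-contained; in practice the ImageNet-
# pretrained weights (e.g. weights="IMAGENET1K_V1") would be loaded.
net = models.densenet121(weights=None)
net.classifier = nn.Linear(net.classifier.in_features, 3)  # 3 disease classes

x = torch.randn(2, 3, 224, 224)  # SPECT images replicated to 3 channels
probs = torch.softmax(net(x), dim=1)
print(probs.shape)               # torch.Size([2, 3])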
      18F-FDG PET is extensively used as a diagnostic tool in neurodegenerative disorders, especially Alzheimer's disease (AD), to improve diagnosis and monitor disease progression. The role of AI in AD diagnosis has been recently reviewed by Duffy et al. [

      Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Molecular imaging. 2019;18:1536012119869070.

      ]. Lu et al. developed an AI-based framework for the early diagnosis of AD using multimodal 18F-FDG PET/MR images and a multiscale deep neural network (82.4% accuracy and 94.23% sensitivity) [
      • Lu D.
      • Popuri K.
      • Ding G.W.
      • Balachandar R.
      • Beg M.F.
      Multimodal and multiscale deep neural networks for the early diagnosis of Alzheimer’s disease using structural MR and FDG-PET images.
      ]. Choi and Jin proposed a straightforward deep learning algorithm based on only 18F-FDG PET images for early detection of AD (84.2% accuracy) that outperformed conventional feature-based quantification approaches, e.g. Support-Vector-Machine (76.0% accuracy) and VOI-based (75.4% accuracy) techniques [
      • Choi H.
      • Jin K.H.
      • Alzheimer's Disease Neuroimaging Initiative
      Predicting cognitive decline with deep learning of brain metabolism and amyloid imaging.
      ]. Machine learning algorithms have shown promising results in the classification of AD using brain PET images. Liu et al. proposed a classification algorithm of FDG PET images composed of 2D CNNs and recurrent neural networks (RNNs) [
      • Liu M.
      • Cheng D.
      • Yan W.
      Alzheimer's Disease Neuroimaging Initiative. Classification of Alzheimer's disease by combination of convolutional and recurrent neural networks using FDG-PET images.
      ]. The CNN model extracted features in 2D, while the RNN aggregated them across slices in 3D (95.3% accuracy for AD vs. controls and 83.9% for mild impairment vs. controls). In a follow-up work, they proposed a cascaded CNN model to learn multi-level features from multimodal PET/MRI images. First, a patch-based 3D CNN was constructed; then, a high-level 2D CNN followed by a softmax layer was trained to collect the high-level features. Finally, all features were concatenated and fed to a softmax layer for AD classification [
      • Liu M.
      • Cheng D.
      • Wang K.
      • Wang Y.
      Alzheimer's Disease Neuroimaging Initiative. Multi-modality cascaded convolutional neural networks for Alzheimer's disease diagnosis.
      ]. The flexibility of AI algorithms enables learning characteristics from heterogeneous data that carry meaningful correlations not obvious to the human interpreter. Zhou et al. developed a deep learning model for AD diagnosis using genetic input data (e.g. single-nucleotide polymorphisms) in addition to radiological brain images, which improved classification performance relative to other state-of-the-art methods [
      • Zhou T.
      • Thung K.H.
      • Zhu X.
      • Shen D.
      Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis.
      ].
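The 2D-CNN-plus-RNN design described above can be sketched as follows: a small CNN embeds each 2D slice, and a GRU aggregates the slice sequence into a subject-level prediction (all sizes are illustrative assumptions, not the cited architecture).

import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    # Embeds one 2D PET slice into a feature vector.
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

cnn = SliceCNN()
rnn = nn.GRU(input_size=64, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)  # AD vs. control

vol = torch.randn(1, 48, 1, 96, 96)  # 48 axial slices of one subject
emb = torch.stack([cnn(vol[:, i]) for i in range(vol.shape[1])], dim=1)
_, h = rnn(emb)                       # final hidden state summarizes the volume
print(torch.softmax(head(h[-1]), dim=1))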
      Betancur et al. trained a deep learning model on a large multi-center clinical database to evaluate automated per-vessel prediction of obstructive coronary artery disease from MPI-SPECT [
      • Betancur J.
      • Commandeur F.
      • Motlagh M.
      • Sharir T.
      • Einstein A.J.
      • Bokhari S.
      • et al.
      Deep learning for prediction of obstructive disease from fast myocardial perfusion SPECT: a multicenter study.
      ]. The effectiveness of the proposed method was compared with the total perfusion deficit (TPD) index. Overall, 1638 patients underwent 99mTc-sestamibi or tetrofosmin MPI-SPECT scans at 9 different sites. The diagnosis based on invasive coronary angiography examinations was considered the reference. The deep learning model yielded a higher area under the receiver-operating-characteristic curve for prediction of obstructive disease compared to TPD for both patient-wise (0.80 vs. 0.78) and vessel-wise (0.76 vs. 0.73) analyses. Wang et al. developed a convolutional neural network for left ventricular functional assessment from gated MPI-SPECT images to circumvent the tedious and subjective task of manual segmentation/adjustment and measurement [
      • Wang T.
      • Lei Y.
      • Tang H.
      • He Z.
      • Castillo R.
      • Wang C.
      • et al.
      A learning-based automatic segmentation and quantification method on left ventricle in gated myocardial perfusion SPECT imaging: a feasibility study.
      ]. The evaluation on 56 normal and abnormal patients yielded a left ventricular volume correlation coefficient of 0.910 ± 0.061 between AI- and physician-based analysis. The diagnosis of Parkinson's disease from brain SPECT scans using deep learning approaches was investigated in [
      • Mohammed F.
      • He X.
      • Lin Y.
      An easy-to-use deep-learning model for highly accurate diagnosis of Parkinson's disease using SPECT images.
      ], wherein 2723 patients from healthy and Parkinson's disease groups were examined. The deep learning approach demonstrated outstanding performance with a sensitivity of 99.04%, specificity of 99.63%, and accuracy of 99.34%, suggesting remarkable potential for the diagnosis and management of Parkinson's disease.
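For reference, the sensitivity, specificity, and accuracy figures quoted throughout this section derive directly from the binary confusion matrix:

import numpy as np

def diagnostic_metrics(y_true, y_pred):
    # Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), accuracy = (TP+TN)/N.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / y_true.size,
    }

print(diagnostic_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))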

      Radiomics and precision medicine

      Radiomics refers to a quantitative set of features, e.g. intensity, texture, and geometrical characteristics obtained from radiological images to discriminate quantifiable phenotypes that cannot be extracted through qualitative assessment of images. A radiomics model is commonly built through 4 steps: i) image acquisition/reconstruction; ii) VOI segmentation; iii) quantification/hand-crafted feature extraction; iv) statistical analysis [
      • Afshar P.
      • Mohammadi A.
      • Plataniotis K.N.
      • Oikonomou A.
      • Benali H.
      From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities.
      ]. While data-driven deep learning approaches are different from feature-driven approaches, deep learning has the ability to directly learn discriminative features from data in their natural raw form without the necessity to define VOIs or extract engineered features [
      • Noortman W.A.
      • Vriens D.
      • Grootjans W.
      • Tao Q.
      • de Geus-Oei L.-F.
      Van Velden FH. Nuclear medicine radiomics in precision medicine: why we can’t do without artificial intelligence.
      ].
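Step iii of this pipeline, hand-crafted feature extraction, can be illustrated with a few first-order features computed over a VOI (a minimal sketch, not a standardized, e.g. IBSI-compliant, implementation):

import numpy as np

def first_order_features(image, mask):
    # First-order statistics of the intensities inside a binary VOI mask.
    voi = image[mask > 0].astype(np.float64)
    hist, _ = np.histogram(voi, bins=64)
    p = hist[hist > 0] / hist.sum()
    return {
        "mean": voi.mean(),
        "std": voi.std(),
        "skewness": ((voi - voi.mean()) ** 3).mean() / voi.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),
    }

img = np.random.gamma(2.0, 1.5, size=(32, 32, 32))  # stand-in SUV volume
voi = np.zeros_like(img)
voi[10:20, 10:20, 10:20] = 1
print(first_order_features(img, voi))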
      SPECT and PET images represent biological and physiopathological characteristics that can be quantitatively expressed using radiomics. Most studies focused on 18F-FDG PET images for prognosis (staging) or outcome prediction using handcrafted radiomics [
      • Xu W.
      Predictive power of a radiomic signature based on 18F-FDG PET/CT images for EGFR mutational status in NSCLC.
      ,
      • Shiri I.
      • Maleki H.
      • Hajianfar G.
      • Abdollahi H.
      • Ashrafinia S.
      • Hatt M.
      • et al.
      Next-generation radiogenomics sequencing for prediction of EGFR and KRAS mutation status in NSCLC patients using multimodal imaging and machine learning algorithms.
      ,
      • Edalat-Javid M.
      • Shiri I.
      • Hajianfar G.
      • Abdollahi H.
      • Arabi H.
      • Oveisi N.
      • et al.
      Cardiac SPECT radiomic features repeatability and reproducibility: A multi-scanner phantom study.
      ]. Delta radiomics, as a metric for treatment outcome, has been developed based on multiple time-point images [
      • Fave X.
      • Zhang L.
      • Yang J.
      • Mackin D.
      • Balter P.
      • Gomez D.
      • et al.
      Delta-radiomics features for the prediction of patient outcomes in non–small cell lung cancer.
      ]. Some studies investigated the advantage of using hybrid images, e.g. PET/CT and PET/MR [
      • Dissaux G.
      • Visvikis D.
      • Da-Ano R.
      • Pradier O.
      • Chajon E.
      • Barillot I.
      • et al.
      Pretreatment 18F-FDG PET/CT radiomics predict local recurrence in patients treated with stereotactic body radiotherapy for early-stage non-small cell lung cancer: a multicentric study.
      ], extending the feature extraction to non-primary tumor volumes, such as bone marrow and metastatic lymph nodes [
      • Mattonen S.A.
      • Davidzon G.A.
      • Benson J.
      • Leung A.N.
      • Vasanawala M.
      • Horng G.
      • et al.
      Bone Marrow and tumor radiomics at 18F-FDG PET/CT: impact on outcome prediction in non-small cell lung cancer.
      ], and deriving features from parametric PET images [
      • Tixier F.
      • Vriens D.
      • Cheze-Le Rest C.
      • Hatt M.
      • Disselhorst J.A.
      • Oyen W.J.
      • et al.
      Comparison of tumor uptake heterogeneity characterization between static and parametric 18F-FDG PET images in non-small cell lung cancer.
      ]. Application of radiomics in SPECT has also been recently investigated by Ashrafinia et al. for prediction of coronary artery calcification in 99mTc-sestamibi myocardial perfusion SPECT scans [
      • Ashrafinia S.
      • Dalaie P.
      • Yan R.
      • Ghazi P.
      • Marcus C.
      • Taghipour M.
      • et al.
      Radiomics analysis of clinical myocardial perfusion SPECT to predict coronary artery calcification.
      ]. Rahmim et al. evaluated the extraction of radiomic features from longitudinal Dopamine transporter (DAT) SPECT images for outcome prediction in Parkinson’s disease [
      • Rahmim A.
      • Huang P.
      • Shenkov N.
      • Fotouhi S.
      • Davoodi-Bojd E.
      • Lu L.
      • et al.
      Improved prediction of outcome in Parkinson's disease using radiomics analysis of longitudinal DAT SPECT images.
      ]. DL-based radiomics was compared with feature-driven methods to highlight the advantages of CNNs over handcrafted radiomics for predicting response to chemotherapy in oesophageal cancer [
      • Ypsilantis P.-P.
      • Siddique M.
      • Sohn H.-M.
      • Davies A.
      • Cook G.
      • Goh V.
      • et al.
      Predicting response to neoadjuvant chemotherapy with PET imaging using convolutional neural networks.
      ]. Wang et al. reported that CNNs did not outperform traditional radiomics in the classification of mediastinal lymph nodes in non-small cell lung cancer. Yet, the CNN approach was preferred since it is more user-friendly, requires less data handling, and is less prone to feature-selection bias [
      • Wang H.
      • Zhou Z.
      • Li Y.
      • Chen Z.
      • Lu P.
      • Wang W.
      • et al.
      Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18 F-FDG PET/CT images.
      ].

      Internal radiation dosimetry

      AI has significantly impacted other fields of nuclear medicine through the development of methods for radiation dose monitoring, dose reduction strategies, theranostic decision trees, and dose limit compliance. In the era of precision medicine, personalized dosimetry is increasingly used in nuclear medicine. Targeted radionuclide therapy (TRT) has recently merged with the concept of theranostics, a promising technique in radiation oncology. Despite the growing interest in dosimetry-guided patient-specific TRT, the one-size-fits-all approach is still used in routine clinical practice. In the context of individualized dose profiling, the construction of patient-specific computational models is the first step toward this goal [
      • Li T.
      • Ao E.C.
      • Lambert B.
      • Brans B.
      • Vandenberghe S.
      • Mok G.S.
      Quantitative imaging for targeted radionuclide therapy dosimetry-technical review.
      ]. Numerous works focused on the development of pipelines for the construction of patient-specific computational models applicable in personalized dosimetry in either therapy or diagnostic procedures [
      • Peng Z.
      • Fang X.
      • Yan P.
      • Shan H.
      • Liu T.
      • Pei X.
      • et al.
      A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing.
      ,
      • Xie T.
      • Zaidi H.
      Estimation of the radiation dose in pregnancy: an automated patient-specific model using convolutional neural networks.
      ,
      • Schoppe O.
      • Pan C.
      • Coronel J.
      • Mai H.
      • Rong Z.
      • Todorov M.I.
      • et al.
      Deep learning-enabled multi-organ segmentation in whole-body mouse scans.
      ]. Fu et al. developed a framework for automated generation of computational phantoms from CT images [

      Fu W, Sharma S, Abadi E, Iliopoulos A-S, Wang Q, Lo JY, et al. iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry. arXiv preprint arXiv:200808730. 2020.

      ]. They used cascaded modules consisting of i) registration of patient CT images to an anchor phantom, ii) segmentation of organs using a U-Net structure, and iii) registration of segmented organs inside the deformed anchor phantom to generate an individualized computational model applicable to personalized dosimetry in both diagnostic and therapeutic procedures. In addition, the automatic segmentation of organs at risk for various TRT application sites has been extensively studied. Jackson et al. developed a framework for automated monitoring of the absorbed dose in the kidneys of patients undergoing 177Lu-PSMA therapy [
      • Jackson P.
      • Hardcastle N.
      • Dawe N.
      • Kron T.
      • Hofman M.S.
      • Hicks R.J.
      Deep learning renal segmentation for fully automated radiation dose estimation in unsealed source therapy.
      ]. They used a 3D CNN architecture for kidney segmentation to provide organ-level dosimetry and estimate renal radiation doses from post-treatment SPECT imaging. Tang et al. proposed a CNN-based algorithm for liver segmentation for personalized selective internal radiation therapy [
      • Tang X.
      • Jafargholi Rangraz E.
      • Coudyzer W.
      • Bertels J.
      • Robben D.
      • Schramm G.
      • et al.
      Whole liver segmentation based on deep learning and manual adjustment for clinical use in SIRT.
      ]. Kidney segmentation has been conducted using a 3D U-Net architecture on 177Lu SPECT images for uptake quantification and dosimetry [
      • Ryden T.
      • van Essen M.
      • Svensson J.
      • Bernhardt P.
      Deep learning-based SPECT/CT quantification of 177Lu uptake in the kidneys.
      ].
      Monte Carlo (MC) simulations using patient-specific anatomical and metabolic features constitute the current gold standard for internal dosimetry calculations. However, this approach suffers from an excessive computational burden. Recently, deep learning approaches have been employed in patient-specific dosimetry for monitoring or treatment plan optimization using molecular images (SPECT and PET). Akhavanallaf et al. developed an AI-based framework built on a ResNet architecture for personalized dosimetry in nuclear medicine procedures [
      • Akhavanallaf A.
      • Shiri I.
      • Arabi H.
      • Zaidi H.
      Whole-body voxel-based internal dosimetry using deep learning.
      ]. They extended the key idea behind the voxel-based MIRD (Medical Internal Radiation Dose) approach through the prediction of specific S-value kernels from the density map derived from CT images, which are then combined with the cumulated activity map to compute the dose distribution (Fig. 5). A physics-informed deep neural network (DNN) was designed to predict the energy deposited in the volume surrounding a unit radioactive source at the center of the kernel. The input channel was fed with a density map, whereas the output was the MC-based deposited energy map of the given radiotracer, referred to as the specific S-value kernel. Lee et al. proposed a methodology employing deep learning for the direct generation of dose rate maps from 18F-FDG PET/CT images [
      • Lee M.S.
      • Hwang D.
      • Kim J.H.
      • Lee J.S.
      Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry.
      ]. Götz et al. used a modified U-Net for dose map reconstruction in patients receiving 177Lu-PSMA [
      • Götz T.I.
      • Schmidkonz C.
      • Chen S.
      • Al-Baddai S.
      • Kuwert T.
      • Lang E.
      A deep learning approach to radiation dose estimation.
      ]. They further extended their work for patient-specific dosimetry of 177Lu compounds by predicting specific dose voxel kernels using AI algorithms [
      • Götz T.I.
      • Lang E.
      • Schmidkonz C.
      • Kuwert T.
      • Ludwig B.
      Dose voxel kernel prediction with neural networks for radiation dose estimation.
      ]. Xue et al. developed a GAN model to predict post-therapy dosimetry for 177Lu-PSMA therapy using pre-therapy 68Ga-PSMA PET/CT examinations [
      • Xue S.
      • Gafita A.
      • Afshar-Oromieh A.
      • Eiber M.
      • Rominger A.
      • Shi K.
      Voxel-wise Prediction of Post-therapy Dosimetry for 177Lu-PSMA I&T Therapy using Deep Learning.
      ].
      Fig. 5. Schematic representation of the voxel-scale dosimetry procedure. The top and bottom panels show the deep learning-based specific S-value kernel prediction and the MIRD-based voxel dosimetry formalism, respectively. Adapted from Ref.
      [
      • Akhavanallaf A.
      • Shiri I.
      • Arabi H.
      • Zaidi H.
      Whole-body voxel-based internal dosimetry using deep learning.
      ]
      .
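In the voxel-based MIRD formalism sketched in Fig. 5, the absorbed dose map is the 3D convolution of the time-integrated (cumulated) activity map with a voxel S-value kernel. The sketch below uses synthetic arrays and a toy distance-decaying kernel; in the cited work, the kernel is instead predicted per patient from the CT-derived density map.

import numpy as np
from scipy.signal import fftconvolve

cumulated_activity = np.random.rand(64, 64, 64)  # stand-in, MBq*s per voxel

# Toy S-value kernel (dose per unit cumulated activity), decaying with the
# squared distance from the unit source at the kernel center.
zz, yy, xx = np.indices((9, 9, 9)) - 4
kernel = 1.0 / (1.0 + zz**2 + yy**2 + xx**2)
kernel /= kernel.sum()

dose_map = fftconvolve(cumulated_activity, kernel, mode="same")  # absorbed dose map
print(dose_map.shape, float(dose_map.mean()))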
      Despite the substantial growth and widespread adoption of TRT, the "one-size-fits-all" approach is still commonly used in the clinic. Del Prete et al. reported that in TRT, organs at risk rarely reach the conservative threshold dose while most tumors receive submaximal doses, leading to undertreatment of patients [
      • Del Prete M.
      • Buteau F.-A.
      • Beauregard J.-M.
      Personalized 177 Lu-octreotate peptide receptor radionuclide therapy of neuroendocrine tumours: a simulation study.
      Therefore, retrospective studies involving patients receiving TRT would allow evaluation of the treatment response to the one-dose-fits-all approach and demonstrate the critical nature of the transition to adaptive dosimetry-guided treatment planning. Such a transition requires a tool incorporating a module for automatic segmentation of tumors/organs at risk along with a fast and accurate personalized dosimetry module.

      Challenges/opportunities and outlook

      Over the past decade, there have been significant advances in deep learning-assisted developments that have impacted modern healthcare. The potential of AI-based solutions in various molecular imaging applications has been thoroughly explored in academic and corporate settings. This article may, therefore, be viewed as an early album covering some of the many and varied snapshots of this rapidly growing field. At this time, these tools are still available only to experts in the field, but there are many reasons to believe that they will become available for routine use in the near future.
      The proposed AI-based solutions in PET and SPECT imaging can be divided into two groups: (i) techniques proposed solely to replace current algorithms/frameworks owing to their superior performance, and (ii) approaches that have made scenarios/frameworks feasible that were previously impractical with conventional methods. In the first category, the promise of deep learning approaches lies in providing even slightly better functionality/performance than existing methods rather than delivering previously inconceivable functionality. For example, in PET instrumentation, Anger logic is used to determine the location of the interaction within the detector modules. Novel deep learning-based approaches tend to simply replace the Anger logic to achieve better localization and energy resolution. In this regard, novel deep learning approaches play the same role as, and compete with, existing methods.
      Likewise, in MRI-guided synthetic CT generation, deep learning approaches serve as an alternative to atlas- or MRI segmentation-based techniques, whereas in the domain of noise reduction, current analytical models/algorithms are being replaced by deep learning methods. In this regard, the proposed deep learning methods would not radically alter current frameworks or produce a paradigm shift, though they hold the promise of providing more accurate outcomes, requiring less human intervention, and adapting easily to new input data. In this light, this category of AI-based solutions is more likely to be fully employed in clinical practice or on commercial systems, since less standardization, protocol and framework redefinition, and staff retraining are required. For instance, deep learning-guided CT image reconstruction developed by GE Medical Systems obtained FDA approval [

      FDA. 510k Premarket Notification of Deep Learning Image Reconstruction (GE Medical Systems). 2019.

      ].
      Conversely, the extraordinary power of deep learning approaches has rendered many previously impractical/infeasible scenarios/frameworks feasible. This includes tasks such as attenuation and scatter correction in the image domain, estimation of synthetic CT images from non-attenuation-corrected emission images, object completion of truncated data, image translation, and internal dosimetry. These processes are inherently ill-posed and, in many cases, lack an associated mathematical framework. Such AI-based solutions, though offering unprecedented opportunities in PET and SPECT imaging, face serious challenges with respect to their deployment in clinical practice, as they require extensive validation using large clinical databases and a wide range of conditions.
      Overall, a clear distinction should be made between the application of AI-based solutions as processing or decision support tools and their use as replacements for experts or clinicians in clinical practice. Considering the superior performance of deep learning approaches, some algorithms are sufficiently mature and robust to be deployed in clinical practice as decision support tools. These algorithms are expected to replace conventional methods owing to their superior performance or robustness; in this regard, any possible failure of the AI-based solution would be treated in a similar way to failures of existing approaches. Conversely, AI-based solutions intended to fully replace experts remain science fiction; such algorithms require substantial further development and evolution before they can be employed independently in the clinical setting. Nevertheless, these algorithms could play a significant role in the short run as decision support tools that create a synergy between the capabilities of AI and human expertise.
      It is gratifying to survey the progress that AI has made, from early developments in neural networks to complex deep learning architectures and, more recently, towards continuous-learning AI in radiology [
      • Song T.-A.
      • Chowdhury S.R.
      • Yang F.
      • Dutta J.
      Super-resolution PET imaging using convolutional neural networks.
      ]. Challenges remain, particularly in the areas of clinical validation, liability, and ethical and legal aspects, along with a number of other issues that need to be settled before wider adoption [
      • Perez-Liva M.
      • Yoganathan T.
      • Herraiz J.L.
      • Poree J.
      • Tanter M.
      • Balvay D.
      • et al.
      Ultrafast ultrasound imaging for super-resolution preclinical cardiac PET.
      ].

      Acknowledgments

      This work was supported by the Swiss National Science Foundation under grant SNRF 320030_176052 and the Private Foundation of Geneva University Hospitals under grant RC-06-01.

      References

        • Nensa F.
        • Demircioglu A.
        • Rischpler C.
        Artificial intelligence in nuclear medicine.
        J Nucl Med. 2019; 60: 29S-37S
        • Arabi H.
        • Zaidi H.
        Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy.
        Eur J Hybrid Imaging. 2020; 4: 1-23
        • Gong K.
        • Berg E.
        • Cherry S.R.
        • Qi J.
        Machine learning in PET: from photon detection to quantitative image reconstruction.
        Proc IEEE. 2020; 108: 51-68
        • Alpaydin E.
        Introduction to machine learning.
        MIT Press, 2020
        • LeCun Y.
        • Bengio Y.
        • Hinton G.
        Deep learning.
        Nature. 2015; 521: 436-444
        • Wang T.
        • Lei Y.
        • Fu Y.
        • Curran W.J.
        • Liu T.
        • Nye J.A.
        • et al.
        Machine learning in quantitative PET: a review of attenuation correction and low-count image reconstruction methods.
        Phys Med. 2020; 76: 294-306
        • Lee G.
        • Fujita H.
        Deep learning in medical image analysis: challenges and applications.
        Springer, 2020
        • Masci J.
        • Meier U.
        • Cireşan D.
        • Schmidhuber J.
        Stacked convolutional auto-encoders for hierarchical feature extraction.
        in: International Conference on Artificial Neural Networks. Springer, 2011: 52-59
        • Zaidi H.
        • El Naqa I.
        Quantitative molecular positron emission tomography imaging using advanced deep learning techniques.
        Annu Rev Biomed Eng. 2021; 23 (in press)https://doi.org/10.1146/annurev-bioeng-082420-020343
        • Altaf F.
        • Islam S.M.
        • Akhtar N.
        • Janjua N.K.
        Going deep in medical image analysis: concepts, methods, challenges, and future directions.
        IEEE Access. 2019; 7: 99540-99572
        • Ronneberger O.
        • Fischer P.
        • Brox T.
        U-Net: Convolutional networks for biomedical image segmentation.
        in: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015: 234-241
      1. Oktay O, Schlemper J, Folgoc LL, Lee M, Heinrich M, Misawa K, et al. Attention u-net: Learning where to look for the pancreas. arXiv preprint arXiv:180403999. 2018.

        • Diakogiannis F.I.
        • Waldner F.
        • Caccetta P.
        • Wu C.
        Resunet-a: a deep learning framework for semantic segmentation of remotely sensed data.
        ISPRS J Photogramm Remote Sens. 2020; 162: 94-114
      2. Isola P, Zhu J-Y, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 1125-34.

      3. Zhu J-Y, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision; 2017. p. 2223-32.

        • Müller F.
        • Schug D.
        • Hallen P.
        • Grahe J.
        • Schulz V.
        A novel DOI positioning algorithm for monolithic scintillator crystals in PET based on gradient tree boosting.
        IEEE Trans Radiat Plasma Med Sci. 2018; 3: 465-474
        • Peng P.
        • Judenhofer M.S.
        • Jones A.Q.
        • Cherry S.R.
        Compton PET: a simulation study for a PET module with novel geometry and machine learning for position decoding.
        Biomed Phys Eng Express. 2018; 5: 015018
        • Sanaat A.
        • Zaidi H.
        Depth of interaction estimation in a preclinical PET scanner equipped with monolithic crystals coupled to SiPMs using a deep neural network.
        Appl Sci. 2020; 10: 4753
        • Müller F.
        • Schug D.
        • Hallen P.
        • Grahe J.
        • Schulz V.
        Gradient tree boosting-based positioning method for monolithic scintillator crystals in positron emission tomography.
        IEEE Trans Radiat Plasma Med Sci. 2018; 2: 411-421
        • Hashimoto F.
        • Ote K.
        • Ota R.
        • Hasegawa T.
        A feasibility study on 3D interaction position estimation using deep neural network in Cherenkov-based detector: a Monte Carlo simulation study.
        Biomed Phys Eng Express. 2019; 5: 035001
        • Berg E.
        • Cherry S.R.
        Using convolutional neural networks to estimate time-of-flight from PET detector waveforms.
        Phys Med Biol. 2018; 63: 02LT01
      4. Gladen R, Chirayath V, Fairchild A, Manry M, Koymen A, Weiss A. Efficient Machine Learning Approach for Optimizing the Timing Resolution of a High Purity Germanium Detector. arXiv preprint arXiv:200400008. 2020.

        • Shawahna A.
        • Sait S.M.
        • El-Maleh A.
        FPGA-based accelerators of deep learning networks for learning and classification: a review.
        IEEE Access. 2018; 7: 7823-7859
        • Shiri I.
        • Akhavanallaf A.
        • Sanaat A.
        • Salimi Y.
        • Askari D.
        • Mansouri Z.
        • et al.
        Ultra-low-dose chest CT imaging of COVID-19 patients using a deep residual neural network.
        Eur Radiol. 2020; 1–12
        • Häggström I.
        • Schmidtlein C.R.
        • Campanella G.
        • Fuchs T.J.
        DeepPET: A deep encoder–decoder network for directly solving the PET image reconstruction inverse problem.
        Med Image Anal. 2019; 54: 253-262
        • Zhu B.
        • Liu J.Z.
        • Cauley S.F.
        • Rosen B.R.
        • Rosen M.S.
        Image reconstruction by domain-transform manifold learning.
        Nature. 2018; 555: 487-492
        • Ravishankar S.
        • Ye J.C.
        • Fessler J.A.
        Image reconstruction: from sparsity to data-adaptive methods and machine learning.
        Proc IEEE. 2019; 108: 86-109
        • Reader A.
        • Corda G.
        • Mehranian A.
        • da Costa-Luis C.
        • Ellis S.
        • Schnabel J.A.
        Deep learning for PET image reconstruction.
        IEEE Trans Radiat Plasma Med Sci. 2021; 5: 1-25
      5. FDA. 510k Premarket Notification of AiCE Deep Learning Reconstruction (Canon). 2019.

      6. FDA. 510k Premarket Notification of Deep Learning Image Reconstruction (GE Medical Systems). 2019.

        • Whiteley W.
        • Panin V.
        • Zhou C.
        • Cabello J.
        • Bharkhada D.
        • Gregor J.
        FastPET: near real-time reconstruction of PET histo-image data using a neural network.
        IEEE Trans Radiat Plasma Med Sci. 2021; 5: 65-77
        • Arabi H.
        • Zaidi H.
        Non-local mean denoising using multiple PET reconstructions.
        Ann Nucl Med. 2021; 35: 176-186
        • Arabi H.
        • Zaidi H.
        Improvement of image quality in PET using post-reconstruction hybrid spatial-frequency domain filtering.
        Phys Med Biol. 2018; 63: 215010
        • Arabi H.
        • Zaidi H.
        Spatially guided nonlocal mean approach for denoising of PET images.
        Med Phys. 2020; 47: 1656-1669
        • Chan C.
        • Fulton R.
        • Barnett R.
        • Feng D.D.
        • Meikle S.
        Postreconstruction nonlocal means filtering of whole-body PET with an anatomical prior.
        IEEE Trans Med Imaging. 2014; 33: 636-650
        • Reader A.J.
        • Zaidi H.
        Advances in PET image reconstruction.
        PET Clinics. 2007; 2: 173-190
        • Yan J.
        • Lim J.-C.-S.
        • Townsend D.W.
        MRI-guided brain PET image filtering and partial volume correction.
        Phys Med Biol. 2015; 60: 961
        • Wang Y.
        • Ma G.
        • An L.
        • Shi F.
        • Zhang P.
        • Lalush D.S.
        • et al.
        Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
        IEEE Trans Biomed Eng. 2016; 64: 569-579
        • An L.
        • Zhang P.
        • Adeli E.
        • Wang Y.
        • Ma G.
        • Shi F.
        • et al.
        Multi-level canonical correlation analysis for standard-dose PET image estimation.
        IEEE Trans Image Process. 2016; 25: 3303-3315
        • Zhang W.
        • Gao J.
        • Yang Y.
        • Liang D.
        • Liu X.
        • Zheng H.
        • et al.
        Image reconstruction for positron emission tomography based on patch-based regularization and dictionary learning.
        Med Phys. 2019; 46: 5014-5026
        • Bland J.
        • Mehranian A.
        • Belzunce M.A.
        • Ellis S.
        • McGinnity C.J.
        • Hammers A.
        • et al.
        MR-guided kernel EM reconstruction for reduced dose PET imaging.
        IEEE Trans Radiat Plasma Med Sci. 2017; 2: 235-243
        • Litjens G.
        • Kooi T.
        • Bejnordi B.E.
        • Setio A.A.A.
        • Ciompi F.
        • Ghafoorian M.
        • et al.
        A survey on deep learning in medical image analysis.
        Med Image Anal. 2017; 42: 60-88
        • Chen K.T.
        • Gong E.
        • de Carvalho Macruz F.B.
        • Xu J.
        • Boumis A.
        • Khalighi M.
        • et al.
        Ultra-low-dose (18)F-Florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs.
        Radiology. 2019; 290: 649-656
        • Xiang L.
        • Qiao Y.
        • Nie D.
        • An L.
        • Wang Q.
        • Shen D.
        Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI.
        Neurocomputing. 2017; 267: 406-416
        • Wang Y.
        • Yu B.
        • Wang L.
        • Zu C.
        • Lalush D.S.
        • Lin W.
        • et al.
        3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
        Neuroimage. 2018; 174: 550-562
        • Ouyang J.
        • Chen K.T.
        • Gong E.
        • Pauly J.
        • Zaharchuk G.
        Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss.
        Med Phys. 2019; 46: 3555-3564
        • Song T.-A.
        • Chowdhury S.R.
        • Yang F.
        • Dutta J.
        Super-resolution PET imaging using convolutional neural networks.
        IEEE Trans Comput Imaging. 2020; 6: 518-528
        • Sanaat A.
        • Arabi H.
        • Mainta I.
        • Garibotto V.
        • Zaidi H.
        Projection-space implementation of deep learning-guided low-dose brain PET imaging improves performance over implementation in image-space.
        J Nucl Med. 2020; 61: 1388-1396
      7. Xu J, Gong E, Pauly J, Zaharchuk G. 200x Low-dose PET Reconstruction using Deep Learning. arXiv preprint arXiv:1712.04119. 2017.

        • Liu C.C.
        • Qi J.
        Higher SNR PET image prediction using a deep learning model and MRI image.
        Phys Med Biol. 2019; 64: 115004
        • Cui J.
        • Gong K.
        • Guo N.
        • Wu C.
        • Meng X.
        • Kim K.
        • et al.
        PET image denoising using unsupervised deep learning.
        Eur J Nucl Med Mol Imaging. 2019; 46: 2780-2789
        • Lu W.
        • Onofrey J.A.
        • Lu Y.
        • Shi L.
        • Ma T.
        • Liu Y.
        • et al.
        An investigation of quantitative accuracy for deep learning based denoising in oncological PET.
        Phys Med Biol. 2019; 64: 165019
        • Kaplan S.
        • Zhu Y.-M.
        Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study.
        J Digit Imaging. 2019; 32: 773-778
        • Zhou L.
        • Schaefferkoetter J.D.
        • Tham I.W.
        • Huang G.
        • Yan J.
        Supervised learning with CycleGAN for low-dose FDG PET image denoising.
        Med Image Anal. 2020; 101770
        • Lei Y.
        • Dong X.
        • Wang T.
        • Higgins K.
        • Liu T.
        • Curran W.J.
        • et al.
        Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks.
        Phys Med Biol. 2019; 64: 215017
        • Dong X.
        • Lei Y.
        • Wang T.
        • Higgins K.
        • Liu T.
        • Curran W.J.
        • et al.
        Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging.
        Phys Med Biol. 2020; 65: 055011
      8. Lu S, Tan J, Gao Y, Shi Y, Liang Z. Prior knowledge driven machine learning approach for PET sinogram data denoising. Medical Imaging 2020: Physics of Medical Imaging: International Society for Optics and Photonics; 2020. p. 113124A.

        • Hong X.
        • Zan Y.
        • Weng F.
        • Tao W.
        • Peng Q.
        • Huang Q.
        Enhancing the image quality via transferred deep residual learning of coarse PET sinograms.
        IEEE Trans Med Imaging. 2018; 37: 2322-2332
      9. Sanaat A, Shiri I, Arabi H, Mainta I, Nkoulou R, Zaidi H. Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging. Eur J Nucl Med Mol Imaging; 2021 in press.

        • Ramon A.J.
        • Yang Y.
        • Pretorius P.H.
        • Johnson K.L.
        • King M.A.
        • Wernick M.N.
        Improving Diagnostic Accuracy in Low-Dose SPECT myocardial perfusion imaging with convolutional denoising networks.
        IEEE Trans Med Imaging. 2020; 39: 2893-2903
      10. Shiri I, AmirMozafari Sabet K, Arabi H, Pourkeshavarz M, Teimourian B, Ay MR, et al. Standard SPECT myocardial perfusion estimation from half-time acquisitions using deep convolutional residual neural networks. J Nucl Cardiol; 2021 in press.

      11. Reymann MP, Würfl T, Ritt P, Stimpel B, Cachovan M, Vija AH, et al. U-Net for SPECT Image Denoising. 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC): p. 1–2.

      12. Song C, Yang Y, Wernick MN, Pretorius PH, King MA. Low-Dose Cardiac-Gated SPECT Studies Using a Residual Convolutional Neural Network. In IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) 2019. p. 653–6.

        • Liu J.
        • Yang Y.
        • Wernick M.N.
        • Pretorius P.H.
        • King M.A.
        Deep learning with noise-to-noise training for denoising in SPECT myocardial perfusion imaging.
        Med Phys. 2021; 48: 156-168
        • Dietze M.M.A.
        • Branderhorst W.
        • Kunnen B.
        • Viergever M.A.
        • de Jong H.
        Accelerated SPECT image reconstruction with FBP and an image enhancement convolutional neural network.
        EJNMMI Phys. 2019; 6: 14
        • Shao W.
        • Pomper M.G.
        • Du Y.
        A learned reconstruction network for SPECT imaging.
        IEEE Trans Radiat Plasma Med Sci. 2021; 5: 26-34
        • Chrysostomou C.
        • Koutsantonis L.
        • Lemesios C.
        • Papanicolas C.N.
        A Reconstruction Method Based on Deep Convolutional Neural Network for SPECT Imaging.
        in: IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC). 2018: 1-4
        • Lin C.
        • Chang Y.C.
        • Chiu H.Y.
        • Cheng C.H.
        • Huang H.M.
        Reducing scan time of paediatric (99m)Tc-DMSA SPECT via deep learning.
        Clin Radiol. 2021; 76: 315.e13-315.e20
        • Ryden T.
        • van Essen M.
        • Marin I.
        • Svensson J.
        • Bernhardt P.
        Deep learning generation of synthetic intermediate projections improves (177)Lu SPECT images reconstructed with sparsely acquired projections.
        J Nucl Med. 2021; (in press)https://doi.org/10.2967/jnumed.120.245548
        • Zaidi H.
        • Karakatsanis N.
        Towards enhanced PET quantification in clinical oncology.
        Br J Radiol. 2017; 91: 20170508
        • Zaidi H.
        • Koral K.F.
        Scatter modelling and compensation in emission tomography.
        Eur J Nucl Med Mol Imaging. 2004; 31: 761-782
        • Mehranian A.
        • Arabi H.
        • Zaidi H.
        Vision 20/20: magnetic resonance imaging-guided attenuation correction in PET/MRI: challenges, solutions, and opportunities.
        Med Phys. 2016; 43: 1130-1155
        • Teuho J.
        • Torrado-Carvajal A.
        • Herzog H.
        • Anazodo U.
        • Klén R.
        • Iida H.
        • et al.
        Magnetic resonance-based attenuation correction and scatter correction in neurological positron emission tomography/magnetic resonance imaging—current status with emerging applications.
        Front Phys. 2020; 7: 243
        • Berker Y.
        • Li Y.
        Attenuation correction in emission tomography using the emission data–a review.
        Med Phys. 2016; 43: 807-832
      13. Arabi H, Zaidi H. Deep learning-based metal artefact reduction in PET/CT imaging. Eur Radiol; 2021 Feb 10.

        • Mostafapour S.
        • Gholamian Khah F.
        • Dadgar H.
        • Arabi H.
        • Zaidi H.
        Feasibility of deep learning-guided attenuation and scatter correction of whole-body 68Ga-PSMA PET studies in the image domain.
        Clin Nucl Med. 2021; (in press)https://doi.org/10.1097/RLU.0000000000003585
        • Mehranian A.
        • Zaidi H.
        • Reader A.J.
        MR-guided joint reconstruction of activity and attenuation in brain PET-MR.
        NeuroImage. 2017; 162: 276-288
        • Rezaei A.
        • Deroose C.M.
        • Vahle T.
        • Boada F.
        • Nuyts J.
        Joint reconstruction of activity and attenuation in time-of-flight PET: a quantitative analysis.
        J Nucl Med. 2018; 59: 1624-1629
        • Mehranian A.
        • Arabi H.
        • Zaidi H.
        Quantitative analysis of MRI-guided attenuation correction techniques in time-of-flight brain PET/MRI.
        NeuroImage. 2016; 130: 123-133
        • Arabi H.
        • Zaidi H.
        One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.
        Eur J Nucl Med Mol Imaging. 2016; 43: 2021-2035
        • Arabi H.
        • Koutsouvelis N.
        • Rouzaud M.
        • Miralbell R.
        • Zaidi H.
        Atlas-guided generation of pseudo-CT images for MRI-only and hybrid PET-MRI-guided radiotherapy treatment planning.
        Phys Med Biol. 2016; 61: 6531-6552
        • Arabi H.
        • Zaidi H.
        Comparison of atlas-based techniques for whole-body bone segmentation.
        Med Image Anal. 2017; 36: 98-112
        • Arabi H.
        • Zaidi H.
        Truncation compensation and metallic dental implant artefact reduction in PET/MRI attenuation correction using deep learning-based object completion.
        Phys Med Biol. 2020; 65: 195002
        • Liu F.
        • Jang H.
        • Kijowski R.
        • Zhao G.
        • Bradshaw T.
        • McMillan A.B.
        A deep learning approach for 18 F-FDG PET attenuation correction.
        EJNMMI Phys. 2018; 5: 1-15
        • Dong X.
        • Wang T.
        • Lei Y.
        • Higgins K.
        • Liu T.
        • Curran W.J.
        • et al.
        Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
        Phys Med Biol. 2019; 64: 215016
        • Shi L.
        • Onofrey J.A.
        • Liu H.
        • Liu Y.H.
        • Liu C.
        Deep learning-based attenuation map generation for myocardial perfusion SPECT.
        Eur J Nucl Med Mol Imaging. 2020; 47: 2383-2395
        • Hwang D.
        • Kim K.Y.
        • Kang S.K.
        • Seo S.
        • Paeng J.C.
        • Lee D.S.
        • et al.
        Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning.
        J Nucl Med. 2018; 59: 1624-1629
        • Hwang D.
        • Kang S.K.
        • Kim K.Y.
        • Seo S.
        • Paeng J.C.
        • Lee D.S.
        • et al.
        Generation of PET Attenuation Map for Whole-Body Time-of-Flight (18)F-FDG PET/MRI Using a deep neural network trained with simultaneously reconstructed activity and attenuation maps.
        J Nucl Med. 2019; 60: 1183-1189
        • Arabi H.
        • Zaidi H.
        Deep learning-guided estimation of attenuation correction factors from time-of-flight PET emission data.
        Med Image Anal. 2020; 64: 101718
        • Shiri I.
        • Ghafarian P.
        • Geramifar P.
        • Leung K.H.
        • Ghelichoghli M.
        • Oveisi M.
        • et al.
        Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC).
        Eur Radiol. 2019; 29: 6867-6879
        • Yang J.
        • Park D.
        • Gullberg G.T.
        • Seo Y.
        Joint correction of attenuation and scatter in image space using deep convolutional neural networks for dedicated brain (18)F-FDG PET.
        Phys Med Biol. 2019; 64: 075019
        • Arabi H.
        • Bortolin K.
        • Ginovart N.
        • Garibotto V.
        • Zaidi H.
        Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies.
        Hum Brain Mapp. 2020; 41: 3667-3679
        • Shiri I.
        • Arabi H.
        • Geramifar P.
        • Hajianfar G.
        • Ghafarian P.
        • Rahmim A.
        • et al.
        Deep-JASC: joint attenuation and scatter correction in whole-body (18)F-FDG PET using a deep residual network.
        Eur J Nucl Med Mol Imaging. 2020; 47: 2533-2548
        • Liu F.
        • Jang H.
        • Kijowski R.
        • Bradshaw T.
        • McMillan A.B.
        Deep learning MR imaging-based attenuation correction for PET/MR imaging.
        Radiology. 2018; 286: 676-684
        • Arabi H.
        • Zeng G.
        • Zheng G.
        • Zaidi H.
        Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI.
        Eur J Nucl Med Mol Imaging. 2019; 46: 2746-2759
        • Leynes A.P.
        • Yang J.
        • Wiesinger F.
        • Kaushik S.S.
        • Shanbhag D.D.
        • Seo Y.
        • et al.
        Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
        J Nucl Med. 2018; 59: 852-858
        • Ladefoged C.N.
        • Marner L.
        • Hindsholm A.
        • Law I.
        • Højgaard L.
        • Andersen F.L.
        Deep learning based attenuation correction of PET/MRI in pediatric brain tumor patients: evaluation in a clinical setting.
        Front Neurosci. 2018; 12: 1005
        • Ladefoged C.N.
        • Hansen A.E.
        • Henriksen O.M.
        • Bruun F.J.
        • Eikenes L.
        • Øen S.K.
        • et al.
        AI-driven attenuation correction for brain PET/MRI: clinical evaluation of a dementia cohort and importance of the training group size.
        NeuroImage. 2020; 222: 117221
        • Xiang H.
        • Lim H.
        • Fessler J.A.
        • Dewaraja Y.K.
        A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions.
        Eur J Nucl Med Mol Imaging. 2020; 47: 2956-2967
        • Nguyen T.T.
        • Chi T.N.
        • Hoang M.D.
        • Thai H.N.
        • Duc T.N.
        3D Unet Generative Adversarial Network for Attenuation Correction of SPECT Images.
        IEEE, 2020: 93-97
      14. Mostafapour S, Gholamiankhah F, Maroofpour S, Momennezhad M, Asadinezhad M, Zakavi SR, et al. Deep learning-based attenuation correction in the image domain for myocardial perfusion SPECT imaging. arXiv preprint arXiv:210204915. 2021.

        • Armanious K.
        • Küstner T.
        • Reimold M.
        • Nikolaou K.
        • La Fougère C.
        • Yang B.
        • et al.
        Independent brain (18)F-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks.
        Hellenic J Nucl Med. 2019; 22: 179-186
        • Colmeiro R.R.
        • Verrastro C.
        • Minsky D.
        • Grosges T.
        Whole Body Positron Emission Tomography Attenuation Correction Map Synthesizing using 3D Deep Generative Adversarial Networks.
        Research Square. 2020; https://doi.org/10.21203/rs.3.rs-46953/v1
        • Shi L.
        • Onofrey J.A.
        • Revilla E.M.
        • Toyonaga T.
        • Menard D.
        • Ankrah J.
        • et al.
        A novel loss function incorporating imaging acquisition physics for PET attenuation map generation using deep learning.
        in: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019: 723-731
        • Bradshaw T.J.
        • Zhao G.
        • Jang H.
        • Liu F.
        • McMillan A.B.
        Feasibility of Deep Learning-Based PET/MR attenuation correction in the pelvis using only diagnostic MR images.
        Tomography. 2018; 4: 138-147
        • Jang H.
        • Liu F.
        • Zhao G.
        • Bradshaw T.
        • McMillan A.B.
        Technical Note: deep learning based MRAC using rapid ultrashort echo time imaging.
        Med Phys. 2018; 45: 3697-3704
      15. Mecheter I, Amira A, Abbod M, Zaidi H. Brain MR Imaging Segmentation Using Convolutional Auto Encoder Network for PET Attenuation Correction. In Proceedings of SAI Intelligent Systems Conference: Springer; 2020. p. 430-40.

        • Gong K.
        • Yang J.
        • Kim K.
        • El Fakhri G.
        • Seo Y.
        • Li Q.
        Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images.
        Phys Med Biol. 2018; 63: 125011
        • Blanc-Durand P.
        • Khalife M.
        • Sgard B.
        • Kaushik S.
        • Soret M.
        • Tiss A.
        • et al.
        Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: comparison with Atlas, ZTE and CT based attenuation correction.
        PLoS One. 2019; 14: e0223141
        • Spuhler K.D.
        • Gardus 3rd, J.
        • Gao Y.
        • DeLorenzo C.
        • Parsey R.
        • Huang C.
        Synthesis of Patient-Specific Transmission Data for PET Attenuation Correction for PET/MRI neuroimaging using a convolutional neural network.
        J Nucl Med. 2019; 60: 555-560
        • Torrado-Carvajal A.
        • Vera-Olmos J.
        • Izquierdo-Garcia D.
        • Catalano O.A.
        • Morales M.A.
        • Margolin J.
        • et al.
        Dixon-VIBE Deep Learning (DIVIDE) Pseudo-CT Synthesis for Pelvis PET/MR Attenuation Correction.
        J Nucl Med. 2019; 60: 429-435
        • Gong K.
        • Han P.K.
        • Johnson K.A.
        • El Fakhri G.
        • Ma C.
        • Li Q.
        Attenuation correction using deep Learning and integrated UTE/multi-echo Dixon sequence: evaluation in amyloid and tau PET imaging.
        Eur J Nucl Med Mol Imaging. 2021; (in press)
        • Gong K.
        • Yang J.
        • Larson P.E.
        • Behr S.C.
        • Hope T.A.
        • Seo Y.
        • et al.
        MR-based attenuation correction for brain PET using 3D cycle-consistent adversarial network.
        IEEE Trans Radiat Plasma Med Sci. 2021; 5: 185-192
      16. Leynes AP, Ahn SP, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, et al. Bayesian deep learning Uncertainty estimation and pseudo-CT prior for robust Maximum Likelihood estimation of Activity and Attenuation (UpCT-MLAA) in the presence of metal implants for simultaneous PET/MRI in the pelvis. arXiv preprint arXiv:200103414. 2020.

        • Pozaruk A.
        • Pawar K.
        • Li S.
        • Carey A.
        • Cheng J.
        • Sudarshan V.P.
        • et al.
        Augmented deep learning model for improved quantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRI prostate imaging.
        Eur J Nucl Med Mol Imaging. 2021; 48: 9-20
        • Tao L.
        • Fisher J.
        • Anaya E.
        • Li X.
        • Levin C.S.
        Pseudo CT Image Synthesis and Bone Segmentation from MR images using adversarial networks with residual blocks for MR-based attenuation correction of brain PET Data.
        IEEE Trans Radiat Plasma Med Sci. 2021; 5: 193-201
        • Rajalingam B.
        • Priya R.
        Multimodal medical image fusion based on deep learning neural network for clinical treatment analysis.
        Int J ChemTech Res. 2018; 11: 160-176
        • Kang S.K.
        • Seo S.
        • Shin S.A.
        • Byun M.S.
        • Lee D.Y.
        • Kim Y.K.
        • et al.
        Adaptive template generation for amyloid PET using a deep learning approach.
        Hum Brain Mapp. 2018; 39: 3769-3778
        • Huang B.
        • Chen Z.
        • Wu P.-M.
        • Ye Y.
        • Feng S.-T.
        • Wong C.-Y.O.
        • et al.
        Fully automated delineation of gross tumor volume for head and neck cancer on PET-CT using deep learning: a dual-center study.
        Contrast Media & Molecular Imaging. 2018; 8923028
        • Lian C.
        • Ruan S.
        • Denoeux T.
        • Li H.
        • Vera P.
        Joint tumor segmentation in PET-CT images using co-clustering and fusion based on belief functions.
        IEEE Trans Image Process. 2018; 28: 755-766
        • Zhao X.
        • Li L.
        • Lu W.
        • Tan S.
        Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network.
        Phys Med Biol. 2018; 64: 015011
        • Zhao L.
        • Lu Z.
        • Jiang J.
        • Zhou Y.
        • Wu Y.
        • Feng Q.
        Automatic nasopharyngeal carcinoma segmentation using fully convolutional networks with auxiliary paths on dual-modality PET-CT images.
        J Digit Imaging. 2019; 32: 462-470
        • Blanc-Durand P.
        • Van Der Gucht A.
        • Schaefer N.
        • Itti E.
        • Prior J.O.
        Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study.
        PLoS One. 2018; 13: e0195798
        • Leung K.H.
        • Marashdeh W.
        • Wray R.
        • Ashrafinia S.
        • Pomper M.G.
        • Rahmim A.
        • et al.
        A physics-guided modular deep-learning based automated framework for tumor segmentation in PET.
        Phys Med Biol. 2020; 65: 245032
        • Wang T.
        • Lei Y.
        • Tang H.
        • Harms J.
        • Wang C.
        • Liu T.
        • et al.
        A learning-based automatic segmentation method on left ventricle in SPECT imaging.
        in: Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging. International Society for Optics and Photonics, 2019: 109531M
        • Roccia E.
        • Mikhno A.
        • Ogden R.T.
        • Mann J.J.
        • Laine A.F.
        • Angelini E.D.
        • et al.
        Quantifying brain [18 F] FDG uptake noninvasively by combining medical health records and dynamic PET imaging data.
        IEEE J Biomed Health Inf. 2019; 23: 2576-2582
        • Park J.
        • Bae S.
        • Seo S.
        • Park S.
        • Bang J.-I.
        • Han J.H.
        • et al.
        Measurement of glomerular filtration rate using quantitative SPECT/CT and deep-learning-based kidney segmentation.
        Sci Rep. 2019; 9: 1-8
        • Visvikis D.
        • Le Rest C.C.
        • Jaouen V.
        • Hatt M.
        Artificial intelligence, machine (deep) learning and radio (geno) mics: definitions and nuclear medicine imaging applications.
        Eur J Nucl Med Mol Imaging. 2019; 1–8
        • Wang H.
        • Zhou Z.
        • Li Y.
        • Chen Z.
        • Lu P.
        • Wang W.
        • et al.
        Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18 F-FDG PET/CT images.
        EJNMMI Res. 2017; 7: 11
        • Seifert R.
        • Weber M.
        • Kocakavuk E.
        • Rischpler C.
        • Kersting D.
        AI and machine learning in nuclear medicine: future perspectives.
        Semin Nucl Med. 2021; 51: 170-177
        • Xu L.
        • Tetteh G.
        • Lipkova J.
        • Zhao Y.
        • Li H.
        • Christ P.
        • et al.
        Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods.
        Contrast Media & Molecular Imaging. 2018; 2391925
        • Togo R.
        • Hirata K.
        • Manabe O.
        • Ohira H.
        • Tsujino I.
        • Magota K.
        • et al.
        Cardiac sarcoidosis classification with deep convolutional neural network-based features using polar maps.
        Comput Biol Med. 2019; 104: 81-86
        • Ma L.
        • Ma C.
        • Liu Y.
        • Wang X.
        Thyroid diagnosis from SPECT images using convolutional neural network with optimization.
        Comput Intelligence Neurosci. 2019; 6212759
      17. Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging. 2019; 18: 1536012119869070.

        • Lu D.
        • Popuri K.
        • Ding G.W.
        • Balachandar R.
        • Beg M.F.
        Multimodal and multiscale deep neural networks for the early diagnosis of Alzheimer’s disease using structural MR and FDG-PET images.
        Sci Rep. 2018; 8: 1-13
        • Choi H.
        • Jin K.H.
        • Alzheimer's Disease Neuroimaging Initiative
        Predicting cognitive decline with deep learning of brain metabolism and amyloid imaging.
        Behav Brain Res. 2018; 344: 103-109
        • Liu M.
        • Cheng D.
        • Yan W.
        • Alzheimer's Disease Neuroimaging Initiative
        Classification of Alzheimer's disease by combination of convolutional and recurrent neural networks using FDG-PET images.
        Front Neuroinf. 2018; 12: 35
        • Liu M.
        • Cheng D.
        • Wang K.
        • Wang Y.
        • Alzheimer's Disease Neuroimaging Initiative
        Multi-modality cascaded convolutional neural networks for Alzheimer's disease diagnosis.
        Neuroinformatics. 2018; 16: 295-308
        • Zhou T.
        • Thung K.H.
        • Zhu X.
        • Shen D.
        Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis.
        Hum Brain Mapp. 2019; 40: 1001-1016
        • Betancur J.
        • Commandeur F.
        • Motlagh M.
        • Sharir T.
        • Einstein A.J.
        • Bokhari S.
        • et al.
        Deep learning for prediction of obstructive disease from fast myocardial perfusion SPECT: a multicenter study.
        JACC Cardiovasc Imaging. 2018; 11: 1654-1663
        • Wang T.
        • Lei Y.
        • Tang H.
        • He Z.
        • Castillo R.
        • Wang C.
        • et al.
        A learning-based automatic segmentation and quantification method on left ventricle in gated myocardial perfusion SPECT imaging: a feasibility study.
        J Nucl Cardiol. 2020; 27: 976-987
        • Mohammed F.
        • He X.
        • Lin Y.
        An easy-to-use deep-learning model for highly accurate diagnosis of Parkinson's disease using SPECT images.
        Comput Med Imaging Graphics. 2021; 87: 101810
        • Afshar P.
        • Mohammadi A.
        • Plataniotis K.N.
        • Oikonomou A.
        • Benali H.
        From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities.
        IEEE Signal Process Mag. 2019; 36: 132-160
        • Noortman W.A.
        • Vriens D.
        • Grootjans W.
        • Tao Q.
        • de Geus-Oei L.-F.
        • van Velden F.H.
        Nuclear medicine radiomics in precision medicine: why we can't do without artificial intelligence.
        Q J Nucl Med Mol Imaging. 2020; 64: 278-290
        • Xu W.
        Predictive power of a radiomic signature based on 18F-FDG PET/CT images for EGFR mutational status in NSCLC.
        Front Oncol. 2019; 9: 1062
        • Shiri I.
        • Maleki H.
        • Hajianfar G.
        • Abdollahi H.
        • Ashrafinia S.
        • Hatt M.
        • et al.
        Next-generation radiogenomics sequencing for prediction of EGFR and KRAS mutation status in NSCLC patients using multimodal imaging and machine learning algorithms.
        Mol Imag Biol. 2020; 1–17
        • Edalat-Javid M.
        • Shiri I.
        • Hajianfar G.
        • Abdollahi H.
        • Arabi H.
        • Oveisi N.
        • et al.
        Cardiac SPECT radiomic features repeatability and reproducibility: A multi-scanner phantom study.
        J Nucl Cardiol. 2021; (in press)https://doi.org/10.1007/s12350-020-02109-0
        • Fave X.
        • Zhang L.
        • Yang J.
        • Mackin D.
        • Balter P.
        • Gomez D.
        • et al.
        Delta-radiomics features for the prediction of patient outcomes in non–small cell lung cancer.
        Sci Rep. 2017; 7: 1-11
        • Dissaux G.
        • Visvikis D.
        • Da-Ano R.
        • Pradier O.
        • Chajon E.
        • Barillot I.
        • et al.
        Pretreatment 18F-FDG PET/CT radiomics predict local recurrence in patients treated with stereotactic body radiotherapy for early-stage non-small cell lung cancer: a multicentric study.
        J Nucl Med. 2020; 61: 814-820
        • Mattonen S.A.
        • Davidzon G.A.
        • Benson J.
        • Leung A.N.
        • Vasanawala M.
        • Horng G.
        • et al.
        Bone Marrow and tumor radiomics at 18F-FDG PET/CT: impact on outcome prediction in non-small cell lung cancer.
        Radiology. 2019; 293: 451-459
        • Tixier F.
        • Vriens D.
        • Cheze-Le Rest C.
        • Hatt M.
        • Disselhorst J.A.
        • Oyen W.J.
        • et al.
        Comparison of tumor uptake heterogeneity characterization between static and parametric 18F-FDG PET images in non-small cell lung cancer.
        J Nucl Med. 2016; 57: 1033-1039
        • Ashrafinia S.
        • Dalaie P.
        • Yan R.
        • Ghazi P.
        • Marcus C.
        • Taghipour M.
        • et al.
        Radiomics analysis of clinical myocardial perfusion SPECT to predict coronary artery calcification.
        J Nucl Med. 2018; 59: 512
        • Rahmim A.
        • Huang P.
        • Shenkov N.
        • Fotouhi S.
        • Davoodi-Bojd E.
        • Lu L.
        • et al.
        Improved prediction of outcome in Parkinson's disease using radiomics analysis of longitudinal DAT SPECT images.
        NeuroImage: Clin. 2017; 16: 539-544
        • Ypsilantis P.-P.
        • Siddique M.
        • Sohn H.-M.
        • Davies A.
        • Cook G.
        • Goh V.
        • et al.
        Predicting response to neoadjuvant chemotherapy with PET imaging using convolutional neural networks.
        PLoS One. 2015; 10: e0137036
        • Li T.
        • Ao E.C.
        • Lambert B.
        • Brans B.
        • Vandenberghe S.
        • Mok G.S.
        Quantitative imaging for targeted radionuclide therapy dosimetry-technical review.
        Theranostics. 2017; 7: 4551
        • Peng Z.
        • Fang X.
        • Yan P.
        • Shan H.
        • Liu T.
        • Pei X.
        • et al.
        A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing.
        Med Phys. 2020; 47: 2526-2536
        • Xie T.
        • Zaidi H.
        Estimation of the radiation dose in pregnancy: an automated patient-specific model using convolutional neural networks.
        Eur Radiol. 2019; 29: 6805-6815
        • Schoppe O.
        • Pan C.
        • Coronel J.
        • Mai H.
        • Rong Z.
        • Todorov M.I.
        • et al.
        Deep learning-enabled multi-organ segmentation in whole-body mouse scans.
        Nat Commun. 2020; 11: 1-14
      18. Fu W, Sharma S, Abadi E, Iliopoulos A-S, Wang Q, Lo JY, et al. iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry. arXiv preprint arXiv:200808730. 2020.

        • Jackson P.
        • Hardcastle N.
        • Dawe N.
        • Kron T.
        • Hofman M.S.
        • Hicks R.J.
        Deep learning renal segmentation for fully automated radiation dose estimation in unsealed source therapy.
        Front Oncol. 2018; 8: 215
        • Tang X.
        • Jafargholi Rangraz E.
        • Coudyzer W.
        • Bertels J.
        • Robben D.
        • Schramm G.
        • et al.
        Whole liver segmentation based on deep learning and manual adjustment for clinical use in SIRT.
        Eur J Nucl Med Mol Imaging. 2020; 47: 2742-2752
        • Ryden T.
        • van Essen M.
        • Svensson J.
        • Bernhardt P.
        Deep learning-based SPECT/CT quantification of 177Lu uptake in the kidneys.
        J Nucl Med. 2020; 61: 1401
        • Akhavanallaf A.
        • Shiri I.
        • Arabi H.
        • Zaidi H.
        Whole-body voxel-based internal dosimetry using deep learning.
        Eur J Nucl Med Mol Imaging. 2021; (in press)https://doi.org/10.1007/s00259-020-05013-4
        • Lee M.S.
        • Hwang D.
        • Kim J.H.
        • Lee J.S.
        Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry.
        Sci Rep. 2019; 9: 1-9
        • Götz T.I.
        • Schmidkonz C.
        • Chen S.
        • Al-Baddai S.
        • Kuwert T.
        • Lang E.
        A deep learning approach to radiation dose estimation.
        Phys Med Biol. 2020; 65: 035007
        • Götz T.I.
        • Lang E.
        • Schmidkonz C.
        • Kuwert T.
        • Ludwig B.
        Dose voxel kernel prediction with neural networks for radiation dose estimation.
        Zeitschrift für Medizinische Physik. 2021; 31: 23-36
        • Xue S.
        • Gafita A.
        • Afshar-Oromieh A.
        • Eiber M.
        • Rominger A.
        • Shi K.
        Voxel-wise Prediction of Post-therapy Dosimetry for 177Lu-PSMA I&T Therapy using Deep Learning.
        J Nucl Med. 2020; 61: 1424
        • Del Prete M.
        • Buteau F.-A.
        • Beauregard J.-M.
        Personalized 177 Lu-octreotate peptide receptor radionuclide therapy of neuroendocrine tumours: a simulation study.
        Eur J Nucl Med Mol Imaging. 2017; 44: 1490-1500
        • Perez-Liva M.
        • Yoganathan T.
        • Herraiz J.L.
        • Poree