
Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods

      Highlights

      • Machine learning has been integrated into PET for attenuation correction (AC) and low-count reconstruction in recent years.
      • The proposed methods, study designs and key results of currently published studies are reviewed in this paper.
      • Machine learning generates synthetic CT from MR or non-AC PET for PET AC, or directly maps non-AC PET to AC PET.
      • Deep learning-based methods have advantages over conventional machine learning methods in low-count PET reconstruction.

      Abstract

      The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.


      1. Introduction

      Positron emission tomography (PET) has been used as a non-invasive functional imaging modality with a wide range of clinical applications. By providing information on metabolic processes in the human body, it is utilized for various purposes including tumor staging and detection of metastases in oncology [Schrevens et al.; Sugiyama et al.; Ma et al.; Strobel et al.; Adler et al.; Abdel-Nabi et al.; Taira et al.; Ohta et al.; Czernin et al.], gross target volume definition in radiation oncology [Biehl et al.; Paulino et al.; Nestle et al.], myocardial perfusion in cardiology [Schwaiger et al.; Parker et al.], and investigation of neurological disorders [Politis and Piccini]. Among these applications, the accuracy of tracer uptake quantification has been less recognized than other characteristics of PET such as sensitivity [Boellaard]. Recently, with the focus shifting towards precision medicine, it is of great clinical interest to accurately quantify tracer uptake towards the expansion of PET into more demanding applications such as therapeutic response monitoring [Ben-Haim and Ell; Shankar et al.; Wahl et al.] and treatment outcome prediction as a prognostic factor [El Naqa].
      Long-standing challenges to quantitative PET accuracy remain, such as degraded image quality caused by inaccurate photon attenuation and scattering [Kinahan et al.; Watson], low count statistics [Lodge et al.; Slifstein and Laruelle], and the partial-volume effect [Soret et al.; Sureshbabu and Mawlawi; Blodgett et al.]. Improperly addressing these challenges can result in bias, uncertainty and artifacts in the PET images, which can degrade the utility of both qualitative and quantitative assessments. In recent years, advanced PET scanners and applications have been introduced, such as magnetic resonance (MR)-combined PET (PET/MR), low-count scanning protocols and high-resolution PET. These novel PET technologies aim to incorporate an anatomical imaging modality for better soft tissue visualization, to reduce the administered activity and shorten the scan time relative to stand-alone PET and computed tomography (CT)-combined PET (PET/CT), and to increase the capability to detect radiopharmaceutical accumulation in millimeter-sized structures, all of which are highly desirable in clinical practice. On the other hand, the unconventional or suboptimal scan schemes proposed to achieve these clinical benefits bring new technical barriers that were not seen in conventional PET/CT systems. It is important to address these problems without compromising the quantitative capabilities of PET.
      Many methods have been proposed to deal with the inherent deficiencies of PET and are still being improved to tackle the difficulties emerging from these novel implementations. For example, before PET/CT was widely implemented, transmission scanning with an external positron-emitting source (e.g. Ge-68) rotating around the patient was used to determine noisy but accurate 511 keV linear attenuation coefficient maps of the patient body for attenuation correction (AC) [Miwa et al.]. With the addition of CT images in PET/CT, 511 keV linear attenuation coefficient maps can be computed by a piecewise linear scaling algorithm, reducing the noise contribution in the reconstruction process compared to Ge-68 transmission scans [Kinahan et al.; Burger et al.; Witoszynskyj et al.]. In recent years, the incorporation of MR with PET has become a promising alternative to PET/CT due to its excellent anatomical soft tissue visualization without ionizing radiation. However, MR voxel intensity is related to proton density rather than electron density, and thus cannot be directly converted to 511 keV linear attenuation coefficient maps for use in the AC process. To overcome this barrier, conventional methods have assigned piecewise constant attenuation coefficients on MR images based on the segmentation of tissues [Fei et al.; Yang and Fei]. The segmentation can be done by either manually drawn contours [Goff-Rougetet et al.] or automatic classification methods [El Fakhri et al.; Hofmann et al.; Zaidi et al.]. However, these methods are limited by misclassification and inaccurate prediction of bone and air regions due to their ambiguous relationships in MR. Instead of segmentation, other methods have warped atlases of MR images labeled with known attenuation factors to patient-specific MR images by deformable registration or pattern recognition, but their efficacy highly depends on the performance of the registration algorithm. Moreover, the atlases are usually created from normal anatomy and cannot represent the anatomic abnormalities seen in clinical practice [Kops and Herzog; Hofmann et al.].
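      As an illustration of the piecewise linear (bilinear) scaling mentioned above, the sketch below converts CT numbers in Hounsfield units into 511 keV linear attenuation coefficients. The breakpoint and slope values are representative numbers from the literature, not the parameters of any particular scanner; clinical systems use vendor-calibrated, kVp-dependent values.

```python
import numpy as np

def hu_to_mu511(ct_hu, mu_water=0.096, breakpoint_hu=0.0, bone_slope=5.1e-5):
    """Piecewise linear (bilinear) mapping from CT numbers (HU) to
    511 keV linear attenuation coefficients (1/cm).

    Below the breakpoint, voxels are treated as air/water mixtures;
    above it, as water/bone mixtures with a shallower slope, because the
    photoelectric contribution of bone at CT energies does not persist
    at 511 keV. All constants here are illustrative assumptions.
    """
    ct_hu = np.asarray(ct_hu, dtype=np.float64)
    # Air/water mixture: mu rises linearly from 0 (air, -1000 HU) to mu_water (0 HU).
    mu_soft = mu_water * (1.0 + ct_hu / 1000.0)
    # Water/bone mixture: shallower slope above the breakpoint.
    mu_bone = mu_water + bone_slope * (ct_hu - breakpoint_hu)
    mu = np.where(ct_hu <= breakpoint_hu, mu_soft, mu_bone)
    return np.clip(mu, 0.0, None)

# Example: air, water and dense bone.
print(hu_to_mu511([-1000.0, 0.0, 1000.0]))
```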
      Low-count PET protocols aim to reduce the administered activity, which is attractive for its clinical utility [Catana]. The reduction in radiation dose is desirable in the pediatric population, since accumulated imaging dose is a significant concern for younger patients, who are more sensitive to radiation [Lei et al.; Wang et al.]. Reducing the administered activity can also help patients undergoing radiation therapy who have multiple serial PET scans as part of their pretreatment and radiotherapy response evaluation [Erdi et al.; Cliffe et al.]. Consideration should be given to the need for optimal imaging and the minimization of patient dose in order to reduce the potential risk of secondary cancer [Das et al.]. In addition, a shortened scan time is beneficial for minimizing patient motion as well as potentially increasing patient throughput. More importantly, in dynamic PET imaging employing a sequence of time frames, counts in each frame are much lower than those in static PET imaging. Dynamic PET imaging aims to provide a voxel-wise parametric analysis for kinetic modeling, and its accuracy highly depends on image quality [Rahmim et al.]; the low count statistics result in increased image noise, reduced contrast-to-noise ratio, and large bias in uptake measurements [Boellaard]. ImmunoPET, introduced recently, also encounters the low-count issue, since the commonly administered activity is about 37 to 75 MBq of 89Zr [Borjesson et al.; Jauw et al.], much lower than the roughly 370 MBq of 18F-FDG used in a standard PET scan. Both hardware-based and software-based solutions have been proposed for low-count PET scanning. Advances in PET instrumentation such as lutetium-based detectors, silicon photomultipliers and time-of-flight-capable scanners can considerably increase acquisition efficiency and thus lower measurement uncertainty [Nguyen et al.; Karp et al.]. Meanwhile, software-based post-processing and noise regularization in reconstruction penalize differences among neighboring pixels to produce a smoother appearance [Qi and Leahy; Chan et al.; Christian et al.].
      Hardware advances have mainly focused on developments in scintillator and photodetector technologies, which have substantially improved the performance of PET systems [Balcerzyk et al.; Herbert et al.]. Software advances, particularly novel image processing and reconstruction algorithms, may complement these hardware improvements and are often easier to implement at a lower cost.
      Inspired by the rapid expansion of artificial intelligence in both industry and academia in recent years, many research groups have attempted to integrate machine learning-based methods into medical imaging and radiation therapy [Sahiner et al.; Giger; Erickson et al.; Feng et al.; Jarrett et al.]. Common machine learning applications include detection, segmentation, characterization, reconstruction, registration and synthesis [Cui et al.; Fu et al.; Lei et al.; Shafai-Erfani et al.; Wang et al.]. Before the permeation of artificial intelligence into these sub-fields, conventional image processing methods had been developed for decades. Conventional algorithms usually differ greatly from one another in aspects such as workflow, assumptions, complexity and requirements for prior knowledge, and their detailed implementation highly depends on the task [Harms et al.; Wang and Zhu]. Compared with conventional methods, machine learning-based methods share a general data-driven framework. Supervised learning workflows usually consist of a training stage, in which a machine learning model is trained on the training datasets to find the patterns between the input and its training target, and a prediction stage, in which the trained model maps a new input to an output. Unlike conventional methods, which usually require case-by-case parameter tuning for optimal performance, machine learning-based methods are more robust; their effectiveness largely depends on the representativeness of the training datasets, the designed network architecture and the hyperparameter settings.
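      As a minimal sketch of this two-stage supervised workflow (the tensors below are random stand-ins for paired training images such as MR/CT patches; the tiny network and hyperparameters are illustrative only):

```python
import torch
from torch import nn

# --- Training stage: learn the input -> target mapping from paired data.
x_train = torch.randn(256, 1, 32, 32)  # stand-in for input patches (e.g. MR)
y_train = torch.randn(256, 1, 32, 32)  # stand-in for target patches (e.g. CT)

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for epoch in range(5):  # a few full-batch steps, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

# --- Prediction stage: the trained, frozen model maps new inputs to outputs.
model.eval()
with torch.no_grad():
    y_pred = model(torch.randn(8, 1, 32, 32))
```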
      Recently, review papers have broadly summarized the current studies of machine learning in PET scanning [Gong et al.; Ravishankar et al.]. In this paper, we review the emerging machine learning-based methods for quantitative PET imaging. Specifically, we review the general workflows and summarize recently published learning-based studies on PET AC and low-count PET reconstruction in the following sections, briefly introducing representative methods and discussing trends and future directions at the end of each section. Note that degrading effects such as scattering and partial volume also play an important role in PET quantification accuracy but are not reviewed in this study. Unless explicitly stated otherwise, the PET scans in the reviewed literature used the glucose analogue 2-18F-fluoro-2-deoxy-D-glucose (FDG).

      2. PET AC

      As mentioned in the introduction, CT is able to provide the attenuation coefficient maps used to correct for the loss of annihilation photons due to attenuation in the patient body. However, in a PET-only scanner or a PET/MR scanner, the absence of CT images makes attenuation correction difficult. Many studies have proposed schemes to address this issue, and machine learning is involved in different forms and for different purposes. The general workflow can be divided into two pathways depending on whether anatomical images are acquired (MR in PET/MR) or not (PET-only scanner). Most proposed methods have been implemented on PET brain scans, with a few on pelvis and whole body studies. The ground truth for training and evaluation was PET with AC by CT images. The relative bias in percentage from the ground truth in selected volumes-of-interest (VOIs) was usually reported to evaluate quantification accuracy. Several studies compared their proposed methods with conventional MR-based PET AC methods such as atlas-based and segmentation-based methods.
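      For reference, the VOI bias metric reported by most of these studies can be written as the relative difference of mean activity against the CT-based ground truth. A minimal sketch, with illustrative array names:

```python
import numpy as np

def voi_relative_bias(pet_test, pet_ref, voi_mask):
    """Mean relative bias (%) of a test AC PET against the CT-based
    reference AC PET within a volume-of-interest.

    pet_test, pet_ref: 3D activity images (e.g. Bq/ml) on the same grid.
    voi_mask: boolean 3D array selecting the VOI voxels.
    """
    mean_test = pet_test[voi_mask].mean()
    mean_ref = pet_ref[voi_mask].mean()
    return 100.0 * (mean_test - mean_ref) / mean_ref
```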

      2.1 Synthetic CT for PET/MR

      For a PET/MR scanner, the most common strategy is to generate synthetic CT (sCT) images from MR and use the sCT in the PET AC procedure. Synthesis methods among images of different modalities have been developed in recent years [Lei et al.; Liu et al.; Shafai-Erfani et al.; Wang et al.]. Many studies have investigated the feasibility of using sCT for PET AC in brain and whole body imaging; they are summarized in Table 1. Examples of AC PET using a learning-based method for the brain are shown in Fig. 1, which is redrawn based on the method proposed in [Yang et al.]. AC PET images by MR using learning-based methods show a very similar appearance and intensity scale to the ground truth. The bias among the 11 VOIs selected in [Yang et al.] ranged from −1.72% to 3.70%, with a global mean absolute error of 2.41%. The AC PET by MR successfully maintained the relative contrast among different brain regions, which can be helpful for diagnostic purposes, although the improvement in diagnostic accuracy is difficult to quantify based on current studies [Hofmann et al.; Zaidi and Montandon].
      Table 1. Overview of learning-based PET AC methods.

      | Method and strategy | PET or PET/MR | Site | # of patients in training/testing datasets | Reported bias+ (%) | Authors |
      |---|---|---|---|---|---|
      | Deep convolutional auto-encoder (CAE) network; MR -> tissue class (bone, soft tissue, air) | PET/MR (3 T post-contrast T1-weighted) | Brain | 30 train / 10 test | −0.7 ± 1.1 (−3.2, 0.4) among 23 VOIs | Liu et al., 2018 |
      | Gaussian mixture regression; MR -> sCT | PET/MR (T2-weighted + ultrashort echo time) | Brain | N.A.* / 9 test | −1.9 ± 4.1 (−61, 34) global | Larsson et al., 2013 |
      | Support vector regression; MR -> sCT | PET/MR (UTE and Dixon-VIBE) | Brain | 5 train / 5 test | 2.16 ± 1.77 (1.32, 3.45) among 13 VOIs | Navalpakkam et al., 2013 |
      | Alternating random forests; MR -> sCT | PET/MR (T1-weighted MP-RAGE) | Brain | 17, leave-one-out | (−1.61, 3.67) among 11 VOIs | Yang et al., 2019 |
      | Unet; MR -> sCT | PET/MR (ZTE) | Brain | 23 train / 47 test | −0.2 (−1.8, 1.7) among 70 VOIs | Blanc-Durand et al., 2019 |
      | Unet; MR -> sCT | PET/MR (3 T ZTE and Dixon) | Pelvis | 10 train / 26 test | −1.11 ± 2.62 global | Leynes et al., 2018 |
      | Unet; MR -> sCT | PET/MR (Dixon) | Pelvis | N.A. / 19 patients with 28 scans in test | −0.95 ± 5.09 in bone; −0.03 ± 2.98 in soft tissue; 0.27 ± 2.59 in fat | Torrado-Carvajal et al., 2019 |
      | Unet; MR -> sCT | PET/MR (1.5 T T1-weighted) | Brain | 44 train / 11 validation / 11 test | −0.49 ± 1.7 (11C-WAY-100635); −1.52 ± 0.73 (11C-DASB), global | Spuhler et al., 2019 |
      | Unet; MR -> sCT | PET/MR (ZTE and Dixon) | Brain | 14, leave-two-out | absolute error 1.5 to 2.8 among 8 VOIs | Gong et al., 2018 |
      | Unet; MR -> sCT | PET/MR (UTE) | Brain | 79 (pediatric), 4-fold cross validation | −0.1 (−0.2, 0.5) 95% CI among all tumor volumes | Ladefoged et al., 2019 |
      | Deep learning adversarial semantic structure (DL-AdvSS); MR -> sCT | PET/MR (3 T T1 MP-RAGE) | Brain | 40, 2-fold cross validation | <4 among 63 VOIs | Arabi et al., 2019 |
      | Hybrid of CAE and Unet; NAC PET -> sCT -> AC PET | PET (18F-FP-CIT) | Brain | 40, 5-fold cross validation | about (−8, −4) among 4 VOIs | Hwang et al., 2018 |
      | Unet; NAC PET -> sCT -> AC PET | PET | Brain | 100 train / 28 test | −0.64 ± 1.99 (−4.18, 2.22) among 21 regions | Liu et al., 2018 |
      | Generative adversarial network (GAN); NAC PET -> sCT -> AC PET | PET | Brain | 50 train / 40 test | (−2.5, 0.6) among 7 VOIs | Armanious et al., 2019 |
      | Cycle-consistent GAN; NAC PET -> sCT -> AC PET | PET | Whole body | 80 train / 39 test | (−1.06, 10.72) among 7 VOIs; 1.07 ± 9.01 in lesions | Dong et al., 2019 |
      | Unet; NAC PET -> AC PET | PET | Brain | 25 train / 10 test | 4.0 ± 15.4 among 116 VOIs | Yang et al., 2019 |
      | Unet; NAC PET -> AC PET | PET | Brain | 91 train / 18 test | −0.10 ± 2.14 among 83 VOIs | Shiri et al., 2019 |
      | Cycle-consistent GAN; NAC PET -> AC PET | PET | Whole body | 25, leave-one-out + 10 patients × 3 sequential scan tests | (−17.02, 3.02) among 6 VOIs; 2.85 ± 5.21 in lesions | Dong et al., 2019 |

      *N.A.: not available, i.e. not explicitly indicated in the publication.
      +Numbers in parentheses indicate minimum and maximum values.
      Fig. 1. Examples of PET images (1) without AC, (2) with AC by CT and (3) with AC by MRI using the learning-based method presented in [Yang et al.], at (a) axial, (b) sagittal and (c) coronal views. The display window for (1) is [0, 5000] and for (2) and (3) is [0, 30000], in units of Bq/ml.
      Liu et al. proposed generating the sCT by assigning CT numbers to the corresponding tissue labels segmented on MR [Liu et al., 2018]. They used a convolutional auto-encoder (CAE), consisting of a connected encoder and decoder network, to generate CT tissue labels from T1-weighted MR images; the encoder probes the image features and the decoder reconstructs piecewise tissue labels. They reported a mean bias of −0.7 ± 1.1% among all selected VOIs in the brain with PET AC using their sCT, which was significantly better in most of the VOIs than Dixon-based soft-tissue/air segmentation and anatomic CT-based template registration. A limitation of this strategy is its requirement for labeled training datasets and the small number of classified tissue types.
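      A minimal sketch of the label-to-CT-number step of this strategy (the class set and assigned HU values are illustrative; in the actual method the labels come from the trained CAE rather than a fixed rule):

```python
import numpy as np

# Tissue classes predicted from MR (e.g. by a segmentation network).
AIR, SOFT_TISSUE, BONE = 0, 1, 2

# Representative CT numbers per class (HU); illustrative values only.
HU_LOOKUP = {AIR: -1000.0, SOFT_TISSUE: 40.0, BONE: 1000.0}

def labels_to_sct(label_map):
    """Assign a CT number to every voxel of a tissue-label map,
    yielding a piecewise-constant synthetic CT for PET AC."""
    sct = np.empty(label_map.shape, dtype=np.float64)
    for label, hu in HU_LOOKUP.items():
        sct[label_map == label] = hu
    return sct

# Example: a tiny random label map.
print(labels_to_sct(np.random.randint(0, 3, size=(4, 4))))
```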
      sCT can also be generated through a direct mapping from MR. Gaussian mixture regression [Larsson et al.], support vector regression [Navalpakkam et al.] and random forests [Yang et al.] were used before deep learning was implemented. For example, Yang et al. proposed a random forest-based method that trains a set of decision trees. Each decision tree learns the optimal way to separate a set of paired MRI and CT training patches into smaller and smaller subsets to predict CT intensity. When a new MRI patch is fed into the model, the sCT intensity is estimated by combining the predictions of all decision trees. They used a sequence of alternating random forests under the framework of an iterative refinement model to consider both the global loss of the training model and the uncertainty of the training data falling into child nodes, combined with discriminative feature selection [Yang et al.]. The authors reported good agreement of brain AC PET with the ground truth, with bias ranging from −1.61% to 3.67% among all selected regions. Since deep learning methods have proven successful at image style transfer in the computer vision field, sCT can be generated from MR by many off-the-shelf algorithms. One of the most popular networks is the Unet and its variants [Ronneberger et al.]. These convolutional neural network (CNN)-based methods aim to minimize a loss function that includes the pixel-intensity and pixel-gradient differences between the training target CT and the generated sCT. The implementation and results of the Unet in PET AC have been reported for the brain [Blanc-Durand et al.], the pelvis [Leynes et al.; Torrado-Carvajal et al.], pediatric brain tumor patients [Ladefoged et al.] and neuroimaging with 11C-labeled tracers [Spuhler et al.]. In order to incorporate multiple MR images from different sequences as input, Gong et al. modified the Unet by replacing the traditional convolution module with a group convolution module, preserving the network capacity while restricting the network complexity [Gong et al.]. This modification lowered the systematic bias and errors in all regions, enabling the proposed method to outperform a conventional Unet.
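      A minimal sketch of patch-based random forest regression in the spirit of the method described above (the 2D patch extraction and features are heavily simplified, and the alternating-forest and iterative-refinement components of Yang et al. are not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(img, size=3):
    """Flatten size x size patches around every interior pixel."""
    r = size // 2
    h, w = img.shape
    feats, centers = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            feats.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(feats), centers

rng = np.random.default_rng(0)
mr = rng.normal(size=(64, 64))   # stand-in for a co-registered MR slice
ct = rng.normal(size=(64, 64))   # stand-in for the paired CT slice

# Train: each MR patch predicts the CT value at its center.
X, centers = extract_patches(mr)
y = np.array([ct[i, j] for i, j in centers])
forest = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

# Predict sCT intensities for a new MR image: each tree votes, the forest averages.
X_new, _ = extract_patches(rng.normal(size=(64, 64)))
sct_values = forest.predict(X_new)
```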
      Apart from CNN-based methods, the generative adversarial network (GAN) has also been explored for generating sCT for PET AC. A GAN has a generative network and a discriminative network that are trained simultaneously. With MR as input and sCT as output, the discriminative network distinguishes the sCT from the training CT images. Training can be formulated as minimizing the discriminative (adversarial) loss, in addition to pixel-intensity and pixel-gradient losses that improve sCT image quality. Arabi et al. proposed a deep learning adversarial semantic structure method that combined a synthesis GAN and a segmentation GAN [Arabi et al.]. The segmentation GAN segments the generated sCT into air cavities, soft tissue, bone and background air to regularize the sCT generation process through back-propagated gradients. The proposed method reported a mean bias of less than 4% in selected VOIs of brain PET, which was slightly worse than atlas-based methods (up to 3%) but better than segmentation-based methods (up to −10%).
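      A minimal sketch of such a combined generator objective (adversarial term plus pixel-intensity and pixel-gradient terms; the loss weights are illustrative assumptions, not values from the reviewed papers):

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_out_fake, sct, ct, lam_pix=100.0, lam_grad=10.0):
    """Adversarial + pixel-intensity + pixel-gradient loss for sCT synthesis.

    disc_out_fake: discriminator logits for the generated sCT.
    sct, ct: generated and real CT tensors of shape (N, 1, H, W).
    """
    # Adversarial term: push the discriminator to call the sCT "real" (label 1).
    adv = F.binary_cross_entropy_with_logits(
        disc_out_fake, torch.ones_like(disc_out_fake))
    # Pixel-intensity term.
    pix = F.l1_loss(sct, ct)
    # Pixel-gradient term: match finite differences along both image axes.
    grad = (F.l1_loss(sct[..., 1:, :] - sct[..., :-1, :],
                      ct[..., 1:, :] - ct[..., :-1, :])
            + F.l1_loss(sct[..., :, 1:] - sct[..., :, :-1],
                        ct[..., :, 1:] - ct[..., :, :-1]))
    return adv + lam_pix * pix + lam_grad * grad

# Example call with random stand-ins.
loss = generator_loss(torch.randn(4, 1),
                      torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```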

      2.2 PET AC for PET-only

      In a PET-only scanner, anatomical images such as MR are not available. Maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithms are able to simultaneously reconstruct the activity and attenuation distributions from the PET emission data by incorporating time-of-flight information [Heußer et al.]. However, the insufficient timing resolution of clinical PET systems leads to slow convergence, high noise levels in the attenuation maps, and substantial noise propagation between the activity and attenuation maps [Hwang et al.]. Recently, it has been shown that non-attenuation-corrected (NAC) PET can be used to generate sCT, exploiting the powerful image style transfer capability of deep learning methods. Similar to sCT generation from MR, the sCT images are generated from the NAC PET images using a machine learning model trained on pairs of NAC PET and CT images acquired on a PET/CT scanner. Again, the Unet and GAN are the two common networks adopted for this application [Hwang et al.; Liu et al.; Armanious et al.; Dong et al.]. Among these studies, learning-based PET AC for whole body PET imaging was investigated by Dong et al. for the first time [Dong et al.]. A cycle-consistent GAN (CycleGAN) method that combined a self-attention Unet for the generator architecture and a fully convolutional network for the discriminator architecture was employed. The method learned a transformation that minimized the difference between the sCT generated from NAC PET and the original CT. It also learned an inverse transformation such that the cycle NAC PET images generated from the sCT were close to the original NAC PET images. A self-attention strategy was utilized to identify the most informative components and mitigate the disturbance of noise. The average bias of the proposed method on selected organs and lesions ranged from −1.06% to 3.57%, except for the lungs (10.72%).
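      A minimal sketch of the cycle-consistency idea (G maps NAC PET to sCT and F_inv maps the sCT back to NAC PET; both generators are single-layer placeholders, and the loss weight is an illustrative assumption):

```python
import torch
import torch.nn.functional as F
from torch import nn

# Single-layer placeholder generators; real CycleGAN generators are deep networks.
G = nn.Conv2d(1, 1, kernel_size=3, padding=1)      # NAC PET -> sCT
F_inv = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # sCT -> NAC PET

def cycle_loss(nac_pet, ct, lam_cycle=10.0):
    """Supervised synthesis term plus a cycle-consistency term.

    nac_pet, ct: paired tensors of shape (N, 1, H, W).
    """
    sct = G(nac_pet)                       # forward mapping: NAC PET -> sCT
    recon_nac = F_inv(sct)                 # inverse mapping back to NAC PET
    mapping = F.l1_loss(sct, ct)           # the sCT should match the real CT
    cycle = F.l1_loss(recon_nac, nac_pet)  # the cycle should return the input
    return mapping + lam_cycle * cycle

loss = cycle_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```

      Constraining the forward mapping with its inverse in this way discourages many-to-one solutions, which is the motivation the reviewed studies give for preferring CycleGAN over a plain GAN here.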
      The above strategy still requires a PET reconstruction step using the sCT after the mapping step. Thus, attempts have been made to directly map AC PET from the NAC PET images using deep learning methods, bypassing the PET reconstruction step. Yang et al. and Shiri et al. both proposed Unet-based methods for brain PET AC and demonstrated their feasibility [Yang et al.; Shiri et al.]. Dong et al. again investigated whole body direct PET AC, using a supervised 3D patch-based CycleGAN [Dong et al.]. The CycleGAN had a NAC-to-AC PET mapping and an inverse AC-to-NAC PET mapping in order to constrain the NAC-to-AC mapping to approach a one-to-one mapping. Since NAC PET images have anatomical structures similar to those of AC PET images but lack contrast information, residual blocks were integrated into the network to learn the differences between NAC PET and AC PET. They reported average biases of the proposed method on selected organs and lesions ranging from 2.11% to 3.02%, except for the lungs (−17.02%). A representative result is shown in Fig. 2, which is redrawn based on the method presented in [Dong et al.]. Compared with a Unet and a GAN trained and tested on the same datasets, the proposed CycleGAN method achieved superior performance in most evaluated metrics and less bias in lesions.
      Fig. 2. Examples of whole body (a) CT images and PET images (b) without AC, (c) with AC by the CT in (a), and (d) synthetic AC PET produced by the learning-based method presented in [Dong et al.]. The display window for (a) is [−300, 300] HU, for (b) is [0, 1000], and for (c) and (d) is [0, 10000], with PET units in Bq/ml.

      2.3 Discussion

      Although it is difficult to specify the tolerance level of quantification errors before they affect clinical judgement, the general consensus is that quantification errors of 10% or less typically do not affect diagnosis [
      • Hofmann M.
      • Steinke F.
      • Scheel V.
      • Charpiat G.
      • Farquhar J.
      • Aschoff P.
      • et al.
      MRI-based attenuation correction for PET/MRI: a novel approach combining pattern recognition and atlas registration.
      ]. Based on the average relative bias represented by these studies, almost all of the proposed methods in these studies meet this criterion. However, it should be noted that due to the variation among study objects, the bias in some VOIs on a per patient basis may exceed 10% [
      • Blanc-Durand P.
      • Khalife M.
      • Sgard B.
      • Kaushik S.
      • Soret M.
      • Tiss A.
      • et al.
      Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: comparison with Atlas, ZTE and CT based attenuation correction.
      ,
      • Leynes A.P.
      • Yang J.
      • Wiesinger F.
      • Kaushik S.S.
      • Shanbhag D.D.
      • Seo Y.
      • et al.
      Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
      ,
      • Hwang D.
      • Kim K.Y.
      • Kang S.K.
      • Seo S.
      • Paeng J.C.
      • Lee D.S.
      • et al.
      Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning.
      ,
      • Armanious K.
      • Küstner T.
      • Reimold M.
      • Nikolaou K.
      • La Fougère C.
      • Yang B.
      • et al.
      Independent brain 18F-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks.
      ,
      • Dong X.
      • Wang T.
      • Lei Y.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
      ,
      • Dong X.
      • Lei Y.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Deep learning-based attenuation correction in the absence of structural information for whole-body PET imaging.
      ,
      • Yang J.
      • Park D.
      • Gullberg G.T.
      • Seo Y.
      Joint correction of attenuation and scatter in image space using deep convolutional neural networks for dedicated brain (18)F-FDG PET.
      ]. As reported by Armanious et al., under-estimation of about 10% to 20% was observed in small volumes around air and bone structures such as paranasal sinuses and mastoid cells [
      • Armanious K.
      • Küstner T.
      • Reimold M.
      • Nikolaou K.
      • La Fougère C.
      • Yang B.
      • et al.
      Independent brain 18F-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks.
      ]. Hwang et al. reported under-estimation over 10% for a patient around occipital cortex [
      • Hwang D.
      • Kim K.Y.
      • Kang S.K.
      • Seo S.
      • Paeng J.C.
      • Lee D.S.
      • et al.
      Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning.
      ], which may result in a loss of sensitivity for diagnosis of neurodegenerative diseases if cortical hypometabolism occurred in that region. Leynes et al. also reported under-estimation of a soft tissue lesion over 10% for a patient [
      • Leynes A.P.
      • Yang J.
      • Wiesinger F.
      • Kaushik S.S.
      • Shanbhag D.D.
      • Seo Y.
      • et al.
      Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
]. Under-estimation of lesion uptake may result in misidentification and introduce errors in defining the shape and position of the lesion for radiation therapy treatment planning, as well as in evaluating treatment response. This suggests that special attention should be given to the standard deviation of the bias as well as its mean when interpreting results, since the proposed methods may have poor local performance that affects some patients. Listing or plotting every data point, or at least the range, instead of simply giving a mean and standard deviation would therefore be more informative in demonstrating the performance of the proposed methods. Moreover, dynamic PET may require higher uptake quantification accuracy than conventional static PET, since the kinetic parameters derived from it can be more sensitive to uptake reconstruction error, owing to the nonlinearity and identifiability issues of the kinetic models [
      • Wang G.
      • Qi J.
      Analysis of penalized likelihood image reconstruction for dynamic PET quantification.
      ].
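To make this concrete, a minimal sketch of such a per-VOI bias report is given below (numpy); `cases`, an iterable of predicted/reference image pairs with VOI masks, is a hypothetical placeholder:

```python
import numpy as np

def voi_relative_bias(pet_pred, pet_ref, voi_mask):
    """Relative bias (%) of the mean uptake inside one VOI."""
    mean_pred = pet_pred[voi_mask].mean()
    mean_ref = pet_ref[voi_mask].mean()
    return 100.0 * (mean_pred - mean_ref) / mean_ref

# 'cases' is a hypothetical list of (predicted, reference, mask) arrays,
# one entry per patient/VOI combination.
biases = np.array([voi_relative_bias(p, r, m) for p, r, m in cases])

# Report the full spread, not only mean +/- SD, so that outliers
# (e.g., single VOIs with >10% bias) stay visible.
print(f"mean {biases.mean():.1f}%, SD {biases.std():.1f}%, "
      f"range [{biases.min():.1f}%, {biases.max():.1f}%]")
```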
Various learning-based approaches have been proposed in the reviewed studies. However, the reported errors cannot be fairly compared across studies because of differences in datasets, evaluation methodology, and reconstruction parameters. Thus, it is impracticable to choose the best method based solely on the reported performance. Some studies compared their proposed methods with others on the same datasets, which may reveal the relative advantages and limitations of the methods selected for comparison. For example, many studies compared their proposed methods with conventional segmentation-based or atlas-based methods, and almost all the learning-based methods gained the upper hand, exhibiting less bias on average with less variation among patients and VOIs. Comparisons among different learning-based methods are much less common, probably because these methods have only been published in the last two years. As mentioned above, Dong et al. compared CycleGAN, GAN and Unet for direct NAC PET-to-AC PET mapping on whole-body PET images and demonstrated the superiority of CycleGAN, owing to the added inverse mapping from AC PET to NAC PET that constrains the otherwise ill-posed interconversion between AC PET and NAC PET to a one-to-one mapping [
      • Dong X.
      • Lei Y.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Deep learning-based attenuation correction in the absence of structural information for whole-body PET imaging.
      ].
Among the above PET/MR studies, different types of MR sequences have been used for sCT generation. The specific MR sequence in each study usually depends on its accessibility, and the optimal sequence yielding the best PET AC performance has not been studied. T1-weighted and T2-weighted sequences are the two most common MR sequences in standard of care. Due to their wide availability, an sCT model can be trained on a relatively large number of datasets with CT and co-registered T1- or T2-weighted MR images regardless of PET acquisition. Thus, they have been utilized in multiple studies where the MR images in the training datasets were not acquired with PET [
      • Yang X.
      • Wang T.
      • Lei Y.
      • Higgins K.
      • Liu T.
      • Shim H.
      • et al.
      MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning.
      ,
      • Liu F.
      • Jang H.
      • Kijowski R.
      • Bradshaw T.
      • McMillan A.B.
      Deep learning MR imaging-based attenuation correction for PET/MR imaging.
      ,
      • Larsson A.
      • Johansson A.
      • Axelsson J.
      • Nyholm T.
      • Asklund T.
      • Riklund K.
      • et al.
      Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.
      ,
      • Spuhler K.D.
      • Gardus 3rd, J.
      • Gao Y.
      • DeLorenzo C.
      • Parsey R.
      • Huang C.
      Synthesis of patient-specific transmission data for PET attenuation correction for PET/MRI neuroimaging using a convolutional neural network.
      ,
      • Arabi H.
      • Zeng G.
      • Zheng G.
      • Zaidi H.
      Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI.
]. However, air and bone have little contrast in T1- or T2-weighted MR images, which may impede the extraction of the corresponding features in learning-based methods. The two-point Dixon sequence can separate water and fat, which makes it suitable for segmentation. It has already been applied on commercial PET/MR scanners, integrated with the volume-interpolated breath-hold examination (VIBE), for Dixon-based soft-tissue and air segmentation to build PET AC attenuation maps [
      • Freitag M.T.
      • Fenchel M.
      • Baumer P.
      • Heusser T.
      • Rank C.M.
      • Kachelriess M.
      • et al.
      Improved clinical workflow for simultaneous whole-body PET/MRI using high-resolution CAIPIRINHA-accelerated MR-based attenuation correction.
      ,
      • Izquierdo-Garcia D.
      • Hansen A.E.
      • Förster S.
      • Benoit D.
      • Schachoff S.
      • Fürst S.
      • et al.
      An SPM8-based approach for attenuation correction combining segmentation and nonrigid template formation: application to simultaneous PET/MR brain imaging.
      ]. Its drawback is again the poor contrast of bone, which results in the misclassification of bone as fat. Learning-based approaches have been employed on Dixon MR images by Torrado-Carvajal et al. [
      • Torrado-Carvajal A.
      • Vera-Olmos J.
      • Izquierdo-Garcia D.
      • Catalano O.A.
      • Morales M.A.
      • Margolin J.
      • et al.
      Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction.
]. They compared the performance of a Unet-based method with the vendor-provided segmentation-based method on the same datasets. The relative bias of the proposed Unet-based method was 0.27 ± 2.59%, −0.03 ± 2.98% and −0.95 ± 5.09% for fat, soft tissue and bone, respectively, versus 1.48 ± 6.51%, −0.34 ± 10.00% and −25.1 ± 12.71% for the segmentation-based method. The learning-based method significantly reduced the bias in bone, with reduced variance in all VOIs among all patients, indicating superior capability in PET AC due to improved bone identification. In order to enhance bone contrast and facilitate feature extraction in learning-based methods, ultrashort echo time (UTE) and zero echo time (ZTE) MR sequences have recently been highlighted due to their capability to generate positive image contrast from bone [
      • Liu F.
      • Jang H.
      • Kijowski R.
      • Bradshaw T.
      • McMillan A.B.
      Deep learning MR imaging-based attenuation correction for PET/MR imaging.
      ]. Ladefoged et al. and Blanc-Durand et al. demonstrated the feasibility of UTE and ZTE MR sequences using Unet in PET/MR AC, respectively [
      • Blanc-Durand P.
      • Khalife M.
      • Sgard B.
      • Kaushik S.
      • Soret M.
      • Tiss A.
      • et al.
      Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: comparison with Atlas, ZTE and CT based attenuation correction.
      ,
      • Ladefoged C.N.
      • Marner L.
      • Hindsholm A.
      • Law I.
      • Hojgaard L.
      • Andersen F.L.
      Deep learning based attenuation correction of PET/MRI in pediatric brain tumor patients: evaluation in a clinical setting.
]. The former study reported a tumor bias of −0.1% (95% CI: −0.2 to 0.5%) using Unet on UTE, not statistically different from ground truth, which was superior to the significant bias of 2.2% (95% CI: 1.5 to 2.8%) of the vendor-provided segmentation-based method with Dixon. The latter study reported a mean bias of −0.2% for the Unet AC method on ZTE, ranging from 1.7% in the vertex to −1.8% in the temporal lobe, and compared it with a vendor-provided segmentation-based AC method on ZTE [
      • Wiesinger F.
      • Sacolick L.I.
      • Menini A.
      • Kaushik S.S.
      • Ahn S.
      • Veit-Haibach P.
      • et al.
      Zero TE MR bone imaging in the head.
], which underestimated uptake in most VOIs with a mean bias of −2.2% (range 0.1% to −4.5%). It should be noted that neither study compared UTE/ZTE against a conventional MR sequence under the same deep learning network, so the advantage of these specialized sequences has not been validated. Moreover, compared with conventional T1-/T2-weighted MR images, UTE/ZTE MR images have little diagnostic value in soft tissue and add to the acquisition time, which may hinder their utility in time-sensitive cases such as whole-body PET/MR scans. Other studies used multiple MR sequences as inputs for training and sCT generation, since this is believed to yield better sCT accuracy than a single MR sequence [
      • Larsson A.
      • Johansson A.
      • Axelsson J.
      • Nyholm T.
      • Asklund T.
      • Riklund K.
      • et al.
      Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.
      ,
      • Navalpakkam B.K.
      • Braun H.
      • Kuwert T.
      • Quick H.H.
      Magnetic resonance-based attenuation correction for PET/MR hybrid imaging using continuous valued attenuation maps.
      ,
      • Leynes A.P.
      • Yang J.
      • Wiesinger F.
      • Kaushik S.S.
      • Shanbhag D.D.
      • Seo Y.
      • et al.
      Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
      ,
      • Gong K.
      • Yang J.
      • Kim K.
      • El Fakhri G.
      • Seo Y.
      • Li Q.
      Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images.
]. The most common combination is UTE/ZTE plus Dixon, which provide bone-air contrast and fat-soft tissue contrast, respectively. However, no study so far has quantitatively investigated the additional gain in PET AC accuracy of a multi-sequence input over a single-sequence input. Thus, the necessity of multiple MR sequences for PET AC needs to be further evaluated and balanced against the extra acquisition time.
In the above PET/MR studies, the CT and MR images in the training datasets were acquired on different machines. The reviewed learning-based sCT methods all require image registration between the CT and MR to create CT-MR pairs for training. For machine learning-based sCT methods such as random forest, the performance is sensitive to registration error in the training pairs due to their one-to-one mapping strategy. Deep learning methods such as Unet- and GAN-based methods are also susceptible to registration error when using a pixel-to-pixel loss. Kazemifar et al. showed that using mutual information as the loss function of the GAN generator can bypass the registration step in training [
      • Kazemifar S.
      • McGuire S.
      • Timmerman R.
      • Wardak Z.
      • Nguyen D.
      • Park Y.
      • et al.
      MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach.
]. CycleGAN-based methods are more robust to registration error since they introduce a cycle consistency loss to enforce structural consistency between the original and cycle images (e.g., forcing the cycle MR image generated from the synthetic CT to match the original MR image) [
      • Dong X.
      • Lei Y.
      • Tian S.
      • Wang T.
      • Patel P.
      • Curran W.J.
      • et al.
      Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network.
      ,
      • Harms J.
      • Lei Y.
      • Wang T.
      • Zhang R.
      • Zhou J.
      • Tang X.
      • et al.
      Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography.
      ,
      • Lei Y.
      • Dong X.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks.
      ,
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
]. These techniques help relax the registration accuracy requirement in MR-sCT generation studies and are worth exploring in the context of PET AC.
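As an illustration, a minimal sketch of the cycle-consistency term is shown below (PyTorch); the generators `G` (MR to sCT) and `F` (sCT to MR) are assumed to be defined elsewhere:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, mr, lam=10.0):
    """MR -> synthetic CT -> cycle MR should reproduce the original MR.
    G and F are the forward and inverse generators (assumed given);
    lam weights the cycle term against the adversarial losses."""
    sct = G(mr)           # synthetic CT generated from MR
    mr_cycle = F(sct)     # cycle MR recovered from the synthetic CT
    return lam * l1(mr_cycle, mr)
```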
Whole-body PET scanning is an important imaging tool for detecting tumor metastases. Almost all of the reviewed studies developed their proposed methods on brain PET datasets. Although learning-based methods are data-driven and their network architectures are not body-part specific, feasibility demonstrated on brain images may not translate to the whole body due to its higher anatomical heterogeneity and inter-subject variability, as can be seen from Fig. 2. The only two studies on learning-based whole-body PET AC are from Dong et al., who proposed a CycleGAN-based method and investigated both the sCT generation strategy and the direct mapping strategy [
      • Dong X.
      • Wang T.
      • Lei Y.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
      ,
      • Dong X.
      • Lei Y.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Deep learning-based attenuation correction in the absence of structural information for whole-body PET imaging.
]. In both studies they reported average bias within 5% in all selected organs except the lungs, where it exceeded 10%. The authors attributed the poor performance in the lung to tissue inhomogeneity and insufficiently representative training datasets. Both studies were performed on PET alone, without the need for CT, and so far no learning-based methods have been developed for whole-body PET/MR scanners. Integrating whole-body MR into PET AC could be more challenging than for brain datasets since the MR acquisition may have a limited field of view (FOV), longer scan times that introduce more motion, and degraded image quality due to a larger inhomogeneous-field region.

      3. Low-count PET reconstruction

Low-count PET has important applications in pediatric PET and radiotherapy response evaluation, with the advantages of better motion control and lower patient dose. However, the low statistics result in increased image noise, reduced contrast-to-noise ratio, and bias in uptake measurements. The reconstruction of diagnostic quality images similar to standard- or full-count PET from low-count PET cannot be achieved by simple post-processing operations such as denoising, since lowering the radiation dose changes not only the noise but also the local uptake values [
      • An L.
      • Zhang P.
      • Adeli E.
      • Wang Y.
      • Ma G.
      • Shi F.
      • et al.
      Multi-level canonical correlation analysis for standard-dose PET image estimation.
]. Moreover, even with the same administered dose, the uptake distribution and signal level can vary greatly among patients. Recently, learning-based low-count PET reconstruction methods have been proposed to exploit powerful data-driven feature extraction between two image datasets. These studies are summarized in Table 2. As with the PET AC methods, the general workflow can be divided into two groups depending on whether anatomical image information is used as an additional input (i.e. both the PET and MR images of a PET/MR scanner are input) or not (only PET images are input). To the best of our knowledge, no study has investigated using both PET and CT as inputs. A potential reason could be that CT images show little contrast among soft tissues and thus cannot provide soft-tissue structural information to guide PET denoising. Most proposed methods are implemented on brain PET scans, with a few on lung and whole body. Again, the ground truth for training and evaluation is full-count PET. Whereas the evaluations in PET AC focus on relative bias, the evaluations in the reviewed low-count PET reconstruction studies focus on image quality and the similarity between the predicted result and its corresponding ground truth. The most common metrics include PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), NMSE (normalized mean square error) and CV (coefficient of variation); minimal implementations of these metrics are sketched after Table 2. Several studies also compared their proposed methods with competing learning-based methods and conventional denoising methods.
Table 2. Overview of learning-based low-count PET reconstruction methods.

| Method and strategy | PET or PET + other modalities | Site | # of patients in training/testing datasets | Count fraction (low-count/full-count) | Reported image quality metrics+ (low-count/predicted full-count/full-count) | Authors |
| --- | --- | --- | --- | --- | --- | --- |
| Random forest | PET + MR (T1-weighted) | Brain | 11, leave-one-out | 1/4 | Coefficient of variation: 0.38/0.33/0.31 | Kang et al., 2015 |
| Multi-level canonical correlation analysis | PET, PET + MR (T1-weighted, DTI) | Brain | 11, leave-one-out | 1/4 | PSNR: 19.5/23.9/N.A.* | An et al., 2016 |
| Sparse representation | PET + MR (T1-weighted, DTI) | Brain | 8, leave-one-out | 1/4 | PSNR: 19.02 (15.74, 22.41)/19.98 (16.92, 23.24)/N.A. | Wang et al., 2016 |
| Dictionary learning | PET + MR (T1-weighted, DTI) | Brain | 18, leave-one-out (only 8 used for test) | 1/4 | PSNR#: (16, 20)/(24, 27)/N.A. | Wang et al., 2017 |
| CNN | PET or PET + MR (T1- and T2-weighted) | Brain | 40, five-fold cross-validation (18F-florbetaben) | 1/100 | PSNR#: 31/36 (PET-only), 38 (PET + MR)/N.A. | Chen et al., 2019 (Ultra-low-dose (18)F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs) |
| CNN | PET | Brain, lung | Brain: 2 training/1 testing; lung: 5 training/1 testing | Brain: 1/5; lung: 1/10 | N.A. | Gong et al., 2019 (PET image denoising using a deep neural network through fine tuning) |
| Unet (without training datasets) | PET + MR (T1-weighted) | Brain | 0 training/1 testing | 1/8 | N.A. | Gong et al., 2018 |
| CNN (without training datasets) | PET | Brain | 0 training/1 testing (monkey) | 1/5 and 1/10 | CNR: N.A./10.74 (1/5 counts), 7.44 (1/10 counts)/11.71 | Hashimoto et al., 2019 |
| Deep auto-context CNN | PET + MR (T1-weighted) | Brain | 16, leave-one-out | 1/4 | PSNR: N.A./24.76/N.A. | Xiang et al., 2017 |
| Iterative reconstruction using denoising CNN | PET | Brain | 27 training (11C-DASB)/1 testing | 1/6 | SSIM: N.A./0.496/N.A. | Kim et al., 2018 |
| Iterative reconstruction using CNN | PET | Brain, lung | Brain: 15 training/1 validation/1 testing; lung: 5 training/1 testing | 1/10 | N.A. | Gong et al., 2019 |
| Deep convolutional encoder-decoder | PET (sinogram) | Whole body (simulated) | 245 training/52 validation/53 testing | N/A | PSNR: 34.69 | Häggström et al., 2019 |
| GAN | PET | Brain | 16, leave-one-out | 1/4 | PSNR#: 20/23/N.A. (normal); 21/24/N.A. (mild cognitive impairment) | Wang et al., 2018 |
| GAN | PET + MR (T1, DTI) | Brain | 16, leave-one-out | 1/4 | PSNR: 19.88 ± 2.34/24.61 ± 1.79/N.A. (normal); 21.33 ± 2.53/25.19 ± 1.98/N.A. (mild cognitive impairment) | Wang et al., 2019 |
| GAN | PET | Brain | 40, four-fold cross-validation | 1/100 | PSNR#: 24/30/N.A. | Ouyang et al., 2019 (Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss) |
| Comparison of CAE, Unet and GAN | PET | Lung | 10, five-fold cross-validation | 1/10 | N.A. | Lu et al., 2019 |
| CycleGAN | PET | Whole body | 25, leave-one-out + 10 hold-out tests | 1/8 | PSNR: 39.4 ± 3.1/46.0 ± 3.8/N.A. (leave-one-out); 38.1 ± 3.4/41.5 ± 2.5 (hold-out) | Lei et al., 2019 |

* N.A.: not available, i.e. not explicitly indicated in the publication.
# Estimated from figures.
+ Numbers in parentheses indicate minimum and maximum values.
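The metrics in Table 2 have standard definitions; a minimal numpy sketch is given below, where `pred` and `ref` are assumed to be the predicted and full-count images on a common intensity scale (SSIM is omitted since a reference implementation exists as skimage.metrics.structural_similarity):

```python
import numpy as np

def psnr(pred, ref):
    """Peak signal-to-noise ratio, with the peak taken from the reference."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def nmse(pred, ref):
    """Normalized mean square error."""
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def cv(img, roi_mask):
    """Coefficient of variation inside a (nominally uniform) ROI."""
    return img[roi_mask].std() / img[roi_mask].mean()
```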

      3.1 Machine learning-based methods

Learning-based methods for low-dose PET reconstruction were developed before deep learning was introduced into this research area. Kang et al. employed a random forest to predict full-dose PET from low-dose PET and MR images [
      • Kang J.
      • Gao Y.
      • Shi F.
      • Lalush D.S.
      • Lin W.
      • Shen D.
      Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images.
]. Their method first extracted features from patches of low-count PET and MR images, using a segmentation to build tissue-specific models for initialization. An iterative refinement strategy was then used to further improve the prediction accuracy. The method was evaluated on brain PET images acquired at 1/4th of the full dose and showed promising quantification accuracy and enhanced image quality.
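A much-simplified sketch of this patch-based regression idea is shown below (scikit-learn); it omits the segmentation-based tissue models and iterative refinement of the published method, and `low_pet`, `mr`, `full_pet` and `centers` are assumed inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(vol, centers, r=2):
    """Flatten the (2r+1)^3 neighborhood around each center voxel."""
    return np.stack([vol[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].ravel()
                     for x, y, z in centers])

# Features: concatenated low-count PET and MR patches; target: full-count voxel.
X = np.hstack([extract_patches(low_pet, centers), extract_patches(mr, centers)])
y = np.array([full_pet[i, j, k] for i, j, k in centers])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# At test time, the same patch features are extracted from a new low-count
# PET/MR pair and model.predict() estimates the full-count voxel values.
```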
      Wang et al. proposed a sparse representation (SR) framework for low dose PET reconstruction [
      • Wang Y.
      • Zhang P.
      • An L.
      • Ma G.
      • Kang J.
      • Shi F.
      • et al.
      Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.
      ]. In order to incorporate multi-sequence MR images as inputs with low-count PET, they used a mapping strategy to ensure that the sparse coefficients estimated from the multimodal MR images and low-count PET images could be applied directly to the prediction of full dose PET images. An incremental refinement framework was also added to further improve the performance. A patch selection-based dictionary construction method was used to speed up the prediction process. Their evaluation on brain PET scans with 1/4th of full dose demonstrated significantly improved PSNR and NMSE compared with the original low-count PET. The authors also compared the performance of the proposed mapping-based SR with that of the conventional SR on the same dataset and demonstrated that the proposed method consistently achieved higher PSNR and lower NMSE across all subjects, which indicates that the mapping strategy does help the SR enhance the prediction quality.
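The core of such coupled-dictionary prediction can be sketched in a few lines (scikit-learn); `D_low` and `D_full` are assumed paired patch dictionaries with matched, unit-norm atom columns, and `x_low` one vectorized low-count patch:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

# Sparse-code the low-count patch on the low-count dictionary
# (columns of D_low are atoms, assumed normalized to unit norm)...
coef = orthogonal_mp(D_low, x_low, n_nonzero_coefs=5)
# ...then reuse the same coefficients on the coupled full-count dictionary.
x_full_hat = D_full @ coef
```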
      An et al. also proposed to formulate full-count estimation as a sparse representation problem using a multi-level canonical correlation analysis-based data-driven scheme [
      • An L.
      • Zhang P.
      • Adeli E.
      • Wang Y.
      • Ma G.
      • Shi F.
      • et al.
      Multi-level canonical correlation analysis for standard-dose PET image estimation.
]. The rationale is that the intra-data relationships in the low-count PET and full-count PET data spaces are different, which lowers the efficacy of directly applying coefficients learned from the low-count PET dictionary to the full-count PET dictionary for estimation. Canonical correlation analysis was therefore used to learn a global mapping from the original coupled low-count and full-count PET dictionaries, projecting both kinds of data into a common space. The multi-level scheme further improves the learning of the common space by passing the low-count PET dictionary atoms with non-zero patch coefficients from the first level, together with the corresponding full-count PET dictionary subset, on to the next level, rather than estimating the target full-count PET immediately at the first level. This framework can use PET alone or a combination of PET and multi-sequence MR images as inputs. The evaluation on low-count brain PET patients showed that the proposed method improved the visual quality and the quantification accuracy of the SUV (standardized uptake value), and significantly outperformed competing learning-based methods such as SR and random forest as well as conventional denoising methods such as Block-matching and 3D filtering (BM3D) [
      • Dabov K.
      • Foi A.
      • Katkovnik V.
      • Egiazarian K.
      Image denoising by sparse 3-D transform-domain collaborative filtering.
      ] and Optimized Blockwise Nonlocal Means (OBNM) [
      • Coupe P.
      • Yger P.
      • Prima S.
      • Hellier P.
      • Kervrann C.
      • Barillot C.
      An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images.
      ].
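A minimal sketch of the common-space idea (single level, scikit-learn) follows; `X_low` and `X_full` are assumed matrices of paired low-/full-count training patches:

```python
from sklearn.cross_decomposition import CCA

# Learn projections that maximize the correlation between paired
# low-count and full-count patch representations.
cca = CCA(n_components=20).fit(X_low, X_full)
Z_low, Z_full = cca.transform(X_low, X_full)
# Dictionary coding / estimation is then carried out in the common
# space (Z_low, Z_full), where the paired data are better aligned,
# instead of in the raw patch spaces.
```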
      Wang et al. pointed out that the performance of the SR approach depends on the completeness of the dictionary, which means that it typically requires coupled samples in the training dataset [
      • Wang Y.
      • Ma G.
      • An L.
      • Shi F.
      • Zhang P.
      • Lalush D.S.
      • et al.
      Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
]. However, for low-count PET reconstruction with multi-sequence MR images as additional inputs, it is hard to collect MR images of every contrast for each sample in training. They therefore proposed an efficient semi-supervised tripled dictionary learning method to more effectively utilize all the available training samples, including the incomplete ones, to predict the full-count PET. The proposed framework enforced node-to-node and edge-to-edge matching between the patches of low-count PET/MR and full-count PET, which reduces the requirements on their similarity. They validated the proposed method on 18 brain PET patients, only 8 of which had complete datasets (i.e. all of low-count PET, full-count PET and MR images of three contrasts). The proposed method outperformed traditional SR and random forest in PSNR and NMSE. They also compared the performance of the proposed method with and without the incomplete datasets included in training, and the results suggested that adding incomplete datasets benefited rather than harmed the performance, so they should be used when available.

      3.2 Deep learning-based methods

The sparse-learning-based methods reviewed above usually involve several steps such as patch extraction, encoding, and reconstruction. They can be time-consuming when testing new cases since they solve a large number of optimization problems online, which might not be appropriate for clinical practice. Secondly, these methods tend to over-smooth the image due to their patch-based processing, which may limit clinical usability in fine-structure detection. For example, studies show that over-smoothing PET images can adversely affect the signal-to-noise ratio of low-contrast lesions, thereby degrading lesion detectability [
      • Wangerin K.
      • Ahn S.
      • Wollenweber S.
      • Ross S.
      • Kinahan P.
      • Manjeshwar R.
      Evaluation of lesion detectability in positron emission tomography when using a convergent penalized likelihood image reconstruction method.
      ,
      • Qi J.
      Theoretical evaluation of the detectability of random lesions in Bayesian emission reconstruction.
      ].
      Xiang et al. proposed a deep auto-context CNN to predict full-count PET images based on local patches in low-count PET and MR images [
      • Xiang L.
      • Qiao Y.
      • Nie D.
      • An L.
      • Wang Q.
      • Shen D.
      Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI.
]. This regression method integrated multiple CNN modules via the auto-context strategy in order to iteratively improve the estimated PET image. A basic four-layer CNN was first used to build a model estimating the full-count PET from low-count PET and MR images. The estimated full-count PET was treated as the source of context information and used as input, along with the low-count PET and MR images, to a new four-layer CNN. Thus, the multiple CNNs were gradually concatenated into a deeper network and optimized together with back-propagation. Validation on brain PET/MR datasets showed the proposed method provided quality comparable to full-count PET, as did the SR-based method proposed by An et al. [
      • An L.
      • Zhang P.
      • Adeli E.
      • Wang Y.
      • Ma G.
      • Shi F.
      • et al.
      Multi-level canonical correlation analysis for standard-dose PET image estimation.
], with much less prediction time. A potential limitation of this study is that the axial slices extracted from the 3D images were treated as independent 2D images for training the deep architecture. This could cause loss of information in the sagittal and coronal directions and discontinuities in the estimated results across slices [
      • Coupe P.
      • Yger P.
      • Prima S.
      • Hellier P.
      • Kervrann C.
      • Barillot C.
      An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images.
      ]. This patch-based workflow also tends to ignore the global spatial information in the prediction results.
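The auto-context chaining can be sketched as follows (PyTorch, 2D as in the original study; layer and channel sizes are illustrative, not those of the published network):

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One small CNN stage mapping its input channels to a PET estimate."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class AutoContextCNN(nn.Module):
    def __init__(self, n_stages=3):
        super().__init__()
        self.first = Stage(in_ch=2)  # low-count PET + MR
        self.rest = nn.ModuleList(Stage(in_ch=3) for _ in range(n_stages - 1))

    def forward(self, low_pet, mr):
        est = self.first(torch.cat([low_pet, mr], dim=1))
        for stage in self.rest:
            # previous estimate re-enters as the context channel
            est = stage(torch.cat([low_pet, mr, est], dim=1))
        return est
```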
Recent studies have shown that low-noise training datasets are unnecessary for CNNs to produce denoised images, since the CNN architecture has an intrinsic ability to solve inverse problems [

      Lehtinen J, Munkberg J, Hasselgren J, Laine S, Karras T, Aittala M, et al. Noise2Noise: Learning Image Restoration without Clean Data. arXiv e-prints 2018. 1803.04189.

]. The deep image prior approach iteratively fits a network that maps a fixed input, typically random noise, to the corrupted image; a denoised image is then obtained as the network output after a moderate number of iterations [
      • Hashimoto F.
      • Ohba H.
      • Ote K.
      • Teramoto A.
      • Tsukada H.
      Dynamic PET image denoising using deep convolutional neural networks without prior training datasets.
]. Hashimoto et al. applied the deep image prior approach with a CNN to dynamic brain PET imaging. The static PET image acquired over the entire scan duration was used as the network input and the dynamic PET images were used as the labels. They reported that the proposed method maintained the CNR (contrast-to-noise ratio) and outperformed conventional model-based denoising methods in a dynamic brain PET study of monkeys [
      • Hashimoto F.
      • Ohba H.
      • Ote K.
      • Teramoto A.
      • Tsukada H.
      Dynamic PET image denoising using deep convolutional neural networks without prior training datasets.
      ]. Similarly, Gong et al. also used the deep image prior framework with a modified Unet structure for PET/MR imaging [
      • Gong K.
      • Catana C.
      • Qi J.
      • Li Q.
      PET image reconstruction using deep image prior.
]. In their method, the MR images served as patient-specific prior information and network inputs. The network was then updated during the iterative reconstruction process based on the MR images and the acquired low-count PET. Results demonstrated that the proposed method recovered more cortical detail and suppressed more white-matter noise than a Gaussian post-filter and a pre-trained neural network penalty, and provided higher uptake values in tumor than a kernel method.
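A minimal deep-image-prior training loop looks as follows (PyTorch); `net` is any image-generating CNN such as a small Unet, `z` the fixed input (standing in for random noise or, as in Gong et al., the patient's MR image), and `noisy` the low-count PET, all assumed defined:

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(1500):  # stop at moderate iterations: training too long refits the noise
    optimizer.zero_grad()
    loss = F.mse_loss(net(z), noisy)  # fit the fixed input to the noisy target
    loss.backward()
    optimizer.step()

denoised = net(z).detach()  # the network output serves as the denoised image
```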
To fully utilize the 3D information in image volumes, Wang et al. proposed an end-to-end framework based on a 3D conditional GAN to predict full-count PET from low-count PET. They used both convolutional and up-convolutional layers in the generator architecture to ensure that input and output have the same size. The generator was a 3D Unet-like deep architecture with skip connections, which was more efficient than voxel-wise estimation. Compared with a 2D axial conditional GAN, the proposed 3D scheme produced better visual quality with sharper texture on the sagittal and coronal views. Compared with the SR-based and CNN-based methods proposed in references [
      • Wang Y.
      • Zhang P.
      • An L.
      • Ma G.
      • Kang J.
      • Shi F.
      • et al.
      Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.
      ,
      • Wang Y.
      • Ma G.
      • An L.
      • Shi F.
      • Zhang P.
      • Lalush D.S.
      • et al.
      Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
      ,
      • Xiang L.
      • Qiao Y.
      • Nie D.
      • An L.
      • Wang Q.
      • Shen D.
      Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI.
      ], the proposed method featured better spatial resolution and quantitative accuracy. To incorporate multi-sequence MR images as inputs for better performance, Wang et al. proposed an auto-context-based “locality adaptive” multi-modality GANs (LA-GANs) model to synthesize the full-count PET image from both the low-count PET and the accompanying multi-sequence MRI [
      • Wang Y.
      • Zhou L.
      • Yu B.
      • Wang L.
      • Zu C.
      • Lalush D.S.
      • et al.
      3D auto-context-based locality adaptive multi-modality GANs for PET synthesis.
]. "Locality adaptive" means that the weight of each image contrast depends on the image location. A 1 × 1 × 1 kernel was used to learn the locality-adaptive fusion mechanism so as to minimize the number of extra parameters introduced by the multi-sequence inputs. Compared with their previous method, it achieved better performance and higher PSNR while incurring only a small number of additional parameters.
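A much-simplified sketch of location-dependent fusion is given below (PyTorch); the published LA-GANs formulation is more elaborate, and here one 1 × 1 × 1 fusion kernel is simply learned per coarse image region:

```python
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    """Fuse multi-modality channels with region-dependent 1x1x1 kernels,
    so the mixing weights of the contrasts vary with image location
    while adding only a handful of parameters per region."""
    def __init__(self, n_modalities, n_regions):
        super().__init__()
        self.kernels = nn.ModuleList(
            nn.Conv3d(n_modalities, 1, kernel_size=1) for _ in range(n_regions))

    def forward(self, x, region_masks):
        # x: (B, n_modalities, D, H, W); region_masks: non-overlapping
        # boolean masks of shape (D, H, W), one per region (assumed given).
        out = torch.zeros_like(x[:, :1])
        for kernel, mask in zip(self.kernels, region_masks):
            out = out + kernel(x) * mask
        return out
```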
      Lei et al. proposed a CycleGAN model to estimate diagnostic quality PET images using low count data on whole body PET scans [
      • Lei Y.
      • Dong X.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks.
]. A representative whole-body low-count PET reconstruction result, redrawn based on the method of Lei et al., is shown in Fig. 3. The CycleGAN learns a transformation that synthesizes, from low-count PET, full-count PET images that would be indistinguishable from those of a standard clinical protocol. The algorithm also learns an inverse transformation such that the cycle low-count PET (the inverse of the synthetic estimate) generated from the synthetic full-count PET is close to the true low-count PET. This approach improves the prediction of the GAN model and the uniqueness of the synthetic dataset. Residual blocks were also integrated into the architecture to better capture the difference between low- and full-count images and enhance convergence (a minimal sketch of such a block follows Fig. 3). Evaluation showed that the average bias among all 10 hold-out test patients was less than 5% for all selected organs, and the comparison with Unet- and GAN-based methods indicated higher quantitative accuracy and PSNR in all selected organs.
Fig. 3. Examples of whole-body PET images of (a) low counts, (b) full counts and (c) predicted full counts from (a) using the learning-based method proposed in reference
      [
      • Lei Y.
      • Dong X.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks.
      ]
      . Window width of [0 10000] with PET units in Bq/ml.
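A minimal residual block of the kind referred to above might look as follows (PyTorch, channel count illustrative):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Output = x + body(x): the convolutional branch only needs to model
    the residual, i.e. the difference between low- and full-count images,
    which also eases gradient flow and convergence."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.body(x)
```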
In addition to direct mapping from low-count PET to full-count PET, some studies investigated the feasibility of embedding a deep learning network into the conventional iterative reconstruction framework. Kim et al. modified a denoising CNN and trained it with patient datasets of full-count PET and low-count PET simulated at a preset noise level [
      • Kim K.
      • Wu D.
      • Gong K.
      • Dutta J.
      • Kim J.H.
      • Son Y.D.
      • et al.
      Penalized PET reconstruction using deep learning prior and local linear fitting.
]. The predictions of the denoising CNN then served as a prior in the iterative reconstruction, with a local linear fitting function to correct the unwanted bias caused by noise-level changes at each iteration. Gong et al. used a CNN trained with pairs of high-dose PET and iterative reconstructions of low-count PET [
      • Gong K.
      • Guan J.
      • Kim K.
      • Zhang X.
      • Yang J.
      • Seo Y.
      • et al.
      Iterative PET image reconstruction using convolutional neural network representation.
]. Then, instead of directly feeding a noisy image into the CNN and taking its output, they used the CNN to define a feasible set of valid PET images within the iterative reconstruction. Both methods were compared with conventional denoising and iterative reconstruction algorithms and demonstrated lower noise at the same quantitative accuracy. In contrast to these methods, in which the CNN served as a regularizer inside the reconstruction loop, Häggström et al. used a deep convolutional encoder-decoder network that takes the low-count PET sinogram as input and directly outputs the high-dose PET image, so that the network implicitly learns the inverse problem [
      • Häggström I.
      • Schmidtlein C.R.
      • Campanella G.
      • Fuchs T.J.
      DeepPET: a deep encoder–decoder network for directly solving the PET image reconstruction inverse problem.
      ]. On simulated whole-body PET images, the proposed method outperformed the conventional ordered subset expectation maximization (OSEM) method by preserving fine details, reducing noise and reducing reconstruction time.
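The general flavor of embedding a trained CNN in an iterative loop can be sketched in plug-and-play style; this is not the exact formulation of either cited work, and `forward_proj`, `back_proj`, `sinogram`, `sens`, `img_shape` and `denoiser` are all assumptions:

```python
import torch

# x: current image estimate; sens = back_proj(ones) is the sensitivity image.
x = torch.ones(img_shape)
for k in range(30):
    # MLEM data-fidelity update for the Poisson likelihood.
    ratio = sinogram / (forward_proj(x) + 1e-8)
    x = x * back_proj(ratio) / sens
    # Regularization step: pull the estimate toward the set of images
    # the trained denoising CNN considers valid.
    x = denoiser(x)
```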

      3.3 Discussion

As categorized above, the reviewed studies can be broadly divided into machine learning-based and deep learning-based approaches. Although the machine learning-based methods showed good estimation performance, most have limitations since they depend on handcrafted features extracted from prior knowledge, and handcrafted features have limited ability to represent images effectively. The voxel-wise estimation strategy usually involves solving a large number of optimization problems online, which is time-consuming on new test datasets. Moreover, the small-patch-based methods average the overlapping patches in a final step and thus tend to generate over-smoothed, blurred images, losing texture information in the PET images. In contrast, deep learning-based methods learn representative features directly from the training datasets. Recent advances in GPUs also allow deep learning-based methods to be implemented efficiently with large numbers of parameters and trained on full-sized images. To speed up convergence and avoid overfitting in training, techniques such as batch normalization, bias terms and ReLU activations can be integrated into the network. Compared with a CNN, which has a single generator network, a GAN has a generator network and a discriminator network. The discriminator learns to distinguish the generator's predictions from the real inputs, while the generator is optimized so that its samples are indistinguishable from real data by the discriminator. The two networks are trained alternately to respectively minimize and maximize an objective function (a sketch of one alternating update follows).
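One such alternating update can be sketched as follows (PyTorch); `G`, `D` (assumed to output logits) and their optimizers are assumed defined, and the L1 term and its weight are a common but not universal choice:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(G, D, opt_G, opt_D, low_pet, full_pet):
    # --- discriminator update: separate real full-count from synthetic ---
    opt_D.zero_grad()
    fake = G(low_pet).detach()          # detach: do not update G here
    pred_real, pred_fake = D(full_pet), D(fake)
    loss_D = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    loss_D.backward()
    opt_D.step()

    # --- generator update: fool D while staying close to full-count PET ---
    opt_G.zero_grad()
    fake = G(low_pet)
    pred_fake = D(fake)
    loss_G = bce(pred_fake, torch.ones_like(pred_fake)) \
             + 10.0 * l1(fake, full_pet)
    loss_G.backward()
    opt_G.step()
```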
      Comparisons among several reviewed methods have been included in a few studies. For example, Wang et al. [
      • Wang Y.
      • Zhou L.
      • Yu B.
      • Wang L.
      • Zu C.
      • Lalush D.S.
      • et al.
      3D auto-context-based locality adaptive multi-modality GANs for PET synthesis.
      ,
      • Wang Y.
      • Yu B.
      • Wang L.
      • Zu C.
      • Lalush D.S.
      • Lin W.
      • et al.
      3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
      ] compared SR-based, dictionary learning-based and CNN-based methods [
      • An L.
      • Zhang P.
      • Adeli E.
      • Wang Y.
      • Ma G.
      • Shi F.
      • et al.
      Multi-level canonical correlation analysis for standard-dose PET image estimation.
      ,
      • Wang Y.
      • Zhang P.
      • An L.
      • Ma G.
      • Kang J.
      • Shi F.
      • et al.
      Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.
      ,
      • Wang Y.
      • Ma G.
      • An L.
      • Shi F.
      • Zhang P.
      • Lalush D.S.
      • et al.
      Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
      ,
      • Xiang L.
      • Qiao Y.
      • Nie D.
      • An L.
      • Wang Q.
      • Shen D.
      Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI.
] with their proposed GAN-based method. It was observed that the SR-based method is more likely to generate over-smoothed full-count PET images than the CNN- and GAN-based methods. GAN-based methods preserve detailed information better than CNN-based methods, which the authors attribute to CNN-based methods not considering the varying contributions across image locations. The proposed GAN-based methods also outperformed the competing methods in quantitative metrics of image quality and accuracy with statistical significance. The CNN- and GAN-based methods also feature high processing speeds of around several seconds, compared to minutes or even hours for the machine learning-based methods. Note that these comparison studies were from the same group that proposed the comparison methods; independent evaluations of the different methods are encouraged. Lu et al. [
      • Lu W.
      • Onofrey J.A.
      • Lu Y.
      • Shi L.
      • Ma T.
      • Liu Y.
      • et al.
      An investigation of quantitative accuracy for deep learning based denoising in oncological PET.
] compared the performance of CAE, Unet and GAN for lung nodule quantification. They reported that Unet and GAN were slightly better than CAE, with smaller differences between the predicted results and the ground truth, while the CAE predictions were smoother. Quantitatively, Unet and GAN had significantly lower SUV bias than CAE, especially for small nodules. Unet and GAN performed comparably, with Unet having a lower computational cost.
      The effectiveness of a 3D model over a 2D model is confirmed in a few publications [
      • Wang Y.
      • Yu B.
      • Wang L.
      • Zu C.
      • Lalush D.S.
      • Lin W.
      • et al.
      3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
      ,
      • Lu W.
      • Onofrey J.A.
      • Lu Y.
      • Shi L.
      • Ma T.
      • Liu Y.
      • et al.
      An investigation of quantitative accuracy for deep learning based denoising in oncological PET.
]. In a 2D model, whole 2D slices, in the sagittal, coronal or axial view, are used as inputs, while a 3D model uses large 3D image volume patches as inputs. Results show that a 3D model performs better in all three views, whereas a 2D model shows obvious discontinuity and blur in the views other than the one used for training, as well as substantial underestimation in quantification (the difference in input dimensionality is sketched below).
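The distinction lies simply in the dimensionality of the convolutions and inputs (PyTorch; channel counts illustrative):

```python
import torch.nn as nn

# 2D model: trained on single slices of one view; the other two views
# are never seen jointly, which explains the cross-slice discontinuities.
conv2d = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
# expects input of shape (batch, 1, H, W)

# 3D model: trained on volumetric patches, so all three views contribute.
conv3d = nn.Conv3d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
# expects input of shape (batch, 1, D, H, W)
```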
The benefit of multi-sequence MR input has also been indicated in the reviewed work. Wang et al. compared the performance of their proposed method using T1-weighted images and FA (fractional anisotropy) and MD (mean diffusivity) maps from DTI (diffusion tensor imaging) of the brain, jointly or separately [
      • Wang Y.
      • Ma G.
      • An L.
      • Shi F.
      • Zhang P.
      • Lalush D.S.
      • et al.
      Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
]. They found that including any one MR contrast as input improved upon low-count PET alone, and that using all MR contrasts jointly with low-count PET enhanced the performance further. It was also observed that the results with a single MR contrast were similar across all three contrast types, suggesting that these contrasts contribute similarly in most brain areas. Similar conclusions were drawn in other studies, with the slight difference that T1-weighted images may be more helpful than DTI in full-count PET prediction [
      • Wang Y.
      • Ma G.
      • An L.
      • Shi F.
      • Zhang P.
      • Lalush D.S.
      • et al.
      Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
      ,
      • Wang Y.
      • Zhou L.
      • Yu B.
      • Wang L.
      • Zu C.
      • Lalush D.S.
      • et al.
      3D auto-context-based locality adaptive multi-modality GANs for PET synthesis.
]. This is likely because the T1-weighted image shows both white and grey matter clearly, whereas the DTI image contributes more in white matter regions than in grey matter regions.
      Lei et al. presented the first study of learning-based low-count PET reconstruction on whole body datasets [
      • Lei Y.
      • Dong X.
      • Wang T.
      • Higgins K.
      • Liu T.
      • Curran W.J.
      • et al.
      Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks.
]. As in the PET AC studies, applying these data-driven approaches to brain data is often less complicated because of the smaller inter-patient anatomical variation in brain images compared to whole-body images. Whole-body PET images also have higher intra-patient uptake variation, i.e. the tracer concentration is much higher in the brain and bladder than anywhere else, which may decrease the relative contrast among other organs and make feature extraction difficult. Moreover, considering the challenges of whole-body PET/MR, the benefit of MR images for full-count prediction is worth re-evaluating.

      4. Summary and outlook

Recent years have witnessed increasing use of machine learning, especially deep learning, in PET imaging. Various types of machine learning networks have been borrowed from the computer vision field and adapted to specific clinical tasks in PET imaging. As reviewed in this paper, the most common applications are PET AC and low-count PET reconstruction. It is an emerging field: all of the reviewed studies were published within the last five years. With developments in both machine learning algorithms and computing hardware, more learning-based methods are expected to facilitate the clinical workflow of PET imaging, with growing potential in quantification applications.
In addition to PET AC and low-count reconstruction, there are other topics in PET imaging where machine learning can be exploited. For example, high-resolution PET has great potential for visualizing and accurately measuring the radiotracer concentration in structures with dimensions on the order of millimeters, where the partial volume effect substantially limits the scanner's ability to discriminate activity [
      • Stickel J.R.
      • Cherry S.R.
      High-resolution PET detector design: modelling components of intrinsic spatial resolution.
      ,
      • Funck T.
      • Paquette C.
      • Evans A.
      • Thiel A.
      Surface-based partial-volume correction for high-resolution PET.
      ]. In addition to the advancement in state-of-art PET detectors that have achieved sub-nanosecond coincident timing resolution and sub-millimeter coincident spatial resolution [
      • Stickel J.R.
      • Cherry S.R.
      High-resolution PET detector design: modelling components of intrinsic spatial resolution.
      ,
      • Vandenbroucke A.
      • Foudray A.M.
      • Olcott P.D.
      • Levin C.S.
      Performance characterization of a new high resolution PET scintillation detector.
      ,
      • Yang Y.
      • Bec J.
      • Zhou J.
      • Zhang M.
      • Judenhofer M.S.
      • Bai X.
      • et al.
      A prototype high-resolution small-animal PET scanner dedicated to mouse brain imaging.
], the partial volume effect caused by the residual spatial blurring can be further suppressed by image-based correction methods with the aid of a second image, such as MR or CT, which has a substantially reduced partial volume effect [
      • Meltzer C.C.
      • Kinahan P.E.
      • Greer P.J.
      • Nichols T.E.
      • Comtat C.
      • Cantwell M.N.
      • et al.
      Comparative evaluation of MR-based partial-volume correction schemes for PET.
      ,
      • Meltzer C.C.
      • Zubieta J.K.
      • Links J.M.
      • Brakeman P.
      • Stumpf M.J.
      • Frost J.J.
      MR-based correction of brain PET measurements for heterogeneous gray matter radioactivity distribution.
      ,
      • Rousset O.G.
      • Ma Y.
      • Evans A.C.
      Correction for partial volume effects in PET: Principle and validation.
      ,
      • Sattarivand M.
      • Kusano M.
      • Poon I.
      • Caldwell C.
      Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness.
      ,
      • Erlandsson K.
      • Buvat I.
      • Pretorius P.H.
      • Thomas B.A.
      • Hutton B.F.
      A review of partial volume correction techniques for emission tomography and their applications in neurology, cardiology and oncology.
]. Machine learning is promising here given the excellent direct end-to-end mapping performance it has shown in other applications. Encouraging results have been reported by Song et al. [

      Song T-A, Chowdhury SR, Yang F, Dutta J. Super-resolution PET imaging using convolutional neural networks. arXiv e-prints 2019. 1906.03645.

] in a preliminary study using a CNN. In addition to improving image quality and quantification accuracy, machine learning methods are also attractive for other advanced applications such as segmentation and radiomics, especially given their demonstrated success in other imaging modalities [
      • Lei Y.
      • Liu Y.
      • Dong X.
      • Tian S.
      • Wang T.
      • Jiang X.
      • et al.
      Automatic multi-organ segmentation in thorax CT images using U-Net-GAN.
      ,
      • Lei Y.
      • Wang T.
      • Wang B.
      • He X.
      • Tian S.
      • Jani A.B.
      • et al.
      Ultrasound prostate segmentation based on 3D V-Net with deep supervision.
      ,
      • Wang B.
      • Lei Y.
      • Jeong J.J.
      • Wang T.
      • Liu Y.
      • Tian S.
      • et al.
      Automatic MRI prostate segmentation using 3D deeply supervised FCN with concatenated atrous convolution.
      ,
      • Wang B.
      • Lei Y.
      • Wang T.
      • Dong X.
      • Tian S.
      • Jiang X.
      • et al.
      Automated prostate segmentation of volumetric CT images using 3D deeply supervised dilated FCN.
      ,
      • Wang T.
      • Lei Y.
      • Shafai-Erfani G.
      • Jiang X.
      • Dong X.
      • Zhou J.
      • et al.
      Learning-based automatic segmentation on arteriovenous malformations from contract-enhanced CT images.
      ,
      • Wang T.
      • Lei Y.
      • Tang H.
      • Harms J.
      • Wang C.
      • Liu T.
      • et al.
      A learning-based automatic segmentation method on left ventricle in SPECT imaging.
      ,
      • Dong X.
      • Lei Y.
      • Wang T.
      • Thomas M.
      • Tang L.
      • Curran W.J.
      • et al.
      Automatic multiorgan segmentation in thorax CT images using U-net-GAN.
      ,
      • Lei Y.
      • Tian S.
      • He X.
      • Wang T.
      • Wang B.
      • Patel P.
      • et al.
      Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net.
      ,
      • Wang B.
      • Lei Y.
      • Tian S.
      • Wang T.
      • Liu Y.
      • Patel P.
      • et al.
      Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation.
      ,
      • Wang T.
      • Lei Y.
      • Tang H.
      • He Z.
      • Castillo R.
      • Wang C.
      • et al.
      A learning-based automatic segmentation and quantification method on left ventricle in gated myocardial perfusion SPECT imaging: a feasibility study.
      ,
      • Wang T.
      • Lei Y.
      • Tian S.
      • Jiang X.
      • Zhou J.
      • Liu T.
      • et al.
      Learning-based automatic segmentation of arteriovenous malformations on contrast CT images in brain stereotactic radiosurgery.
      ,
      • Hatt M.
      • Laurent B.
      • Ouahabi A.
      • Fayad H.
      • Tan S.
      • Li L.
      • et al.
      The first MICCAI challenge on PET tumor segmentation.
      ,
      • Lei Y.
      • Dong X.
      • Tian Z.
      • Liu Y.
      • Tian S.
      • Wang T.
      • et al.
      CT prostate segmentation based on synthetic MRI-aided deep attention fully convolution network.
      ,
      • Lei Y.
      • Wang T.
      • Tian S.
      • Dong X.
      • Jani A.B.
      • Schuster D.
      • et al.
      Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI.
      ]. Although machine learning has been available for decades, all of these applications are still new to this field, and an increasing number of publications can be expected over the next few years.
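      To make the end-to-end mapping idea concrete, the following is a minimal, illustrative PyTorch sketch, not code from any of the reviewed studies: a small residual 3D CNN that maps a degraded PET patch (e.g., low-count or low-resolution) to a corrected target patch. The layer widths, network depth, and L1 training loss are all assumptions chosen for illustration.

```python
# Minimal illustrative sketch (assumes PyTorch); NOT code from the reviewed
# studies. A residual 3D CNN learning a direct end-to-end mapping from a
# degraded PET patch to a corrected target patch.
import torch
import torch.nn as nn

class ResidualMappingCNN(nn.Module):
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv3d(1, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv3d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a residual and add it back: the network only needs to model
        # the difference between the degraded input and the target image.
        return x + self.body(x)

# One illustrative training step on random placeholder patches.
model = ResidualMappingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

degraded = torch.randn(2, 1, 32, 32, 32)  # stand-in low-count PET patches
target = torch.randn(2, 1, 32, 32, 32)    # stand-in full-count PET patches
optimizer.zero_grad()
loss = loss_fn(model(degraded), target)
loss.backward()
optimizer.step()
```

      Learning the residual rather than the full image, as in this sketch, is a common design choice in CNN-based image restoration, since the network then only has to model the correction rather than reproduce the whole image.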
      The studies reviewed in this paper are all feasibility studies with small to intermediate numbers of patients for training and testing. Because machine learning is data-driven, the clinical utility and potential impact of these learning-based methods cannot be comprehensively evaluated until large clinical datasets are used and the methods are prospectively evaluated against clinical outcome measures. Moreover, the representativeness of the training and testing datasets needs special attention in clinical studies, where patient cohorts can be heterogeneous: missing demographic diversity and pathological abnormalities may reduce the robustness and generalizability of the proposed methods. Care must also be taken when a model trained on data from one center, scanner, or protocol is applied to data from another, since this may lead to unpredictable performance; the sketch below illustrates one way to probe such cross-center robustness.
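      As a concrete illustration of the cross-center concern, a leave-one-center-out evaluation trains on all centers but one and tests on the held-out center, exposing how performance degrades on unseen centers, scanners, or protocols. The sketch below is a generic Python outline under assumed inputs: patient records carrying a 'center' label, plus caller-supplied training and evaluation callables, all of which are hypothetical placeholders.

```python
# Generic sketch of a leave-one-center-out evaluation. `train_fn` and
# `eval_fn` are hypothetical caller-supplied callables; the patient-record
# schema (a 'center' key) is an assumption for illustration.
from collections import defaultdict

def leave_one_center_out(patients, train_fn, eval_fn):
    """Train on all centers but one, test on the held-out center, repeat."""
    by_center = defaultdict(list)
    for patient in patients:
        by_center[patient["center"]].append(patient)
    scores = {}
    for held_out, test_set in by_center.items():
        train_set = [p for center, group in by_center.items()
                     if center != held_out for p in group]
        model = train_fn(train_set)                  # e.g., fit a CNN on pooled centers
        scores[held_out] = eval_fn(model, test_set)  # e.g., mean PSNR on unseen center
    return scores

# Toy usage with stand-in callables (replace with real training/evaluation).
records = [{"center": "A"}, {"center": "A"}, {"center": "B"}, {"center": "C"}]
print(leave_one_center_out(records,
                           train_fn=lambda train: len(train),
                           eval_fn=lambda model, test: model / len(test)))
```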

      Acknowledgements

      This research was supported in part by the National Cancer Institute of the National Institutes of Health under Award Number R01CA215718 and by an Emory Winship Cancer Institute pilot grant.

      References

        • Schrevens L.
        • Lorent N.
        • Dooms C.
        • Vansteenkiste J.
        The role of PET scan in diagnosis, staging, and management of non-small cell lung cancer.
        Oncologist. 2004; 9: 633-643
        • Sugiyama M.
        • Sakahara H.
        • Torizuka T.
        • Kanno T.
        • Nakamura F.
        • Futatsubashi M.
        • et al.
        18F-FDG PET in the detection of extrahepatic metastases from hepatocellular carcinoma.
        J Gastroenterol. 2004; 39: 961-968
        • Ma S.-Y.
        • See L.-C.
        • Lai C.-H.
        • Chou H.-H.
        • Tsai C.-S.
        • Ng K.-K.
        • et al.
        Delayed 18F-FDG PET for detection of paraaortic lymph node metastases in cervical cancer patients.
        J Nucl Med. 2003; 44: 1775-1783
        • Strobel K.
        • Dummer R.
        • Husarik D.B.
        • Lago M.P.
        • Hany T.F.
        • Steinert H.C.
        High-risk melanoma: accuracy of FDG PET/CT with added CT morphologic information for detection of metastases.
        Radiology. 2007; 244: 566-574
        • Adler L.P.
        • Faulhaber P.F.
        • Schnur K.C.
        • Al-Kasi N.L.
        • Shenk R.R.
        Axillary lymph node metastases: screening with [F-18]2-deoxy-2-fluoro-D-glucose (FDG) PET.
        Radiology. 1997; 203: 323-327
        • Abdel-Nabi H.
        • Doerr R.J.
        • Lamonica D.M.
        • Cronin V.R.
        • Galantowicz P.J.
        • Carbone G.M.
        • et al.
        Staging of primary colorectal carcinomas with fluorine-18 fluorodeoxyglucose whole-body PET: correlation with histopathologic and CT findings.
        Radiology. 1998; 206: 755-760
        • Taira A.V.
        • Herfkens R.J.
        • Gambhir S.S.
        • Quon A.
        Detection of bone metastases: assessment of integrated FDG PET/CT imaging.
        Radiology. 2007; 243: 204-211
        • Ohta M.
        • Tokuda Y.
        • Suzuki Y.
        • Kubota M.
        • Makuuchi H.
        • Tajima T.
        • et al.
        Whole body PET for the evaluation of bony metastases in patients with breast cancer: comparison with 99Tcm-MDP bone scintigraphy.
        Nucl Med Commun. 2001; 22: 875-879
        • Czernin J.
        • Allen-Auerbach M.
        • Schelbert H.R.
        Improvements in cancer staging with PET/CT: literature-based evidence as of September 2006.
        J Nucl Med. 2007; 48: 78S-88S
        • Biehl K.J.
        • Kong F.M.
        • Dehdashti F.
        • Jin J.Y.
        • Mutic S.
        • El Naqa I.
        • et al.
        18F-FDG PET definition of gross tumor volume for radiotherapy of non-small cell lung cancer: is a single standardized uptake value threshold approach appropriate?.
        J Nucl Med. 2006; 47: 1808-1812
        • Paulino A.C.
        • Koshy M.
        • Howell R.
        • Schuster D.
        • Davis L.W.
        Comparison of CT- and FDG-PET-defined gross tumor volume in intensity-modulated radiotherapy for head-and-neck cancer.
        Int J Radiat Oncol Biol Phys. 2005; 61: 1385-1392
        • Nestle U.
        • Kremp S.
        • Schaefer-Schuler A.
        • Sebastian-Welsch C.
        • Hellwig D.
        • Rübe C.
        • et al.
        Comparison of different methods for delineation of 18F-FDG PET–positive tissue for target volume definition in radiotherapy of patients with non-small cell lung cancer.
        J Nucl Med. 2005; 46: 1342-1348
        • Schwaiger M.
        • Ziegler S.
        • Nekolla S.G.
        PET/CT: challenge for nuclear cardiology.
        J Nucl Med. 2005; 46: 1664-1678
        • Parker M.W.
        • Iskandar A.
        • Limone B.
        • Perugini A.
        • Kim H.
        • Jones C.
        • et al.
        Diagnostic accuracy of cardiac positron emission tomography versus single photon emission computed tomography for coronary artery disease: a bivariate meta-analysis.
        Circul Cardiovasc Imag. 2012; 5: 700-707
        • Politis M.
        • Piccini P.
        Positron emission tomography imaging in neurological disorders.
        J Neurol. 2012; 259: 1769-1780
        • Boellaard R.
        Standards for PET image acquisition and quantitative data analysis.
        J Nucl Med. 2009; 50: 11s-20s
        • Ben-Haim S.
        • Ell P.
        18F-FDG PET and PET/CT in the evaluation of cancer treatment response.
        J Nucl Med. 2009; 50: 88-99
        • Shankar L.K.
        • Hoffman J.M.
        • Bacharach S.
        • Graham M.M.
        • Karp J.
        • Lammertsma A.A.
        • et al.
        Consensus recommendations for the use of 18F-FDG PET as an indicator of therapeutic response in patients in national cancer institute trials.
        J Nucl Med. 2006; 47: 1059-1066
        • Wahl R.L.
        • Jacene H.
        • Kasamon Y.
        • Lodge M.A.
        From RECIST to PERCIST: evolving considerations for PET response criteria in solid tumors.
        J Nucl Med. 2009; 50: 122s-150s
        • Naqa I.E.
        The role of quantitative PET in predicting cancer treatment outcomes.
        Clin Transl Imag. 2014; 2: 305-320
        • Kinahan P.E.
        • Townsend D.W.
        • Beyer T.
        • Sashin D.
        Attenuation correction for a combined 3D PET/CT scanner.
        Med Phys. 1998; 25: 2046-2053
        • Watson C.C.
        New, faster, image-based scatter correction for 3D PET.
        IEEE Trans Nucl Sci. 2000; 47: 1587-1594
        • Lodge M.A.
        • Chaudhry M.A.
        • Wahl R.L.
        Noise considerations for PET quantification using maximum and peak standardized uptake value.
        J Nucl Med. 2012; 53: 1041-1047
        • Slifstein M.
        • Laruelle M.
        Effects of statistical noise on graphic analysis of PET neuroreceptor studies.
        J Nucl Med. 2000; 41: 2083-2088
        • Soret M.
        • Bacharach S.L.
        • Buvat I.
        Partial-volume effect in PET tumor imaging.
        J Nucl Med. 2007; 48: 932-945
        • Sureshbabu W.
        • Mawlawi O.
        PET/CT imaging artifacts.
        J Nucl Med Technol. 2005; 33: 156-161
        • Blodgett T.M.
        • Mehta A.S.
        • Mehta A.S.
        • Laymon C.M.
        • Carney J.
        • Townsend D.W.
        PET/CT artifacts.
        Clin Imaging. 2011; 35: 49-63
        • Miwa K.
        • Wagatsuma K.
        • Iimori T.
        • Sawada K.
        • Kamiya T.
        • Sakurai M.
        • et al.
        Multicenter study of quantitative PET system harmonization using NIST-traceable 68Ge/68Ga cross-calibration kit.
        Physica Med. 2018; 52: 98-103
        • Burger C.
        • Goerres G.
        • Schoenes S.
        • Buck A.
        • Lonn A.H.
        • Von Schulthess G.K.
        PET attenuation coefficients from CT images: experimental evaluation of the transformation of CT into PET 511-keV attenuation coefficients.
        Eur J Nucl Med Mol Imaging. 2002; 29: 922-927
        • Witoszynskyj S.
        • Andrzejewski P.
        • Georg D.
        • Hacker M.
        • Nyholm T.
        • Rausch I.
        • et al.
        Attenuation correction of a flat table top for radiation therapy in hybrid PET/MR using CT- and 68Ge/68Ga transmission scan-based μ-maps.
        Physica Med. 2019; 65: 76-83
        • Fei B.
        • Yang X.
        • Nye J.A.
        • Aarsvold J.N.
        • Raghunath N.
        • Cervo M.
        • et al.
        MR/PET quantification tools: registration, segmentation, classification, and MR-based attenuation correction.
        Med Phys. 2012; 39: 6443-6454
        • Yang X.
        • Fei B.
        Multiscale segmentation of the skull in MR images for MRI-based attenuation correction of combined MR/PET.
        J Am Med Inform Assoc. 2013; 20: 1037-1045
        • Goff-Rougetet R.L.
        • Frouin V.
        • Mangin J.-F.
        • Bendriem B.
        Segmented MR images for brain attenuation correction in PET.
        SPIE Medical Imaging. 1994; 12

        • El Fakhri G.
        • Kijewski M.F.
        • Johnson K.A.
        • Syrkin G.
        • Killiany R.J.
        • Becker J.A.
        • et al.
        MRI-guided SPECT perfusion measures and volumetric MRI in prodromal Alzheimer disease.
        Arch Neurol. 2003; 60: 1066-1072
        • Hofmann M.
        • Pichler B.
        • Scholkopf B.
        • Beyer T.
        Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.
        Eur J Nucl Med Mol Imaging. 2009; 36: S93-S104
        • Zaidi H.
        • Montandon M.L.
        • Slosman D.O.
        Magnetic resonance imaging-guided attenuation and scatter corrections in three-dimensional brain positron emission tomography.
        Med Phys. 2003; 30: 937-948
        • Kops E.R.
        • Herzog H.
        Alternative methods for attenuation correction for PET images in MR-PET scanners.
        IEEE Nucl Sci Symp Conf Rec. 2007; 6: 4327-4330
        • Hofmann M.
        • Steinke F.
        • Scheel V.
        • Charpiat G.
        • Farquhar J.
        • Aschoff P.
        • et al.
        MRI-based attenuation correction for PET/MRI: a novel approach combining pattern recognition and atlas registration.
        J Nucl Med. 2008; 49: 1875-1883
        • Catana C.
        The dawn of a new era in low-dose PET imaging.
        Radiology. 2019; 290: 657-658
        • Lei Y.
        • Xu D.
        • Zhou Z.
        • Wang T.
        • Dong X.
        • Liu T.
        • et al.
        A denoising algorithm for CT image using low-rank sparse coding.
        SPIE Medical Imaging. 2018; 10574
        • Wang T.
        • Lei Y.
        • Tian Z.
        • Dong X.
        • Liu Y.
        • Jiang X.
        • et al.
        Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy.
        J Med Imaging. 2019; 6: 1-10
        • Erdi Y.E.
        • Macapinlac H.
        • Rosenzweig K.E.
        • Humm J.L.
        • Larson S.M.
        • Erdi A.K.
        • et al.
        Use of PET to monitor the response of lung cancer to radiation treatment.
        Eur J Nucl Med. 2000; 27: 861-866
        • Cliffe H.
        • Patel C.
        • Prestwich R.
        • Scarsbrook A.
        Radiotherapy response evaluation using FDG PET-CT-established and emerging applications.
        Brit J Radiol. 2017; 90: 20160764
        • Das S.K.
        • McGurk R.
        • Miften M.
        • Mutic S.
        • Bowsher J.
        • Bayouth J.
        • et al.
        Task Group 174 report: utilization of [18F]Fluorodeoxyglucose positron emission tomography ([18F]FDG-PET) in radiation therapy.
        Med Phys. 2019; 46: e706-e725
        • Rahmim A.
        • Lodge M.A.
        • Karakatsanis N.A.
        • Panin V.Y.
        • Zhou Y.
        • McMillan A.
        • et al.
        Dynamic whole-body PET imaging: principles, potentials and applications.
        Eur J Nucl Med Mol Imaging. 2019; 46: 501-518
        • Borjesson P.K.
        • Jauw Y.W.
        • de Bree R.
        • Roos J.C.
        • Castelijns J.A.
        • Leemans C.R.
        • et al.
        Radiation dosimetry of 89Zr-labeled chimeric monoclonal antibody U36 as used for immuno-PET in head and neck cancer patients.
        J Nucl Med. 2009; 50: 1828-1836
        • Jauw Y.W.S.
        • Heijtel D.F.
        • Zijlstra J.M.
        • Hoekstra O.S.
        • de Vet H.C.W.
        • Vugts D.J.
        • et al.
        Noise-induced variability of immuno-PET with zirconium-89-labeled antibodies: an analysis based on count-reduced clinical images.
        Mol Imag Biol. 2018; 20: 1025-1034
        • Nguyen N.C.
        • Vercher-Conejero J.L.
        • Sattar A.
        • Miller M.A.
        • Maniawski P.J.
        • Jordan D.W.
        • et al.
        Image quality and diagnostic performance of a digital PET prototype in patients with oncologic diseases: initial experience and comparison with analog PET.
        J Nucl Med. 2015; 56: 1378-1385
        • Karp J.S.
        • Surti S.
        • Daube-Witherspoon M.E.
        • Muehllehner G.
        Benefit of time-of-flight in PET: experimental and clinical results.
        J Nucl Med. 2008; 49: 462-470
        • Qi J.
        • Leahy R.M.
        A theoretical study of the contrast recovery and variance of MAP reconstructions from PET data.
        IEEE Trans Med Imaging. 1999; 18: 293-305
        • Chan C.
        • Fulton R.
        • Barnett R.
        • Feng D.D.
        • Meikle S.
        Postreconstruction nonlocal means filtering of whole-body PET with an anatomical prior.
        IEEE Trans Med Imaging. 2014; 33: 636-650
        • Christian B.T.
        • Vandehey N.T.
        • Floberg J.M.
        • Mistretta C.A.
        Dynamic PET denoising with HYPR processing.
        J Nucl Med. 2010; 51: 1147-1154
        • Balcerzyk M.
        • Moszynski M.
        • Kapusta M.
        • Wolski D.
        • Pawelke J.
        • Melcher C.L.
        • et al.
        YSO, LSO, GSO and LGSO. A study of energy resolution and nonproportionality.
        IEEE Trans Nucl Sci. 2000; 47: 1319-1323
        • Herbert D.J.
        • Moehrs S.
        • D’Ascenzo N.
        • Belcari N.
        • Del Guerra A.
        • Morsani F.
        • et al.
        The silicon photomultiplier for application to high-resolution positron emission tomography.
        Nucl Instrum Methods Phys Res, Sect A. 2007; 573: 84-87
        • Sahiner B.
        • Pezeshk A.
        • Hadjiiski L.M.
        • Wang X.
        • Drukker K.
        • Cha K.H.
        • et al.
        Deep learning in medical imaging and radiation therapy.
        Med Phys. 2019; 46: e1-e36
        • Giger M.L.
        Machine learning in medical imaging.
        J Am College Radiol. 2018; 15: 512-520
        • Erickson B.J.
        • Korfiatis P.
        • Akkus Z.
        • Kline T.L.
        Machine learning for medical imaging.
        RadioGraphics. 2017; 37: 505-515
        • Feng M.
        • Valdes G.
        • Dixit N.
        • Solberg T.D.
        Machine learning in radiation oncology: opportunities, requirements, and needs.
        Front Oncol. 2018; 8: 110
        • Jarrett D.
        • Stride E.
        • Vallis K.
        • Gooding M.J.
        Applications and limitations of machine learning in radiation oncology.
        Brit J Radiol. 2019; 92: 20190001
        • Cui G.
        • Jeong J.J.
        • Lei Y.
        • Wang T.
        • Liu T.
        • Curran W.J.
        • et al.
        Machine-learning-based classification of Glioblastoma using MRI-based radiomic features.
        SPIE Medical Imaging. 2019; 10950
        • Fu Y.
        • Lei Y.
        • Wang T.
        • Curran W.J.
        • Liu T.
        • Yang X.
        Deep learning in medical image registration: a review.
        Phys Med Biol. 2020; https://doi.org/10.1088/1361-6560/ab843e
        • Lei Y.
        • Liu Y.
        • Wang T.
        • Tian S.
        • Dong X.
        • Jiang X.
        • et al.
        Brain MRI classification based on machine learning framework with auto-context model.
        SPIE Medical Imaging. 2019; 10953
        • Lei Y.
        • Shu H.K.
        • Tian S.
        • Wang T.
        • Liu T.
        • Mao H.
        • et al.
        Pseudo CT estimation using patch-based joint dictionary learning.
        in: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2018: 5150-5153. https://doi.org/10.1109/EMBC.2018.8513475
        • Shafai-Erfani G.
        • Lei Y.
        • Liu Y.
        • Wang Y.
        • Wang T.
        • Zhong J.
        • et al.
        MRI-based proton treatment planning for base of skull tumors.
        Int J Particle Therapy. 2019; 6: 12-25
        • Wang T.
        • Lei Y.
        • Manohar N.
        • Tian S.
        • Jani A.B.
        • Shu H.-K.
        • et al.
        Dosimetric study on learning-based cone-beam CT correction in adaptive radiation therapy.
        Med Dosim. 2019; 44: e71-e79
        • Lei Y.
        • Fu Y.
        • Harms J.
        • Wang T.
        • Curran W.J.
        • Liu T.
        • et al.
        4D-CT deformable image registration using an unsupervised deep convolutional neural network.
        Workshop on Artificial Intelligence in Radiation Therapy. 2019: 26-33. https://doi.org/10.1007/978-3-030-32486-5_4

        • Harms J.
        • Wang T.
        • Petrongolo M.
        • Niu T.
        • Zhu L.
        Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization.
        Med Phys. 2016; 43: 2676-2686
        • Harms J.
        • Wang T.
        • Petrongolo M.
        • Zhu L.
        Noise suppression for energy-resolved CT using similarity-based non-local filtration.
        SPIE Medical Imaging. 2016; 9783: 8
        • Wang T.
        • Zhu L.
        Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR).
        Phys Med Biol. 2016; 61: 6684
        • Wang T.
        • Zhu L.
        Pixel-wise estimation of noise statistics on iterative CT reconstruction from a single scan.
        Med Phys. 2017; 44: 3525-3533
        • Wang T.
        • Zhu L.
        Image-domain non-uniformity correction for cone-beam CT.
        2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). 2017: 680-683. https://doi.org/10.1109/ISBI.2017.7950611

        • Gong K.
        • Berg E.
        • Cherry S.R.
        • Qi J.
        Machine learning in PET: from photon detection to quantitative image reconstruction.
        Proc IEEE. 2020; 108: 51-68
        • Ravishankar S.
        • Ye J.C.
        • Fessler J.A.
        Image reconstruction: from sparsity to data-adaptive methods and machine learning.
        Proc IEEE. 2020; 108: 86-109
        • Lei Y.
        • Harms J.
        • Wang T.
        • Tian S.
        • Zhou J.
        • Shu H.-K.
        • et al.
        MRI-based synthetic CT generation using semantic random forest with iterative refinement.
        Phys Med Biol. 2019; 64: 085001
        • Lei Y.
        • Jeong J.J.
        • Wang T.
        • Shu H.-K.
        • Patel P.
        • Tian S.
        • et al.
        MRI-based pseudo CT synthesis using anatomical signature and alternating random forest with iterative refinement model.
        J Med Imaging. 2018; 5: 1-12
        • Lei Y.
        • Shu H.-K.
        • Tian S.
        • Jeong J.J.
        • Liu T.
        • Shim H.
        • et al.
        Magnetic resonance imaging-based pseudo computed tomography using anatomic signature and joint dictionary learning.
        J Med Imaging. 2018; 5: 034001
        • Liu Y.
        • Lei Y.
        • Wang T.
        • Kayode O.
        • Tian S.
        • Liu T.
        • et al.
        MRI-based treatment planning for liver stereotactic body radiotherapy: validation of a deep learning-based synthetic CT generation method.
        Brit J Radiol. 2019; 92: 20190067
        • Liu Y.
        • Lei Y.
        • Wang Y.
        • Shafai-Erfani G.
        • Wang T.
        • Tian S.
        • et al.
        Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning.
        Phys Med Biol. 2019; 64: 205022
        • Liu Y.
        • Lei Y.
        • Wang Y.
        • Wang T.
        • Ren L.
        • Lin L.
        • et al.
        MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method.
        Phys Med Biol. 2019; 64: 145015
        • Shafai-Erfani G.
        • Wang T.
        • Lei Y.
        • Tian S.
        • Patel P.
        • Jani A.B.
        • et al.
        Dose evaluation of MRI-based synthetic CT generated using a machine learning method for prostate cancer radiotherapy.
        Med Dosim. 2019; 44: e64-e70
        • Wang T.
        • Manohar N.
        • Lei Y.
        • Dhabaan A.
        • Shu H.-K.
        • Liu T.
        • et al.
        MRI-based treatment planning for brain stereotactic radiosurgery: dosimetric validation of a learning-based pseudo-CT generation method.
        Med Dosim. 2019; 44: 199-204
        • Lei Y.
        • Tang X.
        • Higgins K.
        • Lin J.
        • Jeong J.
        • Liu T.
        • et al.
        Learning-based CBCT correction using alternating random forest based on auto-context model.
        Med Phys. 2019; 46: 601-618
        • Lei Y.
        • Tang X.
        • Higgins K.
        • Wang T.
        • Liu T.
        • Dhabaan A.
        • et al.
        Improving image quality of cone-beam CT using alternating regression forest.
        SPIE Medical Imaging. 2018; 10573
        • Lei Y.
        • Wang T.
        • Harms J.
        • Fu Y.
        • Dong X.
        • Curran W.J.
        • et al.
        CBCT-based synthetic MRI generation for CBCT-guided adaptive radiotherapy.
        Workshop on Artificial Intelligence in Radiation Therapy. 2019: 154-161. https://doi.org/10.1007/978-3-030-32486-5_19

        • Lei Y.
        • Wang T.
        • Harms J.
        • Shafai-Erfani G.
        • Dong X.
        • Zhou J.
        • et al.
        Image quality improvement in cone-beam CT using deep learning.
        SPIE Medical Imaging. 2019; 10948
        • Lei Y.
        • Wang T.
        • Harms J.
        • Shafai-Erfani G.
        • Tian S.
        • Higgins K.
        • et al.
        MRI-based pseudo CT generation using classification and regression random forest.
        SPIE Medical Imaging. 2019; 10948
        • Lei Y.
        • Wang T.
        • Liu Y.
        • Higgins K.
        • Tian S.
        • Liu T.
        • et al.
        MRI-based synthetic CT generation using deep convolutional neural network.
        SPIE Medical Imaging. 2019; 10949
        • Yang X.
        • Wang T.
        • Lei Y.
        • Higgins K.
        • Liu T.
        • Shim H.
        • et al.
        MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning.
        Phys Med Biol. 2019; 64: 025001
        • Zaidi H.
        • Montandon M.-L.
        Scatter compensation techniques in PET.
        PET Clin. 2007; 2: 219-234
        • Liu F.
        • Jang H.
        • Kijowski R.
        • Bradshaw T.
        • McMillan A.B.
        Deep learning MR imaging-based attenuation correction for PET/MR imaging.
        Radiology. 2018; 286: 676-684
        • Larsson A.
        • Johansson A.
        • Axelsson J.
        • Nyholm T.
        • Asklund T.
        • Riklund K.
        • et al.
        Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.
        Magma (New York, NY). 2013; 26: 127-136
        • Navalpakkam B.K.
        • Braun H.
        • Kuwert T.
        • Quick H.H.
        Magnetic resonance-based attenuation correction for PET/MR hybrid imaging using continuous valued attenuation maps.
        Invest Radiol. 2013; 48: 323-332
        • Ronneberger O.
        • Fischer P.
        • Brox T.
        U-Net: convolutional networks for biomedical image segmentation.
        arXiv e-prints. 2015; 1505.04597

        • Blanc-Durand P.
        • Khalife M.
        • Sgard B.
        • Kaushik S.
        • Soret M.
        • Tiss A.
        • et al.
        Attenuation correction using 3D deep convolutional neural network for brain 18F-FDG PET/MR: comparison with Atlas, ZTE and CT based attenuation correction.
        PLoS ONE. 2019; 14: e0223141
        • Leynes A.P.
        • Yang J.
        • Wiesinger F.
        • Kaushik S.S.
        • Shanbhag D.D.
        • Seo Y.
        • et al.
        Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI.
        J Nucl Med. 2018; 59: 852-858
        • Torrado-Carvajal A.
        • Vera-Olmos J.
        • Izquierdo-Garcia D.
        • Catalano O.A.
        • Morales M.A.
        • Margolin J.
        • et al.
        Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction.
        J Nucl Med. 2019; 60: 429-435
        • Ladefoged C.N.
        • Marner L.
        • Hindsholm A.
        • Law I.
        • Hojgaard L.
        • Andersen F.L.
        Deep learning based attenuation correction of PET/MRI in pediatric brain tumor patients: evaluation in a clinical setting.
        Front Neurosci. 2018; 12: 1005
        • Spuhler K.D.
        • Gardus 3rd, J.
        • Gao Y.
        • DeLorenzo C.
        • Parsey R.
        • Huang C.
        Synthesis of patient-specific transmission data for PET attenuation correction for PET/MRI neuroimaging using a convolutional neural network.
        J Nucl Med. 2019; 60: 555-560
        • Gong K.
        • Yang J.
        • Kim K.
        • El Fakhri G.
        • Seo Y.
        • Li Q.
        Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images.
        Phys Med Biol. 2018; 63: 125011
        • Arabi H.
        • Zeng G.
        • Zheng G.
        • Zaidi H.
        Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI.
        Eur J Nucl Med Mol Imaging. 2019; 46: 2746-2759
        • Heußer T.
        • Rank C.M.
        • Berker Y.
        • Freitag M.T.
        • Kachelrieß M.
        MLAA-based attenuation correction of flexible hardware components in hybrid PET/MR imaging.
        EJNMMI Phys. 2017; 4: 12
        • Hwang D.
        • Kang S.K.
        • Kim K.Y.
        • Seo S.
        • Paeng J.C.
        • Lee D.S.
        • et al.
        Generation of PET attenuation map for whole-body time-of-flight 18F-FDG PET/MRI using a deep neural network trained with simultaneously reconstructed activity and attenuation maps.
        J Nucl Med. 2019; 60: 1183-1189
        • Hwang D.
        • Kim K.Y.
        • Kang S.K.
        • Seo S.
        • Paeng J.C.
        • Lee D.S.
        • et al.
        Improving the accuracy of simultaneously reconstructed activity and attenuation maps using deep learning.
        J Nucl Med. 2018; 59: 1624-1629
        • Liu F.
        • Jang H.
        • Kijowski R.
        • Zhao G.
        • Bradshaw T.
        • McMillan A.B.
        A deep learning approach for (18)F-FDG PET attenuation correction.
        EJNMMI Phys. 2018; 5: 24
        • Armanious K.
        • Küstner T.
        • Reimold M.
        • Nikolaou K.
        • La Fougère C.
        • Yang B.
        • et al.
        Independent brain 18F-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks.
        Hellenic J Nucl Med. 2019; 22: 179-186
        • Dong X.
        • Wang T.
        • Lei Y.
        • Higgins K.
        • Liu T.
        • Curran W.J.
        • et al.
        Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging.
        Phys Med Biol. 2019; 64: 215016
        • Dong X.
        • Lei Y.
        • Wang T.
        • Higgins K.
        • Liu T.
        • Curran W.J.
        • et al.
        Deep learning-based attenuation correction in the absence of structural information for whole-body PET imaging.
        Phys Med Biol. 2019; https://doi.org/10.1088/1361-6560/ab652c
        • Yang J.
        • Park D.
        • Gullberg G.T.
        • Seo Y.
        Joint correction of attenuation and scatter in image space using deep convolutional neural networks for dedicated brain (18)F-FDG PET.
        Phys Med Biol. 2019; 64: 075019
        • Shiri I.
        • Ghafarian P.
        • Geramifar P.
        • Leung K.-H.-Y.
        • Ghelichoghli M.
        • Oveisi M.
        • et al.
        Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC).
        Eur Radiol. 2019; 29: 6867-6879
        • Wang G.
        • Qi J.
        Analysis of penalized likelihood image reconstruction for dynamic PET quantification.
        IEEE Trans Med Imaging. 2009; 28: 608-620
        • Freitag M.T.
        • Fenchel M.
        • Baumer P.
        • Heusser T.
        • Rank C.M.
        • Kachelriess M.
        • et al.
        Improved clinical workflow for simultaneous whole-body PET/MRI using high-resolution CAIPIRINHA-accelerated MR-based attenuation correction.
        Eur Radiol. 2017; 96: 12-20
        • Izquierdo-Garcia D.
        • Hansen A.E.
        • Förster S.
        • Benoit D.
        • Schachoff S.
        • Fürst S.
        • et al.
        An SPM8-based approach for attenuation correction combining segmentation and nonrigid template formation: application to simultaneous PET/MR brain imaging.
        J Nucl Med. 2014; 55: 1825-1830
        • Wiesinger F.
        • Sacolick L.I.
        • Menini A.
        • Kaushik S.S.
        • Ahn S.
        • Veit-Haibach P.
        • et al.
        Zero TE MR bone imaging in the head.
        Magn Reson Med. 2016; 75: 107-114
        • Kazemifar S.
        • McGuire S.
        • Timmerman R.
        • Wardak Z.
        • Nguyen D.
        • Park Y.
        • et al.
        MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach.
        Radiother Oncol. 2019; 136: 56-63
        • Dong X.
        • Lei Y.
        • Tian S.
        • Wang T.
        • Patel P.
        • Curran W.J.
        • et al.
        Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network.
        Radiother Oncol. 2019; 141: 192-199
        • Harms J.
        • Lei Y.
        • Wang T.
        • Zhang R.
        • Zhou J.
        • Tang X.
        • et al.
        Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography.
        Med Phys. 2019; 46: 3998-4009
        • Lei Y.
        • Dong X.
        • Wang T.
        • Higgins K.
        • Liu T.
        • Curran W.J.
        • et al.
        Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks.
        Phys Med Biol. 2019; 64: 215017
        • Lei Y.
        • Harms J.
        • Wang T.
        • Liu Y.
        • Shu H.K.
        • Jani A.B.
        • et al.
        MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
        Med Phys. 2019; 46: 3565-3581
        • An L.
        • Zhang P.
        • Adeli E.
        • Wang Y.
        • Ma G.
        • Shi F.
        • et al.
        Multi-level canonical correlation analysis for standard-dose PET image estimation.
        IEEE Trans Image Process. 2016; 25: 3303-3315
        • Kang J.
        • Gao Y.
        • Shi F.
        • Lalush D.S.
        • Lin W.
        • Shen D.
        Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images.
        Med Phys. 2015; 42: 5301-5309
        • Wang Y.
        • Zhang P.
        • An L.
        • Ma G.
        • Kang J.
        • Shi F.
        • et al.
        Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.
        Phys Med Biol. 2016; 61: 791-812
        • Dabov K.
        • Foi A.
        • Katkovnik V.
        • Egiazarian K.
        Image denoising by sparse 3-D transform-domain collaborative filtering.
        IEEE Trans Image Process. 2007; 16: 2080-2095
        • Coupe P.
        • Yger P.
        • Prima S.
        • Hellier P.
        • Kervrann C.
        • Barillot C.
        An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images.
        IEEE Trans Med Imaging. 2008; 27: 425-441
        • Wang Y.
        • Ma G.
        • An L.
        • Shi F.
        • Zhang P.
        • Lalush D.S.
        • et al.
        Semisupervised tripled dictionary learning for standard-dose PET image prediction using low-dose PET and multimodal MRI.
        IEEE Trans Bio-medical Eng. 2017; 64: 569-579
        • Wangerin K.
        • Ahn S.
        • Wollenweber S.
        • Ross S.
        • Kinahan P.
        • Manjeshwar R.
        Evaluation of lesion detectability in positron emission tomography when using a convergent penalized likelihood image reconstruction method.
        J Med Imag. 2016; 4: 011002
        • Qi J.
        Theoretical evaluation of the detectability of random lesions in Bayesian emission reconstruction.
        Inform Process Med Imag. 2003; 18: 354-365
        • Xiang L.
        • Qiao Y.
        • Nie D.
        • An L.
        • Wang Q.
        • Shen D.
        Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI.
        Neurocomputing. 2017; 267: 406-416