
Using a deep neural network for four-dimensional CT artifact reduction in image-guided radiotherapy

Published: August 18, 2019. DOI: https://doi.org/10.1016/j.ejmp.2019.08.008

      Highlights

      • We developed a deep neural network (DNN)-based artifact reduction method.
      • The developed DNNs successfully reduced artifacts in the respective CT image sections.
      • Additional information, such as multiple-phase images or an artifact map, increased artifact reduction accuracy.

      Abstract

      Introduction

      Breathing artifacts may affect the quality of four-dimensional computed tomography (4DCT) images. We developed a deep neural network (DNN)-based artifact reduction method.

      Methods

      We used 857 thoracoabdominal 4DCT data sets scanned with a 320-section CT scanner, with no 4DCT artifact within any volume (ground-truth images). Because graphics processing unit (GPU) memory limitations prevent importing whole CT volumes into the DNN, the networks were trained on two-dimensional image sections. To simulate 4DCT artifacts, we interposed 4DCT images from other breathing phases at selected couch positions.
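      As an illustration, the interposition used to simulate artifacts can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the authors' exact protocol: the 4DCT is assumed to be stored as a (phase, z, y, x) array, and the corrupted couch positions, donor phases, and slab sizes are chosen here for illustration only.

```python
import numpy as np

def simulate_interposition_artifact(ct_4d, target_phase, n_positions=3,
                                    slab_thickness=8, rng=None):
    """Corrupt one phase of an artifact-free 4DCT by interposing slices
    from other breathing phases at selected couch positions.

    ct_4d          : ndarray of shape (phases, z, y, x), artifact-free 4DCT
    target_phase   : index of the phase volume to corrupt
    n_positions    : number of couch positions to replace (assumed value)
    slab_thickness : slices per couch position (assumed value)
    Returns the corrupted volume and a per-slice artifact map (1 = interposed).
    """
    rng = np.random.default_rng() if rng is None else rng
    phases, nz = ct_4d.shape[0], ct_4d.shape[1]
    corrupted = ct_4d[target_phase].copy()
    artifact_map = np.zeros(nz, dtype=np.uint8)

    starts = rng.choice(nz - slab_thickness, size=n_positions, replace=False)
    for z0 in starts:
        # pick a donor phase different from the phase being corrupted
        donor = rng.choice([p for p in range(phases) if p != target_phase])
        corrupted[z0:z0 + slab_thickness] = ct_4d[donor, z0:z0 + slab_thickness]
        artifact_map[z0:z0 + slab_thickness] = 1
    return corrupted, artifact_map
```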
      Two DNNs, DNN1 and DNN2, were trained to bring the quality of the output image to that of the ground truth by importing a single CT image and 10 CT images, respectively. A third DNN (DNN3), consisting of an artifact-classifier network and an image-generator network, was added. The classifier network, based on residual networks, was trained to detect artifacts caused by CT section interposition (artifact map). The generator network reduced artifacts by importing the coronal image data together with the artifact map.
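      A rough PyTorch sketch of the DNN3 idea follows: a residual classifier predicts an artifact map from a coronal section, and a generator restores the section from the image concatenated with that map. The layer counts, channel widths, and module choices here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class ArtifactClassifier(nn.Module):
    """Residual-network classifier: coronal section -> per-pixel artifact map."""
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 1)
    def forward(self, x):
        return torch.sigmoid(self.tail(self.blocks(self.head(x))))

class ArtifactGenerator(nn.Module):
    """Generator: coronal section + artifact map -> artifact-reduced section."""
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(2, ch, 3, padding=1)   # image channel + artifact-map channel
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 1)
    def forward(self, image, artifact_map):
        x = torch.cat([image, artifact_map], dim=1)
        return self.tail(self.blocks(self.head(x)))

# DNN3-style inference: classify first, then generate the corrected image.
classifier, generator = ArtifactClassifier(), ArtifactGenerator()
coronal = torch.randn(1, 1, 256, 256)       # placeholder coronal section
corrected = generator(coronal, classifier(coronal))
```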

      Results

      Repeating the 4DCT artifact reduction on coronal images improved the geometrical accuracy of the sagittal sections, especially with DNN3. The diaphragm position was most accurate when DNN3 was applied. DNN2 corrected artifacts by using CT images from other phases, but it also modified artifact-free regions.
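      One way to read this result is that the 2D network is applied to every coronal section in turn and the corrected volume is then resliced to assess the sagittal planes. Below is a minimal sketch of that loop, assuming a volume stored as a (z, y, x) array and a reduce_artifact callable such as the DNN3 pipeline sketched above.

```python
import numpy as np

def correct_volume_coronal(volume, reduce_artifact):
    """Apply a 2D artifact-reduction model to every coronal section of a
    (z, y, x) CT volume, then reslice the corrected volume sagittally.

    volume          : ndarray of shape (z, y, x)
    reduce_artifact : callable mapping a 2D (z, x) coronal section
                      to its artifact-reduced version
    """
    corrected = np.empty_like(volume)
    for y in range(volume.shape[1]):
        coronal_section = volume[:, y, :]             # (z, x) coronal plane
        corrected[:, y, :] = reduce_artifact(coronal_section)
    # Sagittal sections come from reslicing the corrected volume; this is the
    # view in which the geometrical accuracy is assessed.
    sagittal_view = corrected.transpose(2, 0, 1)      # stack of (z, y) sagittal planes
    return corrected, sagittal_view
```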

      Conclusions

      Additional information related to 4DCT artifacts, including information from other respiratory phases (DNN2) and/or artifact regions (DNN3), provided substantial improvement over DNN1. Interposition-related artifacts were reduced by using an artifact positional map (DNN3).

