
Dual-domain sparse-view CT reconstruction with Transformers

      Highlights

      • A dual-domain sparse-view CT algorithm, CT Transformer (CTTR), is presented.
      • The algorithm treats sinograms as sentences.
      • Performance is better than that of a CNN-based method.
      • The feasibility of Transformers for CT image processing is demonstrated.

      Abstract

      Purpose:

      Computed tomography (CT) has been widely used in the medical field. Sparse-view CT is an effective and feasible way to reduce the radiation dose. However, the conventional filtered back projection (FBP) algorithm suffers from severe artifacts in sparse-view CT. Iterative reconstruction algorithms can remove these artifacts, but they are time-consuming due to repeated projection and back projection, and they may introduce blocky effects. To overcome these difficulties, we propose a dual-domain sparse-view CT algorithm, CT Transformer (CTTR), that pays particular attention to sinogram information.
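
      To make the failure mode concrete, here is a minimal sketch, not from the paper, of sparse-view FBP on a standard test phantom using scikit-image's radon/iradon; the phantom choice, the view counts, and the filter_name argument (scikit-image >= 0.19) are illustrative assumptions.

      # Sparse-view FBP sketch: fewer projection views yield streak artifacts.
      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      phantom = shepp_logan_phantom()  # 400x400 synthetic CT slice

      # Dense sampling: 360 views over 180 degrees.
      theta_dense = np.linspace(0.0, 180.0, 360, endpoint=False)
      recon_dense = iradon(radon(phantom, theta=theta_dense),
                           theta=theta_dense, filter_name="ramp")

      # Sparse sampling: only 30 views, the hardest setting reported below.
      theta_sparse = np.linspace(0.0, 180.0, 30, endpoint=False)
      recon_sparse = iradon(radon(phantom, theta=theta_sparse),
                            theta=theta_sparse, filter_name="ramp")

      # The sparse FBP reconstruction shows pronounced streaks; compare errors.
      def rmse(x, ref):
          return np.sqrt(np.mean((x - ref) ** 2))

      print("dense RMSE: ", rmse(recon_dense, phantom))
      print("sparse RMSE:", rmse(recon_sparse, phantom))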

      Methods:

      CTTR treats sinograms as sentences and enhances reconstructed images with sinogram characteristics. We qualitatively compare CTTR with an iterative method (TVM-POCS) and a convolutional neural network based method (FBPConvNet) in terms of artifact reduction and detail preservation. We also quantitatively evaluate these methods in terms of RMSE, PSNR, and SSIM.
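
      The abstract does not spell out the architecture, so the following is a hypothetical PyTorch sketch of the "sinograms as sentences" idea only: each projection view (one sinogram row) is embedded as a token and refined by self-attention across views. All module names, shapes, and hyperparameters are assumptions for illustration, not the authors' implementation.

      # Hypothetical "sinogram as sentence" encoder: one token per view.
      import torch
      import torch.nn as nn

      class SinogramEncoder(nn.Module):
          def __init__(self, n_detectors=512, n_views=30, d_model=256):
              super().__init__()
              self.embed = nn.Linear(n_detectors, d_model)   # view -> token
              self.pos = nn.Parameter(torch.zeros(1, n_views, d_model))
              layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                                 batch_first=True)
              self.encoder = nn.TransformerEncoder(layer, num_layers=4)
              self.proj = nn.Linear(d_model, n_detectors)    # token -> view

          def forward(self, sino):            # sino: (B, n_views, n_detectors)
              tokens = self.embed(sino) + self.pos
              tokens = self.encoder(tokens)   # self-attention across views
              return self.proj(tokens)        # refined sinogram, same shape

      sino = torch.randn(2, 30, 512)          # a batch of sparse-view sinograms
      print(SinogramEncoder()(sino).shape)    # torch.Size([2, 30, 512])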

      Results:

      We evaluate our method on the Lung Image Database Consortium image collection with different numbers of projection views and noise levels. Experimental studies show that, compared with the other methods, CTTR reduces more artifacts and preserves more details in various scenarios. Specifically, CTTR improves on FBPConvNet's PSNR by 0.76 dB with 30 projection views.
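
      For reference, the three reported metrics can be computed as in the generic sketch below, which assumes images scaled to [0, 1] and uses scikit-image's SSIM; it is not tied to the paper's evaluation code.

      # Generic RMSE / PSNR / SSIM for images normalized to [0, 1].
      import numpy as np
      from skimage.metrics import structural_similarity as ssim

      def rmse(x, ref):
          return np.sqrt(np.mean((x - ref) ** 2))

      def psnr(x, ref, data_range=1.0):
          # PSNR in dB; a +0.76 dB gain means a lower mean squared error.
          return 20.0 * np.log10(data_range / rmse(x, ref))

      x, ref = np.random.rand(256, 256), np.random.rand(256, 256)
      print(rmse(x, ref), psnr(x, ref), ssim(x, ref, data_range=1.0))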

      Conclusions:

      The proposed CTTR outperforms the CNN-based method in the case of extremely sparse views, both in visual results and in quantitative evaluation. Our method provides a new direction for applying Transformers to CT image processing.


      References

        • Zeng G.L.
        Medical image reconstruction: A conceptual tutorial.
        Springer, 2010. https://doi.org/10.1007/978-3-642-05368-9
        • Bevelacqua J.J.
        Practical and effective ALARA.
        Health Phys. 2010; 98. https://doi.org/10.1097/hp.0b013e3181d18d63
        • Bian J.
        • Siewerdsen J.H.
        • Han X.
        • Sidky E.Y.
        • Prince J.L.
        • Pelizzari C.A.
        • Pan X.
        Evaluation of sparse-view reconstruction from flat-panel-detector cone-beam CT.
        Phys Med Biol. 2010; 55: 6575. https://doi.org/10.1088/0031-9155/55/22/001
        • Pan X.
        • Sidky E.Y.
        • Vannier M.
        Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?
        Inverse Problems. 2009; 25: 123009. https://doi.org/10.1088/0266-5611/25/12/123009
        • Donoho D.L.
        Compressed sensing.
        IEEE Trans Inform Theory. 2006; 52: 1289-1306. https://doi.org/10.1109/TIT.2006.871582
        • Sidky E.Y.
        • Kao C.M.
        • Pan X.
        Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT.
        J X-Ray Sci Technol. 2006; 14: 119-139
        • Sidky E.Y.
        • Pan X.
        Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization.
        Phys Med Biol. 2008; 53: 4777. https://doi.org/10.1088/0031-9155/53/17/021
        • Cho S.
        • Lim S.
        • Kim C.
        • Wi S.
        • Kwon T.
        • Youn W.S.
        • Lee S.H.
        • Kang B.S.
        • Cho S.
        Enhancement of soft-tissue contrast in cone-beam CT using an anti-scatter grid with a sparse sampling approach.
        Phys Medica. 2020; 70: 1-9. https://doi.org/10.1016/j.ejmp.2020.01.004
        • Liu Y.
        • Ma J.
        • Fan Y.
        • Liang Z.
        Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.
        Phys Med Biol. 2012; 57: 7923. https://doi.org/10.1088/0031-9155/57/23/7923
        • Chang M.
        • Li L.
        • Chen Z.
        • Xiao Y.
        • Zhang L.
        • Wang G.
        A few-view reweighted sparsity hunting (FRESH) method for CT image reconstruction.
        J X-Ray Sci Technol. 2013; 21: 161-176. https://doi.org/10.3233/XST-130370
        • Liu Y.
        • Liang Z.
        • Ma J.
        • Lu H.
        • Wang K.
        • Zhang H.
        • Moore W.
        Total variation-Stokes strategy for sparse-view x-ray CT image reconstruction.
        IEEE Trans Med Imaging. 2014; 33: 749-763. https://doi.org/10.1109/TMI.2013.2295738
        • Krizhevsky A.
        • Sutskever I.
        • Hinton G.E.
        ImageNet classification with deep convolutional neural networks.
        Adv Neural Inf Process Syst. 2012; 25
        • Long J.
        • Shelhamer E.
        • Darrell T.
        Fully convolutional networks for semantic segmentation.
        in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3431-3440
        • Pathak D.
        • Krahenbuhl P.
        • Donahue J.
        • Darrell T.
        • Efros A.A.
        Context encoders: Feature learning by inpainting.
        in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2536-2544
        • Kim J.
        • Lee J.K.
        • Lee K.M.
        Deeply-recursive convolutional network for image super-resolution.
        in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 1637-1645
        • Wu Y.
        • Zhang L.
        • Guo S.
        • Zhang L.
        • Gao F.
        • Jia M.
        • Zhou Z.
        Enhanced phase retrieval via deep concatenation networks for in-line X-ray phase contrast imaging.
        Phys Medica. 2022; 95: 41-49. https://doi.org/10.1016/j.ejmp.2021.12.017
        • Jin K.H.
        • McCann M.T.
        • Froustey E.
        • Unser M.
        Deep convolutional neural network for inverse problems in imaging.
        IEEE Trans Image Process. 2017; 26: 4509-4522. https://doi.org/10.1109/TIP.2017.2713099
        • Ronneberger O.
        • Fischer P.
        • Brox T.
        U-net: Convolutional networks for biomedical image segmentation.
        in: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015: 234-241
        • Han Y.
        • Ye J.C.
        Framing U-net via deep convolutional framelets: Application to sparse-view CT.
        IEEE Trans Med Imaging. 2018; 37: 1418-1429. https://doi.org/10.1109/TMI.2018.2823768
        • Lee M.
        • Kim H.
        • Kim H.J.
        Sparse-view CT reconstruction based on multi-level wavelet convolution neural network.
        Phys Medica. 2020; 80: 352-362. https://doi.org/10.1016/j.ejmp.2020.11.021
        • Liang K.
        • Yang H.
        • Kang K.
        • Xing Y.
        Improve angular resolution for sparse-view CT with residual convolutional neural network.
        Proc SPIE. 2018; 10573: 382-392. https://doi.org/10.1117/12.2293319
        • Lee H.
        • Lee J.
        • Kim H.
        • Cho B.
        • Cho S.
        Deep-neural-network-based sinogram synthesis for sparse-view CT image reconstruction.
        IEEE Trans Radiat Plasma Med Sci. 2019; 3: 109-119. https://doi.org/10.1109/TRPMS.2018.2867611
        • Zhu B.
        • Liu J.Z.
        • Cauley S.F.
        • Rosen B.R.
        • Rosen M.S.
        Image reconstruction by domain-transform manifold learning.
        Nature. 2018; 555: 487-492. https://doi.org/10.1038/nature25988
        • Li Y.
        • Li K.
        • Zhang C.
        • Montoya J.
        • Chen G.H.
        Learning to reconstruct computed tomography images directly from sinogram data under a variety of data acquisition conditions.
        IEEE Trans Med Imaging. 2019; 38: 2469-2481. https://doi.org/10.1109/TMI.2019.2910760
        • Chen H.
        • Zhang Y.
        • Chen Y.
        • Zhang J.
        • Zhang W.
        • Sun H.
        • Lv Y.
        • Liao P.
        • Zhou J.
        • Wang G.
        LEARN: Learned experts’ assessment-based reconstruction network for sparse-data CT.
        IEEE Trans Med Imaging. 2018; 37: 1333-1347. https://doi.org/10.1109/TMI.2018.2805692
        • Vaswani A.
        • Shazeer N.
        • Parmar N.
        • Uszkoreit J.
        • Jones L.
        • Gomez A.N.
        • Kaiser Ł.
        • Polosukhin I.
        Attention is all you need.
        Adv Neural Inf Process Syst. 2017; 30
        • Devlin J.
        • Chang M.W.
        • Lee K.
        • Toutanova K.
        BERT: Pre-training of deep bidirectional transformers for language understanding.
        in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1. Association for Computational Linguistics, 2019: 4171-4186. https://doi.org/10.48550/arxiv.1810.04805
        • Carion N.
        • Massa F.
        • Synnaeve G.
        • Usunier N.
        • Kirillov A.
        • Zagoruyko S.
        End-to-end object detection with transformers.
        in: European Conference on Computer Vision. Springer, 2020: 213-229
        • Chen H.
        • Wang Y.
        • Guo T.
        • Xu C.
        • Deng Y.
        • Liu Z.
        • Ma S.
        • Xu C.
        • Xu C.
        • Gao W.
        Pre-trained image processing transformer.
        in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12299-12310
        • Dosovitskiy A.
        • Beyer L.
        • Kolesnikov A.
        • Weissenborn D.
        • Zhai X.
        • Unterthiner T.
        • Dehghani M.
        • Minderer M.
        • Heigold G.
        • Gelly S.
        • Uszkoreit J.
        • Houlsby N.
        An image is worth 16x16 words: Transformers for image recognition at scale.
        2020 (arXiv:2010.11929)
        • Zhang Z.
        • Yu L.
        • Liang X.
        • Zhao W.
        • Xing L.
        TransCT: Dual-path transformer for low dose computed tomography.
        in: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2021: 55-64
        • Wang C.
        • Shang K.
        • Zhang H.
        • Li Q.
        • Hui Y.
        • Zhou S.K.
        DuDoTrans: Dual-domain transformer provides more attention for sinogram restoration in sparse-view CT reconstruction.
        2021 (arXiv:2111.10790)
        • Elbakri I.A.
        • Fessler J.A.
        Statistical image reconstruction for polyenergetic X-ray computed tomography.
        IEEE Trans Med Imaging. 2002; 21: 89-99. https://doi.org/10.1109/42.993128
        • Armato S.G.
        • McLennan G.
        • Bidaut L.
        • McNitt-Gray M.F.
        • Meyer C.R.
        • Reeves A.P.
        • Zhao B.
        • Aberle D.R.
        • Henschke C.I.
        • Hoffman E.A.
        • Kazerooni E.A.
        • MacMahon H.
        • Van Beek E.J.
        • Yankelevitz D.
        • Biancardi A.M.
        • Bland P.H.
        • Brown M.S.
        • Engelmann R.M.
        • Laderach G.E.
        • Max D.
        • Pais R.C.
        • Qing D.P.
        • Roberts R.Y.
        • Smith A.R.
        • Starkey A.
        • Batra P.
        • Caligiuri P.
        • Farooqi A.
        • Gladish G.W.
        • Jude C.M.
        • Munden R.F.
        • Petkovska I.
        • Quint L.E.
        • Schwartz L.H.
        • Sundaram B.
        • Dodd L.E.
        • Fenimore C.
        • Gur D.
        • Petrick N.
        • Freymann J.
        • Kirby J.
        • Hughes B.
        • Vande Casteele A.
        • Gupte S.
        • Sallam M.
        • Heath M.D.
        • Kuhn M.H.
        • Dharaiya E.
        • Burns R.
        • Fryd D.S.
        • Salganicoff M.
        • Anand V.
        • Shreter U.
        • Vastagh S.
        • Croft B.Y.
        • Clarke L.P.
        The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans.
        Med Phys. 2011; 38: 915-931. https://doi.org/10.1118/1.3528204
        • Yin X.
        • Coatrieux J.L.
        • Zhao Q.
        • Liu J.
        • Yang W.
        • Yang J.
        • Quan G.
        • Chen Y.
        • Shu H.
        • Luo L.
        Domain progressive 3D residual convolution network to improve low-dose CT imaging.
        IEEE Trans Med Imaging. 2019; 38: 2903-2913. https://doi.org/10.1109/TMI.2019.2917258
        • Paszke A.
        • Gross S.
        • Massa F.
        • Lerer A.
        • Bradbury J.
        • Chanan G.
        • Killeen T.
        • Lin Z.
        • Gimelshein N.
        • Antiga L.
        PyTorch: An imperative style, high-performance deep learning library.
        Adv Neural Inf Process Syst. 2019; 32
        • Kingma D.P.
        • Ba J.L.
        Adam: A method for stochastic optimization.
        in: 3rd International Conference on Learning Representations (ICLR). 2015. https://doi.org/10.48550/arxiv.1412.6980
        • Glorot X.
        • Bengio Y.
        Understanding the difficulty of training deep feedforward neural networks.
        in: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 2010: 249-256
        • Wang Z.
        • Bovik A.C.
        • Sheikh H.R.
        • Simoncelli E.P.
        Image quality assessment: From error visibility to structural similarity.
        IEEE Trans Image Process. 2004; 13: 600-612. https://doi.org/10.1109/TIP.2003.819861