Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation

Conference Paper


Abstract


  • To leverage correlated information between modalities for cross-modal segmentation, we propose a novel cross-modal attention-guided convolutional network for multi-modal cardiac segmentation. In particular, we first employ a cycle-consistency generative adversarial network to perform bidirectional image generation (i.e., MR to CT and CT to MR), which helps reduce the modality-level inconsistency. Then, with the generated and original MR and CT images, a novel convolutional network is proposed in which (1) two encoders learn individual features separately and (2) a common decoder learns shareable features between modalities for a final consistent segmentation. We also propose a cross-modal attention module between the encoders and the decoder to leverage the correlated information between modalities. Our model can be trained in an end-to-end manner. In extensive evaluations on unpaired CT and MR cardiac images, our method outperforms the baselines in segmentation performance.
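
The abstract describes a concrete layout: two modality-specific encoders, a cross-modal attention module sitting between the encoders and the decoder, and a single shared decoder, with CycleGAN-translated images supplying the missing modality for unpaired data. The snippet below is a minimal PyTorch sketch of that layout only; it is not the authors' implementation, and the layer widths, the single-scale sigmoid gating used for attention, and all class names (`Encoder`, `CrossModalAttention`, `SharedDecoder`, `CrossModalSegNet`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 conv + BN + ReLU layers, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    """Modality-specific encoder: learns individual MR or CT features."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.stage1 = conv_block(in_ch, base)
        self.stage2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.stage2(self.pool(self.stage1(x)))


class CrossModalAttention(nn.Module):
    """Illustrative attention: one modality's features gate the other's,
    emphasizing correlated cross-modal information before decoding."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, f_own, f_other):
        # Attention weights computed from the *other* modality reweight this one.
        return f_own * self.gate(f_other)


class SharedDecoder(nn.Module):
    """Common decoder: learns shareable features for a consistent segmentation."""
    def __init__(self, ch, n_classes):
        super().__init__()
        self.up = nn.ConvTranspose2d(ch, ch // 2, 2, stride=2)
        self.block = conv_block(ch // 2, ch // 2)
        self.head = nn.Conv2d(ch // 2, n_classes, 1)

    def forward(self, f):
        return self.head(self.block(self.up(f)))


class CrossModalSegNet(nn.Module):
    """Two modality-specific encoders + cross-modal attention + one shared decoder."""
    def __init__(self, n_classes=4, base=32):
        super().__init__()
        self.enc_mr = Encoder(base=base)
        self.enc_ct = Encoder(base=base)
        self.attn_mr = CrossModalAttention(base * 2)
        self.attn_ct = CrossModalAttention(base * 2)
        self.decoder = SharedDecoder(base * 2, n_classes)

    def forward(self, mr, ct):
        # `ct` may be a CycleGAN translation of `mr` (or vice versa), so
        # unpaired data can still be pushed through both branches jointly.
        f_mr, f_ct = self.enc_mr(mr), self.enc_ct(ct)
        seg_mr = self.decoder(self.attn_mr(f_mr, f_ct))
        seg_ct = self.decoder(self.attn_ct(f_ct, f_mr))
        return seg_mr, seg_ct


if __name__ == "__main__":
    mr = torch.randn(2, 1, 128, 128)        # real MR slices
    ct_fake = torch.randn(2, 1, 128, 128)   # stand-in for CycleGAN MR->CT output
    seg_mr, seg_ct = CrossModalSegNet()(mr, ct_fake)
    print(seg_mr.shape, seg_ct.shape)       # torch.Size([2, 4, 128, 128]) each
```

One design note: gating each modality's features with weights computed from the other modality is one simple way to realize "cross-modal attention"; the paper may use a different formulation, but the two-encoder/one-decoder wiring follows the abstract directly.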

Authors


  •   Zhou, Ziqi (external author)
  •   Guo, Xinna (external author)
  •   Yang, Wanqi (external author)
  •   Shi, Yinghuan (external author)
  •   Zhou, Luping
  •   Wang, Lei
  •   Yang, Ming (external author)

Publication Date


  • 2019

Citation


  • Zhou, Z., Guo, X., Yang, W., Shi, Y., Zhou, L., Wang, L. & Yang, M. (2019). Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11861 LNCS, 601-610.

Scopus EID


  • 2-s2.0-85075689732

RO Metadata URL


  • http://ro.uow.edu.au/eispapers1/3522

Number Of Pages


  • 9

Start Page


  • 601

End Page


  • 610

Volume


  • 11861 LNCS

Place Of Publication


  • Germany
