
Cross-Parallel Vision Transformers for Medical Image Segmentation

Medical image segmentation primarily relies on hybrid models that combine a Convolutional Neural Network (CNN) with sequentially stacked Transformers, the latter leveraging Multi-Head Self-Attention to model global context. However, despite their success in semantic segmentation, their feature extraction is inefficient and demands substantial computational resources, which limits the robustness of the network. To address this issue, this research presents two methods: PTransUNet (PT model) and C-PTransUNet (C-PT model). The C-PT module refines the Vision Transformer by replacing its sequential design with a parallel one. This strengthens the feature extraction capability of Multi-Head Self-Attention through self-correlated feature attention and channel feature interaction, while also streamlining the Feed-Forward Network to lower computational demands.

Keywords: medical image segmentation; activation function; cross parallel vision Transformers; Multi-Head Self-Attention

1. Introduction

Medical image segmentation is a vital research area: its properties differentiate it from RGB image segmentation, and its importance in medical applications further underlines its necessity. The encoder–decoder structure based on the Convolutional Neural Network (CNN) was a pioneering development in this field [1][2]. It provided large receptive fields and rich contextual information in the deep layers of the network, adapted well to multiscale input images, and offered an end-to-end training model that received significant attention at the time. This innovation gave rise to the foundational U-Net framework [3], built on a "U"-shaped network structure, and sparked a wave of research enthusiasm. The U-Net architecture is characterized by its simplicity: a fully symmetric encoder–decoder with skip connections. Owing to its outstanding performance, it has dominated the field of medical image segmentation. However, CNNs rely on local receptive fields and therefore struggle to extract global features for tasks with long-range dependencies, so they cannot fully capture global information, which prevents them from realizing their full potential.
Recently, network models based on the Transformer architecture [4] have been challenging the dominance of CNNs, primarily because their self-attention mechanism can model long-range contextual information. This addresses the limitations of CNNs and has made Transformers prominent in medical imaging. The idea of incorporating Transformer modules into the U-Net architecture has reignited a research wave centered on Transformer-based approaches to medical image segmentation. On the one hand, most researchers have explored how to embed serial Transformer modules into the U-Net structure, leading to a series of classic networks such as TransUNet [5], Swin-Unet [6], UNETR [7], and others [8][9][10][11][12]. Undeniably, serial Transformer network models have significantly improved the accuracy of medical image segmentation. Notably, the TransUNet [5] model was the first to apply the Vision Transformer (ViT) [13] to medical image segmentation, combining the global context modeling of Transformers with the local feature extraction of CNNs, and it has provided highly effective solutions for the medical domain. However, this competitive advantage has been achieved through increased model complexity, which inevitably incurs high computational and memory costs and can hinder the practical deployment of these models in clinical segmentation [14].
On the other hand, there has been relatively little research on parallel Transformer modules for medical image segmentation [15][16][17], because traditional parallel designs tend to increase network parameters and feature dimensions, which can harm efficiency and accuracy. The emergence of parallel ViT [18] has provided a new direction for applying parallel Transformers to medical image segmentation. While keeping the parameter count unchanged, replacing serial ViT [13] with parallel ViT increases network width while reducing network depth; the shallower modules are easier to optimize, making training less challenging than with serial ViT. Nevertheless, even with the same parameter count, parallel ViT still incurs high computational costs, and the reduced depth weakens its semantic representation and contextual awareness, limiting its applicability to medical image segmentation. Therefore, an effective parallel structure is needed that can simultaneously improve the accuracy and efficiency of medical image segmentation and overcome the limitations of parallel ViT.

2. Vision Transformers Development

The Transformer architecture was originally designed for Natural Language Processing, and the ViT model has since applied it successfully to Computer Vision. It competes favorably with conventional CNN approaches in tasks including image classification, object detection, and semantic segmentation. Its success is attributable to its dynamic attention mechanism and long-range modeling capability, which demonstrate robust feature learning. ViT divides the image into multiple small patches, turns these patches into a sequence of input features, and processes the sequence with an N-layer Transformer to produce a feature representation of the entire image. Through the self-attention mechanism, the Transformer captures long-distance dependencies among image features and enables higher-order spatial information exchange. It excels at global relational modeling, expanding the receptive field and acquiring rich contextual detail, which effectively compensates for the limited global modeling capability of CNNs.
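As a concrete illustration of the patch-and-attend pipeline described above, the following minimal PyTorch sketch splits an image into patch tokens and passes them through stacked pre-norm Transformer blocks. It is a generic ViT-style example, not the code of any model discussed here; the image size, patch size, embedding dimension, and layer count are illustrative assumptions.

    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        """Split an image into non-overlapping patches and project each to a token."""
        def __init__(self, img_size=224, patch_size=16, in_ch=3, dim=768):
            super().__init__()
            # A strided convolution performs the patch split and linear projection in one step.
            self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
            n_patches = (img_size // patch_size) ** 2
            self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # learned positions

        def forward(self, x):                  # x: (B, 3, H, W)
            x = self.proj(x)                   # (B, dim, H/16, W/16)
            x = x.flatten(2).transpose(1, 2)   # (B, N, dim) token sequence
            return x + self.pos

    class TransformerBlock(nn.Module):
        """Standard pre-norm ViT block: global Multi-Head Self-Attention, then an FFN."""
        def __init__(self, dim=768, heads=12, mlp_ratio=4):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            self.ffn = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                     nn.Linear(dim * mlp_ratio, dim))

        def forward(self, x):
            h = self.norm1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]  # every token attends to every token
            x = x + self.ffn(self.norm2(x))
            return x

    encoder = nn.Sequential(PatchEmbedding(), *[TransformerBlock() for _ in range(12)])
    features = encoder(torch.randn(1, 3, 224, 224))   # (1, 196, 768) global representation

For simplicity the class token used in classification is omitted; for segmentation, all patch tokens are typically kept and later reshaped back into a feature map.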
Recently, a variety of novel models based on the ViT backbone have emerged; they can be categorized into sequential and parallel Transformer architectures. Sequential ViT models include DeiT [19], CeiT [20], Swin Transformer [21], T2T-ViT [22], PVT [23], DeepViT [24], and others. Because the Transformer module focuses on global contextual information, building relationships between pixels across the whole image, it cannot capture local visual features the way a standard CNN does through its inductive bias, which increases the difficulty of training ViT and slows convergence. Touvron et al. [19] proposed the DeiT model, which learns the inductive bias of image data by distilling knowledge from a CNN teacher model into the Transformer student model; the convolutional bias enhances feature extraction and also accelerates convergence. For sequence input, ViT partitions the input image into numerous patch blocks, and this fixed division loses the image's local features. The Swin Transformer [21] adopts dynamic attention over neighboring pixels, using shifted windows to model globally in the spatial dimension while performing self-attention within each window and attention computation across windows; this dynamic generation of attention weights reduces the computational complexity of self-attention and improves local feature extraction. Every ViT Transformer layer operates on features at the same resolution, resulting in a high computational cost. Yuan et al. [22] proposed the T2T-ViT model, which uses a deep and narrow hierarchical Transformer architecture to enhance features, though still at considerable computational expense. Wang et al. [23] proposed the Pyramid Vision Transformer (PVT), whose progressively shrinking feature pyramid yields multi-scale feature maps and whose spatial-reduction attention (SRA) layer lowers the cost of processing high-resolution feature maps. DeepViT [24] introduces Re-Attention, which regenerates the self-attention maps by mixing information across the multiple heads at little extra computational cost, relieving the feature-saturation problem of deep ViTs and allowing the network to learn more complex representations.
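The window-based attention mentioned for the Swin Transformer can be made concrete with a small sketch: attention is computed only inside fixed local windows, so the per-token cost scales with the window area rather than with the full token count. This is a simplified illustration of the idea only; the actual Swin Transformer additionally shifts the windows between layers and adds relative position biases, both omitted here, and all sizes below are assumptions.

    import torch
    import torch.nn as nn

    def window_partition(x, w):
        """Reshape a (B, H, W, C) feature map into (B*num_windows, w*w, C) token groups."""
        B, H, W, C = x.shape
        x = x.view(B, H // w, w, W // w, w, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)

    def window_reverse(windows, w, B, H, W):
        """Inverse of window_partition: stitch the windows back into a feature map."""
        C = windows.shape[-1]
        x = windows.view(B, H // w, W // w, w, w, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

    class WindowAttention(nn.Module):
        """Self-attention restricted to local w*w windows: per-token cost depends on
        w*w instead of the full H*W sequence length."""
        def __init__(self, dim=96, heads=3, window=7):
            super().__init__()
            self.window = window
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                        # x: (B, H, W, C), H and W divisible by window
            B, H, W, C = x.shape
            win = window_partition(x, self.window)   # (B*nW, w*w, C)
            win = win + self.attn(win, win, win, need_weights=False)[0]
            return window_reverse(win, self.window, B, H, W)

    feat = torch.randn(1, 56, 56, 96)                # an early-stage feature map
    out = WindowAttention()(feat)                    # same shape, attention computed locally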
Currently, research on serial Transformer structures is thriving, whereas research on parallel Transformer structures remains limited [16][17][25]. Parallel ViT [18], proposed by the Meta team, was the first such improvement: serially connected Transformer blocks are converted to parallel processing, decreasing the depth of the model while increasing its width. Because the residual contribution of each block becomes smaller as the network deepens, the parallel arrangement can be regarded as approximately equivalent to the sequential ViT [13], and the number of parameters and FLOPs remains unchanged. Depth [26][27] and width [28] are two critical factors in neural network architecture. To boost performance, most ViT variants [19][20][21][22][23][24] increase depth by concatenating Transformer blocks, but deep networks are difficult to optimize, and the model's separability is affected by the size of the feature dimension. Studies on ViT width expansion are still scarce [18]; the main concerns are that parallel ViT raises the computational cost of the network, increases model complexity, and produces feature dimensions so high that the model overfits easily.
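The depth-for-width trade made by parallel ViT can be summarized in a few lines: the same four sub-blocks (two MHSA, two FFN) are either stacked in series or grouped into two parallel branches whose outputs are summed into the residual stream. This is a minimal sketch of the rearrangement described in [18], not the authors' C-PT module; module sizes are illustrative, and MHSA/FFN stand for ordinary pre-norm sub-blocks as in the earlier sketch.

    import torch
    import torch.nn as nn

    class MHSA(nn.Module):
        """Pre-norm Multi-Head Self-Attention sub-block."""
        def __init__(self, dim=768, heads=12):
            super().__init__()
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        def forward(self, x):
            h = self.norm(x)
            return self.attn(h, h, h, need_weights=False)[0]

    class FFN(nn.Module):
        """Pre-norm Feed-Forward sub-block."""
        def __init__(self, dim=768, ratio=4):
            super().__init__()
            self.net = nn.Sequential(nn.LayerNorm(dim),
                                     nn.Linear(dim, dim * ratio), nn.GELU(),
                                     nn.Linear(dim * ratio, dim))
        def forward(self, x):
            return self.net(x)

    def sequential_pair(x, a1, f1, a2, f2):
        # Two standard ViT blocks stacked in series: four residual steps deep.
        x = x + a1(x); x = x + f1(x)
        x = x + a2(x); x = x + f2(x)
        return x

    def parallel_pair(x, a1, f1, a2, f2):
        # The same four sub-blocks grouped into two parallel branches:
        # two residual steps deep, twice as wide, identical parameters and FLOPs.
        x = x + a1(x) + a2(x)
        x = x + f1(x) + f2(x)
        return x

    a1, f1, a2, f2 = MHSA(), FFN(), MHSA(), FFN()
    tokens = torch.randn(1, 196, 768)
    y_seq = sequential_pair(tokens, a1, f1, a2, f2)
    y_par = parallel_pair(tokens, a1, f1, a2, f2)   # approximately equivalent when many blocks are stacked

Because each residual branch contributes only a small update in a deep network, letting two branches read the same input rather than each other's output changes the result only slightly, which is why the two arrangements behave similarly at equal parameter count.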

3. Transformer-Based Medical Image Segmentation Method

Replacing the convolutional blocks of the U-shaped network with Transformer modules capable of global feature extraction is a promising avenue for applying the Transformer to medical image segmentation. TransUNet [5] was the pioneering network model to implement the ViT for medical image segmentation. Because a CNN captures only local information, embedding the Transformer in the encoder to extract global features from the CNN-encoded image features captures long-distance dependencies and rich spatial information. To achieve accurate segmentation, the decoder up-samples the encoded features and localizes them with low-level CNN features. The TransUNet model thus incorporates the self-attention mechanism into the U-Net architecture to enhance contextual comprehension, but at a notable computational expense. The Swin-Unet [6] model, inspired by the Swin Transformer [21] module, replaces the U-Net convolutional layers directly, yielding the first pure Transformer structure for medical image segmentation. The input image undergoes a non-overlapping patch operation before being fed into the Transformer encoder to learn a global deep feature representation; the decoder then combines the encoded features with up-sampled features to recover the feature map and perform segmentation prediction. This approach resolves the difficulty convolutions have in learning global semantic information. UNETR [7] converts the 3D segmentation task into a sequence-to-sequence prediction problem: its encoder learns long-range semantic features with a pure Transformer architecture, while its decoder recovers high-resolution features with a CNN structure. UNETR adopts this hybrid Transformer–CNN approach because ViT, although excellent at extracting global features, performs poorly at acquiring local semantic information, and the Transformer carries a greater computational overhead than the CNN. As described above, conventional architectures such as TransUNet [5], Swin-Unet [6], and UNETR [7] use ViT or Swin Transformer modules to enhance feature extraction by increasing network depth (i.e., connecting the blocks in series [29]). Increasing depth clearly has a strong impact on model performance, yet it may not be the ideal choice once network optimization, separability, and computational cost are taken into account.
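To make the hybrid pattern shared by these models concrete, the following sketch wires a small CNN encoder, a Transformer bottleneck operating on flattened feature tokens, and a decoder that up-samples while fusing a CNN skip connection. It is a schematic illustration of the general CNN-plus-Transformer layout described above, not the TransUNet, Swin-Unet, or UNETR implementation; channel widths, layer counts, the number of output classes, and the omission of positional encodings are all simplifying assumptions.

    import torch
    import torch.nn as nn

    class HybridSegNet(nn.Module):
        """Schematic hybrid encoder-decoder: a CNN extracts local features, a Transformer
        models global context on the downsampled feature map, and the decoder up-samples
        while fusing CNN skip features (all sizes are illustrative)."""
        def __init__(self, classes=9, dim=256, heads=8, depth=4):
            super().__init__()
            self.stem = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
            self.down = nn.Sequential(nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU())
            layer = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                               batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
            self.up = nn.ConvTranspose2d(dim, 64, 2, stride=2)
            self.fuse = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.ReLU())
            self.head = nn.ConvTranspose2d(64, classes, 2, stride=2)

        def forward(self, x):                           # x: (B, 3, H, W)
            skip = self.stem(x)                         # (B, 64, H/2, W/2) local CNN features
            f = self.down(skip)                         # (B, dim, H/4, W/4)
            B, C, Hf, Wf = f.shape
            tokens = f.flatten(2).transpose(1, 2)       # (B, Hf*Wf, dim) token sequence
            tokens = self.transformer(tokens)           # global context via self-attention
            f = tokens.transpose(1, 2).reshape(B, C, Hf, Wf)
            d = self.up(f)                              # up-sample back to H/2
            d = self.fuse(torch.cat([d, skip], dim=1))  # skip connection with CNN features
            return self.head(d)                         # (B, classes, H, W) segmentation logits

    logits = HybridSegNet()(torch.randn(1, 3, 224, 224))   # (1, 9, 224, 224)

In the actual models the CNN encoder and Transformer stack are far deeper and the decoder has several fusion stages, but the data flow, local CNN features, global attention over tokens, and skip-fused up-sampling follows this shape.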

References

  1. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  2. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  3. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241.
  4. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
  5. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
  6. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 205–218.
  7. Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 574–584.
  8. Xie, J.; Zhu, R.; Wu, Z.; Ouyang, J. FFUNet: A novel feature fusion makes strong decoder for medical image segmentation. IET Signal Process. 2022, 16, 501–514.
  9. Zhou, H.-Y.; Guo, J.; Zhang, Y.; Yu, L.; Wang, L.; Yu, Y. nnformer: Interleaved transformer for volumetric segmentation. arXiv 2021, arXiv:2109.03201.
  10. Wang, H.; Cao, P.; Wang, J.; Zaiane, O.R. Uctransnet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–2 March 2022; pp. 2441–2449.
  11. Gao, Y.; Zhou, M.; Metaxas, D.N. UTNet: A hybrid transformer architecture for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; pp. 61–71.
  12. Peiris, H.; Hayat, M.; Chen, Z.; Egan, G.; Harandi, M. A robust volumetric transformer for accurate 3D tumor segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; pp. 162–172.
  13. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  14. Ansari, M.Y.; Abdalla, A.; Ansari, M.Y.; Ansari, M.I.; Malluhi, B.; Mohanty, S.; Mishra, S.; Singh, S.S.; Abinahed, J.; Al-Ansari, A. Practical utility of liver segmentation methods in clinical surgeries and interventions. BMC Med. Imaging 2022, 22, 97.
  15. Liu, Z.; Shen, L. Medical image analysis based on transformer: A review. arXiv 2022, arXiv:2208.06643.
  16. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. (CSUR) 2022, 54, 1–41.
  17. Shamshad, F.; Khan, S.; Zamir, S.W.; Khan, M.H.; Hayat, M.; Khan, F.S.; Fu, H. Transformers in medical imaging: A survey. Med. Image Anal. 2023, 88, 102802.
  18. Touvron, H.; Cord, M.; El-Nouby, A.; Verbeek, J.; Jégou, H. Three things everyone should know about vision transformers. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 497–515.
  19. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. Proc. Int. Conf. Mach. Learn. 2021, 139, 10347–10357.
  20. Yuan, K.; Guo, S.; Liu, Z.; Zhou, A.; Yu, F.; Wu, W. Incorporating convolution designs into visual transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 579–588.
  21. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 10012–10022.
  22. Yuan, L.; Chen, Y.; Wang, T.; Yu, W.; Shi, Y.; Jiang, Z.-H.; Tay, F.E.; Feng, J.; Yan, S. Tokens-to-token vit: Training vision transformers from scratch on imagenet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 558–567.
  23. Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 568–578.
  24. Zhou, D.; Kang, B.; Jin, X.; Yang, L.; Lian, X.; Jiang, Z.; Hou, Q.; Feng, J. Deepvit: Towards deeper vision transformer. arXiv 2021, arXiv:2103.11886.
  25. Lin, T.; Wang, Y.; Liu, X.; Qiu, X. A survey of transformers. AI Open 2022, 3, 111–132.
  26. Delalleau, O.; Bengio, Y. Shallow vs. deep sum-product networks. Adv. Neural Inf. Process. Syst. 2011, 24.
  27. Eldan, R.; Shamir, O. The power of depth for feedforward neural networks. In Proceedings of the Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; pp. 907–940.
  28. Lu, Z.; Pu, H.; Wang, F.; Hu, Z.; Wang, L. The expressive power of neural networks: A view from the width. Adv. Neural Inf. Process. Syst. 2017, 30.
  29. Zhang, Y.; Liu, H.; Hu, Q. Transfuse: Fusing transformers and cnns for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; pp. 14–24.